Sample records for hpc high performance

  1. Performance of high performance concrete (HPC) in low pH and sulfate environment : [technical summary].

    DOT National Transportation Integrated Search

    2013-01-01

    High-performance concrete (HPC) refers to any concrete formulation with enhanced characteristics, compared to normal concrete. One might think this refers to strength, but in Florida, the HPC standard emphasizes withstanding aggressive environments, ...

  2. High Performance Concrete in Washington State SR 18/SR 516 Overcrossing: Interim Report on Girder Monitoring

    DOT National Transportation Integrated Search

    2000-04-01

    In the mid 1990s the Federal Highway Administration (FHWA) established a High Performance Concrete (HPC) program aimed at demonstrating the positive effects of utilizing HPC in bridges. Research on the benefits of using HPC for bridges has shown a nu...

  3. High performance concrete in Washington state SR 18/SR 516 overcrossing : interim report on materials tests

    DOT National Transportation Integrated Search

    2000-04-01

    In the mid 1990s the Federal Highway Administration (FHWA) established a High Performance Concrete (HPC) program aimed at demonstrating the positive effects of utilizing HPC in bridges. Research on the benefits of using HPC for bridges has shown a nu...

  4. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
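
    The parallel programming techniques listed above remain the backbone of HPC. As a minimal illustration of the message-passing style, the sketch below sends partial sums from worker ranks to rank 0; it assumes the third-party mpi4py package and an MPI launcher such as mpirun are available, neither of which is mentioned in the record itself.

```python
# Minimal message-passing sketch with mpi4py.
# Launch with e.g.: mpirun -n 4 python sum_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank sums its own slice of the range 0..999.
lo, hi = rank * 1000 // size, (rank + 1) * 1000 // size
partial = sum(range(lo, hi))

if rank == 0:
    total = partial
    for source in range(1, size):      # gather the other ranks' partial sums
        total += comm.recv(source=source, tag=0)
    print("total =", total)            # 499500 for 0..999
else:
    comm.send(partial, dest=0, tag=0)
```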

  5. Training | High-Performance Computing | NREL

    Science.gov Websites

    Training: Find training resources for using NREL's high-performance computing (HPC) systems as well as related online tutorials. Upcoming training includes an HPC User Workshop on June 12th. A group also meets at conference time to discuss best practices in HPC training; this group has developed a list of resources ...

  6. High-Performance Computing Systems and Operations | Computational Science |

    Science.gov Websites

    High-Performance Computing Systems and Operations: NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies. NREL's HPC capabilities include the high-performance computing systems we operate ...

  7. WinHPC System User Basics | High-Performance Computing | NREL

    Science.gov Websites

    Guidance for starting to use this high-performance computing (HPC) system at NREL; also see the WinHPC policies. Log in with your NREL.gov username/password, and remember to log out when you are finished: simply quitting Remote Desktop will keep your session active and consuming resources. Login steps are given for both Windows and Mac ...

  8. One-Time Password Tokens | High-Performance Computing | NREL

    Science.gov Websites

    One-Time Password Tokens: For connecting to NREL's high-performance computing (HPC) systems, learn how to set up a one-time password (OTP) token for remote and privileged access. You will receive a one-time pass code from the HPC Operations team; at the sign-in screen, enter your HPC username ...

  9. Self-desiccation mechanism of high-performance concrete.

    PubMed

    Yang, Quan-Bing; Zhang, Shu-Qing

    2004-12-01

    Investigations on the effects of W/C ratio and silica fume on the autogenous shrinkage and internal relative humidity of high performance concrete (HPC), and analysis of the self-desiccation mechanisms of HPC, showed that the autogenous shrinkage and internal relative humidity of HPC respectively increase and decrease with the reduction of W/C, and that these phenomena were amplified by the addition of silica fume. Theoretical analyses indicated that the reduction of RH in HPC was not due to a shortage of water, but to the fact that the evaporable water in HPC could not evaporate freely. The reduction of internal relative humidity, or the so-called self-desiccation of HPC, was chiefly caused by the increase in the mole concentration of soluble ions in HPC and by the reduction of pore size, i.e., the increase in the fraction of micro-pore water in the total evaporable water (T(r)/T(te) ratio).
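
    The two mechanisms named in this abstract, a higher concentration of soluble ions and finer pores, both appear in the standard thermodynamic relation for internal relative humidity. As a hedged sketch (the relation below is the textbook Kelvin-Raoult form, not an equation quoted from the paper):

```latex
% Internal RH above a pore meniscus: the water activity a_w (lowered by
% dissolved ions, Raoult effect) times the Kelvin term for a pore of
% radius r; gamma = surface tension, V_m = molar volume of water,
% R = gas constant, T = absolute temperature.
\mathrm{RH} \;=\; a_w \,\exp\!\left(-\frac{2\gamma V_m}{r R T}\right)
```

    Both a lower water activity and a smaller pore radius reduce RH, which matches the two causes of self-desiccation identified above.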

  10. HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters

    NASA Astrophysics Data System (ADS)

    Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge

    2015-12-01

    In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was focused on verifying the functionalities of Windows HPC, its performance, support of commercial tools and the integration with the users' work environment. We describe constraints imposed by the way the CERN Data Centre is operated, licensing for engineering tools and scalability and behaviour of the HPC engineering applications used at CERN. We will present an initial set of requirements, which were created based on the above constraints and requests from the CERN engineering user community. We will explain how we have configured Windows HPC clusters to provide the job scheduling functionalities required to support the CERN engineering user community, quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we will present several performance tests we carried out to verify Windows HPC performance and scalability.

  11. High-Performance Computing User Facility | Computational Science | NREL

    Science.gov Websites

    High-Performance Computing User Facility: The High Performance Computing (HPC) User Facility provides systems that advance renewable energy technologies, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System. Learn more about these systems and how to access them ...

  12. Comparative performance of conventional OPC concrete and HPC designed by densified mixture design algorithm

    NASA Astrophysics Data System (ADS)

    Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti

    2017-12-01

    This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) that were designed by the conventional method (ACI) and the densified mixture design algorithm (DMDA) method, respectively. Engineering properties and durability performance of both the OPC and HPC samples were studied using tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC exhibited good fresh properties and showed better performance in terms of strength and durability compared with the OPC.

  13. Biomass Waste Inspired Highly Porous Carbon for High Performance Lithium/Sulfur Batteries

    PubMed Central

    Zhao, Yan; Ren, Jun; Tan, Taizhe; Babaa, Moulay-Rachid; Bakenov, Zhumabay; Liu, Ning; Zhang, Yongguang

    2017-01-01

    The synthesis of highly porous carbon (HPC) materials from poplar catkin by KOH chemical activation and hydrothermal carbonization as a conductive additive to a lithium-sulfur cathode is reported. Elemental sulfur was composited with as-prepared HPC through a melt diffusion method to form a S/HPC nanocomposite. Structure and morphology characterization revealed a hierarchically sponge-like structure of HPC with high pore volume (0.62 cm3∙g−1) and large specific surface area (1261.7 m2∙g−1). When tested in Li/S batteries, the resulting compound demonstrated excellent cycling stability, delivering a specific capacity of 1154 mAh∙g−1 in the second cycle and retaining 74% of this value after 100 cycles at 0.1 C. Therefore, the porous structure of HPC plays an important role in enhancing electrochemical properties, which provides conditions for effective charge transfer and effective trapping of soluble polysulfide intermediates, and remarkably improves the electrochemical performance of S/HPC composite cathodes. PMID:28878149

  14. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers, high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging clusters. What they all have in common is that they operate as a stand-alone system rather than a scalable and shared user re-configurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).
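
    The headline figures above lend themselves to a quick back-of-the-envelope check. The sketch below uses the 8,500,000 CPU-hour figure from the abstract together with assumed per-hour rates of a few cents, since the abstract only says "cents per CPU hour" without giving an exact rate.

```python
# Back-of-the-envelope annual cost of the HPC cycles delivered by VCL in 2009.
# 8,500,000 CPU-hours comes from the abstract; the rates are assumptions.
cpu_hours = 8_500_000
for cents_per_hour in (1, 5, 10):
    dollars = cpu_hours * cents_per_hour / 100
    print(f"at {cents_per_hour} cents/CPU-hour: ${dollars:,.0f} per year")
```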

  15. Investigation into shrinkage of high-performance concrete used for Iowa bridge decks and overlays.

    DOT National Transportation Integrated Search

    2013-09-01

    High-performance concrete (HPC) overlays have been used increasingly as an effective and economical method for bridge decks in Iowa and other states. However, due to its high cementitious material content, HPC often displays high shrinkage cracking p...

  16. Design and performance of crack-free environmentally friendly concrete "crack-free eco-crete".

    DOT National Transportation Integrated Search

    2014-08-01

    High-performance concrete (HPC) is characterized by high content of cement and supplementary cementitious materials (SCMs). Using high binder content, low water-to-cementitious material ratio (w/cm), and various chemical admixtures in the HPC can r...

  17. WinHPC System | High-Performance Computing | NREL

    Science.gov Websites

    NREL's WinHPC system is a computing cluster running the Microsoft Windows operating system. It allows users to run jobs requiring a Windows environment, such as ANSYS and MATLAB.

  18. Performance of high performance concrete (HPC) in low pH and sulfate environment.

    DOT National Transportation Integrated Search

    2013-05-01

    The goal of this research is to determine the impact of low pH and sulfate environment on high-performance concrete (HPC) and if the current structural and materials specifications provide adequate protections for concrete structures to meet the 75-y...

  19. Test plan : Branson TRIP travel time/data accuracy

    DOT National Transportation Integrated Search

    2000-04-01

    In the mid-1990s the FHWA established a High Performance Concrete (HPC) program aimed at demonstrating the positive effects of utilizing HPC in bridges. Research on the benefits of using high performance concrete for bridges has shown a number of be...

  20. Running ANSYS Fluent on the WinHPC System | High-Performance Computing |

    Science.gov Websites

    If you don't have a WinHPC account, see WinHPC system user basics. Check license use status, then start Fluent Launcher by opening Start > All Programs. Available node groups can be found from HPC Job Manager (Start > All Programs > Microsoft HPC Pack) ...

  1. Economical and crack-free high-performance concrete for pavement and transportation infrastructure construction.

    DOT National Transportation Integrated Search

    2017-05-01

    The main objective of this research is to develop and validate the behavior of a new class of environmentally friendly and cost-effective high-performance concrete (HPC) referred to herein as Eco-HPC. The proposed project aimed at developing two cla...

  2. Self curing admixture performance report.

    DOT National Transportation Integrated Search

    2012-02-01

    The Oregon Department of Transportation (ODOT) has experienced early-age cracking of newly placed high performance concrete (HPC) bridge decks. The silica fume contained in the HPC requires immediate and proper curing application after placement ...

  3. birgHPC: creating instant computing clusters for bioinformatics and molecular dynamics.

    PubMed

    Chew, Teong Han; Joyce-Tan, Kwee Hong; Akma, Farizuwana; Shamsir, Mohd Shahir

    2011-05-01

    birgHPC, a bootable Linux Live CD, has been developed to create high-performance clusters for bioinformatics and molecular dynamics studies using any Local Area Network (LAN)-networked computers. birgHPC features automated hardware and slot detection as well as a simple job submission interface. The latest versions of GROMACS, NAMD, mpiBLAST and ClustalW-MPI can be run in parallel by simply booting the birgHPC CD or flash drive from the head node, which immediately positions the rest of the PCs on the network as computing nodes. Thus, a temporary, affordable, scalable and high-performance computing environment can be built by non-computing-based researchers using low-cost commodity hardware. The birgHPC Live CD and relevant user guide are available for free at http://birg1.fbb.utm.my/birghpc.

  4. Freestanding hierarchically porous carbon framework decorated by polyaniline as binder-free electrodes for high performance supercapacitors

    NASA Astrophysics Data System (ADS)

    Miao, Fujun; Shao, Changlu; Li, Xinghua; Wang, Kexin; Lu, Na; Liu, Yichun

    2016-10-01

    Freestanding hierarchically porous carbon electrode materials with favorable features of large surface areas, hierarchical porosity and continuous conducting pathways are very attractive for practical applications in electrochemical devices. Herein, three-dimensional freestanding hierarchically porous carbon (HPC) materials have been fabricated successfully mainly by the facile phase separation method. In order to further improve the energy storage ability, polyaniline (PANI) with high pseudocapacitance has been decorated on HPC through in situ chemical polymerization of aniline monomers. Benefiting from the synergistic effects between HPC and PANI, the resulting HPC/PANI composites as electrode materials present dramatic electrochemical performance with high specific capacitance up to 290 F g-1 at 0.5 A g-1 and good rate capability, retaining ∼86% of the initial capacitance (248 F g-1) at 64 A g-1, in three-electrode configuration. Moreover, the as-assembled symmetric supercapacitor based on HPC/PANI composites also demonstrates good capacitive properties with high energy density of 9.6 Wh kg-1 at 223 W kg-1 and long-term cycling stability with 78% capacitance retention after 10 000 cycles. Therefore, this work provides a new approach for designing high-performance electrodes with exceptional electrochemical performance, which are very promising for practical application in the energy storage field.

  5. Business Models of High Performance Computing Centres in Higher Education in Europe

    ERIC Educational Resources Information Center

    Eurich, Markus; Calleja, Paul; Boutellier, Roman

    2013-01-01

    High performance computing (HPC) service centres are a vital part of the academic infrastructure of higher education organisations. However, despite their importance for research and the necessary high capital expenditures, business research on HPC service centres is mostly missing. From a business perspective, it is important to find an answer to…

  6. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  7. Using Performance Tools to Support Experiments in HPC Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, III, Thomas J; Boehm, Swen; Engelmann, Christian

    2014-01-01

    The high performance computing (HPC) community is working to address fault tolerance and resilience concerns for current and future large scale computing platforms. This is driving enhancements in the programming environments, specifically research on enhancing message passing libraries to support fault tolerant computing capabilities. The community has also recognized that tools for resilience experimentation are greatly lacking. However, we argue that there are several parallels between performance tools and resilience tools. As such, we believe the rich set of HPC performance-focused tools can be extended (repurposed) to benefit the resilience community. In this paper, we describe the initial motivation to leverage standard HPC performance analysis techniques to aid in developing diagnostic tools to assist fault tolerance experiments for HPC applications. These diagnosis procedures help to provide context for the system when the errors (failures) occurred. We describe our initial work in leveraging an MPI performance trace tool to assist in providing global context during fault injection experiments. Such tools will assist the HPC resilience community as they extend existing and new application codes to support fault tolerance.

  8. System Resource Allocations | High-Performance Computing | NREL

    Science.gov Websites

    System Resource Allocations: To use NREL's high-performance computing (HPC) resources, you need an allocation of compute hours on NREL HPC systems, including Peregrine and Eagle, and of storage space (in terabytes) on Peregrine, Eagle and Gyrfalcon. Allocations are principally done in response to an annual call for allocation ...

  9. HPC: Rent or Buy

    ERIC Educational Resources Information Center

    Fredette, Michelle

    2012-01-01

    "Rent or buy?" is a question people ask about everything from housing to textbooks. It is also a question universities must consider when it comes to high-performance computing (HPC). With the advent of Amazon's Elastic Compute Cloud (EC2), Microsoft Windows HPC Server, Rackspace's OpenStack, and other cloud-based services, researchers now have…

  10. Performance testing of HPC on Sunshine Bridge.

    DOT National Transportation Integrated Search

    2009-09-01

    The deck of the Sunshine Bridge overpass, located westbound on Interstate 40 (I-40) near Winslow, Arizona, was replaced on August 24, 2005. The original deteriorated concrete deck was replaced using high performance concrete (HPC), reinforced wit...

  11. GraphMeta: Managing HPC Rich Metadata in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Chen, Yong; Carns, Philip

    High-performance computing (HPC) systems face increasingly critical metadata management challenges, especially in the approaching exascale era. These challenges arise not only from exploding metadata volumes, but also from increasingly diverse metadata, which contains data provenance and arbitrary user-defined attributes in addition to traditional POSIX metadata. This ‘rich’ metadata is becoming critical to supporting advanced data management functionality such as data auditing and validation. In our prior work, we identified a graph-based model as a promising solution to uniformly manage HPC rich metadata due to its flexibility and generality. However, at the same time, graph-based HPC rich metadata management also introduces significant challenges to the underlying infrastructure. In this study, we first identify the challenges on the underlying infrastructure to support scalable, high-performance rich metadata management. Based on that, we introduce GraphMeta, a graph-based engine designed for this use case. It achieves performance scalability by introducing a new graph partitioning algorithm and a write-optimal storage engine. We evaluate GraphMeta under both synthetic and real HPC metadata workloads, compare it with other approaches, and demonstrate its advantages in terms of efficiency and usability for rich metadata management in HPC systems.
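
    To picture the graph-based model described here, rich metadata such as provenance can be held as labelled nodes and edges. The sketch below is a generic in-memory illustration only; GraphMeta itself is a distributed, write-optimized engine and exposes no such Python API.

```python
# Toy property graph for HPC rich metadata (provenance plus user-defined
# attributes). Illustrative only; not the GraphMeta interface.
from collections import defaultdict

class MetadataGraph:
    def __init__(self):
        self.nodes = {}                  # node id -> attribute dict
        self.edges = defaultdict(list)   # node id -> [(label, dst id), ...]

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, src, label, dst):
        self.edges[src].append((label, dst))

    def neighbors(self, node_id, label=None):
        return [dst for lbl, dst in self.edges[node_id]
                if label is None or lbl == label]

g = MetadataGraph()
g.add_node("job42", type="job", user="alice")
g.add_node("out.h5", type="file", size_bytes=2**30)
g.add_edge("job42", "wrote", "out.h5")   # a provenance edge
print(g.neighbors("job42", "wrote"))     # ['out.h5']
```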

  12. Hierarchically Porous Carbon Materials for CO2 Capture: The Role of Pore Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estevez, Luis; Barpaga, Dushyant; Zheng, Jian

    2018-01-17

    With advances in porous carbon synthesis techniques, hierarchically porous carbon (HPC) materials are being utilized as relatively new porous carbon sorbents for CO2 capture applications. These HPC materials were used as a platform to prepare samples with differing textural properties and morphologies to elucidate structure-property relationships. It was found that high microporous content, rather than overall surface area, was of primary importance for predicting good CO2 capture performance. Two HPC materials were analyzed, each with near identical high surface area (~2700 m2/g) and colossally high pore volume (~10 cm3/g), but with different microporous content and pore size distributions, which led to dramatically different CO2 capture performance. Overall, large pore volumes obtained from distinct mesopores were found to significantly impact adsorption performance. From these results, an optimized HPC material was synthesized that achieved a high CO2 capacity of ~3.7 mmol/g at 25°C and 1 bar.

  13. Development and construction of low-cracking high-performance concrete (LC-HPC) bridge decks : free shrinkage, moisture optimization and concrete production : final report.

    DOT National Transportation Integrated Search

    2009-08-01

    The development and evaluation of low-cracking high-performance concrete (LC-HPC) for use in bridge decks is described based on laboratory test results and experience gained during the construction of 14 bridges. This report emphasizes the materi...

  14. User Account Passwords | High-Performance Computing | NREL

    Science.gov Websites

    User Account Passwords: For NREL's high-performance computing (HPC) systems, learn about user account password requirements and how to set up, log in with, and change passwords. After you request an HPC user account, you'll receive a temporary password. Set ...

  15. Data Security Policy | High-Performance Computing | NREL

    Science.gov Websites

    To use NREL's high-performance computing (HPC) systems, you must abide by the Data Security Policy. NREL HPC systems are operated as research systems and may only contain data related to scientific research. These systems carry a low security categorization, and data are classified as either sensitive or non-sensitive. One example of sensitive data would be personally identifiable information (PII) ...

  16. Development and construction of low-cracking high-performance concrete (LC-HPC) bridge decks : free shrinkage, moisture optimization and concrete production : summary report.

    DOT National Transportation Integrated Search

    2009-08-01

    The development and evaluation of low-cracking high-performance concrete (LC-HPC) for use in bridge decks is described based on laboratory test results and experience gained during the construction of 14 bridges. This report emphasizes the materi...

  17. Shared Storage Usage Policy | High-Performance Computing | NREL

    Science.gov Websites

    Shared Storage Usage Policy: To use NREL's high-performance computing (HPC) systems, you must abide by the Shared Storage Usage Policy. NREL HPC allocations include storage space in the /projects filesystem. However, /projects is a shared resource and project ...

  18. High-performance computing with quantum processing units

    DOE PAGES

    Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.; ...

    2017-03-01

    The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.

  19. High-performance computing with quantum processing units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.

    The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.

  20. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  1. An Application-Based Performance Evaluation of NASAs Nebula Cloud Computing Platform

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing as it promises high potential. In this paper, we examine the feasibility, performance, and scalability of production quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents the comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  2. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Allcock, William; Beggio, Chris

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  3. Perm State University HPC-hardware and software services: capabilities for aircraft engine aeroacoustics problems solving

    NASA Astrophysics Data System (ADS)

    Demenev, A. G.

    2018-02-01

    The present work analyses the high-performance computing (HPC) infrastructure capabilities available at Perm State University for solving aircraft engine aeroacoustics problems. We explore the ability to develop new computational aeroacoustics methods/solvers for computer-aided engineering (CAE) systems to handle complicated industrial problems of engine noise prediction. Leading aircraft engine engineering companies, including "UEC-Aviadvigatel" JSC (our industrial partner in Perm, Russia), require such methods/solvers to optimize aircraft engine geometry for fan noise reduction. We analysed how Perm State University's HPC hardware resources and software services can be used efficiently. The results demonstrate that the Perm State University HPC infrastructure is mature enough to tackle industrial-scale problems in developing a CAE system with HPC methods and CFD solvers.

  4. Prediction and characterization of application power use in a high-performance computing environment

    DOE PAGES

    Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...

    2017-02-27

    Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
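
    The "simulated power-aware scheduler" mentioned above can be pictured as a greedy admission loop that starts queued jobs only while their predicted power draws fit under a facility cap. The job list, predicted draws, and cap below are invented for illustration; they are not data from the paper.

```python
# Greedy power-capped admission sketch; all numbers are illustrative.
jobs = [             # (job name, predicted power draw in kW)
    ("cfd_run", 120.0),
    ("climate", 300.0),
    ("md_sim", 80.0),
    ("genomics", 150.0),
]
POWER_CAP_KW = 400.0

running, deferred, load = [], [], 0.0
for name, kw in sorted(jobs, key=lambda job: job[1]):  # smallest draw first
    if load + kw <= POWER_CAP_KW:
        running.append(name)
        load += kw
    else:
        deferred.append(name)

print(f"running: {running} ({load:.0f} kW)")
print(f"deferred until power frees up: {deferred}")
```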

  5. WinHPC System Software | High-Performance Computing | NREL

    Science.gov Websites

    WinHPC System Software: Learn about the software applications, tools, and toolchains available on the WinHPC system, including the Intel compilers and toolchain suites used for industrial applications ...

  6. RAPPORT: running scientific high-performance computing applications on the cloud.

    PubMed

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  7. Modular HPC I/O characterization with Darshan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Shane; Carns, Philip; Harms, Kevin

    2016-11-13

    Contemporary high-performance computing (HPC) applications encompass a broad range of distinct I/O strategies and are often executed on a number of different compute platforms in their lifetime. These large-scale HPC platforms employ increasingly complex I/O subsystems to provide a suitable level of I/O performance to applications. Tuning I/O workloads for such a system is nontrivial, and the results generally are not portable to other HPC systems. I/O profiling tools can help to address this challenge, but most existing tools only instrument specific components within the I/O subsystem that provide a limited perspective on I/O performance. The increasing diversity of scientific applications and computing platforms calls for greater flexibility and scope in I/O characterization.

  8. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, Earl C.; Conway, Steve; Dekate, Chirag

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

  9. Roy Fraley | NREL

    Science.gov Websites

    Roy Fraley, Professional II-Engineer, Roy.Fraley@nrel.gov | 303-384-6468. Roy Fraley is the high-performance computing (HPC) data center engineer with the Computational Science Center's HPC ...

  10. Towards New Metrics for High-Performance Computing Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian

    Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.
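
    One simple way to fold reliability and performance into a single application-level figure, in the spirit of (though not identical to) the metrics proposed here, is to weight the fraction of runs that yield a correct result by the slowdown the resilience mechanism introduces. The formula and numbers below are illustrative assumptions, not the paper's definitions.

```python
# Hedged sketch of a combined resilience metric: outcome reliability
# multiplied by the performance efficiency of the protected run.
def resilience_score(correct_runs, total_runs, t_unprotected, t_protected):
    reliability = correct_runs / total_runs    # fraction of correct outcomes
    efficiency = t_unprotected / t_protected   # <= 1 when protection adds overhead
    return reliability * efficiency

# Example: 96 of 100 runs correct, 10% runtime overhead from checkpointing.
print(round(resilience_score(96, 100, 100.0, 110.0), 3))  # -> 0.873
```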

  11. User Accounts | High-Performance Computing | NREL

    Science.gov Websites

    User Accounts: Learn how to request an NREL HPC user account, and see information on user account policies. To request an HPC account, please complete our request form; this form is provided using DocuSign ...

  12. Chip-scale integrated optical interconnects: a key enabler for future high-performance computing

    NASA Astrophysics Data System (ADS)

    Haney, Michael; Nair, Rohit; Gu, Tian

    2012-01-01

    High Performance Computing (HPC) systems are putting ever-increasing demands on the throughput efficiency of their interconnection fabrics. In this paper, the limits of conventional metal trace-based inter-chip interconnect fabrics are examined in the context of state-of-the-art HPC systems, which currently operate near the 1 GFLOPS/W level. The analysis suggests that conventional metal trace interconnects will limit performance to approximately 6 GFLOPS/W in larger HPC systems that require many computer chips to be interconnected in parallel processing architectures. As the HPC communications bottlenecks push closer to the processing chips, integrated Optical Interconnect (OI) technology may provide the ultra-high bandwidths needed at the inter- and intra-chip levels. With inter-chip photonic link energies projected to be less than 1 pJ/bit, integrated OI is projected to enable HPC architecture scaling to the 50 GFLOPS/W level and beyond - providing a path to Peta-FLOPS-level HPC within a single rack, and potentially even Exa-FLOPS-level HPC for large systems. A new hybrid integrated chip-scale OI approach is described and evaluated. The concept integrates a high-density polymer waveguide fabric directly on top of a multiple quantum well (MQW) modulator array that is area-bonded to the Silicon computing chip. Grayscale lithography is used to fabricate 5 μm x 5 μm polymer waveguides and associated novel small-footprint total internal reflection-based vertical input/output couplers directly onto a layer containing an array of GaAs MQW devices configured to be either absorption modulators or photodetectors. An external continuous wave optical "power supply" is coupled into the waveguide links. Contrast ratios were measured using a test rider chip in place of a Silicon processing chip. The results suggest that sub-pJ/b chip-scale communication is achievable with this concept. When integrated into high-density integrated optical interconnect fabrics, it could provide a seamless interconnect fabric spanning the intra-
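
    The energy figures quoted above can be related by simple unit arithmetic: a GFLOPS/W target fixes the energy budget per floating-point operation, and the pJ/bit link energy times an assumed communication intensity gives the share of that budget spent on the interconnect. The 0.1 byte/FLOP intensity below is an illustrative assumption, not a value from the paper; 1 pJ/bit is the abstract's projected link energy.

```python
# Share of the per-FLOP energy budget consumed by off-chip links.
PJ_PER_BIT = 1.0
BYTES_PER_FLOP = 0.1
link_pj_per_flop = PJ_PER_BIT * 8 * BYTES_PER_FLOP   # pJ spent on links per FLOP

for gflops_per_watt in (1, 6, 50):
    budget_pj = 1e12 / (gflops_per_watt * 1e9)       # pJ available per FLOP
    share = 100 * link_pj_per_flop / budget_pj
    print(f"{gflops_per_watt:>3} GFLOPS/W: {budget_pj:7.1f} pJ/FLOP budget, "
          f"{link_pj_per_flop:.1f} pJ on links ({share:.1f}%)")
```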

  13. Quantifying Scheduling Challenges for Exascale System Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mondragon, Oscar; Bridges, Patrick G.; Jones, Terry R

    2015-01-01

    The move towards high-performance computing (HPC) applications comprised of coupled codes and the need to dramatically reduce data movement is leading to a reexamination of time-sharing vs. space-sharing in HPC systems. In this paper, we discuss and begin to quantify the performance impact of a move away from strict space-sharing of nodes for HPC applications. Specifically, we examine the potential performance cost of time-sharing nodes between application components, we determine whether a simple coordinated scheduling mechanism can address these problems, and we research how suitable simple constraint-based optimization techniques are for solving scheduling challenges in this regime. Our results demonstrate that current general-purpose HPC system software scheduling and resource allocation systems are subject to significant performance deficiencies, which we quantify for six representative applications. Based on these results, we discuss areas in which additional research is needed to meet the scheduling challenges of next-generation HPC systems.

  14. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Robert C.; Ray, Jaideep; Malony, A.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and construct performance models for two of them. Both computational and message-passing performance are addressed.

  15. Mixing HTC and HPC Workloads with HTCondor and Slurm

    NASA Astrophysics Data System (ADS)

    Hollowell, C.; Barnett, J.; Caramarcu, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.

    2017-10-01

    Traditionally, the RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has only maintained High Throughput Computing (HTC) resources for our HEP/NP user community. We’ve been using HTCondor as our batch system for many years, as this software is particularly well suited for managing HTC processor farm resources. Recently, the RACF has also begun to design/administrate some High Performance Computing (HPC) systems for a multidisciplinary user community at BNL. In this paper, we’ll discuss our experiences using HTCondor and Slurm in an HPC context, and our facility’s attempts to allow our HTC and HPC processing farms/clusters to make opportunistic use of each other’s computing resources.

  16. Faster than Real-Time Dynamic Simulation for Large-Size Power System with Detailed Dynamic Models using High-Performance Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Jin, Shuangshuang; Chen, Yousu

    This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACK™ high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.

  17. RedThreads: An Interface for Application-Level Fault Detection/Correction Through Adaptive Redundant Multithreading

    DOE PAGES

    Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.; ...

    2017-02-11

    In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. Furthermore, the use of complete redundancy incurs significant overhead to the application performance.
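
    The RMT idea, running the same computation more than once and comparing the outcomes, can be sketched at a much coarser grain in ordinary Python. This illustrates the concept only; it is not the RedThreads interface, which manages redundant threads at the runtime level.

```python
# Coarse-grained redundant execution with result comparison (concept sketch).
from concurrent.futures import ThreadPoolExecutor

def kernel(n):
    # A deterministic computation whose redundant copies should agree.
    return sum(i * i for i in range(n))

def run_redundant(fn, *args, copies=2):
    with ThreadPoolExecutor(max_workers=copies) as pool:
        futures = [pool.submit(fn, *args) for _ in range(copies)]
        results = [f.result() for f in futures]
    if len(set(results)) != 1:
        raise RuntimeError("redundant copies disagree: possible silent error")
    return results[0]

print(run_redundant(kernel, 10_000))   # copies agree, so the value is returned
```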

  18. RedThreads: An Interface for Application-Level Fault Detection/Correction Through Adaptive Redundant Multithreading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.

    In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. Furthermore, the use of complete redundancy incurs significant overhead to the application performance.

  19. Experimental investigation on high performance RC column with manufactured sand and silica fume

    NASA Astrophysics Data System (ADS)

    Shanmuga Priya, T.

    2017-11-01

    In recent years, the use of High Performance Concrete (HPC) has increased in the construction industry. The ingredients of HPC depend on the availability and characteristics of suitable alternative materials. Those alternative materials are silica fume and manufactured sand, by-products from the ferro silicon and quarry industries respectively. HPC made with silica fume as a partial replacement of cement and manufactured sand as a replacement of natural sand is considered a sustainable high performance concrete. In the present study the concrete was designed for a target strength of 60 MPa as per the guidelines given by ACI 211-4R (2008). A laboratory study was carried out to experimentally analyse the axial behavior of reinforced HPC columns of size 100×100×1000 mm, square in cross section. 10% silica fume was used in place of ordinary portland cement. The natural sand was replaced by 0, 20, 40, 60, 80 and 100% Manufactured Sand (M-Sand). In this investigation, a total of 6 column specimens were cast for mixes M1 to M6 and were tested in a 1000 kN loading frame at 28 days. From this, load versus mid-height deflection curves were drawn and compared. The maximum ultimate load carrying capacity and the least deflection were obtained for the mix prepared by partial replacement of cement with 10% silica fume and of natural sand by 100% M-Sand. The fine, amorphous and pozzolanic nature of silica fume and the fine mineral particles in M-Sand increased the stiffness of the HPC columns. The test results revealed that HPC can be produced by using M-Sand with silica fume.

  20. High Performance Computing Innovation Service Portal Study (HPC-ISP)

    DTIC Science & Technology

    2009-04-01

    threatened by global competition. It is essential that these suppliers remain competitive and maintain their technological advantage. In this increasingly...place themselves, as well as customers who rely on them, in competitive jeopardy. Despite the potential competitive advantage associated with adopting...computing users into the HPC fold and to enable more entry-level users to exploit HPC more fully for competitive advantage. About half of the surveyed

  1. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
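
    A drastically simplified version of the score-based, parallel outlier detection described above can be written with the standard library alone: split the event stream into chunks and let worker processes flag events whose z-score exceeds a threshold. The threshold and event values below are illustrative and have no connection to the paper's algorithms or data.

```python
# Simplified parallel z-score outlier flagging over event chunks.
from multiprocessing import Pool
from statistics import mean, stdev

EVENTS = [12, 13, 11, 12, 14, 95, 13, 12, 11, 13, 12, 88]  # illustrative values
MU, SIGMA, THRESHOLD = mean(EVENTS), stdev(EVENTS), 2.0

def flag_outliers(chunk):
    return [x for x in chunk if abs(x - MU) / SIGMA > THRESHOLD]

if __name__ == "__main__":
    chunks = [EVENTS[i:i + 4] for i in range(0, len(EVENTS), 4)]
    with Pool(processes=len(chunks)) as pool:
        flagged = [x for part in pool.map(flag_outliers, chunks) for x in part]
    print("suspicious events:", flagged)   # [95, 88] with these numbers
```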

  2. Connecting to HPC Systems | High-Performance Computing | NREL

    Science.gov Websites

    Connect to NREL HPC systems using one of the following methods, which use multi-factor authentication; you will first need to set up your authentication credentials. If you just need access to a command line on an HPC system, use one of the command-line methods ...

  3. Direct SSH Gateway Access to Peregrine | High Performance Computing |

    Science.gov Websites

    To access peregrine-ssh.nrel.gov, you must have: an active NREL HPC user account (see User Accounts) and an OTP token (see One-Time Password Tokens). Log in to peregrine-ssh.nrel.gov with your HPC account ...

  4. Trends in data locality abstractions for HPC systems

    DOE PAGES

    Unat, Didem; Dubey, Anshu; Hoefler, Torsten; ...

    2017-05-10

    The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. Furthermore, this paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.
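
    The tension between productivity and locality that the survey describes comes down to patterns like the one below: the same reduction written in plain row-major order and in a cache-blocked (tiled) order. This is a generic illustration, not an example taken from the paper; pure-Python timings will not show the hardware effect, the point is the access-pattern transformation that locality abstractions aim to express portably.

```python
# Cache-blocking illustration: visit a matrix tile by tile so each tile can
# stay resident in cache while it is reused.
N, TILE = 8, 4
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def sum_row_major(a):
    return sum(a[i][j] for i in range(N) for j in range(N))

def sum_tiled(a, tile=TILE):
    total = 0
    for ii in range(0, N, tile):            # loop over tile origins
        for jj in range(0, N, tile):
            for i in range(ii, ii + tile):  # then over elements inside a tile
                for j in range(jj, jj + tile):
                    total += a[i][j]
    return total

assert sum_row_major(matrix) == sum_tiled(matrix) == N * N * (N * N - 1) // 2
print("both traversal orders compute", sum_tiled(matrix))
```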

  5. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    PubMed Central

    Pinthong, Watthanai; Muangruen, Panya

    2016-01-01

    Development of high-throughput technologies, such as Next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirement. Consequently, a massive amount of experiment data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high performance computer (HPC) is required for the analysis and interpretation. However, the HPC is expensive and difficult to access. Other means were developed to allow researchers to acquire the power of HPC without a need to purchase and maintain one such as cloud computing services and grid computing system. In this study, we implemented grid computing in a computer training center environment using Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tools (BLAST) to the BOINC system. Sequencing results from Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software. PMID:27547555
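
    The key preprocessing step in such a grid setup is cutting the query set into work units small enough to farm out to the volunteer nodes. A minimal sketch of that split is shown below; the file name, chunk size, and output naming are illustrative, and error handling is omitted.

```python
# Split a FASTA query file into fixed-size work units for distribution
# (e.g. as BOINC workunits). Illustrative sketch only.
def split_fasta(path, records_per_chunk=1000):
    chunks, current, n_records = [], [], 0
    with open(path) as handle:
        for line in handle:
            if line.startswith(">"):
                if n_records and n_records % records_per_chunk == 0:
                    chunks.append(current)    # close the previous work unit
                    current = []
                n_records += 1
            current.append(line)
    if current:
        chunks.append(current)
    for idx, records in enumerate(chunks):
        with open(f"workunit_{idx:04d}.fasta", "w") as out:
            out.writelines(records)
    return len(chunks)

# split_fasta("reads.fasta")  # writes workunit_0000.fasta, workunit_0001.fasta, ...
```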

  6. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
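
    For readers unfamiliar with the provisioning step, the sketch below shows how a KVM guest can be defined and started on a compute node through the libvirt Python bindings. It is a generic illustration under simplifying assumptions; the domain XML (name, memory, vCPUs, disk image path) is a placeholder and not the Cray XC configuration used in the paper.

      # Generic sketch (assumptions: a KVM-capable node with libvirtd running and
      # the libvirt-python bindings installed; the domain XML below is a plain
      # placeholder, not the configuration described in the paper).
      import libvirt

      DOMAIN_XML = """
      <domain type='kvm'>
        <name>hpc-vnode0</name>
        <memory unit='GiB'>4</memory>
        <vcpu>4</vcpu>
        <os><type arch='x86_64'>hvm</type></os>
        <devices>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/var/lib/libvirt/images/hpc-vnode0.qcow2'/>
            <target dev='vda' bus='virtio'/>
          </disk>
        </devices>
      </domain>
      """

      conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
      try:
          dom = conn.defineXML(DOMAIN_XML)     # register the guest definition
          dom.create()                         # boot the virtual compute node
          print("started", dom.name(), "state:", dom.state())
      finally:
          conn.close()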

  7. Long-term monitoring of the HPC Charenton Canal Bridge.

    DOT National Transportation Integrated Search

    2011-08-01

    The report contains long-term monitoring data collection and analysis of the first fully high performance concrete (HPC) bridge in Louisiana, the Charenton Canal Bridge. The design of this bridge started in 1997, and it was built and opened to tr...

  8. Kevin Regimbal | NREL

    Science.gov Websites

    Kevin Regimbal oversees NREL's High Performance Computing (HPC) Systems & Operations, engineering, and operations. Kevin is interested in data center design and computing as well as data center integration and optimization. Professional Experience: HPC oversight: program manager, project manager, center

  9. Combined Performance of Polypropylene Fibre and Weld Slag in High Performance Concrete

    NASA Astrophysics Data System (ADS)

    Ananthi, A.; Karthikeyan, J.

    2017-12-01

    The effect of polypropylene fibre and weld slag on the mechanical properties of High Performance Concrete (HPC) containing silica fume as the mineral admixture was experimentally verified in this study. Sixteen series of HPC mixtures (70 MPa) were designed with varying fibre fractions and Weld Slag (WS). Fibre was added at different proportions (0, 0.1, 0.3 and 0.6%) by weight of cement, and weld slag was substituted for the fine aggregate (0, 10, 20 and 30%) by volume. The addition of fibre decreases the slump by 5, 9 and 14%, whereas the substitution of weld slag decreases it by about 3, 11 and 21% with respect to the control mixture. Mechanical properties such as compressive strength, split tensile strength, flexural strength, Ultrasonic Pulse Velocity (UPV) and bond strength were tested. Durability studies such as water absorption and sorptivity tests were conducted to check the absorption of water in HPC. Weld slag of 10% and a fibre dosage of 0.3% in HPC attain the maximum strength, and hence this combination is the most favourable for structural applications.

  10. Enabling parallel simulation of large-scale HPC network systems

    DOE PAGES

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...

    2016-04-07

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.

  12. What Physicists Should Know About High Performance Computing - Circa 2002

    NASA Astrophysics Data System (ADS)

    Frederick, Donald

    2002-08-01

    High Performance Computing (HPC) is a dynamic, cross-disciplinary field that traditionally has involved applied mathematicians, computer scientists, and others primarily from the various disciplines that have been major users of HPC resources - physics, chemistry, engineering, with increasing use by those in the life sciences. There is a technological dynamic that is powered by economic as well as by technical innovations and developments. This talk will discuss practical ideas to be considered when developing numerical applications for research purposes. Even with the rapid pace of development in the field, the author believes that these concepts will not become obsolete for a while, and will be of use to scientists who either are considering, or who have already started down the HPC path. These principles will be applied in particular to current parallel HPC systems, but there will also be references of value to desktop users. The talk will cover such topics as: computing hardware basics, single-cpu optimization, compilers, timing, numerical libraries, debugging and profiling tools and the emergence of Computational Grids.
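
    To make the single-CPU optimization and numerical-library advice concrete, the following small sketch (not taken from the talk; matrix size is arbitrary) times a hand-written triple loop against NumPy's BLAS-backed matrix multiply.

      # Illustrative only (not from the talk): timing a naive triple-loop matrix
      # multiply against NumPy's BLAS-backed product, the kind of "use a tuned
      # numerical library" lesson the abstract alludes to.
      import time
      import numpy as np

      n = 200
      a = np.random.rand(n, n)
      b = np.random.rand(n, n)

      def naive_matmul(a, b):
          n = a.shape[0]
          c = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  s = 0.0
                  for k in range(n):
                      s += a[i, k] * b[k, j]
                  c[i, j] = s
          return c

      t0 = time.perf_counter()
      c1 = naive_matmul(a, b)
      t1 = time.perf_counter()
      c2 = a @ b                      # dispatches to an optimized BLAS routine
      t2 = time.perf_counter()

      print(f"naive loops: {t1 - t0:.3f} s, BLAS product: {t2 - t1:.3f} s")
      print("max abs difference:", np.max(np.abs(c1 - c2)))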

  13. Low latency network and distributed storage for next generation HPC systems: the ExaNeSt project

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Pisani, F.; Simula, F.; Vicini, P.; Navaridas, J.; Chaix, F.; Chrysos, N.; Katevenis, M.; Papaeustathiou, V.

    2017-10-01

    With processor architecture evolution, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended the reach of HPC from its roots in modelling and simulation of complex physical systems to a broader range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, the near future HPC systems can be envisioned as composed of millions of low-power computing cores, densely packed — meaning cooling by appropriate technology — with a tightly interconnected, low latency and high performance network and equipped with a distributed storage architecture. Each of these features — dense packing, distributed storage and high performance interconnect — represents a challenge, made all the harder by the need to solve them at the same time. These challenges lie as stumbling blocks along the road towards Exascale-class systems; the ExaNeSt project acknowledges them and tasks itself with investigating ways around them.

  14. Smart Sampling and HPC-based Probabilistic Look-ahead Contingency Analysis Implementation and its Evaluation with Real-world Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Etingov, Pavel V.; Ren, Huiying

    This paper describes a probabilistic look-ahead contingency analysis application that incorporates smart sampling and high-performance computing (HPC) techniques. Smart sampling techniques are implemented to effectively represent the structure and statistical characteristics of uncertainty introduced by different sources in the power system. They can significantly reduce the data set size required for multiple look-ahead contingency analyses, and therefore reduce the time required to compute them. High-performance computing (HPC) techniques are used to further reduce computational time. These two techniques enable a predictive capability that forecasts the impact of various uncertainties on potential transmission limit violations. The developed package has been tested with real-world data from the Bonneville Power Administration. Case study results are presented to demonstrate the performance of the applications developed.

  15. SCEAPI: A unified Restful Web API for High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management for creating, submitting and monitoring jobs, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
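
    The general authenticate/upload/submit/poll pattern that such a RESTful HPC gateway exposes can be sketched as follows. The base URL, endpoint paths, header and JSON field names are hypothetical placeholders and do not reflect SCEAPI's actual interface.

      # Hypothetical sketch of RESTful job submission to an HPC gateway. The base
      # URL, endpoint paths, header names and JSON fields are placeholders and do
      # NOT reflect SCEAPI's real interface; they only illustrate the general
      # authenticate / upload / submit / poll pattern described in the abstract.
      import time
      import requests

      BASE = "https://hpc-gateway.example.org/api/v1"      # placeholder URL
      TOKEN = "user-auth-token"                            # obtained out of band
      headers = {"Authorization": f"Bearer {TOKEN}"}

      # 1. upload an input file
      with open("input.dat", "rb") as fh:
          requests.post(f"{BASE}/files/input.dat", data=fh, headers=headers).raise_for_status()

      # 2. submit a job description
      job = {"name": "demo", "executable": "simulate.sh", "cores": 64, "walltime": "01:00:00"}
      r = requests.post(f"{BASE}/jobs", json=job, headers=headers)
      r.raise_for_status()
      job_id = r.json()["id"]

      # 3. poll until the job finishes
      while True:
          state = requests.get(f"{BASE}/jobs/{job_id}", headers=headers).json()["state"]
          print("job", job_id, "state:", state)
          if state in ("FINISHED", "FAILED"):
              break
          time.sleep(30)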

  16. High Performance Computing (HPC)-Enabled Computational Study on the Feasibility of using Shape Memory Alloys for Gas Turbine Blade Actuation

    DTIC Science & Technology

    2016-11-01

    High Performance Computing (HPC)-Enabled Computational Study on the Feasibility of using Shape Memory Alloys for Gas Turbine Blade Actuation, by Kathryn Esham, Luis Bravo, Anindya Ghoshal, Muthuvel Murugan, and Michael...

  17. High Productivity Computing Systems and Competitiveness Initiative

    DTIC Science & Technology

    2007-07-01

    planning committee for the annual, international Supercomputing Conference in 2004 and 2005. This is the leading HPC industry conference in the world. It...sector partnerships. Partnerships will form a key part of discussions at the 2nd High Performance Computing Users Conference, planned for July 13, 2005...other things an interagency roadmap for high-end computing core technologies and an accessibility improvement plan. Improving HPC Education and

  18. Effect of rice husk ash and fly ash on the compressive strength of high performance concrete

    NASA Astrophysics Data System (ADS)

    Van Lam, Tang; Bulgakov, Boris; Aleksandrova, Olga; Larsen, Oksana; Anh, Pham Ngoc

    2018-03-01

    The use of industrial and agricultural wastes for building materials production plays an important role in improving the environment and the economy by preserving natural materials and land resources, reducing land, water and air pollution, and reducing the costs of organizing and storing waste. This study mainly focuses on mathematical modeling of the dependence of the compressive strength of high performance concrete (HPC) at the ages of 3, 7 and 28 days on the amounts of rice husk ash (RHA) and fly ash (FA) added to the concrete mixtures, using a central composite rotatable design. The result of this study provides the second-order regression equation of the objective function, the response surface plots and the corresponding contours of the objective function, as well as the optimal points of HPC compressive strength. These objective functions, the compressive strength values of HPC at the ages of 3, 7 and 28 days, depend on two input variables: x1 (amount of RHA) and x2 (amount of FA). The Maple 13 program, solving the second-order regression equation, determines the optimum composition of the concrete mixture for obtaining high performance concrete and calculates the maximum value of the HPC compressive strength at the age of 28 days. The results give a maximum 28-day compressive strength of 76.716 MPa when RHA = 0.1251 and FA = 0.3119 by mass of Portland cement.
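
    For reference, the second-order regression equation fitted with a central composite rotatable design for two coded factors has the standard full quadratic form (symbolic coefficients; the fitted values are not reproduced in the abstract):

      R_t(x_1, x_2) = b_0 + b_1 x_1 + b_2 x_2 + b_{12} x_1 x_2 + b_{11} x_1^2 + b_{22} x_2^2

    where R_t is the compressive strength at age t (3, 7 or 28 days), x_1 is the coded amount of RHA, x_2 is the coded amount of FA, and the b coefficients are estimated by least squares from the designed experiments.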

  19. User-level framework for performance monitoring of HPC applications

    NASA Astrophysics Data System (ADS)

    Hristova, R.; Goranov, G.

    2013-10-01

    HP-SEE is an infrastructure that links the existing HPC facilities in South East Europe into a common infrastructure. Analysis of the performance monitoring of High-Performance Computing (HPC) applications in the infrastructure can be useful to the end user as a diagnostic of the overall performance of their applications. The existing monitoring tools for HP-SEE provide the end user with only aggregated information for all applications. Usually, the user does not have permission to select only the information relevant to them and their applications. In this article we present a framework for performance monitoring of HPC applications in the HP-SEE infrastructure. The framework provides standardized performance metrics, which every user can use in order to monitor their applications. Furthermore, as part of the framework, a programming interface was developed. The interface allows the user to publish metrics data from their application and to read and analyze the gathered information. Publishing and reading through the framework is possible only with a grid certificate valid for the infrastructure. Therefore the user is authorized to access only the data for their applications.

  20. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexity. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important constraints and opportunities for solutions deployed at various layers of the system stack. The framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also enables optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
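
    One of the commonly occurring solutions such a catalog covers is the checkpoint-and-rollback pattern; the sketch below illustrates that single pattern under simplifying assumptions (an in-memory checkpoint and a simulated transient fault), not the document's full catalog or a production implementation.

      # Minimal sketch of a checkpoint/rollback resilience pattern. The in-memory
      # "checkpoint store" and the simulated fault are simplifying assumptions;
      # a real HPC implementation would write to stable storage and coordinate
      # across processes.
      import copy
      import random

      class CheckpointedSolver:
          def __init__(self, state):
              self.state = state
              self._checkpoint = copy.deepcopy(state)

          def checkpoint(self):
              self._checkpoint = copy.deepcopy(self.state)

          def rollback(self):
              self.state = copy.deepcopy(self._checkpoint)

          def step(self):
              # A stand-in compute step that occasionally "fails".
              if random.random() < 0.1:
                  raise RuntimeError("simulated transient fault")
              self.state["iteration"] += 1

      solver = CheckpointedSolver({"iteration": 0})
      while solver.state["iteration"] < 100:
          try:
              solver.step()
              if solver.state["iteration"] % 10 == 0:
                  solver.checkpoint()             # periodic checkpoint
          except RuntimeError:
              solver.rollback()                   # recover from the last checkpoint

      print("completed at iteration", solver.state["iteration"])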

  1. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties and processes of complex systems at the molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example GRID systems and HPC clusters. Given these time-consuming computational tasks, the need arises for software for automatic and unified monitoring of such computations. A complex computational task can be performed over different HPC systems. It requires output data synchronization between the storage chosen by a scientist and the HPC system used for the computations. The design of the computational domain is also quite a problem, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for computational management of these calculations, and presents the part of its concept aimed at initial data generation on HPC systems.

  2. Hierarchically porous carbon/polyaniline hybrid for use in supercapacitors.

    PubMed

    Joo, Min Jae; Yun, Young Soo; Jin, Hyoung-Joon

    2014-12-01

    A hierarchically porous carbon (HPC)/polyaniline (PANI) hybrid electrode was prepared by the polymerization of PANI on the surface of the HPC via rapid-mixing polymerization. The surface morphologies and chemical composition of the HPC/PANI hybrid electrode were characterized using transmission electron microscopy and X-ray photoelectron spectroscopy (XPS), respectively. The surface morphologies and XPS results for the HPC, PANI and HPC/PANI hybrids indicate that PANI is coated on the surface of HPC in the HPC/PANI hybrids, which show two different nitrogen groups: a benzenoid amine (-NH-) peak and a positively charged nitrogen (N+) peak. The electrochemical performance of the HPC/PANI hybrids was analyzed by performing cyclic voltammetry and galvanostatic charge-discharge tests. The HPC/PANI hybrids showed a better specific capacitance (222 F/g) than HPC (111 F/g) because of the contribution of pseudocapacitive behavior. In addition, good cycle stability was maintained over 1000 cycles.

  3. WinHPC System Policies | High-Performance Computing | NREL

    Science.gov Websites

    requiring high CPU utilization or large amounts of memory should be run on the worker nodes. WinHPC02 is not associated data are removed when NREL worker status is discontinued. Users should make arrangements to save other users. Licenses are returned to the license pool when other users close the application or after

  4. Humic acids-based hierarchical porous carbons as high-rate performance electrodes for symmetric supercapacitors.

    PubMed

    Qiao, Zhi-jun; Chen, Ming-ming; Wang, Cheng-yang; Yuan, Yun-cai

    2014-07-01

    Two kinds of hierarchical porous carbons (HPCs) with specific surface areas of 2000 m(2)g(-1) were synthesized using leonardite humic acids (LHA) or biotechnology humic acids (BHA) precursors via a KOH activation process. Humic acids have a high content of oxygen-containing groups which enabled them to dissolve in aqueous KOH and facilitated the homogeneous KOH activation. The LHA-based HPC is made up of abundant micro-, meso-, and macropores and in 6M KOH it has a specific capacitance of 178 F g(-1) at 100 Ag(-1) and its capacitance retention on going from 0.05 to 100 A g(-1) is 64%. In contrast, the BHA-based HPC exhibits a lower capacitance retention of 54% and a specific capacitance of 157 F g(-1) at 100 A g(-1) which is due to the excessive micropores in the BHA-HPC. Moreover, LHA-HPC is produced in a higher yield than BHA-HPC (51 vs. 17 wt%). Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Contact Us | High-Performance Computing | NREL

    Science.gov Websites

    Select Peregrine Merlin WinHPC Allocation project handle (if requesting HPC account) Description of "SEND REQUEST" and nothing happens, it most likely means you forgot to provide information in a required field. You may need to scroll up to see what required information is missing

  6. WinHPC System Programming | High-Performance Computing | NREL

    Science.gov Websites

    Programming WinHPC System Programming Learn how to build and run an MPI (message passing interface (mpi.h) and library (msmpi.lib) are. To build from the command line, run... Start > Intel Software Development Tools > Intel C++ Compiler Professional... > C++ Build Environment for applications running

  7. Platform for Automated Real-Time High Performance Analytics on Medical Image Data.

    PubMed

    Allen, William J; Gabr, Refaat E; Tefera, Getaneh B; Pednekar, Amol S; Vaughn, Matthew W; Narayana, Ponnada A

    2018-03-01

    Biomedical data are quickly growing in volume and in variety, providing clinicians an opportunity for better clinical decision support. Here, we demonstrate a robust platform that uses software automation and high performance computing (HPC) resources to achieve real-time analytics of clinical data, specifically magnetic resonance imaging (MRI) data. We used the Agave application programming interface to facilitate communication, data transfer, and job control between an MRI scanner and an off-site HPC resource. In this use case, Agave executed the graphical pipeline tool GRAphical Pipeline Environment (GRAPE) to perform automated, real-time, quantitative analysis of MRI scans. Same-session image processing will open the door for adaptive scanning and real-time quality control, potentially accelerating the discovery of pathologies and minimizing patient callbacks. We envision this platform can be adapted to other medical instruments, HPC resources, and analytics tools.

  8. On the Impact of Execution Models: A Case Study in Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram

    2015-05-25

    Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
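
    A minimal sketch of the work-stealing idea referred to above (not the paper's implementation; the task is a trivial stand-in for the computational chemistry work units): each worker pops tasks from the front of its own deque and, when it runs dry, steals from the back of a randomly chosen victim.

      # Minimal work-stealing sketch: each worker owns a deque of tasks, pops work
      # from its own front, and steals from the back of a random victim when it
      # runs out. The "task" is a trivial stand-in for real work units.
      import random
      import threading
      from collections import deque

      NUM_WORKERS = 4
      queues = [deque(range(i * 250, (i + 1) * 250)) for i in range(NUM_WORKERS)]
      locks = [threading.Lock() for _ in range(NUM_WORKERS)]
      results = [0] * NUM_WORKERS

      def get_task(me):
          with locks[me]:
              if queues[me]:
                  return queues[me].popleft()      # local work: take from the front
          victims = [v for v in range(NUM_WORKERS) if v != me]
          random.shuffle(victims)
          for v in victims:                        # no local work: try to steal
              with locks[v]:
                  if queues[v]:
                      return queues[v].pop()       # steal from the victim's back
          return None

      def worker(me):
          while True:
              task = get_task(me)
              if task is None:
                  return
              results[me] += task                  # stand-in "computation"

      threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print("tasks summed per worker:", results, "total:", sum(results))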

  9. Continuous whole-system monitoring toward rapid understanding of production HPC applications and systems

    DOE PAGES

    Agelastos, Anthony; Allan, Benjamin; Brandt, Jim; ...

    2016-05-18

    A detailed understanding of HPC applications’ resource needs and their complex interactions with each other and with HPC platform resources is critical to achieving scalability and performance. Such understanding has been difficult to achieve because typical application profiling tools do not capture the behaviors of codes under the potentially wide spectrum of actual production conditions and because typical monitoring tools do not capture system resource usage information with high enough fidelity to gain sufficient insight into application performance and demands. In this paper we present both system and application profiling results based on data obtained through synchronized system-wide monitoring on a production HPC cluster at Sandia National Laboratories (SNL). We demonstrate analytic and visualization techniques that we are using to characterize application and system resource usage under production conditions for better understanding of application resource needs. Furthermore, our goals are to improve application performance (through understanding application-to-resource mapping and system throughput) and to ensure that future system capabilities match their intended workloads.

  10. Hierarchically porous carbon with manganese oxides as highly efficient electrode for asymmetric supercapacitors.

    PubMed

    Chou, Tsu-Chin; Doong, Ruey-An; Hu, Chi-Chang; Zhang, Bingsen; Su, Dang Sheng

    2014-03-01

    A promising energy storage material, MnO2/hierarchically porous carbon (HPC) nanocomposites, with exceptional electrochemical performance and ultrahigh energy density was developed for asymmetric supercapacitor applications. The microstructures of MnO2/HPC nanocomposites were characterized by transmission electron microscopy, scanning transmission electron microscopy, and electron dispersive X-ray elemental mapping analysis. The 3-5 nm MnO2 nanocrystals at mass loadings of 7.3-10.8 wt% are homogeneously distributed onto the HPCs, and the utilization efficiency of MnO2 on specific capacitance can be enhanced to 94-96%. By combining the ultrahigh utilization efficiency of MnO2 and the conductive and ion-transport advantages of HPCs, MnO2/HPC electrodes can achieve higher specific capacitance values (196 F g(-1)) than those of pure carbon electrodes (60.8 F g(-1)), and maintain their superior rate capability in neutral electrolyte solutions. The asymmetric supercapacitor consisting of a MnO2/HPC cathode and a HPC anode shows an excellent performance with energy and power densities of 15.3 Wh kg(-1) and 19.8 kW kg(-1), respectively, at a cell voltage of 2 V. Results obtained herein demonstrate the excellence of MnO2/HPC nanocomposites as energy storage material and open an avenue to fabricate the next generation supercapacitors with both high power and energy densities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
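
    For context, the figures of merit quoted in abstracts like this one are conventionally derived from galvanostatic charge-discharge data using the standard relations below (textbook formulas, not taken from the paper). With discharge current I, discharge time \Delta t, active mass m and voltage window \Delta V after the IR drop,

      C_{sp} = \frac{I \, \Delta t}{m \, \Delta V} \ \ [\mathrm{F\,g^{-1}}], \qquad
      E = \frac{C_{cell} \, V^2}{2 \times 3.6} \ \ [\mathrm{Wh\,kg^{-1}}], \qquad
      P = \frac{3600 \, E}{\Delta t} \ \ [\mathrm{W\,kg^{-1}}]

    where C_{cell} is the cell capacitance normalized to the total mass of both electrodes (in F g^{-1}) and V is the cell voltage.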

  11. The Case for Modular Redundancy in Large-Scale High Performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2009-01-01

    Recent investigations into the resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is used in many mission-critical systems today to provide resilience, such as aerospace and command-and-control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of an HPC system, and the respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
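
    The availability argument can be made explicit with the standard steady-state relations (textbook formulas, not reproduced from the paper):

      A = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}}, \qquad
      A_{\mathrm{DMR}} = 1 - (1 - A)^2

    so a single node with A = 0.99 yields a dual-modular-redundant pair with A_DMR = 1 - 0.01^2 = 0.9999, assuming the replicas fail independently, which is why cheaper, less reliable nodes can still deliver high availability when replicated.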

  12. Algorithm for fast event parameters estimation on GEM acquired data

    NASA Astrophysics Data System (ADS)

    Linczuk, Paweł; Krawczyk, Rafał D.; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Chernyshova, Maryna; Czarski, Tomasz

    2016-09-01

    We present a study of a software-hardware environment for developing fast computation methods with high throughput and low latency, which can be used as a back-end in High Energy Physics (HEP) and other High Performance Computing (HPC) systems that are driven by a high volume of input from electronic-sensor-based front-ends. Parallelization possibilities are discussed and tested on Intel HPC solutions, with consideration of applications to Gas Electron Multiplier (GEM) measurement systems.

  13. Hierarchically porous carbon with high-speed ion transport channels for high performance supercapacitors

    NASA Astrophysics Data System (ADS)

    Lu, Haoyuan; Li, Qingwei; Guo, Jianhui; Song, Aixin; Gong, Chunhong; Zhang, Jiwei; Zhang, Jingwei

    2018-01-01

    Hierarchically porous carbons (HPC) are considered promising electrode materials for supercapacitors due to their outstanding charge/discharge cycling stability and high power density. However, HPC possess a relatively low ion diffusion rate inside the material, which challenges their application in high performance supercapacitors. Thus, tunnel-shaped carbon pores with sizes of tens of nanometers were constructed by inducing the self-assembly of lithocholic acid with ammonium chloride, thereby providing high-speed channels for internal ion diffusion. The as-formed one-dimensional pores are beneficial to the activation process by KOH, providing a large specific surface area, and then facilitate rapid transport of electrolyte ions from macropores to the microporous surfaces. Therefore, the HPC achieve an outstanding gravimetric capacitance of 284 F g-1 at a current density of 0.1 A g-1 and a remarkable capacity retention of 64.8% when the current density increases by 1000 times to 100 A g-1.

  14. Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines

    NASA Astrophysics Data System (ADS)

    Ivanovic, Pavle; Richter, Harald

    2018-01-01

    High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem was a shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI, the standard HPC communication library. Additionally, MPICH was transparently modified by us to get ivshmem included, resulting in a three to ten times performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a one-sided MPICH communication mechanism, with our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.
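
    The one-sided MPI_Put operation that the authors' wrapper intercepts can be illustrated with the mpi4py bindings as below. This is a generic MPI RMA example meant to be run with at least two ranks; it is not the ivshmem-backed wrapper described in the paper.

      # Generic illustration of one-sided MPI_Put using mpi4py
      # (run with: mpirun -n 2 python put_demo.py).
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      n = 8

      # Every rank exposes a local buffer of n doubles as an RMA window.
      local = np.full(n, -1.0, dtype="d")
      win = MPI.Win.Create(local, comm=comm)

      win.Fence()                      # open the RMA access epoch
      if rank == 0:
          payload = np.arange(n, dtype="d")
          win.Put(payload, 1)          # write payload directly into rank 1's window
      win.Fence()                      # close the epoch; the data is now visible

      if rank == 1:
          print("rank 1 window after MPI_Put:", local)

      win.Free()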

  15. NREL Evaluates Aquarius Liquid-Cooled High-Performance Computing Technology

    Science.gov Websites

    HPC and influence the modern data center designer towards adoption of liquid cooling. Our shared technology. Aquila and Sandia chose NREL's HPC Data Center for the initial installation and evaluation because the data center is configured for liquid cooling, along with the required instrumentation to

  16. Expanding HPC and Research Computing--The Sustainable Way

    ERIC Educational Resources Information Center

    Grush, Mary

    2009-01-01

    Increased demands for research and high-performance computing (HPC)--along with growing expectations for cost and environmental savings--are putting new strains on the campus data center. More and more, CIOs like the University of Notre Dame's (Indiana) Gordon Wishon are seeking creative ways to build more sustainable models for data center and…

  17. HPC Aspects of Variable-Resolution Global Climate Modeling using a Multi-scale Convection Parameterization

    EPA Science Inventory

    High performance computing (HPC) requirements for the new generation variable grid resolution (VGR) global climate models differ from that of traditional global models. A VGR global model with 15 km grids over the CONUS stretching to 60 km grids elsewhere will have about ~2.5 tim...

  18. Ventral, but not dorsal, hippocampus inactivation impairs reward memory expression and retrieval in contexts defined by proximal cues.

    PubMed

    Riaz, Sadia; Schumacher, Anett; Sivagurunathan, Seyon; Van Der Meer, Matthijs; Ito, Rutsuko

    2017-07-01

    The hippocampus (HPC) has been widely implicated in the contextual control of appetitive and aversive conditioning. However, whole hippocampal lesions do not invariably impair all forms of contextual processing, as in the case of complex biconditional context discrimination, leading to contention over the exact nature of the contribution of the HPC in contextual processing. Moreover, the increasingly well-established functional dissociation between the dorsal (dHPC) and ventral (vHPC) subregions of the HPC has been largely overlooked in the existing literature on hippocampal-based contextual memory processing in appetitively motivated tasks. Thus, the present study sought to investigate the individual roles of the dHPC and the vHPC in contextual biconditional discrimination (CBD) performance and memory retrieval. To this end, we examined the effects of transient post-acquisition pharmacological inactivation (using a combination of GABA A and GABA B receptor agonists muscimol and baclofen) of functionally distinct subregions of the HPC (CA1/CA3 subfields of the dHPC and vHPC) on CBD memory retrieval. Additional behavioral assays including novelty preference, light-dark box and locomotor activity test were also performed to confirm that the respective sites of inactivation were functionally silent. We observed robust deficits in CBD performance and memory retrieval following inactivation of the vHPC, but not the dHPC. Our data provides novel insight into the differential roles of the ventral and dorsal HPC in reward contextual processing, under conditions in which the context is defined by proximal cues. © 2017 Wiley Periodicals, Inc.

  19. Comparative Implementation of High Performance Computing for Power System Dynamic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng

    Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.

  20. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    NASA Astrophysics Data System (ADS)

    Goodwin, Bruce

    2015-03-01

    This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examine their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the iterative engineering design and prototype cycle, thereby dramatically reducing the cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  1. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexity. As a result, the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that addresses concrete problems in the design of resilient systems. The complete catalog of resilience design patterns provides designers with reusable design elements. We also define a framework that enhances a designer's understanding of the important constraints and opportunities for the design patterns to be implemented and deployed at various layers of the system stack. This design framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also supports optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.

  2. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  3. On the energy footprint of I/O management in Exascale HPC systems

    DOE PAGES

    Dorier, Matthieu; Yildiz, Orcun; Ibrahim, Shadi; ...

    2016-03-21

    The advent of unprecedentedly scalable yet energy hungry Exascale supercomputers poses a major challenge in sustaining a high performance-per-watt ratio. With I/O management acquiring a crucial role in supporting scientific simulations, various I/O management approaches have been proposed to achieve high performance and scalability. But the details of how these approaches affect energy consumption have not been studied yet. Therefore, this paper aims to explore how much energy a supercomputer consumes while running scientific simulations when adopting various I/O management approaches. In particular, we closely examine three radically different I/O schemes including time partitioning, dedicated cores, and dedicated nodes. To accomplish this, we implement the three approaches within the Damaris I/O middleware and perform extensive experiments with one of the target HPC applications of the Blue Waters sustained-petaflop supercomputer project: the CM1 atmospheric model. Our experimental results obtained on the French Grid'5000 platform highlight the differences among these three approaches and illustrate in which way various configurations of the application and of the system can impact performance and energy consumption. Moreover, we propose and validate a mathematical model that estimates the energy consumption of a HPC simulation under different I/O approaches. This proposed model gives hints to pre-select the most energy-efficient I/O approach for a particular simulation on a particular HPC system and therefore provides a step towards energy-efficient HPC simulations in Exascale systems. To the best of our knowledge, our work provides the first in-depth look into the energy-performance tradeoffs of I/O management approaches.

  4. Fabricating hierarchically porous carbon with well-defined open pores via polymer dehalogenation for high-performance supercapacitor

    NASA Astrophysics Data System (ADS)

    Guo, Mei; Li, Yu; Du, Kewen; Qiu, Chaochao; Dou, Gang; Zhang, Guoxin

    2018-05-01

    Improving the specific energy of supercapacitors (SCs) at high power has been intensively investigated as a hot and challenging topic. In this work, hierarchically porous carbon (HPC) materials with well-defined meso-/macro-pores are reported, prepared via the dehalogenation reaction of polyvinyl fluoride (PVDF) by NaNH2. The pore hierarchy is achievable mainly because of the coupled effects of NaNH2 activation and the template/bubbling effects of the byproducts NaF and NH3. Electron microscopy studies and Brunauer-Emmett-Teller (BET) measurements confirm that the structures of the HPC samples contain multiple-scale pores assembled in a hierarchical pattern, and most of their volume is contributed by mesopores. Aqueous symmetric supercapacitors (ASSCs) were fabricated using HPC-M7 materials, achieving an ultrahigh specific energy of 18.8 Wh kg-1 at a specific power of 986.8 W kg-1. Remarkably, at the ultrahigh power of 14.3 kW kg-1, the HPC-ASSCs still output a very high specific energy of 16.7 Wh kg-1, which means the ASSCs can be charged or discharged within 4 s. The outstanding rate capability mainly benefits from the hierarchical porous structure, which allows highly efficient ion diffusion.

  5. Data Services in Support of High Performance Computing-Based Distributed Hydrologic Models

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Gichamo, T.; Yildirim, A. A.; Jones, N.

    2014-12-01

    We have developed web-based data services to support the application of hydrologic models on High Performance Computing (HPC) systems. The purpose of these services is to provide hydrologic researchers, modelers, water managers, and users access to HPC resources without requiring them to become HPC experts or to understand the intrinsic complexities of the data services, and to reduce the amount of time and effort spent in finding and organizing the data required to execute hydrologic models and data preprocessing tools on HPC systems. These services address some of the data challenges faced by hydrologic models that strive to take advantage of HPC: needed data are often not in the form required by such models, forcing researchers to spend time and effort on data preparation and preprocessing that inhibits or limits the application of these models. Another limitation is the difficult-to-use batch job control and queuing systems of HPC platforms. We have developed a REST-based gateway application programming interface (API) for authenticated access to HPC systems that abstracts away many of the details that are barriers to HPC use and enhances accessibility from desktop programming and scripting languages such as Python and R. We have used this gateway API to establish software services that support the delineation of watersheds to define a modeling domain, and then extract terrain and land use information to automatically configure the inputs required for hydrologic models. These services support the Terrain Analysis Using Digital Elevation Models (TauDEM) tools for watershed delineation and generation of hydrology-based terrain information such as wetness index and stream networks. They also support the derivation of inputs for the Utah Energy Balance snowmelt model, used to address questions such as how climate, land cover and land use change may affect snowmelt inputs to runoff generation. To enhance access to the time-varying climate data used to drive hydrologic models, we have developed services to downscale and re-grid nationally available climate analysis data from systems such as NLDAS and MERRA. These cases serve as examples of how this approach can be extended to other models to enhance the use of HPC for hydrologic modeling.

  6. MRS study of meningeal hemangiopericytoma and edema: a comparison with meningothelial meningioma.

    PubMed

    Righi, Valeria; Tugnoli, Vitaliano; Mucci, Adele; Bacci, Antonella; Bonora, Sergio; Schenetti, Luisa

    2012-10-01

    Intracranial hemangiopericytomas (HPCs) are rare tumors and their radiological appearance resembles that of meningiomas, especially meningothelial meningiomas. To increase knowledge of the biochemical composition of this type of tumor for better diagnosis and prognosis, we performed a molecular study using ex vivo high resolution magic angle spinning (HR-MAS) magnetic resonance spectroscopy (MRS) on HPC and peritumoral edematous tissues. Moreover, to help in the discrimination between HPC and meningothelial meningioma, we compared the ex vivo HR-MAS spectra of samples from one patient with HPC and 5 patients affected by meningothelial meningioma. Magnetic resonance imaging (MRI) and in vivo localized single-voxel 1H-MRS were also performed on the same patients prior to surgery, and the in vivo and ex vivo MRS spectra were compared. We observed the presence of OH-butyrate, together with glucose, in HPC and a low amount of N-acetylaspartate in the edema, which may reflect the neuronal alteration responsible for the associated epilepsy. Many differences between HPC and meningothelial meningioma were identified. The relative ratios of myo-inositol, glucose and glutathione with respect to glutamate are higher in HPC compared to meningioma, whereas the relative ratios of creatine, glutamine, alanine, glycine and choline-containing compounds with respect to glutamate are lower in HPC compared to meningioma. These data will be useful for improving the interpretation of in vivo MRS spectra, resulting in a more accurate diagnosis of these rare tumors.

  7. High-Performance Computing and Visualization | Energy Systems Integration

    Science.gov Websites

    High-Performance Computing and Visualization: High-performance computing (HPC) and visualization at NREL propel technology innovation. Capabilities: High-Performance Computing. NREL is home to Peregrine, the largest high-performance computing system

  8. The impact of the U.S. supercomputing initiative will be global

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Dona

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

  9. System Resource Allocation Requests | High-Performance Computing | NREL

    Science.gov Websites

    Account to utilize the online allocation request system. If you need a HPC User Account, please request one online: Visit User Accounts. Click the green "Request Account" Button - this will direct . Follow the online instructions provided in the DocuSign form. Write "Need HPC User Account to use

  10. High Performance Computing (HPC) Innovation Service Portal Pilots Cloud Computing (HPC-ISP Pilot Cloud Computing)

    DTIC Science & Technology

    2011-08-01

    Figure 4: Architectural diagram of running Blender on Amazon EC2 through Nimbis. ...classification of streaming data: example input images (top left); all digit prototypes (cluster centers) found, with size proportional to frequency (top...

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klitsner, Tom

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia (the NNSA ASC program and Sandia's Institutional HPC Program) are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  12. Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Canon, Shane

    2018-01-24

    DOE JGI's Zhong Wang, chair of the High-performance Computing session, gives a brief introduction before Berkeley Lab's Shane Canon talks about "Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unat, Didem; Dubey, Anshu; Hoefler, Torsten

    The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. Furthermore, this paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan

    MapReduce is increasingly becoming a popular framework and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file systems' functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
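
    The core idea described above, running MapReduce directly on a globally shared, POSIX file system (e.g., NFS or GPFS) so that no HDFS staging is needed, can be illustrated with a minimal word-count sketch. This is an illustrative sketch only, not the MARIANE implementation; the shared input path and pool-based parallelism are assumptions.

        # Minimal MapReduce-style word count over a globally shared file system.
        # Illustrative sketch only; this is not the MARIANE implementation, and the
        # shared input directory below is an assumed GPFS/NFS-mounted path.
        from collections import Counter
        from multiprocessing import Pool
        from pathlib import Path

        SHARED_DIR = Path("/shared/input")   # assumption: visible to every worker

        def map_count(path):
            """Map task: count words in one input file read directly from the shared FS."""
            return Counter(Path(path).read_text().split())

        def reduce_counts(partials):
            """Reduce task: merge per-file word counts into a global histogram."""
            total = Counter()
            for part in partials:
                total.update(part)
            return total

        if __name__ == "__main__":
            files = sorted(SHARED_DIR.glob("*.txt"))
            with Pool() as pool:             # workers read the shared FS directly; no HDFS staging
                partials = pool.map(map_count, files)
            print(reduce_counts(partials).most_common(10))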

  15. Exploring the capabilities of support vector machines in detecting silent data corruptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo

    As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs), or silent errors, are one of the major sources that corrupt the execution results of HPC applications without being detected. In this paper, we explore a set of novel SDC detectors, based on epsilon-insensitive support vector machine regression, to detect SDCs that occur in HPC applications. The key contributions are threefold. (1) Our exploration takes temporal, spatial, and spatiotemporal features into account and analyzes different detectors based on different features. (2) We provide an in-depth study of the detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that support-vector-machine-based detectors can achieve detection sensitivity (i.e., recall) up to 99% while suffering a false positive rate of less than 1% in most cases. Our detectors incur low performance overhead, 5% on average, for all benchmarks studied in this work.
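
    A minimal sketch of the kind of prediction-based detector described above: fit an epsilon-insensitive SVR on a sliding window of past values (temporal features) and flag a sample as a possible SDC when the observed value deviates from the prediction by more than a tolerance. The scikit-learn SVR, window size, and threshold are illustrative assumptions, not the paper's configuration.

        # Sketch of a prediction-based SDC detector using epsilon-insensitive SVR.
        # Assumes scikit-learn; the window size, epsilon and threshold are illustrative
        # choices, not the configuration studied in the paper.
        import numpy as np
        from sklearn.svm import SVR

        WINDOW = 8          # temporal features: the previous 8 samples
        THRESHOLD = 0.05    # relative deviation above which a sample is flagged

        def windows(series, w):
            X = np.array([series[i:i + w] for i in range(len(series) - w)])
            y = np.array(series[w:])
            return X, y

        def detect_sdc(series, train_len=200):
            """Flag samples whose observed value deviates too far from the SVR prediction."""
            X, y = windows(series, WINDOW)
            model = SVR(kernel="rbf", epsilon=0.01).fit(X[:train_len], y[:train_len])
            preds = model.predict(X[train_len:])
            obs = y[train_len:]
            rel_err = np.abs(preds - obs) / (np.abs(preds) + 1e-12)
            return np.nonzero(rel_err > THRESHOLD)[0] + train_len + WINDOW  # indices in `series`

        if __name__ == "__main__":
            t = np.linspace(0, 20, 400)
            signal = np.sin(t).tolist()
            signal[300] *= 1.5               # inject a synthetic silent error
            print(detect_sdc(signal))        # indices at/near 300 should be flagged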

  16. Exploring the capabilities of support vector machines in detecting silent data corruptions

    DOE PAGES

    Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo; ...

    2018-02-01

    As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs), or silent errors, are one of the major sources that corrupt the execution results of HPC applications without being detected. In this paper, we explore a set of novel SDC detectors, based on epsilon-insensitive support vector machine regression, to detect SDCs that occur in HPC applications. The key contributions are threefold. (1) Our exploration takes temporal, spatial, and spatiotemporal features into account and analyzes different detectors based on different features. (2) We provide an in-depth study of the detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that support-vector-machine-based detectors can achieve detection sensitivity (i.e., recall) up to 99% while suffering a false positive rate of less than 1% in most cases. Our detectors incur low performance overhead, 5% on average, for all benchmarks studied in this work.

  17. Final Report on the Proposal to Provide Asian Science and Technology Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahaner, David K.

    2003-07-23

    The Asian Technology Information Program (ATIP) conducted a seven-month Asian science and technology information program for the Office of Energy Research (ER), U.S. Department of Energy (DOE). The seven-month program consisted of 1) monitoring, analyzing, and disseminating science and technology trends and developments associated with Asian high performance computing and communications (HPC), networking, and associated topics, 2) access to ATIP's annual series of Asian S&T reports for ER and HPC-related personnel, and 3) supporting DOE and ER designated visits to Asia to study and assess Asian HPC.

  18. Linux VPN Set Up | High-Performance Computing | NREL

    Science.gov Websites

    Describes two methods to connect to NREL's HPC systems via the HPC VPN: one using a simple command line, and a second using NetworkManager (use your UserID in place of the one in the example image). Example settings: Connection name: hpcvpn; Gateway: hpcvpn.nrel.gov. Select the hpcvpn option as shown in the screenshot, and NetworkManager will then present you with...

  19. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    DOE PAGES

    Engelmann, Christian; Hukerikar, Saurabh

    2017-09-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage and their performance and power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The resilience patterns and the design framework also enable exploration and evaluation of design alternatives and support optimization of the cost-benefit trade-offs among performance, protection coverage, and power consumption of resilience solutions. The overall goal of this work is to establish a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner despite frequent faults, errors, and failures of various types.
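
    As a concrete illustration of what a single resilience pattern looks like in code, below is a minimal checkpoint/restart (rollback recovery) sketch; the state layout, fault injection, and checkpoint interval are assumptions made for illustration and are not taken from the paper's pattern catalog.

        # Minimal checkpoint/restart (rollback recovery) sketch of a single resilience
        # pattern. Illustrative only; the state layout, fault injection and checkpoint
        # interval are assumptions, not taken from the paper's catalog.
        import pickle
        import random
        from pathlib import Path

        CKPT = Path("state.ckpt")

        class TransientFault(RuntimeError):
            pass

        def save(state):
            CKPT.write_bytes(pickle.dumps(state))      # persist a consistent snapshot

        def restore():
            if CKPT.exists():
                return pickle.loads(CKPT.read_bytes())
            return {"step": 0, "value": 0.0}           # cold start

        def step(state):
            if random.random() < 0.1:                  # simulated transient fault
                raise TransientFault("simulated fault")
            state["step"] += 1
            state["value"] += 1.0
            return state

        def run(n_steps=50, ckpt_every=5):
            state = restore()
            while state["step"] < n_steps:
                try:
                    state = step(state)
                    if state["step"] % ckpt_every == 0:
                        save(state)
                except TransientFault:
                    state = restore()                  # roll back to the last checkpoint
            return state

        if __name__ == "__main__":
            print(run())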

  20. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Hukerikar, Saurabh

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage and their performance and power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The resilience patterns and the design framework also enable exploration and evaluation of design alternatives and support optimization of the cost-benefit trade-offs among performance, protection coverage, and power consumption of resilience solutions. The overall goal of this work is to establish a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner despite frequent faults, errors, and failures of various types.

  1. 2013 R&D 100 Award: ‘Miniapps’ Bolster High Performance Computing

    ScienceCinema

    Belak, Jim; Richards, David

    2018-06-12

    Two Livermore computer scientists served on a Sandia National Laboratories-led team that developed Mantevo Suite 1.0, the first integrated suite of small software programs, also called "miniapps," to be made available to the high performance computing (HPC) community. These miniapps facilitate the development of new HPC systems and the applications that run on them. Miniapps (miniature applications) serve as stripped-down surrogates for complex, full-scale applications that can require a great deal of time and effort to port to a new HPC system because they often consist of hundreds of thousands of lines of code. A miniapp is a prototype that contains some or all of the essentials of the real application but with many fewer lines of code, making it more versatile for experimentation. This allows researchers to more rapidly explore options and optimize system design, greatly improving the chances that the full-scale application will perform successfully. These miniapps have become essential tools for exploring complex design spaces because they can reliably predict the performance of full applications.

  2. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
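
    For reference, the scalability analysis mentioned above rests on Amdahl's law; in its basic single-parameter form, with parallel fraction p and N cores, the ideal speedup is

        \[ S(N) = \frac{1}{(1 - p) + p/N}, \]

    so the reported roughly 12-fold improvement on 12 cores implies a parallel fraction very close to 1 (for example, p = 0.99 already limits S(12) to about 10.8). The paper's analysis uses a variant of this law for symmetric multicore chips; the formula above is only the classical form.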

  3. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
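
    Assuming the overhead figures above are reported relative to a baseline execution time T_base (the abstract does not state the baseline explicitly), they correspond to

        \[ \text{overhead} = \left( \frac{T_{\mathrm{sim}}}{T_{\mathrm{base}}} - 1 \right) \times 100\%, \]

    so an overhead of 0% means the simulated run matches the baseline, while 238% means the simulated run takes roughly 3.4 times as long.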

  4. Coupled ultrasonication-milling synthesis of hierarchically porous carbon for high-performance supercapacitor.

    PubMed

    Yang, Dewei; Jing, Huijuan; Wang, Zhaowu; Li, Jiaheng; Hu, Mingxiang; Lv, Ruitao; Zhang, Rui; Chen, Deliang

    2018-05-19

    Activated carbon (AC) based supercapacitors exhibit intrinsic advantages in energy storage. Traditional two-step synthesis (carbonization and activation) of AC faces difficulties in precisely regulating its pore-size distribution and thoroughly removing residual impurities like silicon oxide. This paper reports a novel coupled ultrasonication-milling (CUM) process for the preparation of hierarchically porous carbon (HPC) using corn cobs as the carbon resource. The as-obtained HPC has a large surface area (2288 m2 g-1) with a high mesopore ratio of ~44.6%. When tested in a three-electrode system, the HPC exhibits a high specific capacitance of 465 F g-1 at 0.5 A g-1, 2.7 times that (170 F g-1) of the commercial AC (YP-50F). In the two-electrode test system, the HPC device exhibits a specific capacitance of 135 F g-1 at 1 A g-1, twice that (68 F g-1) of YP-50F. These excellent energy-storage properties result from the CUM process, which efficiently removes the impurities and modulates the mesopore/micropore structures of the AC samples derived from the agricultural residues of corn cobs. The CUM process is an efficient method to prepare high-performance biomass-derived AC materials. Copyright © 2018 Elsevier Inc. All rights reserved.
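
    For context, specific capacitance values such as those above are commonly derived from galvanostatic discharge curves (the abstract does not state the exact evaluation procedure used) as

        \[ C_{sp} = \frac{I\,\Delta t}{m\,\Delta V}, \]

    where I is the discharge current, \Delta t the discharge time, m the active mass and \Delta V the voltage window excluding any IR drop; two-electrode conventions differ by constant factors depending on whether cell or single-electrode capacitance is reported.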

  5. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthik, Rajasekar

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment For High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exist a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks is among the key open-source and industry-standard technologies that have been adopted in this architecture.

  6. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
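
    A toy illustration of the kind of memory-subsystem contention discussed above: a STREAM-triad-like kernel run by one process versus several processes at once, where per-worker bandwidth typically drops as workers share the memory subsystem. This is a hedged sketch, not one of the paper's synthetic kernels or natural benchmarks; array sizes and iteration counts are arbitrary, and the byte estimate is approximate because NumPy allocates temporaries.

        # Toy memory-bandwidth probe (STREAM-triad-like) illustrating multi-core
        # memory-subsystem contention. Not one of the paper's benchmarks; array size,
        # iteration count and the byte estimate (which ignores NumPy temporaries) are
        # illustrative assumptions.
        import time
        import numpy as np
        from multiprocessing import Process, Queue

        N = 5_000_000                                  # ~40 MB per float64 array

        def triad(q):
            a = np.zeros(N)
            b = np.ones(N)
            c = np.ones(N)
            start = time.perf_counter()
            for _ in range(5):
                a = b + 2.0 * c                        # streaming read/write traffic
            elapsed = time.perf_counter() - start
            approx_bytes = 5 * 3 * 8 * N               # per iteration: read b, read c, write a
            q.put(approx_bytes / elapsed / 1e9)        # apparent GB/s for this worker

        def run(workers):
            q = Queue()
            procs = [Process(target=triad, args=(q,)) for _ in range(workers)]
            for p in procs:
                p.start()
            rates = [q.get() for _ in procs]           # collect results before joining
            for p in procs:
                p.join()
            print(f"{workers} worker(s): {sum(rates):.1f} GB/s aggregate, "
                  f"{min(rates):.1f} GB/s per-worker minimum")

        if __name__ == "__main__":
            run(1)    # baseline: one core streams alone
            run(4)    # contention: per-worker bandwidth typically drops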

  7. Avi Purkayastha | NREL

    Science.gov Websites

    Austin, from 2001 to 2007. There he was a principal in HPC applications and user support, as well as in research and development in large-scale scientific applications and different HPC systems and technologies. Research interests: HPC applications performance and optimization; HPC systems and accelerator technologies; scientific...

  8. Coupled hydro-meteorological modelling on a HPC platform for high-resolution extreme weather impact study

    NASA Astrophysics Data System (ADS)

    Zhu, Dehua; Echendu, Shirley; Xuan, Yunqing; Webster, Mike; Cluckie, Ian

    2016-11-01

    Impact-focused studies of extreme weather require coupling accurate simulations of weather and climate systems with impact-measuring hydrological models, which themselves demand large computer resources. In this paper, we present a preliminary analysis of a high-performance computing (HPC)-based hydrological modelling approach, aimed at utilizing and maximizing HPC resources, to support the study of extreme weather impact due to climate change. Four case studies are presented through implementation on the HPC Wales platform of the UK mesoscale meteorological Unified Model (UM) with the high-resolution simulation suite UKV, alongside a Linux-based hydrological model, Hydrological Predictions for the Environment (HYPE). The results of this study suggest that the coupled hydro-meteorological model was still able to capture the major flood peaks, compared with conventional gauge- or radar-driven forecasts, but with the added value of much extended forecast lead time. The high-resolution rainfall estimation produced by the UKV performs similarly to radar rainfall products in the first 2-3 days of the tested flood events, but the uncertainties increase markedly as the forecast horizon goes beyond 3 days. This study takes a step toward identifying how an online-mode approach can be used, where both the numerical weather prediction and the hydrological model are executed simultaneously or on the same hardware infrastructure, so that more effective interaction and communication can be achieved and maintained between the models. The concluding observation, however, is that running the entire system on a reasonably powerful HPC platform does not yet allow real-time simulation, even without the most complex and demanding data simulation part.

  9. Strengthening LLNL Missions through Laboratory Directed Research and Development in High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willis, D. K.

    2016-12-01

    High performance computing (HPC) has been a defining strength of Lawrence Livermore National Laboratory (LLNL) since its founding. Livermore scientists have designed and used some of the world's most powerful computers to drive breakthroughs in nearly every mission area. Today, the Laboratory is recognized as a world leader in the application of HPC to complex science, technology, and engineering challenges. Most importantly, HPC has been integral to the National Nuclear Security Administration's (NNSA's) Stockpile Stewardship Program, designed to ensure the safety, security, and reliability of our nuclear deterrent without nuclear testing. A critical factor behind Lawrence Livermore's preeminence in HPC is the ongoing investment made by the Laboratory Directed Research and Development (LDRD) Program in cutting-edge concepts to enable efficient utilization of these powerful machines. Congress established the LDRD Program in 1991 to maintain the technical vitality of the Department of Energy (DOE) national laboratories. Since then, LDRD has been, and continues to be, an essential tool for exploring anticipated needs that lie beyond the planning horizon of our programs and for attracting the next generation of talented visionaries. Through LDRD, Livermore researchers can examine future challenges, propose and explore innovative solutions, and deliver creative approaches to support our missions. The present scientific and technical strengths of the Laboratory are, in large part, a product of past LDRD investments in HPC. Here, we provide seven examples of LDRD projects from the past decade that have played a critical role in building LLNL's HPC, computer science, mathematics, and data science research capabilities, and describe how they have impacted LLNL's mission.

  10. Internal curing of high-performance concrete for bridge decks.

    DOT National Transportation Integrated Search

    2013-03-01

    High performance concrete (HPC) provides a long lasting, durable concrete that is typically used in bridge decks due to its low permeability, high abrasion resistance, freeze-thaw resistance and strength. However, this type of concrete is highly susc...

  11. Power-Time Curve Comparison between Weightlifting Derivatives

    PubMed Central

    Suchomel, Timothy J.; Sole, Christopher J.

    2017-01-01

    This study examined the power production differences between weightlifting derivatives through a comparison of power-time (P-t) curves. Thirteen resistance-trained males performed hang power clean (HPC), jump shrug (JS), and hang high pull (HHP) repetitions at relative loads of 30%, 45%, 65%, and 80% of their one repetition maximum (1RM) HPC. Relative peak power (PPRel), work (WRel), and P-t curves were compared. The JS produced greater PPRel than the HPC (p < 0.001, d = 2.53) and the HHP (p < 0.001, d = 2.14). In addition, the HHP PPRel was statistically greater than the HPC (p = 0.008, d = 0.80). Similarly, the JS produced greater WRel compared to the HPC (p < 0.001, d = 1.89) and HHP (p < 0.001, d = 1.42). Furthermore, HHP WRel was statistically greater than the HPC (p = 0.003, d = 0.73). The P-t profiles of each exercise were similar during the first 80-85% of the movement; however, during the final 15-20% of the movement the P-t profile of the JS was found to be greater than the HPC and HHP. The JS produced greater PPRel and WRel compared to the HPC and HHP with large effect size differences. The HHP produced greater PPRel and WRel than the HPC with moderate effect size differences. The JS and HHP produced markedly different P-t profiles in the final 15-20% of the movement compared to the HPC. Thus, these exercises may be superior methods of training to enhance PPRel. The greatest differences in PPRel between the JS and HHP and the HPC occurred at lighter loads, suggesting that loads of 30-45% 1RM HPC may provide the best training stimulus when using the JS and HHP. In contrast, loads ranging 65-80% 1RM HPC may provide an optimal stimulus for power production during the HPC. Key points: The JS and HHP exercises produced greater relative peak power and relative work compared to the HPC. Although the power-time curves were similar during the first 80-85% of the movement, the JS and HHP possessed unique power-time characteristics during the final 15-20% of the movement compared to the HPC. The JS and HHP may be effectively implemented to train peak power characteristics, especially using loads ranging from 30-45% of an individual's 1RM HPC. The HPC may be best implemented using loads ranging from 65-80% of an individual's 1RM HPC. PMID:28912659
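
    A minimal sketch of how relative peak power and relative work can be computed from a sampled power-time curve, over the full movement or over just the final portion, using trapezoidal integration. This is illustrative only, with a synthetic curve and an assumed body mass; it is not the study's analysis pipeline.

        # Sketch: relative peak power and relative work from a sampled P-t curve.
        # Illustrative only; synthetic curve and assumed body mass, not the study's data.
        import numpy as np

        def pt_metrics(t, power, body_mass):
            """Relative peak power (W/kg) and relative work (J/kg) from a sampled P-t curve."""
            t = np.asarray(t, dtype=float)
            p_rel = np.asarray(power, dtype=float) / body_mass
            peak_power = float(p_rel.max())
            work = float(np.sum(0.5 * (p_rel[1:] + p_rel[:-1]) * np.diff(t)))  # trapezoidal integral of P dt
            return peak_power, work

        def final_phase(t, power, fraction=0.2):
            """Return the last `fraction` of the movement, e.g. the final 20%."""
            cut = int(len(t) * (1 - fraction))
            return t[cut:], power[cut:]

        if __name__ == "__main__":
            t = np.linspace(0.0, 0.5, 251)                    # a 500 ms movement sampled at 500 Hz
            power = 2000.0 * np.sin(np.pi * t / 0.5) ** 2     # synthetic bell-shaped power curve (W)
            print("full movement:", pt_metrics(t, power, body_mass=80.0))
            print("final 20%:   ", pt_metrics(*final_phase(t, power), body_mass=80.0))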

  12. Getting ready for petaflop capacities and beyond: a utility perspective

    NASA Astrophysics Data System (ADS)

    Hamelin, J. F.; Berthou, J. Y.

    2008-07-01

    Why should EDF, the leading producer and marketer of electricity in Europe, start adding teraflops to its terawatt-hours and become involved in high-performance computing (HPC)? In this paper we answer this question through examples of major opportunities that HPC brings to our business today and, we hope, well into the future of petaflop and exaflop computing. Five cases are presented, dealing with nondestructive testing, nuclear fuel management, mechanical behavior of nuclear fuel assemblies, water management, and energy management. For each case we show the benefits brought by HPC, describe the current level of numerical simulation performance, and discuss the perspectives for future steps. We also present the general background that explains why EDF is moving to this technology and briefly comment on the development of user-oriented simulation platforms.

  13. HPC enabled real-time remote processing of laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Sapra, Karan; Izard, Ryan; Duffy, Edward; Smith, Melissa C.; Wang, Kuang-Ching; Kwartowitz, David M.

    2016-03-01

    Laparoscopic surgery is a minimally invasive surgical technique. The benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures. One particular laparoscopic system is the daVinci-si robotic surgical system. Its video streams generate approximately 360 megabytes of data per second. Real-time processing of this large stream of data on a bedside PC, in a single- or dual-node setup, has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second rate, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual NVIDIA K40 GPUs per node. Our computing framework will also enable reliability using replication of computation. We securely transfer the files to remote HPC clusters utilizing an OpenFlow-based network service, Steroid OpenFlow Service (SOS), which can increase the performance of large data transfers over long-distance and high-bandwidth networks. As a result, utilizing a high-speed OpenFlow-based network to access computing clusters with GPUs will improve surgical procedures by providing real-time medical image processing of laparoscopic data.
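
    The quoted figures are mutually consistent: at the stated frame size and frame rate,

        \[ 11.9\ \mathrm{MB/frame} \times 30\ \mathrm{frames/s} \approx 357\ \mathrm{MB/s} \approx 360\ \mathrm{MB/s}, \qquad \tfrac{1}{30}\ \mathrm{s} \approx 33\ \mathrm{ms}, \]

    which is the per-frame processing and round-trip budget that remote real-time processing must meet.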

  14. Cognitive Model Exploration and Optimization: A New Challenge for Computational Science

    DTIC Science & Technology

    2010-03-01

    the generation and analysis of computational cognitive models to explain various aspects of cognition. Typically the behavior of these models...computational scale of a workstation, so we have turned to high performance computing (HPC) clusters and volunteer computing for large-scale...computational resources. The majority of applications on the Department of Defense HPC clusters focus on solving partial differential equations (Post

  15. HPC Access Using KVM over IP

    DTIC Science & Technology

    2007-06-08

    Lightwave VDE/200 KVM-over-Fiber (Keyboard, Video and Mouse) devices installed throughout the TARDEC campus. Implementation of this system required...development effort through the pursuit of an Army-funded Phase-II Small Business Innovative Research (SBIR) effort with IP Video Systems (formerly known as...visualization capabilities of a DoD High-Performance Computing facility, many advanced features are necessary. TARDEC-HPC's SBIR with IP Video Systems

  16. Synergistic effect of Nitrogen-doped hierarchical porous carbon/graphene with enhanced catalytic performance for oxygen reduction reaction

    NASA Astrophysics Data System (ADS)

    Kong, Dewang; Yuan, Wenjing; Li, Cun; Song, Jiming; Xie, Anjian; Shen, Yuhua

    2017-01-01

    Developing efficient and economical catalysts for the oxygen reduction reaction (ORR) is important to promote the commercialization of fuel cells. Here, we report a simple and environmentally friendly method to prepare nitrogen (N)-doped hierarchical porous carbon (HPC)/reduced graphene oxide (RGO) composites by reusing waste biomass (pomelo peel) coupled with graphene oxide (GO). This method is green and low-cost and uses no acid or alkali activator. The typical sample (N-HPC/RGO-1) contains 5.96 at.% nitrogen and has a large BET surface area (1194 m2/g). Electrochemical measurements show that N-HPC/RGO-1 exhibits not only a relatively positive onset potential and high current density, but also considerable methanol tolerance and long-term durability in alkaline as well as acidic media. The electron transfer number is close to 4, indicating that the ORR proceeds mostly via a four-electron pathway. The excellent catalytic performance of N-HPC/RGO-1 is due to the synergistic effect of the inherent interwoven network structure of the HPC, the good electrical conductivity of the RGO, and the heteroatom doping of the composite. More importantly, this work demonstrates a good example of turning discarded rubbish into valuable functional products and simultaneously addresses the disposal issue of waste biomass for a cleaner environment.
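
    The electron transfer number n for the ORR is commonly estimated from rotating-disk electrode data via the Koutecky-Levich relation (the abstract does not state which method was used here):

        \[ \frac{1}{j} = \frac{1}{j_k} + \frac{1}{B\,\omega^{1/2}}, \qquad B = 0.62\, n F C_0 D_0^{2/3} \nu^{-1/6}, \]

    where j is the measured current density, j_k the kinetic current density, \omega the electrode rotation rate, F the Faraday constant, C_0 and D_0 the bulk concentration and diffusion coefficient of dissolved O2, and \nu the kinematic viscosity of the electrolyte; n close to 4 indicates the four-electron pathway.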

  17. Running Jobs on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    Learn about running jobs on the Peregrine high-performance computing (HPC) system: running different types of jobs, batch job scheduling policies (queue names, limits, etc.), requesting different node types, and sample batch scripts.

  18. Anterior hippocampal dysconnectivity in posttraumatic stress disorder: a dimensional and multimodal approach.

    PubMed

    Abdallah, C G; Wrocklage, K M; Averill, C L; Akiki, T; Schweinsburg, B; Roy, A; Martini, B; Southwick, S M; Krystal, J H; Scott, J C

    2017-02-28

    The anterior hippocampus (aHPC) has a central role in the regulation of anxiety-related behavior, stress response, emotional memory and fear. However, little is known about the presence and extent of aHPC abnormalities in posttraumatic stress disorder (PTSD). In this study, we used a multimodal approach, along with graph-based measures of global brain connectivity (GBC) termed functional GBC with global signal regression (f-GBCr) and diffusion GBC (d-GBC), in combat-exposed US Veterans with and without PTSD. Seed-based aHPC anatomical connectivity analyses were also performed. A whole-brain voxel-wise data-driven investigation revealed a significant association between elevated PTSD symptoms and reduced medial temporal f-GBCr, particularly in the aHPC. Similarly, aHPC d-GBC negatively correlated with PTSD severity. Both functional and anatomical aHPC dysconnectivity measures remained significant after controlling for hippocampal volume, age, gender, intelligence, education, combat severity, depression, anxiety, medication status, traumatic brain injury and alcohol/substance comorbidities. Depression-like PTSD dimensions were associated with reduced connectivity in the ventromedial and dorsolateral prefrontal cortex. In contrast, hyperarousal symptoms were positively correlated with ventromedial and dorsolateral prefrontal connectivity. We believe the findings provide first evidence of functional and anatomical dysconnectivity in the aHPC of veterans with high PTSD symptomatology. The data support the putative utility of aHPC connectivity as a measure of overall PTSD severity. Moreover, prefrontal global connectivity may be of clinical value as a brain biomarker to potentially distinguish between PTSD subgroups.

  19. Fingerprinting Communication and Computation on HPC Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean

    2010-06-02

    How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
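
    One simple way to realize the idea of a communication-based fingerprint is to reduce a ranks-by-ranks MPI traffic matrix to a normalized vector and compare codes by cosine similarity. The sketch below is an assumption-laden illustration of that idea, not the features or method used in the paper.

        # Sketch: compare normalized MPI communication-matrix "fingerprints" with
        # cosine similarity. Illustrative assumption-based example, not the paper's
        # actual features or classification method.
        import numpy as np

        def fingerprint(comm_matrix):
            """Flatten and L2-normalize a ranks-by-ranks matrix of bytes or message counts."""
            v = np.asarray(comm_matrix, dtype=float).ravel()
            norm = np.linalg.norm(v)
            return v / norm if norm else v

        def similarity(fp_a, fp_b):
            """Cosine similarity between two fingerprints of the same shape."""
            return float(np.dot(fp_a, fp_b))

        if __name__ == "__main__":
            ring = np.roll(np.eye(8), 1, axis=1) * 1e6          # nearest-neighbor (ring) traffic
            alltoall = (np.ones((8, 8)) - np.eye(8)) * 1e5      # all-to-all traffic
            print(similarity(fingerprint(ring), fingerprint(ring * 3)))   # same pattern, scaled: 1.0
            print(similarity(fingerprint(ring), fingerprint(alltoall)))   # different pattern: ~0.38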

  20. Data Transfer Study HPSS Archiving

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wynne, James; Parete-Koon, Suzanne T; Mitchell, Quinn

    2015-01-01

    The movement of the large amounts of data produced by codes run in a High Performance Computing (HPC) environment can be a bottleneck for project workflows. To balance filesystem capacity and performance requirements, HPC centers enforce data management policies that purge old files to make room for new computation and analysis results. Users at the Oak Ridge Leadership Computing Facility (OLCF) and many other HPC user facilities must archive data to avoid data loss during purges, so the time associated with data movement for archiving is something that all users must consider. This study observed the difference in transfer speed from the originating location on the Lustre filesystem to the more permanent High Performance Storage System (HPSS). The tests were done with a number of different transfer methods for files that spanned a variety of sizes and compositions that reflect OLCF user data. These data will be used to help users of Titan and other Cray supercomputers plan their workflows and data transfers so that they are most efficient for their projects. We will also discuss best practices for maintaining data at shared user facilities.

  1. Analysis of Application Power and Schedule Composition in a High Performance Computing Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmore, Ryan; Gruchalla, Kenny; Phillips, Caleb

    As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate the energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage this data to understand the practical limits on predicting key power use metrics at the time of submission.
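
    A minimal illustration of peak-flattening by reordering: greedily assign the most power-hungry jobs first, each to the currently least-loaded time slot. This heuristic, the per-job power figures, and the slot model are illustrative assumptions, not the reordering procedure used in the paper.

        # Greedy sketch: assign jobs to time slots so that peak power is flattened
        # (largest consumers first, each into the currently least-loaded slot).
        # Illustrative heuristic and data only; not the paper's reordering procedure.
        import heapq

        def flatten_peak(job_powers_kw, n_slots):
            slots = [(0.0, s, []) for s in range(n_slots)]      # (total kW, slot id, jobs)
            heapq.heapify(slots)
            for power in sorted(job_powers_kw, reverse=True):
                total, s, assigned = heapq.heappop(slots)       # least-loaded slot so far
                assigned.append(power)
                heapq.heappush(slots, (total + power, s, assigned))
            return sorted(slots, key=lambda slot: slot[1])

        if __name__ == "__main__":
            jobs_kw = [120, 80, 75, 60, 45, 40, 30, 20]         # assumed per-job average draw (kW)
            for total, s, assigned in flatten_peak(jobs_kw, n_slots=3):
                print(f"slot {s}: {total:.0f} kW -> {assigned}")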

  2. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.

    The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and to diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
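
    A small sketch of the graph-synthesis idea: jobs become vertices, and edge weights combine factors such as shared compute nodes, temporal proximity, and shared users. The toy records, the weighting scheme, and the use of networkx are assumptions for illustration; the report's ontology and semantic graph are considerably richer.

        # Sketch: synthesize a weighted job-relationship graph from queue/execution
        # records using networkx. Toy records and weighting are assumptions; the
        # report's ontology and semantic graph are considerably richer.
        import networkx as nx

        jobs = {  # toy job records: compute nodes used, start time (hours), user
            "job1": {"nodes": {"n01", "n02"}, "start": 0.0, "user": "alice"},
            "job2": {"nodes": {"n02", "n03"}, "start": 0.5, "user": "bob"},
            "job3": {"nodes": {"n07"},        "start": 9.0, "user": "alice"},
        }

        G = nx.Graph()
        G.add_nodes_from(jobs)
        for a in jobs:
            for b in jobs:
                if a < b:
                    shared = len(jobs[a]["nodes"] & jobs[b]["nodes"])     # shared compute nodes
                    same_user = jobs[a]["user"] == jobs[b]["user"]
                    closeness = 1.0 / (1.0 + abs(jobs[a]["start"] - jobs[b]["start"]))
                    if shared or same_user:                               # related jobs get an edge
                        G.add_edge(a, b, weight=shared + closeness + float(same_user))

        for a, b, data in G.edges(data=True):
            print(a, b, round(data["weight"], 2))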

  3. Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Samatova, Nagiza; Wu, Kesheng

    This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures comprised of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives advances in our performance-energy tradeoff analysis framework, which enables our data analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.

  4. High-Throughput and Low-Latency Network Communication with NetIO

    NASA Astrophysics Data System (ADS)

    Schumacher, Jörn; Plessl, Christian; Vandelli, Wainer

    2017-10-01

    HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS exclusively target the HPC community and are not well suited for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs, but it requires a non-negligible effort and expert knowledge. At the same time, message services like ZeroMQ have gained popularity in the HEP community. They make it possible to build distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on top of Infiniband and OmniPath, this approach may not be very efficient compared to a direct use of native APIs. NetIO is a simple, novel asynchronous message service that can operate on Ethernet, Infiniband and similar network fabrics. In this paper the design and implementation of NetIO are presented and described, and its use is evaluated in comparison to other approaches. NetIO supports different high-level programming models and typical workloads of HEP applications. The ATLAS FELIX project [1] successfully uses NetIO as its central communication platform. The architecture of NetIO is described in this paper, including the user-level API and the internal data-flow design. The paper includes a performance evaluation of NetIO including throughput and latency measurements. The performance is compared against the state-of-the-art ZeroMQ message service. Performance measurements are performed in a lab environment with Ethernet and FDR Infiniband networks.
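
    For reference, the following is a minimal example of the high-level, TCP/IP-based messaging style (here ZeroMQ via pyzmq, push/pull on an assumed local port) that the paper compares NetIO against; NetIO's own API is not reproduced here.

        # Minimal ZeroMQ (pyzmq) push/pull example of the high-level, TCP/IP-based
        # messaging style the paper compares NetIO against. The local port is an
        # arbitrary assumption; NetIO's own API is not reproduced here.
        import zmq

        ctx = zmq.Context()

        pull = ctx.socket(zmq.PULL)           # receiver (e.g., an event-builder process)
        pull.bind("tcp://127.0.0.1:5555")

        push = ctx.socket(zmq.PUSH)           # sender (e.g., a detector readout process)
        push.connect("tcp://127.0.0.1:5555")

        push.send(b"event-fragment-0001")     # fire-and-forget message over TCP/IP
        print(pull.recv())                    # b'event-fragment-0001'

        push.close()
        pull.close()
        ctx.term()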

  5. Differential Acetylcholine Release in the Prefrontal Cortex and Hippocampus During Pavlovian Trace and Delay Conditioning

    PubMed Central

    Flesher, M. Melissa; Butt, Allen E.; Kinney-Hurd, Brandee L.

    2011-01-01

    Pavlovian trace conditioning critically depends on the medial prefrontal cortex (mPFC) and hippocampus (HPC), whereas delay conditioning does not depend on these brain structures. Given that the cholinergic basal forebrain system modulates activity in both the mPFC and HPC, it was reasoned that the level of acetylcholine (ACh) release in these regions would show distinct profiles during testing in trace and delay conditioning paradigms. To test this assumption, microdialysis probes were implanted unilaterally into the mPFC and HPC of rats that were pre-trained in appetitive trace and delay conditioning paradigms using different conditional stimuli in the two tasks. On the day of microdialysis testing, dialysate samples were collected during a quiet baseline interval before trials were initiated, and again during performance in separate blocks of trace and delay conditioning trials in each animal. ACh levels were quantified using high performance liquid chromatography and electrochemical detection techniques. Consistent with our hypothesis, results showed that ACh release in the mPFC was greater during trace conditioning than during delay conditioning. The level of ACh released during trace conditioning in the HPC was also greater than the levels observed during delay conditioning. While ACh efflux in both the mPFC and HPC selectively increased during trace conditioning, ACh levels in the mPFC during trace conditioning testing showed the greatest increases observed. These results demonstrate a dissociation in cholinergic activation of the mPFC and HPC during performance in trace but not delay appetitive conditioning, where this cholinergic activity may contribute to attentional mechanisms, adaptive response timing, or memory consolidation necessary for successful trace conditioning. PMID:21514394

  6. Directional hippocampal-prefrontal interactions during working memory.

    PubMed

    Liu, Tiaotiao; Bai, Wenwen; Xia, Mi; Tian, Xin

    2018-02-15

    Working memory refers to a system that is essential for performing complex cognitive tasks such as reasoning, comprehension and learning. Evidence shows that the hippocampus (HPC) and prefrontal cortex (PFC) play important roles in working memory. The HPC-PFC interaction via theta-band oscillatory synchronization is critical for successful execution of working memory. However, whether one brain region is leading or lagging relative to the other is still unclear. Therefore, in the present study, we simultaneously recorded local field potentials (LFPs) from rat ventral hippocampus (vHPC) and medial prefrontal cortex (mPFC) while the rats performed a Y-maze working memory task. We then applied an instantaneous-amplitude cross-correlation method to calculate the time lag between mPFC and vHPC to explore the functional dynamics of the HPC-PFC interaction. Our results showed that a strong lead from vHPC to mPFC preceded an animal's correct choice during the working memory task. These findings suggest that the vHPC-leading interaction contributes to the successful execution of working memory. Copyright © 2017. Published by Elsevier B.V.
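
    A hedged sketch of the lead/lag idea: cross-correlate the instantaneous amplitude envelopes of two recordings and report the lag that maximizes the correlation, with a positive lag meaning the first signal leads. The sampling rate, Hilbert-envelope extraction, and synthetic theta-band traces below are assumptions for illustration, not the study's exact pipeline.

        # Sketch: estimate lead/lag between two recordings by cross-correlating their
        # instantaneous amplitude envelopes. Sampling rate, Hilbert envelopes and the
        # synthetic theta-band traces are illustrative assumptions, not the study's
        # exact pipeline.
        import numpy as np
        from scipy.signal import hilbert

        FS = 1000.0                                    # assumed sampling rate (Hz)

        def lag_ms(x, y, max_lag_ms=100):
            """Positive result: x leads y (milliseconds)."""
            ax = np.abs(hilbert(x))                    # instantaneous amplitude envelopes
            ay = np.abs(hilbert(y))
            ax = (ax - ax.mean()) / ax.std()
            ay = (ay - ay.mean()) / ay.std()
            max_lag = int(max_lag_ms * FS / 1000)
            lags = np.arange(-max_lag, max_lag + 1)
            xcorr = [np.mean(ax[max(0, -k):len(ax) - max(0, k)] *
                             ay[max(0, k):len(ay) - max(0, -k)]) for k in lags]
            return lags[int(np.argmax(xcorr))] * 1000.0 / FS

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            t = np.arange(0, 2, 1 / FS)
            theta = np.sin(2 * np.pi * 8 * t) * (1 + 0.5 * np.sin(2 * np.pi * 2 * t))
            vhpc = theta + 0.05 * rng.standard_normal(t.size)
            mpfc = np.roll(vhpc, 30) + 0.05 * rng.standard_normal(t.size)  # delayed copy: 30 ms lag
            print(lag_ms(vhpc, mpfc))                  # approximately +30: vHPC leads mPFC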

  7. High-performance asymmetric supercapacitors based on multilayer MnO2 /graphene oxide nanoflakes and hierarchical porous carbon with enhanced cycling stability.

    PubMed

    Zhao, Yufeng; Ran, Wei; He, Jing; Huang, Yizhong; Liu, Zhifeng; Liu, Wei; Tang, Yongfu; Zhang, Long; Gao, Dawei; Gao, Faming

    2015-03-18

    In this work, MnO(2)/GO (graphene oxide) composites with a novel multilayer nanoflake structure, and a carbon material derived from Artemia cyst shells with a genetic 3D hierarchical porous structure (HPC), are prepared. An asymmetric supercapacitor has been fabricated using MnO(2)/GO as the positive electrode and HPC as the negative electrode material. Because of their unique structures, both the MnO(2)/GO composites and the HPC exhibit excellent electrochemical performance. The optimized asymmetric supercapacitor can be cycled reversibly over the high voltage range of 0-2 V in aqueous electrolyte and exhibits a maximum energy density of 46.7 Wh kg(-1) at a power density of 100 W kg(-1), which remains at 18.9 Wh kg(-1) at 2000 W kg(-1). Additionally, the device also shows superior long cycle life, with ∼100% capacitance retention after 1000 cycles and ∼93% after 4000 cycles. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Towards anatomic scale agent-based modeling with a massively parallel spatially explicit general-purpose model of enteric tissue (SEGMEnT_HPC).

    PubMed

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineering effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic-scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.
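
    To make the notion of an agent-based tissue model concrete, here is a deliberately tiny sketch in which each occupied grid site is a cell agent that may die or place a daughter in a neighboring site each step. This is purely illustrative, with arbitrary rates and grid size; SEGMEnT_HPC models far richer, cell-for-cell biology at anatomic scale on HPC resources.

        # Deliberately tiny agent-based sketch: each occupied grid site is a cell agent
        # that may die or place a daughter in a neighboring site each step. Purely
        # illustrative; rates and grid size are arbitrary assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        grid = rng.random((50, 50)) < 0.5              # True = site occupied by a cell agent

        def step(grid, p_divide=0.05, p_die=0.03):
            new = grid.copy()
            for r, c in np.argwhere(grid):
                if rng.random() < p_die:
                    new[r, c] = False                  # cell death leaves a gap
                elif rng.random() < p_divide:
                    dr, dc = rng.integers(-1, 2, size=2)                           # pick a neighbor
                    new[(r + dr) % grid.shape[0], (c + dc) % grid.shape[1]] = True  # daughter cell
            return new

        if __name__ == "__main__":
            for t in range(10):
                grid = step(grid)
                print(f"step {t}: {int(grid.sum())} cells")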

  9. IN13B-1660: Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Technical Reports Server (NTRS)

    Chaudhary, Aashish; Votava, Petr; Nemani, Ramakrishna R.; Michaelis, Andrew; Kotfila, Chris

    2016-01-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging HPC and the cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines for both the production process and the data products, and to share results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics or visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD) project, for which we are developing a new QA pipeline for the 25 PB system.

  10. Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Votava, P.; Nemani, R. R.; Michaelis, A.; Kotfila, C.

    2016-12-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging HPC and the cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines for both the production process and the data products, and to share results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics or visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD) project, for which we are developing a new QA pipeline for the 25 PB system.

  11. Implementation program on high performance concrete: guidelines for instrumentation on bridges

    DOT National Transportation Integrated Search

    1996-08-01

    This report provides an outline for the instrumentation of bridges being constructed under the Federal Highway Administration's (FHWA's) Strategic Highway Research Program (SHRP) implementation effort in High Performance Concrete (HPC). The report de...

  12. High Performance Concrete (HPC) bridge project for SR 43.

    DOT National Transportation Integrated Search

    2012-10-01

    The objective of this research was to develop and test high performance concrete mixtures, made of locally available materials, having : durability characteristics that far exceed those of conventional concrete mixtures. Based on the results from the...

  13. Harvey Mudd 2014-2015 Computer Science Conduit Clinic Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aspesi, G; Bai, J; Deese, R

    2015-05-12

    Conduit, a new open-source library developed at Lawrence Livermore National Laboratory, provides a C++ application programming interface (API) to describe and access scientific data. Conduit's primary use is for in-memory data exchange in high performance computing (HPC) applications. Our team tested and improved Conduit to make it more appealing to potential adopters in the HPC community. We extended Conduit's capabilities by prototyping four libraries: one for parallel communication using MPI, one for I/O functionality, one for aggregating performance data, and one for data visualization.

  14. An Innovative Approach to Bridge a Skill Gap and Grow a Workforce Pipeline: The Computer System, Cluster, and Networking Summer Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie

    Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long-standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next-generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.

  15. An Innovative Approach to Bridge a Skill Gap and Grow a Workforce Pipeline: The Computer System, Cluster, and Networking Summer Institute

    DOE PAGES

    Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie; ...

    2016-11-01

    Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long-standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next-generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.

  16. Computational Science News | Computational Science | NREL

    Science.gov Websites

    -Cooled High-Performance Computing Technology at the ESIF (February 28, 2018). NREL Launches New Website for High-Performance Computing System Users: The National Renewable Energy Laboratory (NREL) Computational Science Center has launched a revamped website for users of the lab's high-performance computing (HPC) systems.

  17. High-Performance Computing Data Center Warm-Water Liquid Cooling | Computational Science | NREL

    Science.gov Websites

    NREL's High-Performance Computing Data Center (HPC Data Center) is liquid cooled with warm water. Liquid cooling technologies offer a more energy-efficient solution that also allows for effective...

  18. High performance concrete in a bridge in Richlands, Virginia

    DOT National Transportation Integrated Search

    1999-09-01

    The Virginia Department of Transportation built a high-performance concrete (HPC) bridge with high-strength and low-permeability concrete in Richlands. The beams had a minimum compressive strength of 69 MPa (10,000 psi) at 28 days and large, 15 mm (0...

  19. Toward performance portability of the Albany finite element analysis code using the Kokkos library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.

    Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA Graphics Processing Units (GPUs), Intel Xeon Phis, and multicore CPUs.

  20. Toward performance portability of the Albany finite element analysis code using the Kokkos library

    DOE PAGES

    Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.; ...

    2018-02-05

    Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA Graphics Processing Units (GPUs), Intel Xeon Phis, and multicore CPUs.

  1. Hierarchical Pore-Patterned Carbon Electrodes for High-Volumetric Energy Density Micro-Supercapacitors.

    PubMed

    Kim, Cheolho; Moon, Jun Hyuk

    2018-06-13

    Micro-supercapacitors (MSCs) are attractive for applications in next-generation mobile and wearable devices and have the potential to complement or even replace lithium batteries. However, many previous MSCs have often exhibited a low volumetric energy density with high-loading electrodes because of the nonuniform pore structure of the electrodes. To address this issue, we introduced a uniform-pore carbon electrode fabricated by 3D interference lithography. Furthermore, a hierarchical pore-patterned carbon (hPC) electrode was formed by introducing micropores into the macroporous carbon skeleton by chemical etching. The hPC electrodes were applied to solid-state MSCs. We achieved a constant volumetric capacitance and a corresponding volumetric energy density for electrodes of various thicknesses. The hPC MSC reached a volumetric energy density of approximately 1.43 mW h/cm³. The power density of the hPC MSC was 1.69 W/cm³. We could additionally control the capacitance and voltage by connecting the unit MSC cells in series or parallel, and we confirmed the operation of a light-emitting diode. We believe that our pore-patterned electrodes will provide a new platform for compact but high-performance energy storage devices.

  2. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we explore the feasibility of porting a Particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  3. System Connection via SSH Gateway | High-Performance Computing | NREL

    Science.gov Websites

    Connect via SSH to <username>@peregrine.hpc.nrel.gov. First time logging in? If this is the first time you've logged in with your new account, you will be asked to set a password. You will be prompted to enter it a second time, then you will be logged off. Just reconnect with your new HPC password. To change your HPC password at any time, you can simply use the passwd command. Remote users: If you're connecting...
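
    For readers who script such connections, the hedged sketch below shows one way to reach an SSH gateway like the one named above from Python using the third-party paramiko library. The username, password, and the commands run are placeholders; real logins may additionally require a one-time-password token, and an interactive command such as passwd would need an interactive channel rather than exec_command.

      # Hedged sketch: connecting to an HPC SSH gateway with paramiko.
      # Credentials below are placeholders; adapt to your site's login policy.
      import paramiko

      host = "peregrine.hpc.nrel.gov"   # gateway named in the record above

      client = paramiko.SSHClient()
      client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
      client.connect(host, username="your_hpc_username", password="your_hpc_password")

      # Run a simple, non-interactive sanity check on the login node.
      stdin, stdout, stderr = client.exec_command("hostname && id")
      print(stdout.read().decode())
      client.close()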

  4. Alkylphosphocholines: influence of structural variation on biodistribution at antineoplastically active concentrations.

    PubMed

    Kötting, J; Berger, M R; Unger, C; Eibl, H

    1992-01-01

    Hexadecylphosphocholine (HPC) and octadecylphosphocholine (OPC) show very potent antitumor activity against autochthonous methylnitrosourea-induced mammary carcinomas in rats. The longer-chain and unsaturated homologue erucylphosphocholine (EPC) forms lamellar structures rather than micelles, but nonetheless exhibits antineoplastic activity. Methylnitrosourea was used in the present study to induce autochthonous mammary carcinomas in virgin Sprague-Dawley rats. At 6 and 11 days following oral therapy, the biodistribution of HPC, OPC and EPC was analyzed in the serum, tumor, liver, kidney, lung, small intestine, brain and spleen of rats by high-performance thin-layer chromatography. In contrast to the almost identical tumor response noted, the distribution of the three homologues differed markedly. The serum levels of 50 nmol/ml obtained for OPC and EPC were much lower than the value of 120 nmol/ml measured for HPC. Nevertheless, the quite different serum levels resulted in similar tumor concentrations of about 200 nmol/g for all three of the compounds. Whereas HPC preferably accumulated in the kidney (1 μmol/g), OPC was found at increased concentrations (400 nmol/g) in the spleen, kidney and lung. In spite of the high daily dose of 120 μmol/kg EPC as compared with 51 μmol/kg HPC or OPC, EPC concentrations (100-200 nmol/g) were low in most tissues. High EPC concentrations were found in the small intestine (628 nmol/g). Values of 170 nmol/g were found for HPC and OPC in the brain, whereas the EPC concentration was 120 nmol/g. Obviously, structural modifications in the alkyl chain strongly influence the distribution pattern of alkylphosphocholines in animals. Since EPC yielded the highest tissue-to-serum concentration ratio in tumor tissue (5.1) and the lowest levels in other organs, we conclude that EPC is the most promising candidate for drug development in cancer therapy.

  5. Automating NEURON Simulation Deployment in Cloud Resources.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Compute Cloud, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.
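
    The key idea in this record is a single interface that can recruit HPC, cloud, and local resources at once. The sketch below is a conceptual Python illustration of that idea only; it is not NeuroManager's actual API, and every class and method name here is hypothetical.

      # Hypothetical sketch of a common submission interface over several
      # kinds of compute resources; not the actual NeuroManager API.
      from abc import ABC, abstractmethod

      class SimulationBackend(ABC):
          @abstractmethod
          def submit(self, script_path: str) -> str:
              """Submit a NEURON job script; return a backend-specific job id."""

      class SlurmBackend(SimulationBackend):
          def submit(self, script_path):
              # A real implementation would shell out to e.g. `sbatch`.
              print(f"[slurm] sbatch {script_path}")
              return "slurm-12345"

      class CloudBackend(SimulationBackend):
          def __init__(self, provider):
              self.provider = provider  # e.g. "openstack" or "ec2"

          def submit(self, script_path):
              # A real implementation would provision a VM with the provider's
              # SDK, upload the script, and start the simulation.
              print(f"[{self.provider}] boot VM, upload {script_path}, run it")
              return f"{self.provider}-vm-001"

      def run_everywhere(backends, script_path):
          """Recruit several resources at once through one interface."""
          return {type(b).__name__: b.submit(script_path) for b in backends}

      jobs = run_everywhere([SlurmBackend(), CloudBackend("ec2")], "model.hoc")
      print(jobs)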

  6. Automating NEURON Simulation Deployment in Cloud Resources

    PubMed Central

    Santamaria, Fidel

    2016-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Compute Cloud, based on Amazon’s proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model. PMID:27655341

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agelastos, Anthony; Allan, Benjamin; Brandt, Jim

    A detailed understanding of HPC applications’ resource needs and their complex interactions with each other and with HPC platform resources is critical to achieving scalability and performance. Such understanding has been difficult to achieve because typical application profiling tools do not capture the behaviors of codes under the potentially wide spectrum of actual production conditions and because typical monitoring tools do not capture system resource usage information with high enough fidelity to gain sufficient insight into application performance and demands. In this paper we present both system and application profiling results based on data obtained through synchronized system-wide monitoring on a production HPC cluster at Sandia National Laboratories (SNL). We demonstrate analytic and visualization techniques that we are using to characterize application and system resource usage under production conditions for better understanding of application resource needs. Furthermore, our goals are to improve application performance (through understanding application-to-resource mapping and system throughput) and to ensure that future system capabilities match their intended workloads.

  8. Behavior of high-performance concrete in structural applications.

    DOT National Transportation Integrated Search

    2007-10-01

    High Performance Concrete (HPC) with improved properties has been developed by obtaining the maximum density of the matrix. Mathematical models developed by J.E. Funk and D.R. Dinger are used to determine the particle size distribution to achieve th...

  9. High pressure homogenization to improve the stability of casein - hydroxypropyl cellulose aqueous systems.

    PubMed

    Ye, Ran; Harte, Federico

    2014-03-01

    The effect of high pressure homogenization on the improvement of the stability of hydroxypropyl cellulose (HPC) and micellar casein systems was investigated. HPC with two molecular weights (80 and 1150 kDa) and micellar casein were mixed in water to a concentration leading to phase separation (0.45% w/v HPC and 3% w/v casein) and immediately subjected to high pressure homogenization ranging from 0 to 300 MPa, in 100 MPa increments. The various dispersions were evaluated for stability, particle size, turbidity, protein content, and viscosity over a period of two weeks, and by Scanning Transmission Electron Microscopy (STEM) at the end of the storage period. The stability of casein-HPC complexes was enhanced with increasing homogenization pressure, especially for the complex containing high molecular weight HPC. The apparent particle size of the complexes was reduced from ~200 nm to ~130 nm when using 300 MPa, corresponding to the sharp decrease in absorbance when compared to the non-homogenized controls. High pressure homogenization reduced the viscosity of HPC-casein complexes regardless of the molecular weight of HPC, and STEM images revealed aggregates consistent with nano-scale protein-polysaccharide interactions.

  10. High pressure homogenization to improve the stability of casein - hydroxypropyl cellulose aqueous systems

    PubMed Central

    Ye, Ran; Harte, Federico

    2013-01-01

    The effect of high pressure homogenization on the improvement of the stability of hydroxypropyl cellulose (HPC) and micellar casein systems was investigated. HPC with two molecular weights (80 and 1150 kDa) and micellar casein were mixed in water to a concentration leading to phase separation (0.45% w/v HPC and 3% w/v casein) and immediately subjected to high pressure homogenization ranging from 0 to 300 MPa, in 100 MPa increments. The various dispersions were evaluated for stability, particle size, turbidity, protein content, and viscosity over a period of two weeks, and by Scanning Transmission Electron Microscopy (STEM) at the end of the storage period. The stability of casein-HPC complexes was enhanced with increasing homogenization pressure, especially for the complex containing high molecular weight HPC. The apparent particle size of the complexes was reduced from ~200 nm to ~130 nm when using 300 MPa, corresponding to the sharp decrease in absorbance when compared to the non-homogenized controls. High pressure homogenization reduced the viscosity of HPC-casein complexes regardless of the molecular weight of HPC, and STEM images revealed aggregates consistent with nano-scale protein-polysaccharide interactions. PMID:24159250

  11. Porous Carbon with Willow-Leaf-Shaped Pores for High-Performance Supercapacitors.

    PubMed

    Shi, Yanhong; Zhang, Linlin; Schon, Tyler B; Li, Huanhuan; Fan, Chaoying; Li, Xiaoying; Wang, Haifeng; Wu, Xinglong; Xie, Haiming; Sun, Haizhu; Seferos, Dwight S; Zhang, Jingping

    2017-12-13

    A novel kind of biomass-derived, high-oxygen-containing carbon material doped with nitrogen that has willow-leaf-shaped pores was synthesized. The obtained carbon material has an exotic hierarchical pore structure composed of bowl-shaped macropores, willow-leaf-shaped pores, and an abundance of micropores. This unique hierarchical porous structure provides an effective combination of high current densities and high capacitance because of a pseudocapacitive component that is afforded by the introduction of nitrogen and oxygen dopants. Our synthetic optimization allows further improvements in the performance of this hierarchical porous carbon (HPC) material by providing a high degree of control over the graphitization degree, specific surface area, and pore volume. As a result, a large specific surface area (1093 m² g⁻¹) and pore volume (0.8379 cm³ g⁻¹) are obtained for HPC-650, which affords fast ion transport because of its short ion-diffusion pathways. HPC-650 exhibits a high specific capacitance of 312 F g⁻¹ at 1 A g⁻¹, retaining 76.5% of its capacitance at 20 A g⁻¹. Moreover, it delivers an energy density of 50.2 W h kg⁻¹ at a power density of 1.19 kW kg⁻¹, which is sufficient to power a yellow-light-emitting diode and operate a commercial scientific calculator.

  12. Use of high performance, high strength concrete (HPC) bulb-tee girders saves millions on I-10 twin span bridge in New Orleans district.

    DOT National Transportation Integrated Search

    2005-01-01

    History: LADOTD has been gradually introducing high performance, high strength concrete into its bridge construction program. At the same time, LTRC has been sponsoring research work to address design and construction issues related to the utilizatio...

  13. Towards Anatomic Scale Agent-Based Modeling with a Massively Parallel Spatially Explicit General-Purpose Model of Enteric Tissue (SEGMEnT_HPC)

    PubMed Central

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis. PMID:25806784

  14. Real-time measurements of temperature, pressure and moisture profiles in High-Performance Concrete exposed to high temperatures during neutron radiography imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toropovs, N., E-mail: nikolajs.toropovs@rtu.lv; Riga Technical University, Institute of Materials and Structures, Riga; Lo Monte, F.

    2015-02-15

    High-Performance Concrete (HPC) is particularly prone to explosive spalling when exposed to high temperature. Although the exact causes that lead to spalling are still being debated, moisture transport during heating plays an important role in all proposed mechanisms. In this study, slabs made of high-performance, low water-to-binder ratio mortars with addition of superabsorbent polymers (SAP) and polypropylene fibers (PP) were heated from one side on a temperature-controlled plate up to 550 °C. A combination of measurements was performed simultaneously on the same sample: moisture profiles via neutron radiography, temperature profiles with embedded thermocouples and pore pressure evolution with embedded pressure sensors. Spalling occurred in the sample with SAP, where sharp profiles of moisture and temperature were observed. No spalling occurred when PP-fibers were introduced in addition to SAP. The experimental procedure described here is essential for developing and verifying numerical models and studying measures against fire spalling risk in HPC.

  15. HPC Annual Report 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennig, Yasmin

    Sandia National Laboratories has a long history of significant contributions to the high performance computing community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier, propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) Prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment and application code performance.

  16. Preparation of a Co-doped hierarchically porous carbon from Co/Zn-ZIF: An efficient adsorbent for the extraction of triazine herbicides from environment water and white gourd samples.

    PubMed

    Jiao, Caina; Li, Menghua; Ma, Ruiyang; Wang, Chun; Wu, Qiuhua; Wang, Zhi

    2016-05-15

    A Co-doped hierarchically porous carbon (Co/HPC) was synthesized through a facile carbonization process by using Co/ZIF-8 as the precursor. The textures of the Co/HPC were investigated by scanning electron microscopy, transmission electron microscopy, X-ray diffraction, vibration sample magnetometry and nitrogen adsorption-desorption isotherms. The results showed that the Co/HPC is in good polyhedral shape with uniform size, sufficient magnetism, high surface area as well as hierarchical pores (micro-, meso- and macropores). To evaluate the extraction performance of the Co/HPC, it was applied as a magnetic adsorbent for the enrichment of triazine herbicides from environment water and white gourd samples prior to high performance liquid chromatographic analysis. The main parameters that affected the extraction efficiency were investigated. Under the optimum conditions, a good linearity for the four triazine herbicides was achieved with the correlation coefficients (r) higher than 0.9970. The limits of detection, based on S/N=3, were 0.02 ng/mL for water and 0.1-0.2 ng/g for white gourd samples, respectively. The recoveries of all the analytes for the method fell in the range from 80.3% to 120.6%. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. High-performance continuously reinforced concrete pavements in Richmond and Lynchburg, Virginia.

    DOT National Transportation Integrated Search

    2007-01-01

    This study evaluated the properties of two high performance concrete (HPC) paving projects in Virginia. These continuously reinforced concrete pavements were placed on State Route 288 near Richmond and on the U.S. 29 Madison Heights Bypass in Lynchbu...

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hussain, Hameed; Malik, Saif Ur Rehman; Hameed, Abdul

    An efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects are dedicated to large-scale distributed computing systems that have designed and developed resource allocation mechanisms with a variety of architectures and services. In our study, a comprehensive survey describing resource allocation in various HPC systems is reported. The aim of the work is to aggregate, under a joint framework, the existing solutions for HPC in order to provide a thorough analysis and characterization of the resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role in the performance improvement of all the HPC classifications. Therefore, a comprehensive discussion of widely used resource allocation strategies deployed in HPC environments is required, which is one of the motivations of this survey. Moreover, we have classified the HPC systems into three broad categories, namely: (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify approaches followed by the implementation of existing resource allocation strategies that are widely presented in the literature.

  19. The NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform to Support the Analysis of Petascale Environmental Data Collections

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.

    2014-12-01

    The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within an HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large valuable data assets, new opportunities have arisen by harmonising the data collections, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale and high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that software is both upgradable and maintainable, and can be readily reused with complexly integrated systems and become part of the growing global trusted community tools for cross-disciplinary research.

  20. GPU Implementation of High Rayleigh Number Three-Dimensional Mantle Convection

    NASA Astrophysics Data System (ADS)

    Sanchez, D. A.; Yuen, D. A.; Wright, G. B.; Barnett, G. A.

    2010-12-01

    Although we have entered the age of petascale computing, many factors are still prohibiting high-performance computing (HPC) from infiltrating all suitable scientific disciplines. For this reason and others, application of GPU to HPC is gaining traction in the scientific world. With its low price point, high performance potential, and competitive scalability, GPU has been an option well worth considering for the last few years. Moreover, with the advent of NVIDIA's Fermi architecture, which brings ECC memory, better double-precision performance, and more RAM to GPU, there is a strong message of corporate support for GPU in HPC. However, many doubts linger concerning the practicality of using GPU for scientific computing. In particular, GPU has a reputation for being difficult to program and suitable for only a small subset of problems. Although inroads have been made in addressing these concerns, for many scientists GPU still has hurdles to clear before becoming an acceptable choice. We explore the applicability of GPU to geophysics by implementing a three-dimensional, second-order finite-difference model of Rayleigh-Benard thermal convection on an NVIDIA GPU using C for CUDA. Our code reaches sufficient resolution, on the order of 500x500x250 evenly-spaced finite-difference gridpoints, on a single GPU. We make extensive use of highly optimized CUBLAS routines, allowing us to achieve performance on the order of O(0.1) µs per timestep per gridpoint at this resolution. This performance has allowed us to study high Rayleigh number simulations, on the order of 2×10^7, on a single GPU.
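
    The convection code described above is written in C for CUDA and relies on CUBLAS, so the snippet below is only a much-simplified NumPy sketch of the second-order finite-difference stencil at the heart of such a solver: one explicit diffusion step of the temperature field on a small 3D grid. Grid size, time step, and diffusivity are arbitrary illustrative values, and advection and the flow solve are omitted.

      # Simplified NumPy sketch of a second-order finite-difference diffusion
      # step (illustrative only; the paper's solver runs in C for CUDA).
      import numpy as np

      nx, ny, nz = 64, 64, 32          # far smaller than the 500x500x250 grid
      dx, dt, kappa = 1.0e-2, 1.0e-5, 1.0
      T = np.random.rand(nx, ny, nz)   # temperature field

      def diffuse_step(T):
          """One explicit Euler step of dT/dt = kappa * laplacian(T), interior only."""
          lap = (
              T[2:, 1:-1, 1:-1] + T[:-2, 1:-1, 1:-1] +
              T[1:-1, 2:, 1:-1] + T[1:-1, :-2, 1:-1] +
              T[1:-1, 1:-1, 2:] + T[1:-1, 1:-1, :-2] -
              6.0 * T[1:-1, 1:-1, 1:-1]
          ) / dx**2
          T_new = T.copy()
          T_new[1:-1, 1:-1, 1:-1] += dt * kappa * lap
          return T_new

      for _ in range(10):
          T = diffuse_step(T)
      print("mean temperature after 10 steps:", T.mean())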

  1. An interactive physics-based unmanned ground vehicle simulator leveraging open source gaming technology: progress in the development and application of the virtual autonomous navigation environment (VANE) desktop

    NASA Astrophysics Data System (ADS)

    Rohde, Mitchell M.; Crawford, Justin; Toschlog, Matthew; Iagnemma, Karl D.; Kewlani, Guarav; Cummins, Christopher L.; Jones, Randolph A.; Horner, David A.

    2009-05-01

    It is widely recognized that simulation is pivotal to vehicle development, whether manned or unmanned. There are few dedicated choices, however, for those wishing to perform realistic, end-to-end simulations of unmanned ground vehicles (UGVs). The Virtual Autonomous Navigation Environment (VANE), under development by US Army Engineer Research and Development Center (ERDC), provides such capabilities but utilizes a High Performance Computing (HPC) Computational Testbed (CTB) and is not intended for on-line, real-time performance. A product of the VANE HPC research is a real-time desktop simulation application under development by the authors that provides a portal into the HPC environment as well as interaction with wider-scope semi-automated force simulations (e.g. OneSAF). This VANE desktop application, dubbed the Autonomous Navigation Virtual Environment Laboratory (ANVEL), enables analysis and testing of autonomous vehicle dynamics and terrain/obstacle interaction in real-time with the capability to interact within the HPC constructive geo-environmental CTB for high fidelity sensor evaluations. ANVEL leverages rigorous physics-based vehicle and vehicle-terrain interaction models in conjunction with high-quality, multimedia visualization techniques to form an intuitive, accurate engineering tool. The system provides an adaptable and customizable simulation platform that allows developers a controlled, repeatable testbed for advanced simulations. ANVEL leverages several key technologies not common to traditional engineering simulators, including techniques from the commercial video-game industry. These enable ANVEL to run on inexpensive commercial, off-the-shelf (COTS) hardware. In this paper, the authors describe key aspects of ANVEL and its development, as well as several initial applications of the system.

  2. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure instead of a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.
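
    GiGA itself distributes its graph over Hadoop and Giraph, but the underlying data structure is the standard de Bruijn graph. The single-machine Python sketch below shows only that construction step, where each k-mer of a read adds an edge between its (k-1)-mer prefix and suffix; it is meant as an illustration of the idea, not of GiGA's implementation.

      # Single-machine illustration of de Bruijn graph construction; the
      # distributed (Hadoop/Giraph) version partitions this same structure.
      from collections import defaultdict

      def build_de_bruijn(reads, k):
          graph = defaultdict(list)   # (k-1)-mer -> successor (k-1)-mers
          for read in reads:
              for i in range(len(read) - k + 1):
                  kmer = read[i:i + k]
                  graph[kmer[:-1]].append(kmer[1:])
          return graph

      reads = ["ACGTACGA", "CGTACGAT"]
      for node, successors in sorted(build_de_bruijn(reads, k=4).items()):
          print(node, "->", successors)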

  3. System-Level Virtualization Research at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J

    2010-01-01

    System-level virtualization, which originated as a technique to effectively share what were then considered large computing resources and subsequently faded from the spotlight as individual workstations gained in popularity with a one machine - one user approach, is today enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival that of anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing that single computing box via server consolidation. However, industry is concentrating only on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This raises two fundamental questions: is the concept of virtualization (a machine sharing technology) really suitable for HPC and, if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.

  4. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today’s genomic experiments have to process the so-called “biological big data” that is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of these data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  5. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of these data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.

  6. Development of a HIPAA-compliant environment for translational research data and analytics.

    PubMed

    Bradford, Wayne; Hurdle, John F; LaSalle, Bernie; Facelli, Julio C

    2014-01-01

    High-performance computing centers (HPC) traditionally have far less restrictive privacy management policies than those encountered in healthcare. We show how an HPC can be re-engineered to accommodate clinical data while retaining its utility in computationally intensive tasks such as data mining, machine learning, and statistics. We also discuss deploying protected virtual machines. A critical planning step was to engage the university's information security operations and the information security and privacy office. Access to the environment requires a double authentication mechanism. The first level of authentication requires access to the university's virtual private network and the second requires that the users be listed in the HPC network information service directory. The physical hardware resides in a data center with controlled room access. All employees of the HPC and its users take the university's local Health Insurance Portability and Accountability Act training series. In the first 3 years, researcher count has increased from 6 to 58.

  7. Desktop supercomputer: what can it do?

    NASA Astrophysics Data System (ADS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin

    The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules, an in-memory data store, with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.

  9. Fine-needle aspiration cytology of malignant hemangiopericytomas with ultrastructural and flow cytometric analyses.

    PubMed

    Geisinger, K R; Silverman, J F; Cappellari, J O; Dabbs, D J

    1990-07-01

    A hemangiopericytoma (HPC) is an uncommon soft-tissue neoplasm that may arise in many body sites. The cytologic features of fine-needle aspirates (FNAs) of HPCs have only rarely been described in the literature. We examined FNAs of malignant HPCs from the head and neck region (three) and the retroperitoneum (one) in four adults (aged 38 to 83 years). All four FNAs yielded cellular specimens that consisted of uninuclear tumor cells with high nuclear-cytoplasmic ratios. The cytomorphological spectrum included nuclei that were oval to elongate and had very finely granular, evenly distributed chromatin with one or two small but distinct nucleoli. Hemangiopericytomas yield aspirates that may be considered malignant and may suggest sarcoma. Histologically, all four neoplasms manifested high mitotic activity. The ultrastructural features of all four tumors were supportive of the diagnosis of HPC. Although a specific primary diagnosis of HPC on FNA of a soft-tissue mass is unlikely, cytologic analysis may allow diagnosis of recurrent or metastatic HPC. We were able to perform flow cytometric determinations of tumor DNA content on three of the resected neoplasms. In two, an aneuploid pattern was found, including the neoplasm with the most marked pleomorphism in the FNA. The third was diploid.

  10. Frequency-specific hippocampal-prefrontal interactions during associative learning

    PubMed Central

    Brincat, Scott L.; Miller, Earl K.

    2015-01-01

    Much of our knowledge of the world depends on learning associations (e.g., face-name), for which the hippocampus (HPC) and prefrontal cortex (PFC) are critical. HPC-PFC interactions have rarely been studied in monkeys, whose cognitive/mnemonic abilities are akin to humans. Here, we show functional differences and frequency-specific interactions between HPC and PFC of monkeys learning object-pair associations, an animal model of human explicit memory. PFC spiking activity reflected learning in parallel with behavioral performance, while HPC neurons reflected feedback about whether trial-and-error guesses were correct or incorrect. Theta-band HPC-PFC synchrony was stronger after errors, was driven primarily by PFC to HPC directional influences, and decreased with learning. In contrast, alpha/beta-band synchrony was stronger after correct trials, was driven more by HPC, and increased with learning. Rapid object associative learning may occur in PFC, while HPC may guide neocortical plasticity by signaling success or failure via oscillatory synchrony in different frequency bands. PMID:25706471

  11. Develop feedback system for intelligent dynamic resource allocation to improve application performance.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentile, Ann C.; Brandt, James M.; Tucker, Thomas

    2011-09-01

    This report provides documentation for the completion of the Sandia Level II milestone 'Develop feedback system for intelligent dynamic resource allocation to improve application performance'. This milestone demonstrates the use of a scalable data collection, analysis and feedback system that enables insight into how an application is utilizing the hardware resources of a high performance computing (HPC) platform in a lightweight fashion. Further, we demonstrate utilizing the same mechanisms used for transporting data for remote analysis and visualization to provide low latency run-time feedback to applications. The ultimate goal of this body of work is performance optimization in the face of the ever-increasing size and complexity of HPC systems.

  12. Webinar: Delivering Transformational HPC Solutions to Industry

    ScienceCinema

    Streitz, Frederick

    2018-01-16

    Dr. Frederick Streitz, director of the High Performance Computing Innovation Center, discusses Lawrence Livermore National Laboratory computational capabilities and expertise available to industry in this webinar.

  13. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  14. Spatial Support Vector Regression to Detect Silent Errors in the Exascale Era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo

    As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs), or silent errors, are one of the major sources that corrupt the execution results of HPC applications without being detected. In this work, we explore a low-memory-overhead SDC detector, by leveraging epsilon-insensitive support vector machine regression, to detect SDCs occurring in HPC applications that can be characterized by an impact error bound. The key contributions are threefold. (1) Our design takes spatial features (i.e., neighbouring data values for each data point in a snapshot) into the training data, such that little memory overhead (less than 1%) is introduced. (2) We provide an in-depth study of the detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that our detector can achieve a detection sensitivity (i.e., recall) of up to 99% while suffering a false positive rate of less than 1% in most cases. Our detector incurs low performance overhead, 5% on average, for all benchmarks studied in the paper. Compared with other state-of-the-art techniques, our detector exhibits the best tradeoff considering detection ability and overheads.
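
    As a rough illustration of the approach described above, the sketch below uses scikit-learn's epsilon-insensitive SVR to predict each interior point of a smooth 1-D snapshot from its two spatial neighbours and flags points whose observed value deviates from the prediction by more than an error bound. The data, features, parameters, and threshold are all illustrative choices, not the paper's implementation.

      # Illustrative SDC-detection sketch with epsilon-insensitive SVR
      # (scikit-learn); data and thresholds are invented for the example.
      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      field = np.cumsum(rng.normal(size=512)) * 0.01   # smooth 1-D "snapshot"

      # Spatial features: the two neighbouring values of each interior point.
      X = np.column_stack([field[:-2], field[2:]])
      y = field[1:-1]
      model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

      corrupted = field.copy()
      corrupted[200] += 0.5                            # inject one silent error
      Xc = np.column_stack([corrupted[:-2], corrupted[2:]])
      residual = np.abs(model.predict(Xc) - corrupted[1:-1])

      error_bound = 0.1
      flagged = np.where(residual > error_bound)[0] + 1  # back to field indices
      print("flagged indices:", flagged)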

  15. Research | Computational Science | NREL

    Science.gov Websites

    NREL's computational science experts use advanced high-performance computing (HPC) technologies, thereby accelerating the transformation of our nation's energy system. NREL's computational science capabilities enable high-impact research. Some recent examples...

  16. Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

    NASA Astrophysics Data System (ADS)

    Huang, Qian

    2014-09-01

    Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate the physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up performance by utilizing multiple computer resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed by using High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, there is tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means to access computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services in the IaaS layer still need to improve performance to be useful for research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application of it is developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics, and cross-disciplinary studies.

  17. Construction of the energy matrix for complex atoms. Part VIII: Hyperfine structure HPC calculations for terbium atom

    NASA Astrophysics Data System (ADS)

    Elantkowska, Magdalena; Ruczkowski, Jarosław; Sikorski, Andrzej; Dembczyński, Jerzy

    2017-11-01

    A parametric analysis of the hyperfine structure (hfs) for the even parity configurations of atomic terbium (Tb I) is presented in this work. We introduce the complete set of 4fN-core states in our high-performance computing (HPC) calculations. For calculations of the huge hyperfine structure matrix, requiring approximately 5000 hours when run on a single CPU, we propose methods utilizing a personal computer cluster or, alternatively, a cluster of Microsoft Azure virtual machines (VM). These methods give a factor-of-12 performance boost, enabling the calculations to complete in an acceptable time.

  18. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    ERIC Educational Resources Information Center

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  19. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Koo, Michelle; Cao, Yu

    Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from a genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.
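
    The framework above ingests logs with distributed big-data engines, but the flavour of the feature-extraction step can be shown with a tiny pandas sketch. The column names and sample records below are invented for illustration; the real tool processes terabytes of logs, not an in-memory table.

      # Toy per-job feature extraction from job-log records (invented schema).
      import pandas as pd

      records = pd.DataFrame([
          {"job_id": 1, "node": "n001", "cpu_sec": 3600, "read_gb": 120, "failed": 0},
          {"job_id": 1, "node": "n002", "cpu_sec": 3550, "read_gb": 118, "failed": 0},
          {"job_id": 2, "node": "n001", "cpu_sec":  400, "read_gb": 900, "failed": 1},
      ])

      features = (
          records.groupby("job_id")
          .agg(total_cpu_sec=("cpu_sec", "sum"),
               total_read_gb=("read_gb", "sum"),
               nodes_used=("node", "nunique"),
               any_failure=("failed", "max"))
          .assign(gb_per_cpu_hour=lambda df: df.total_read_gb / (df.total_cpu_sec / 3600))
      )
      print(features)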

  20. Construction of crack-free bridge decks : technical summary.

    DOT National Transportation Integrated Search

    2017-04-01

    The report documents the performance of the decks based on crack surveys performed on the LC-HPC decks and matching control bridge decks. The specifications for LC-HPC bridge decks, which cover aggregates, concrete, and construction procedures, a...

  1. Appropriate Use Policy | High-Performance Computing | NREL

    Science.gov Websites

    users of the National Renewable Energy Laboratory (NREL) High Performance Computing (HPC) resources ... government agency, National Laboratory, University, or private entity, the intellectual property terms ... if issued a multifactor token, which may be a physical token or a virtual token used with a one-time password

  2. Hierarchically Bicontinuous Porous Copper as Advanced 3D Skeleton for Stable Lithium Storage.

    PubMed

    Ke, Xi; Cheng, Yifeng; Liu, Jun; Liu, Liying; Wang, Naiguang; Liu, Jianping; Zhi, Chunyi; Shi, Zhicong; Guo, Zaiping

    2018-04-25

    Rechargeable lithium metal anodes (LMAs) with long cycling life have been regarded as the "Holy Grail" for high-energy-density lithium metal secondary batteries. The skeleton plays an important role in determining the performance of LMAs. Commercially available copper foam (CF) is not normally regarded as a suitable skeleton for stable lithium storage owing to its relatively inappropriate large pore size and relatively low specific surface area. Herein, for the first time, we revisit CF and address these issues by rationally designing a highly porous copper (HPC) architecture grown on CF substrates (HPC/CF) as a three-dimensional (3D) hierarchically bicontinuous porous skeleton through a novel approach combining the self-assembly of polystyrene microspheres, electrodeposition of copper, and a thermal annealing treatment. Compared to the CF skeleton, the HPC/CF skeleton exhibits a significantly improved Li plating/stripping behavior with high Coulombic efficiency (CE) and superior Li dendrite growth suppression. The 3D HPC/CF-based LMAs can run for 620 h without short-circuiting in a symmetric Li/Li@Cu cell at 0.5 mA cm -2 , and the Li@Cu/LiFePO 4 full cell exhibits a high reversible capacity of 115 mAh g -1 with a high CE of 99.7% at 2 C for 500 cycles. These results demonstrate the effectiveness of the design strategy of 3D hierarchically bicontinuous porous skeletons for developing stable and safe LMAs.

  3. Exposure to high ambient temperatures alters embryology in rabbits

    NASA Astrophysics Data System (ADS)

    García, M. L.; Argente, M. J.

    2017-09-01

    High ambient temperatures are a determining factor in the deterioration of embryo quality and survival in mammals. The aim of this study was to evaluate the effect of heat stress on embryo development, embryonic size and size of the embryonic coats in rabbits. A total of 310 embryos from 33 females in thermal comfort zone and 264 embryos of 28 females in heat stress conditions were used in the experiment. The traits studied were ovulation rate, percentage of total embryos, percentage of normal embryos, embryo area, zona pellucida thickness and mucin coat thickness. Traits were measured at 24 and 48 h post-coitum (hpc); mucin coat thickness was only measured at 48 hpc. The embryos were classified as zygotes or two-cell embryos at 24 hpc, and 16-cells or early morulae at 48 hpc. The ovulation rate was one oocyte lower in heat stress conditions than in thermal comfort. Percentage of normal embryos was lower in heat stress conditions at 24 hpc (17.2%) and 48 hpc (13.2%). No differences in percentage of zygotes or two-cell embryos were found at 24 hpc. The embryo development and area was affected by heat stress at 48 hpc (10% higher percentage of 16-cells and 883 μm2 smaller, respectively). Zona pellucida was thicker under thermal stress at 24 hpc (1.2 μm) and 48 hpc (1.5 μm). No differences in mucin coat thickness were found. In conclusion, heat stress appears to alter embryology in rabbits.

  4. 75 FR 78881 - Airworthiness Directives; Pratt & Whitney PW4000 Series Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-17

    ... slots on the 10th stage disk of the high-pressure compressor (HPC) drum rotor disk assembly. This AD... with a ring case configuration rear high-pressure compressor (HPC) installed, that includes a 9th stage... remove the low-pressure turbine shaft, or overhaul the HPC. Most operators will incur no additional costs...

  5. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE PAGES

    Yim, Won Cheol; Cushman, John C.

    2017-07-22

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
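
    The query sequence distribution approach can be pictured as splitting the input FASTA file into independent chunks, one per job. The sketch below is a minimal illustration of that splitting step only (file names and the chunk count are hypothetical); the real DCBLAST also generates scheduler scripts and merges the per-chunk BLAST outputs, which is not shown.

        from pathlib import Path

        def split_fasta(query_path: str, n_chunks: int, out_dir: str = "chunks"):
            # split a multi-sequence FASTA query into roughly equal chunks
            text = Path(query_path).read_text()
            records = [">" + r for r in text.split(">") if r.strip()]
            Path(out_dir).mkdir(exist_ok=True)
            chunk_files = []
            for i in range(n_chunks):
                part = records[i::n_chunks]           # round-robin assignment of sequences
                if not part:
                    continue
                out = Path(out_dir) / f"query_{i:03d}.fasta"
                out.write_text("".join(part))
                chunk_files.append(out)
            return chunk_files

        if __name__ == "__main__":
            for chunk in split_fasta("all_queries.fasta", n_chunks=16):
                # each chunk would then be submitted as its own BLAST job on a node
                print(chunk)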

  6. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, Won Cheol; Cushman, John C.

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.

  7. Computational Science and Innovation

    NASA Astrophysics Data System (ADS)

    Dean, D. J.

    2011-09-01

    Simulations - utilizing computers to solve complicated science and engineering problems - are a key ingredient of modern science. The U.S. Department of Energy (DOE) is a world leader in the development of high-performance computing (HPC), the development of applied math and algorithms that utilize the full potential of HPC platforms, and the application of computing to science and engineering problems. An interesting general question is whether the DOE can strategically utilize its capability in simulations to advance innovation more broadly. In this article, I will argue that this is certainly possible.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth; Sewell, Christopher; Usher, William

    Here, one of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architecture.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth; Sewell, Christopher; Usher, William

    Execution on massively threaded processors is one of the most critical challenges for high-performance computing (HPC) scientific visualization. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Moreover, our current production scientific visualization software is not designed for these new types of architectures. In order to address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architecture.

  10. Shifter: Containers for HPC

    NASA Astrophysics Data System (ADS)

    Gerhardt, Lisa; Bhimji, Wahid; Canon, Shane; Fasel, Markus; Jacobsen, Doug; Mustafa, Mustafa; Porter, Jeff; Tsulaia, Vakho

    2017-10-01

    Bringing HEP computing to HPC can be difficult. Software stacks are often very complicated, with numerous dependencies that are difficult to get installed on an HPC system. To address this issue, NERSC has created Shifter, a framework that delivers Docker-like functionality to HPC. It works by extracting images from native formats and converting them to a common format that is optimally tuned for the HPC environment. We have used Shifter to deliver the CVMFS software stack for ALICE, ATLAS, and STAR on the supercomputers at NERSC. As well as enabling the distribution of multi-TB-sized CVMFS stacks to HPC, this approach also offers performance advantages. Software startup times are significantly reduced, and load times scale with minimal variation to 1000s of nodes. We profile several successful examples of scientists using Shifter to make scientific analysis easily customizable and scalable. We describe the Shifter framework and several efforts in HEP and NP to use Shifter to deliver their software on the Cori HPC system.

  11. A Methodology to Assess the Capability of Engine Designs to Meet Closed-Loop Performance and Operability Requirements

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Csank, Jeffrey

    2015-01-01

    Designing a closed-loop controller for an engine requires balancing trade-offs between performance and operability of the system. One such trade-off is the relationship between the 95 percent response time and minimum high-pressure compressor (HPC) surge margin (SM) attained during acceleration from idle to takeoff power. Assuming a controller has been designed to meet some specification on response time and minimum HPC SM for a mid-life (nominal) engine, there is no guarantee that these limits will not be violated as the engine ages, particularly as it reaches the end of its life. A characterization for the uncertainty in this closed-loop system due to aging is proposed that defines elliptical boundaries to estimate worst-case performance levels for a given control design point. The results of this characterization can be used to identify limiting design points that bound the possible controller designs yielding transient results that do not exceed specified limits in response time or minimum HPC SM. This characterization involves performing Monte Carlo simulation of the closed-loop system with controller constructed for a set of trial design points and developing curve fits to describe the size and orientation of each ellipse; a binary search procedure is then employed that uses these fits to identify the limiting design point. The method is demonstrated through application to a generic turbofan engine model in closed-loop with a simplified controller; it is found that the limit for which each controller was designed was exceeded by less than 4.76 percent. Extension of the characterization to another trade-off, that between the maximum high-pressure turbine (HPT) entrance temperature and minimum HPC SM, showed even better results: the maximum HPT temperature was estimated within 0.76 percent. Because of the accuracy in this estimation, this suggests another limit that may be taken into consideration during design and analysis. It also demonstrates the extension of the characterization to other attributes that contribute to the performance or operability of the engine. Metrics are proposed that, together, provide information on the shape of the trade-off between response time and minimum HPC SM, and how much each varies throughout the life cycle, at the limiting design points. These metrics also facilitate comparison of the expected transient behavior for multiple engine models.
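
    As a highly simplified illustration of the binary-search step described above, the sketch below assumes the Monte Carlo results and curve fits have been reduced to a single monotonic function giving the worst-case response time over the engine's life for a candidate design point; the placeholder fit used here is made up and is not the authors' characterization.

        def find_limiting_design(worst_case_metric, limit, lo, hi, tol=1e-3):
            # assumes worst_case_metric is monotonically increasing in the design variable
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if worst_case_metric(mid) <= limit:
                    lo = mid   # still within the limit: move toward more aggressive designs
                else:
                    hi = mid   # limit exceeded: back off
            return lo

        if __name__ == "__main__":
            # hypothetical fit: worst-case response time grows with the design variable
            fit = lambda x: 1.0 + 0.8 * x + 0.1 * x ** 2
            print(find_limiting_design(fit, limit=2.0, lo=0.0, hi=5.0))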

  12. A Methodology to Assess the Capability of Engine Designs to Meet Closed-loop Performance and Operability Requirements

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Csank, Jeffrey T.

    2015-01-01

    Designing a closed-loop controller for an engine requires balancing trade-offs between performance and operability of the system. One such trade-off is the relationship between the 95% response time and minimum high-pressure compressor (HPC) surge margin (SM) attained during acceleration from idle to takeoff power. Assuming a controller has been designed to meet some specification on response time and minimum HPC SM for a mid-life (nominal) engine, there is no guarantee that these limits will not be violated as the engine ages, particularly as it reaches the end of its life. A characterization for the uncertainty in this closed-loop system due to aging is proposed that defines elliptical boundaries to estimate worst-case performance levels for a given control design point. The results of this characterization can be used to identify limiting design points that bound the possible controller designs yielding transient results that do not exceed specified limits in response time or minimum HPC SM. This characterization involves performing Monte Carlo simulation of the closed-loop system with controller constructed for a set of trial design points and developing curve fits to describe the size and orientation of each ellipse; a binary search procedure is then employed that uses these fits to identify the limiting design point. The method is demonstrated through application to a generic turbofan engine model in closed-loop with a simplified controller; it is found that the limit for which each controller was designed was exceeded by less than 4.76%. Extension of the characterization to another trade-off, that between the maximum high-pressure turbine (HPT) entrance temperature and minimum HPC SM, showed even better results: the maximum HPT temperature was estimated within 0.76%. Because of the accuracy in this estimation, this suggests another limit that may be taken into consideration during design and analysis. It also demonstrates the extension of the characterization to other attributes that contribute to the performance or operability of the engine. Metrics are proposed that, together, provide information on the shape of the trade-off between response time and minimum HPC SM, and how much each varies throughout the life cycle, at the limiting design points. These metrics also facilitate comparison of the expected transient behavior for multiple engine models.

  13. An Integrated Software Package to Enable Predictive Simulation Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang

    The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance computing (HPC) techniques, but a smooth, well-integrated interplay between HPC applications. This paper presents a new integrated software package that integrates HPC applications and a web-based visualization tool based on a middleware framework. This framework can support the data communication between different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.

  14. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro

    The increasingly large data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems are simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: Are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.
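
    A minimal sketch of the flat put/get blob abstraction discussed above, kept deliberately simple (no hierarchy, no permissions, purely in memory); it is only meant to contrast the interface with a POSIX path hierarchy and is not the storage system evaluated in the paper.

        class BlobStore:
            def __init__(self):
                self._objects = {}

            def put(self, key: str, data: bytes) -> None:
                # a blob is addressed by a flat key, not by a directory path
                self._objects[key] = bytes(data)

            def get(self, key: str) -> bytes:
                return self._objects[key]

            def delete(self, key: str) -> None:
                self._objects.pop(key, None)

        if __name__ == "__main__":
            store = BlobStore()
            store.put("sim/run42/checkpoint-001", b"\x00" * 1024)
            print(len(store.get("sim/run42/checkpoint-001")))

    In a real deployment the dictionary would be replaced by a distributed object store, but the application-facing interface can stay this small.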

  15. Data Storage and Transfer | High-Performance Computing | NREL

    Science.gov Websites

    High-Performance Computing (HPC) systems. WinSCP for Windows File Transfers Use to transfer files from a local computer to a remote computer. Robinhood for File Management Use this tool to manage your data files on Peregrine. Best

  16. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.

  17. UPLC-QTOFMS-based metabolomic analysis of the serum of hypoxic preconditioning mice

    PubMed Central

    Liu, Jie; Zhang, Gang; Chen, Dewei; Chen, Jian; Yuan, Zhi-Bin; Zhang, Er-Long; Gao, Yi-Xing; Xu, Gang; Sun, Bing-Da; Liao, Wenting; Gao, Yu-Qi

    2017-01-01

    Hypoxic preconditioning (HPC) is well-known to exert a protective effect against hypoxic injury; however, the underlying molecular mechanism remains unclear. The present study utilized a serum metabolomics approach to detect the alterations associated with HPC. In the present study, an animal model of HPC was established by exposing adult BALB/c mice to acute repetitive hypoxia four times. The serum samples were collected by orbital blood sampling. Metabolite profiling was performed using ultra-performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-QTOFMS), in conjunction with univariate and multivariate statistical analyses. The results of the present study confirmed that the HPC mouse model was established and refined, suggesting significant differences between the control and HPC groups at the molecular levels. HPC caused significant metabolic alterations, as represented by the significant upregulation of valine, methionine, tyrosine, isoleucine, phenylalanine, lysophosphatidylcholine (LysoPC; 16:1), LysoPC (22:6), linoelaidylcarnitine, palmitoylcarnitine, octadecenoylcarnitine, taurine, arachidonic acid, linoleic acid, oleic acid and palmitic acid, and the downregulation of acetylcarnitine, malate, citrate and succinate. Using MetaboAnalyst 3.0, a number of key metabolic pathways were observed to be acutely perturbed, including valine, leucine and isoleucine biosynthesis, in addition to taurine, hypotaurine, phenylalanine, linoleic acid and arachidonic acid metabolism. The results of the present study provided novel insights into the mechanisms involved in the acclimatization of organisms to hypoxia, and demonstrated the protective mechanism of HPC. PMID:28901489

  18. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  19. An Optimizing Compiler for Petascale I/O on Leadership-Class Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kandemir, Mahmut Taylan; Choudary, Alok; Thakur, Rajeev

    In high-performance computing (HPC), parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our DOE project explored automated instrumentation and compiler support for I/O-intensive applications. Our project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology that targets I/O-intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions. Two new sections in this report compared to the previous report are IOGenie and SSD/NVM-specific optimizations.

  20. Development of high-performance concrete mixtures for durable bridge decks in Montana using locally available materials.

    DOT National Transportation Integrated Search

    2005-03-01

    "The Montana Department of Transportation (MDT) is performing research to develop a cost-effective, indigenous highperformance : concrete (HPC) for use in bridge deck applications. The investigation was divided into two tasks: 1) : identification of ...

  1. Development of Mix Design Method in Efforts to Increase Concrete Performance Using Portland Pozzolana Cement (PPC)

    NASA Astrophysics Data System (ADS)

    Krisnamurti; Soehardjono, A.; Zacoeb, A.; Wibowo, A.

    2018-01-01

    Earthquake disasters can cause infrastructure damage, so efforts to prevent human casualties are needed; one such effort is improving the mechanical performance of building materials. To achieve high-performance concrete (HPC), Ordinary Portland Cement (OPC) is usually used. However, the most widely available cement types today are Portland Pozzolana Cement (PPC) and Portland Composite Cement (PCC). Therefore, the proportions of materials used in the HPC mix design need to be adjusted to achieve the expected performance. This study aims to develop a concrete mix design method using PPC that fulfils the criteria of HPC. The study refers to the codes/regulations for concrete mixtures that use OPC, based on the results of laboratory testing. This research uses PPC material, gravel from the Malang area, Lumajang sand, water, silica fume, and a polycarboxylate copolymer superplasticizer. The analyzed information includes the investigation results for aggregate properties, concrete mix composition, water-binder ratio variation, specimen dimensions, compressive strength, and elasticity modulus of the specimens. The test results show that the concrete compressive strength reaches values between 25 MPa and 55 MPa. The mix design method that has been developed can simplify the process of concrete mix design using PPC to achieve a certain desired concrete performance.

  2. Active Flash: Out-of-core Data Analytics on Flash Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating analysis to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.

  3. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    NASA Astrophysics Data System (ADS)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to adapt multiple algorithms, and thus helps to decrease simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) degradation due to the atmosphere, 2) degradation due to the optical system, 3) degradation due to the TDI-CCD electronics plus a re-sampling process, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution, and Lagrange interpolation, which require powerful CPUs. Even on an Intel Xeon X5550 processor, a conventional serial approach takes more than 30 hours for a simulation whose result image size is 1500 x 1462. A literature review found no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation, which is based on WCF [1], uses a client/server (C/S) architecture, and harnesses the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity. Ultimately, HPC is achieved at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time correspondingly. In conclusion, this framework can provide effectively unlimited computation capacity provided that the network and the task management server are affordable, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.
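
    The fan-out idea behind such a framework, distributing independent data-intensive pieces of the simulation to idle machines, can be sketched roughly as below using local worker processes; the degradation stages are toy placeholders, and the actual WCF-based client/server task distribution is not reproduced.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def degrade_tile(tile):
            # toy optical degradation: attenuate high spatial frequencies in Fourier space
            fy = np.fft.fftfreq(tile.shape[0])[:, None]
            fx = np.fft.fftfreq(tile.shape[1])[None, :]
            otf = np.exp(-50.0 * (fx ** 2 + fy ** 2))       # hypothetical Gaussian OTF
            blurred = np.fft.ifft2(np.fft.fft2(tile) * otf).real
            # toy electronic noise stage
            return blurred + 0.01 * np.random.randn(*tile.shape)

        def simulate(image, tile_rows=4, workers=4):
            # split the image into row bands and process them on separate workers
            tiles = np.array_split(image, tile_rows, axis=0)
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return np.vstack(list(pool.map(degrade_tile, tiles)))

        if __name__ == "__main__":
            result = simulate(np.random.rand(1500, 1462))
            print(result.shape)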

  4. Intracranial meningeal hemangiopericytoma: Recurrences at the initial and distant intracranial sites and extraneural metastases to multiple organs

    PubMed Central

    WEI, GUANGQUAN; KANG, XIAOWEI; LIU, XIANPING; TANG, XING; LI, QINLONG; HAN, JUNTAO; YIN, HONG

    2015-01-01

    Regardless of the controversial pathogenesis, intracranial meningeal hemangiopericytoma (M-HPC) is a rare, highly cellular and vascularized mesenchymal tumor that is characterized by a high tendency for recurrence and extraneural metastasis, despite radical excision and postoperative radiotherapy. M-HPC shares similar clinical manifestations and radiological findings with meningioma, which causes difficulty in differentiation of this entity from those prognostically favorable mimics prior to surgery. Treatment of M-HPC, particularly in metastatic settings, remains a challenge. A case is described of primary M-HPC with recurrence at the initial and distant intracranial sites and extraneural multiple-organ metastases in a 36-year-old female. The metastasis of M-HPC was extremely extensive, and to the best of our knowledge this is the first case of M-HPC with delayed metastasis to the bilateral kidneys. The data suggests that preoperative computed tomography and magnetic resonance imaging could provide certain diagnostic clues and useful information for more optimal treatment planning. The results may imply that novel drugs, such as temozolomide and bevacizumab, as a component of multimodality therapy of M-HPC may deserve further investigation. PMID:26171177

  5. Experimental evaluation of a flexible I/O architecture for accelerating workflow engines in ultrascale environments

    DOE PAGES

    Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin; ...

    2016-10-06

    The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.

  6. ATLAS computing on CSCS HPC

    NASA Astrophysics Data System (ADS)

    Filipcic, A.; Haug, S.; Hostettler, M.; Walker, R.; Weber, M.

    2015-12-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was the highest ranked European system on TOP500 in 2014, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, a partial GPU acceleration of the Geant4 detector simulations has been implemented.

  7. An efficient framework for Java data processing systems in HPC environments

    NASA Astrophysics Data System (ADS)

    Fries, Aidan; Castañeda, Javier; Isasi, Yago; Taboada, Guillermo L.; Portell de Mora, Jordi; Sirvent, Raül

    2011-11-01

    Java is a commonly used programming language, although its use in High Performance Computing (HPC) remains relatively low. One of the reasons is a lack of libraries offering specific HPC functions to Java applications. In this paper we present a Java-based framework, called DpcbTools, designed to provide a set of functions that fill this gap. It includes a set of efficient data communication functions based on message-passing, thus providing, when a low latency network such as Myrinet is available, higher throughputs and lower latencies than standard solutions used by Java. DpcbTools also includes routines for the launching, monitoring and management of Java applications on several computing nodes by making use of JMX to communicate with remote Java VMs. The Gaia Data Processing and Analysis Consortium (DPAC) is a real case where scientific data from the ESA Gaia astrometric satellite will be entirely processed using Java. In this paper we describe the main elements of DPAC and its usage of the DpcbTools framework. We also assess the usefulness and performance of DpcbTools through its performance evaluation and the analysis of its impact on some DPAC systems deployed in the MareNostrum supercomputer (Barcelona Supercomputing Center).

  8. Circadian time-place (or time-route) learning in rats with hippocampal lesions.

    PubMed

    Cole, Emily; Mistlberger, Ralph E; Merza, Devon; Trigiani, Lianne J; Madularu, Dan; Simundic, Amanda; Mumby, Dave G

    2016-12-01

    Circadian time-place learning (TPL) is the ability to remember both the place and biological time of day that a significant event occurred (e.g., food availability). This ability requires that a circadian clock provide phase information (a time tag) to cognitive systems involved in linking representations of an event with spatial reference memory. To date, it is unclear which neuronal substrates are critical in this process, but one candidate structure is the hippocampus (HPC). The HPC is essential for normal performance on tasks that require allocentric spatial memory and exhibits circadian rhythms of gene expression that are sensitive to meal timing. Using a novel TPL training procedure and enriched, multidimensional environment, we trained rats to locate a food reward that varied between two locations relative to time of day. After rats acquired the task, they received either HPC or SHAM lesions and were re-tested. Rats with HPC lesions were initially impaired on the task relative to SHAM rats, but re-attained high scores with continued testing. Probe tests revealed that the rats were not using an alternation strategy or relying on light-dark transitions to locate the food reward. We hypothesize that transient disruption and recovery reflect a switch from HPC-dependent allocentric navigation (learning places) to dorsal striatum-dependent egocentric spatial navigation (learning routes to a location). Whatever the navigation strategy, these results demonstrate that the HPC is not required for rats to find food in different locations using circadian phase as a discriminative cue. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Rats with ventral hippocampal damage are impaired at various forms of learning including conditioned inhibition, spatial navigation, and discriminative fear conditioning to similar contexts.

    PubMed

    McDonald, Robert J; Balog, R J; Lee, Justin Q; Stuart, Emily E; Carrels, Brianna B; Hong, Nancy S

    2018-10-01

    The ventral hippocampus (vHPC) has been implicated in learning and memory functions that seem to differ from its dorsal counterpart. The goal of this series of experiments was to provide further insight into the functional contributions of the vHPC. Our previous work implicated the vHPC in spatial learning, inhibitory learning, and fear conditioning to context. However, the specific role of vHPC on these different forms of learning are not clear. Accordingly, we assessed the effects of neurotoxic lesions of the ventral hippocampus on retention of a conditioned inhibitory association, early versus late spatial navigation in the water task, and discriminative fear conditioning to context under high ambiguity conditions. The results showed that the vHPC was necessary for the expression of conditioned inhibition, early spatial learning, and discriminative fear conditioning to context when the paired and unpaired contexts have high cue overlap. We argue that this pattern of effects, combined with previous work, suggests a key role for vHPC in the utilization of broad contextual representations for inhibition and discriminative memory in high ambiguity conditions. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. PuTTY | High-Performance Computing | NREL

    Science.gov Websites

    PuTTY Learn how to use PuTTY to connect to NREL's high-performance computing (HPC) systems. Connecting: When you start the PuTTY app, the program will display PuTTY's Configuration menu. When this comes ... When prompted, type your password again followed by ... Note: to increase

  11. Examining Students' Use of Online Annotation Tools in Support of Argumentative Reading

    ERIC Educational Resources Information Center

    Lu, Jingyan; Deng, Liping

    2013-01-01

    This study examined how students in a Hong Kong high school used Diigo, an online annotation tool, to support their argumentative reading activities. Two year 10 classes, a high-performance class (HPC) and an ordinary-performance class (OPC), highlighted passages of text and wrote and attached sticky notes to them to clarify argumentation…

  12. Evolution of Embedded Processing for Wide Area Surveillance

    DTIC Science & Technology

    2014-01-01

    ... future vision. Subject terms: embedded processing; high-performance computing; general-purpose graphical processing units (GPGPUs). ... reconnaissance (ISR) mission capabilities. The capabilities these advancements are achieving include the ability to provide persistent ... fighters to support and positively affect their mission. Significant improvements in high-performance computing (HPC) technology make it possible to

  13. Swelling of Superabsorbent Poly(Sodium-Acrylate Acrylamide) Hydrogels and Influence of Chemical Structure on Internally Cured Mortar

    NASA Astrophysics Data System (ADS)

    Krafcik, Matthew J.; Erk, Kendra A.

    Superabsorbent hydrogel particles show promise as internal curing agents for high performance concrete (HPC). These gels can absorb and release large volumes of water and offer a solution to the problem of self-desiccation in HPC. However, the gels are sensitive to ions naturally present in concrete. This research connects swelling behavior with gel-ion interactions to optimize hydrogel performance for internal curing, reducing the chance of early-age cracking and increasing the durability of HPC. Four different hydrogels of poly(sodium-acrylate acrylamide) are synthesized and characterized with swelling tests in different salt solutions. Depending on solution pH, ionic character, and gel composition, different swelling behaviors are observed. As the weight percent of acrylic acid increases, the gels demonstrate higher swelling ratios in reverse osmosis water, but show substantially decreased swelling when aqueous cations are present. Additionally, in multivalent cation solutions, overshoot peaks are present, whereby the gels reach a peak swelling ratio but then deswell. Multivalent cations interact with deprotonated carboxylic acid groups, constricting the gel and expelling water. Mortar containing hydrogels showed reduced autogenous shrinkage and increased relative humidity.

  14. VTK-m: Accelerating the Visualization Toolkit for Massively Threaded Architectures

    DOE PAGES

    Moreland, Kenneth; Sewell, Christopher; Usher, William; ...

    2016-05-09

    Here, one of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architecture.

  15. VTK-m: Accelerating the Visualization Toolkit for Massively Threaded Architectures

    DOE PAGES

    Moreland, Kenneth; Sewell, Christopher; Usher, William; ...

    2016-05-09

    Execution on massively threaded processors is one of the most critical challenges for high-performance computing (HPC) scientific visualization. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Moreover, our current production scientific visualization software is not designed for these new types of architectures. In order to address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architecture.

  16. High-performance concrete : applying life-cycle cost analysis and developing specifications.

    DOT National Transportation Integrated Search

    2016-12-01

    Numerous studies and transportation agency experience across the nation have established that high-performance concrete (HPC) technology improves concrete quality and extends the service life of concrete structures at risk of chloride-induced cor...

  17. P43-S Computational Biology Applications Suite for High-Performance Computing (BioHPC.net)

    PubMed Central

    Pillardy, J.

    2007-01-01

    One of the challenges of high-performance computing (HPC) is user accessibility. At the Cornell University Computational Biology Service Unit, which is also a Microsoft HPC institute, we have developed a computational biology application suite that allows researchers from biological laboratories to submit their jobs to the parallel cluster through an easy-to-use Web interface. Through this system, we are providing users with popular bioinformatics tools including BLAST, HMMER, InterproScan, and MrBayes. The system is flexible and can be easily customized to include other software. It is also scalable; the installation on our servers currently processes approximately 8500 job submissions per year, many of them requiring massively parallel computations. It also has a built-in user management system, which can limit software and/or database access to specified users. TAIR, the major database of the plant model organism Arabidopsis, and SGN, the international tomato genome database, are both using our system for storage and data analysis. The system consists of a Web server running the interface (ASP.NET C#), Microsoft SQL server (ADO.NET), compute cluster running Microsoft Windows, ftp server, and file server. Users can interact with their jobs and data via a Web browser, ftp, or e-mail. The interface is accessible at http://cbsuapps.tc.cornell.edu/.

  18. Atlas : A library for numerical weather prediction and climate modelling

    NASA Astrophysics Data System (ADS)

    Deconinck, Willem; Bauer, Peter; Diamantakis, Michail; Hamrud, Mats; Kühnlein, Christian; Maciel, Pedro; Mengaldo, Gianmarco; Quintino, Tiago; Raoult, Baudouin; Smolarkiewicz, Piotr K.; Wedi, Nils P.

    2017-11-01

    The algorithms underlying numerical weather prediction (NWP) and climate models that have been developed in the past few decades face an increasing challenge caused by the paradigm shift imposed by hardware vendors towards more energy-efficient devices. In order to provide a sustainable path to exascale High Performance Computing (HPC), applications become increasingly restricted by energy consumption. As a result, the emerging diverse and complex hardware solutions have a large impact on the programming models traditionally used in NWP software, triggering a rethink of design choices for future massively parallel software frameworks. In this paper, we present Atlas, a new software library that is currently being developed at the European Centre for Medium-Range Weather Forecasts (ECMWF), with the scope of handling data structures required for NWP applications in a flexible and massively parallel way. Atlas provides a versatile framework for the future development of efficient NWP and climate applications on emerging HPC architectures. The applications range from full Earth system models, to specific tools required for post-processing weather forecast products. The Atlas library thus constitutes a step towards affordable exascale high-performance simulations by providing the necessary abstractions that facilitate the application in heterogeneous HPC environments by promoting the co-design of NWP algorithms with the underlying hardware.

  19. System-Level Virtualization for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian

    2008-01-01

    System-level virtualization has been a research topic since the 70's but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g. Intel-VT, AMD-V). However, a majority of system-level virtualization projects are guided by the server consolidation market. As a result, current virtualization solutions appear not to be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance work loads, use migration techniques to relocate applications from failing machines, and isolate fault systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.

  20. Highly Efficient Parallel Multigrid Solver For Large-Scale Simulation of Grain Growth Using the Structural Phase Field Crystal Model

    NASA Astrophysics Data System (ADS)

    Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John

    The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time-scales while maintaining atomic scale resolution. However, the governing equation of the XPFC model is an integral-partial-differential-equation (IPDE), which poses challenges in implementation onto high performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed memory HPC solver for the XPFC model, which combines parallel multigrid and P3DFFT. The performance benchmarking on the Stampede supercomputer indicates near linear strong and weak scaling for both multigrid and transfer time between multigrid and FFT modules up to 1024 cores. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to achieve 4096 cores and beyond. Ongoing work involves optimization of MPI/OpenMP-based codes for the Intel KNL Many-Core Architecture. This optimizes the code for coming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
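
    For readers unfamiliar with the multigrid component mentioned above, the sketch below shows a serial two-grid cycle for a 1D model Poisson problem with weighted-Jacobi smoothing; it is only a didactic stand-in under simplified assumptions, not the parallel multigrid/P3DFFT solver developed for the XPFC equations.

        import numpy as np

        def jacobi(u, f, h, iters=3, w=2.0 / 3.0):
            # weighted-Jacobi smoothing for -u'' = f with zero Dirichlet boundaries
            for _ in range(iters):
                u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            return r

        def restrict(r):
            # full-weighting restriction of the residual onto the coarse grid
            r_c = r[::2].copy()
            r_c[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
            return r_c

        def prolong(e_c, n_fine):
            # linear interpolation of the coarse-grid correction back to the fine grid
            e = np.zeros(n_fine)
            e[::2] = e_c
            e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])
            return e

        def coarse_solve(r_c, h_c):
            # direct solve of the coarse-grid error equation (small tridiagonal system)
            m = r_c.size - 2
            A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (h_c * h_c)
            e = np.zeros_like(r_c)
            e[1:-1] = np.linalg.solve(A, r_c[1:-1])
            return e

        def two_grid_cycle(u, f, h):
            u = jacobi(u, f, h)                               # pre-smoothing
            e_c = coarse_solve(restrict(residual(u, f, h)), 2.0 * h)
            u = u + prolong(e_c, u.size)                      # coarse-grid correction
            return jacobi(u, f, h)                            # post-smoothing

        if __name__ == "__main__":
            n = 129                                           # odd point count so the coarse grid nests
            x = np.linspace(0.0, 1.0, n)
            h = x[1] - x[0]
            f = np.pi ** 2 * np.sin(np.pi * x)                # exact solution is sin(pi x)
            u = np.zeros(n)
            for _ in range(10):
                u = two_grid_cycle(u, f, h)
            print(np.max(np.abs(u - np.sin(np.pi * x))))      # near the discretization error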

  1. Time-efficient simulations of tight-binding electronic structures with Intel Xeon PhiTM many-core processors

    NASA Astrophysics Data System (ADS)

    Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam

    2016-12-01

    Modelling of multi-million-atom semiconductor structures is important as it not only predicts the properties of physically realizable novel materials, but can also accelerate advanced device designs. This work presents a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses a sp3d5s∗ tight-binding approach to describe multi-million-atom structures and simulate their electronic structures with high-performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool shows good scalability on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with particularly remarkable performance enhancement on the latest clusters of Intel Xeon PhiTM coprocessors. A review of a recent modelling study conducted to understand an experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Having been developed via an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling with advanced HPC clusters of a many-core base. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development for researchers in the field of computational nanoelectronics.
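
    The tight-binding idea itself can be seen in a toy example: the sketch below assembles and diagonalizes a single-orbital nearest-neighbour chain with hypothetical on-site energy and hopping parameters; it only shows the structure of such a calculation and is far from the sp3d5s∗ multi-million-atom model handled by Q-AND.

        import numpy as np

        def tb_chain_energies(n_sites=100, eps=0.0, t=-1.0):
            # single-orbital nearest-neighbour tight-binding chain:
            #   H[i, i] = eps (on-site energy), H[i, i+1] = H[i+1, i] = t (hopping)
            H = np.zeros((n_sites, n_sites))
            np.fill_diagonal(H, eps)
            idx = np.arange(n_sites - 1)
            H[idx, idx + 1] = t
            H[idx + 1, idx] = t
            return np.linalg.eigvalsh(H)        # energy spectrum of the finite chain

        if __name__ == "__main__":
            energies = tb_chain_energies()
            print(energies[:5])                 # lowest few eigenvalues of the chain

    Realistic multi-million-atom Hamiltonians are sparse and are never diagonalized densely like this; distributed sparse eigensolvers are what make the HPC aspect essential.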

  2. DCL System Using Deep Learning Approaches for Land-based or Ship-based Real-Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2012-09-30

    ... platform (HPC) was developed, called the HPC-Acoustic Data Accelerator, or HPC-ADA for short. The HPC-ADA was designed based on fielded systems [1-4 ... software (Detection cLassification for MAchine learning - High Performance Computing). The software package was designed to utilize parallel and ... Sedna [7] and is designed using a parallel architecture, allowing existing algorithms to distribute to the various processing nodes with minimal changes

  3. Data Provenance Hybridization Supporting Extreme-Scale Scientific WorkflowApplications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elsethagen, Todd O.; Stephan, Eric G.; Raju, Bibi

    As high-performance computing (HPC) infrastructures continue to grow in capability and complexity, so do the applications that they serve. HPC and distributed-area computing (DAC) (e.g. grid and cloud) users are looking increasingly toward workflow solutions to orchestrate their complex application coupling and pre- and post-processing needs. To gain insight and a more quantitative understanding of a workflow's performance, our method includes not only the capture of traditional provenance information, but also the capture and integration of system environment metrics, helping to give context and explanation for a workflow's execution. In this paper, we describe IPPD's provenance management solution (ProvEn) and its hybrid data store combining both of these data provenance perspectives.

  4. Extreme I/O on HPC for HEP using the Burst Buffer at NERSC

    NASA Astrophysics Data System (ADS)

    Bhimji, Wahid; Bard, Debbie; Burleigh, Kaylan; Daley, Chris; Farrell, Steve; Fasel, Markus; Friesen, Brian; Gerhardt, Lisa; Liu, Jialin; Nugent, Peter; Paul, Dave; Porter, Jeff; Tsulaia, Vakho

    2017-10-01

    In recent years there has been increasing use of HPC facilities for HEP experiments. This has initially focussed on less I/O intensive workloads such as generator-level or detector simulation. We now demonstrate the efficient running of I/O-heavy analysis workloads on HPC facilities at NERSC, for the ATLAS and ALICE LHC collaborations as well as astronomical image analysis for DESI and BOSS. To do this we exploit a new 900 TB NVRAM-based storage system recently installed at NERSC, termed a Burst Buffer. This is a novel approach to HPC storage that builds on-demand filesystems on all-SSD hardware that is placed on the high-speed network of the new Cori supercomputer. We describe the hardware and software involved in this system, and give an overview of its capabilities, before focusing in detail on how the ATLAS, ALICE and astronomical workflows were adapted to work on this system. We describe these modifications and the resulting performance results, including comparisons to other filesystems. We demonstrate that we can meet the challenging I/O requirements of HEP experiments and scale to many thousands of cores accessing a single shared storage system.

  5. Vapor deposition polymerization of aniline on 3D hierarchical porous carbon with enhanced cycling stability as supercapacitor electrode

    NASA Astrophysics Data System (ADS)

    Zhao, Yufeng; Zhang, Zhi; Ren, Yuqin; Ran, Wei; Chen, Xinqi; Wu, Jinsong; Gao, Faming

    2015-07-01

    In this work, a polyaniline-coated hierarchical porous carbon (HPC) composite (PANI@HPC) is developed using a vapor deposition polymerization technique. The as-synthesized composite is applied as a supercapacitor electrode material and presents a high specific capacitance of 531 F g⁻¹ at a current density of 0.5 A g⁻¹ and superior cycling stability (96.1% retention after 10,000 charge-discharge cycles at a current density of 10 A g⁻¹). This can be attributed to the maximized synergistic effect of PANI and HPC. Furthermore, an aqueous symmetric supercapacitor device based on PANI@HPC is fabricated, demonstrating a high specific energy of 17.3 Wh kg⁻¹.
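
    For context, the specific capacitance and specific energy quoted above are conventionally extracted from galvanostatic charge-discharge data using the standard relations below (textbook relations, not formulas taken from this paper):

        % Specific capacitance from a galvanostatic discharge at current I,
        % discharge time \Delta t, potential window \Delta V, electrode mass m:
        C_{sp} = \frac{I\,\Delta t}{m\,\Delta V}\quad [\mathrm{F\,g^{-1}}]
        % Specific energy of a symmetric cell with cell capacitance C_{cell}
        % (the factor 3.6 converts J g^{-1} to Wh kg^{-1}):
        E = \frac{C_{cell}\,V^{2}}{2\times 3.6}\quad [\mathrm{Wh\,kg^{-1}}]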

  6. Providing a parallel and distributed capability for JMASS using SPEEDES

    NASA Astrophysics Data System (ADS)

    Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob

    2002-07-01

    The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be expanded for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high-speed communications. By providing support for these services, JMASS can better address modeling domains requiring parallel, computationally intensive calculations, such as clutter, vulnerability, and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    East, D. R.; Sexton, J.

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and IBM TJ Watson Research Center to research, assess feasibility and develop an implementation plan for a High Performance Computing Innovation Center (HPCIC) in the Livermore Valley Open Campus (LVOC). The ultimate goal of this work was to help advance the State of California and U.S. commercial competitiveness in the arena of High Performance Computing (HPC) by accelerating the adoption of computational science solutions, consistent with recent DOE strategy directives. The desired result of this CRADA was a well-researched, carefully analyzed market evaluation that would identify those firms in core sectors of the US economy seeking to adopt or expand their use of HPC to become more competitive globally, and to define how those firms could be helped by the HPCIC with IBM as an integral partner.

  8. In Situ Methods, Infrastructures, and Applications on High Performance Computing Platforms, a State-of-the-art (STAR) Report

    DOE PAGES

    Bethel, EW; Bauer, A; Abbasi, H; ...

    2016-06-10

    The considerable interest in the high performance computing (HPC) community regarding analyzing and visualizing data without first writing it to disk, i.e., in situ processing, is due to several factors. First is an I/O cost saving, where data is analyzed/visualized while being generated, without first being stored to a filesystem. Second is the potential for increased accuracy, where fine temporal sampling of transient analysis might expose complex behavior missed by coarse temporal sampling. Third is the ability to use all available resources, CPUs and accelerators, in the computation of analysis products. This STAR paper brings together researchers, developers and practitioners using in situ methods in extreme-scale HPC with the goal of presenting existing methods, infrastructures, and a range of computational science and engineering applications using in situ analysis and visualization.

  9. Secure Enclaves: An Isolation-centric Approach for Creating Secure High Performance Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A.; Hicks, Susan Elaine

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows, to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system, even when processing data at a lower security level, and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves, poses significant challenges for the use of shared infrastructure in HPC environments. This report details the current state of the art in virtualization, reconfigurable network enclaving via Software Defined Networking (SDN), and storage architectures and bridging techniques for creating secure enclaves in HPC environments.

  10. [Biological behavior of hypopharyngeal carcinoma].

    PubMed

    Zhou, L X

    1997-01-01

    Hypopharyngeal squamous cell carcinoma (HPC) has an extremely poor prognosis. Characteristics of cell lines of head and neck squamous cell carcinomas, including HPC, were studied by various methods, e.g., a chemosensitivity test and immunohistochemistry staining, to determine whether this poor prognosis is due to the biological behavior of this cancer. An HPC cell line was found to be resistant to the antitumor drugs PEP, MTX and CPM and moderately sensitive to CDDP, 5-FU and ADM. Thermoresistance to hyperthermic treatment and weak expression of ICAM-1 on the HPC cell line were observed. DNA synthesis by the HPC cell line was induced by stimulation with a low concentration of EGF, and the amount of EGFR on these HPC cells was very high. In addition, cyclin D1 overexpression was found in the HPC cell line. Based on the above findings, further analysis of hypopharyngeal carcinoma cells and the development of a new treatment modality to control tumor growth and the metastatic factors influencing the poor outcome are necessary to improve the prognosis of this cancer.

  11. Recurrent extradural hemangiopericytoma of thoracic spine: a case report.

    PubMed

    Jayashankar, Erukkambattu; Prabhala, Shailaja; Raju, Subodh; Tanikella, Ramamurti

    2014-01-01

    Hemangiopericytoma (HPC) is a rare tumor that arises from pericapillary cells, or pericytes of Zimmerman. In the central nervous system it accounts for less than 1% of tumors, and spinal involvement is very rare. Meningeal hemangiopericytomas show morphological similarities with meningiomas, particularly angiomatous meningioma, where immunohistochemistry (IHC) is needed to distinguish HPC from meningioma. Here, we report a case of recurrent extradural HPC in a 16-year-old girl who, 5 years earlier, had received a pathological diagnosis of angiomatous meningioma for a D5-D6 lesion. On evaluation, magnetic resonance imaging (MRI) showed a large extradural tumor with significant cord compression involving the D5-D6 vertebral bodies, pedicles and ribs. Excision of the lesion and spinal stabilization were performed. The histopathological examination and immunohistochemistry performed on tumor sections revealed features favoring HPC. To conclude, detailed IHC is helpful in avoiding misdiagnosis and in the further management of the patient.

  12. Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Lauer, Frank

    This work focuses on tools for investigating algorithm performance at extreme scale with millions of concurrent threads and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper is able to provide this capability using a parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale, and its performance properties can be evaluated. The results of an initial prototype are encouraging as a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.
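
    The 'hello world' MPI benchmark mentioned above is, in conventional form, just a few lines; the version below uses mpi4py rather than C and is an ordinary MPI example, not the paper's simulator code.

        # A plain MPI "hello world" of the kind scaled to >10^6 virtual ranks
        # in the simulator described above.  Requires an MPI library + mpi4py.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        print(f"hello from rank {comm.Get_rank()} of {comm.Get_size()}")

    It would typically be launched with something like "mpirun -n 4 python hello_mpi.py" (launcher name and count depend on the installation).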

  13. Allocation Usage Tracking and Management | High-Performance Computing |

    Science.gov Websites

    NREL's high-performance computing (HPC) systems, learn how to track and manage your allocations. The alloc_tracker script (/usr/local/bin/alloc_tracker) may be used to see what allocations you have access to, how much of the allocation has been used, how much remains and how many node hours will be forfeited at the

  14. High-Performance Computing Act of 1990: Report of the Senate Committee on Commerce, Science, and Transportation on S. 1067.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Commerce, Science, and Transportation.

    This committee report is intended to accompany S. 1067, a bill designed to provide for a coordinated federal research program in high-performance computing (HPC). The primary objective of the legislation is given as the acceleration of research, development, and application of the most advanced computing technology in research, education, and…

  15. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Pernice

    2010-09-01

    INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

  16. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    PubMed

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.
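
    JMS mediates between the web interface and the cluster resource manager; purely as an illustration of that mediation (not JMS's actual API or code), a minimal submission step might look like the sketch below, assuming a Torque/PBS-style qsub on the path and a hypothetical batch script name.

        # Illustrative sketch only: hand a generated batch script to a
        # PBS/Torque-style resource manager and capture the job identifier.
        import subprocess

        def submit_job(script_path: str) -> str:
            """Submit a batch script with qsub and return the job ID string."""
            result = subprocess.run(["qsub", script_path],
                                    capture_output=True, text=True, check=True)
            return result.stdout.strip()     # e.g. "12345.headnode"

        # job_id = submit_job("stage_01_alignment.pbs")  # hypothetical workflow stage

    A workflow manager of this kind then polls the resource manager for job state and collects output files once each stage completes.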

  17. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    PubMed Central

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  18. HPC USER WORKSHOP - JUNE 12TH | High-Performance Computing | NREL

    Science.gov Websites

    to CentOS 7, changes to modules management, Singularity and containers on Peregrine, with the remaining two hours dedicated to demos and one-on-one interaction as needed

  19. Development of a device to evaluate the cracking potential of concrete mixtures.

    DOT National Transportation Integrated Search

    2011-08-01

    Developments in material technology during past decades, including the introduction of a wide range of concrete mixtures, ingredients, and combinations, led to the development of high-performance concrete (HPC). However, despite advances in techn...

  20. Implementing HPC on the Sunshine Bridge project

    DOT National Transportation Integrated Search

    2007-11-01

    This report presents the research work from a pilot program regarding the feasibility of implementing high performance concrete on Arizona bridge decks, using the Sunshine Bridge in Holbrook, Arizona as a test case. An existing concrete slab was remo...

  1. Abrasion-resistant concrete mix designs for precast bridge deck panels.

    DOT National Transportation Integrated Search

    2010-08-01

    The report documents laboratory investigations undertaken to develop high performance concrete (HPC) for precast and pre-stressed bridge deck components that would reduce the life-cycle cost of bridges by improving the studded tire wear (abrasion) re...

  2. Implementing HPC on the Sunshine Bridge Project

    DOT National Transportation Integrated Search

    2007-11-16

    This report presents the research work from a pilot program regarding the feasibility of implementing high performance concrete on Arizona bridge decks, using the Sunshine Bridge in Holbrook, Arizona as a test case. An existing concrete slab was remo...

  3. Opportunities for nonvolatile memory systems in extreme-scale high-performance computing

    DOE PAGES

    Vetter, Jeffrey S.; Mittal, Sparsh

    2015-01-12

    For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.

  4. 77 FR 30371 - Airworthiness Directives; International Aero Engines AG Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-23

    ... (USIs) of certain high-pressure compressor (HPC) stage 3 to 8 drums, and replacement of drum attachment... Condition This AD results from reports of 50 additional high-pressure compressor (HPC) stage 3 to 8 drums...

  5. Exploiting Parallel R in the Cloud with SPRINT

    PubMed Central

    Piotrowski, M.; McGilvary, G.A.; Sloan, T. M.; Mewissen, M.; Lloyd, A.D.; Forster, T.; Mitchell, L.; Ghazal, P.; Hill, J.

    2012-01-01

    Background Advances in DNA Microarray devices and next-generation massively parallel DNA sequencing platforms have led to an exponential growth in data availability, but the arising opportunities require adequate computing resources. High Performance Computing (HPC) in the Cloud offers an affordable way of meeting this need. Objectives Bioconductor, a popular tool for high-throughput genomic data analysis, is distributed as add-on modules for the R statistical programming language, but R has no native capabilities for exploiting multi-processor architectures. SPRINT is an R package that enables easy access to HPC for genomics researchers. This paper investigates: setting up and running SPRINT-enabled genomic analyses on Amazon's Elastic Compute Cloud (EC2), the advantages of submitting applications to EC2 from different parts of the world and whether resource underutilization can improve application performance. Methods The SPRINT parallel implementations of correlation, permutation testing, partitioning around medoids and the multi-purpose papply have been benchmarked on data sets of various sizes on Amazon EC2. Jobs have been submitted from both the UK and Thailand to investigate monetary differences. Results It is possible to obtain good, scalable performance, but the level of improvement is dependent upon the nature of the algorithm. Resource underutilization can further improve the time to result. The end-user's location impacts costs due to factors such as local taxation. Conclusions Although not designed to satisfy HPC requirements, Amazon EC2 and cloud computing in general provide an interesting alternative and new possibilities for smaller organisations with limited funds. PMID:23223611

  6. Exploiting parallel R in the cloud with SPRINT.

    PubMed

    Piotrowski, M; McGilvary, G A; Sloan, T M; Mewissen, M; Lloyd, A D; Forster, T; Mitchell, L; Ghazal, P; Hill, J

    2013-01-01

    Advances in DNA Microarray devices and next-generation massively parallel DNA sequencing platforms have led to an exponential growth in data availability, but the arising opportunities require adequate computing resources. High Performance Computing (HPC) in the Cloud offers an affordable way of meeting this need. Bioconductor, a popular tool for high-throughput genomic data analysis, is distributed as add-on modules for the R statistical programming language, but R has no native capabilities for exploiting multi-processor architectures. SPRINT is an R package that enables easy access to HPC for genomics researchers. This paper investigates: setting up and running SPRINT-enabled genomic analyses on Amazon's Elastic Compute Cloud (EC2), the advantages of submitting applications to EC2 from different parts of the world and whether resource underutilization can improve application performance. The SPRINT parallel implementations of correlation, permutation testing, partitioning around medoids and the multi-purpose papply have been benchmarked on data sets of various sizes on Amazon EC2. Jobs have been submitted from both the UK and Thailand to investigate monetary differences. It is possible to obtain good, scalable performance, but the level of improvement is dependent upon the nature of the algorithm. Resource underutilization can further improve the time to result. The end-user's location impacts costs due to factors such as local taxation. Although not designed to satisfy HPC requirements, Amazon EC2 and cloud computing in general provide an interesting alternative and new possibilities for smaller organisations with limited funds.
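
    SPRINT's parallel permutation testing is implemented in R on top of MPI; the sketch below only illustrates the underlying idea of farming permutations out to several processors, using Python's multiprocessing rather than SPRINT's API, with made-up data.

        # Concept sketch (not SPRINT): a two-sample permutation test whose
        # permutations are distributed across local processor cores.
        import numpy as np
        from multiprocessing import Pool

        rng = np.random.default_rng(0)
        x = rng.normal(0.5, 1.0, 50)     # made-up "treatment" values
        y = rng.normal(0.0, 1.0, 50)     # made-up "control" values
        observed = x.mean() - y.mean()
        pooled = np.concatenate([x, y])

        def perm_stat(seed):
            r = np.random.default_rng(seed)
            p = r.permutation(pooled)
            return p[:len(x)].mean() - p[len(x):].mean()

        if __name__ == "__main__":
            with Pool() as pool:
                null = pool.map(perm_stat, range(10000))
            p_value = (np.sum(np.abs(null) >= abs(observed)) + 1) / (len(null) + 1)
            print(f"p = {p_value:.4f}")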

  7. Influence of temperature and relative humidity conditions on the pan coating of hydroxypropyl cellulose molded capsules.

    PubMed

    Macchi, Elena; Zema, Lucia; Pandey, Preetanshu; Gazzaniga, Andrea; Felton, Linda A

    2016-03-01

    In a previous study, hydroxypropyl cellulose (HPC)-based capsular shells prepared by injection molding and intended for pulsatile release were successfully coated with a 10 mg/cm² Eudragit® L film. The suitability of HPC capsules for the development of a colon delivery platform based on a time-dependent approach was demonstrated. In the present work, data logging devices (PyroButton®) were used to monitor the microenvironmental conditions, i.e. temperature (T) and relative humidity (RH), during coating processes performed under different spray rates (1.2, 2.5 and 5.5 g/min). As HPC-based capsules present special features, a preliminary study was conducted on commercially available gelatin capsules for comparison purposes. By means of PyroButton data-loggers it was possible to acquire information about the impact of the effective T and RH conditions experienced by HPC substrates during the process on the technological properties and release performance of the coated systems. The use of increasing spray rates seemed to promote a tendency of the HPC shells to slightly swell at the beginning of the spraying process; moreover, capsules coated under spray rates of 1.2 and 2.5 g/min showed the desired release performance, i.e. the ability to withstand the acidic media followed by the pulsatile release expected for uncoated capsules. Preliminary stability studies seemed to show that coating conditions might also influence the release performance of the system upon storage. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Assessment of Material Solutions of Multi-level Garage Structure Within Integrated Life Cycle Design Process

    NASA Astrophysics Data System (ADS)

    Wałach, Daniel; Sagan, Joanna; Gicala, Magdalena

    2017-10-01

    The paper presents an environmental and economic analysis of the material solutions for a multi-level garage. The construction project approach considered a reinforced concrete structure using either ordinary concrete or high-performance concrete (HPC). The use of HPC allowed a significant reduction of reinforcing steel, mainly in compression elements (columns), in the construction of the object. The analysis includes elements of the methodology of integrated life cycle design (ILCD). Through a multi-criteria analysis based on established weights for the economic and environmental parameters, three solutions have been evaluated and compared within the material production phase (information modules A1-A3).

  9. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    NASA Astrophysics Data System (ADS)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that if the number of cores used is not a multiple of the number of cores per cluster node, some allocation strategies provide more efficient calculations than others.
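
    As a small illustration of the node-alignment point above, the helper below rounds a requested core count up to a multiple of the cores available per node so that no node is left partially occupied; the counts are arbitrary examples, not the cluster used in the study.

        # Toy helper: align an MPI core request to whole nodes.
        def align_to_nodes(requested_cores: int, cores_per_node: int) -> int:
            """Round the request up to the next multiple of cores_per_node."""
            nodes = -(-requested_cores // cores_per_node)   # ceiling division
            return nodes * cores_per_node

        print(align_to_nodes(50, 16))   # -> 64 (4 full nodes)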

  10. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Chen, Zizhong; Song, Shuaiwen

    2016-01-18

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.
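
    The power saving from undervolting follows from the standard dynamic-power relation for CMOS logic (a textbook relation, not a formula taken from this report):

        P_{\mathrm{dyn}} \approx \alpha\, C\, V_{dd}^{2}\, f

    Lowering the supply voltage V_dd at a fixed frequency f therefore reduces dynamic power roughly quadratically, at the cost of a higher failure rate that the resilience layer must absorb.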

  11. Investigating the Interplay between Energy Efficiency and Resilience in High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Song, Shuaiwen; Wu, Panruo

    2015-05-29

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  12. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Chen, Zizhong; Song, Shuaiwen Leon

    2015-11-16

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  13. A review of transfusion practice before, during, and after hematopoietic progenitor cell transplantation

    PubMed Central

    Johnson, Viviana V.; Sandler, S. Gerald; Sayegh, Antoine; Klumpp, Thomas R.

    2008-01-01

    The increased use of hematopoietic progenitor cell (HPC) transplantation has implications and consequences for transfusion services: not only in hospitals where HPC transplantations are performed, but also in hospitals that do not perform HPC transplantations but manage patients before or after transplantation. Candidates for HPC transplantation have specific and specialized transfusion requirements before, during, and after transplantation that are necessary to avert the adverse consequences of alloimmunization to human leukocyte antigens, immunohematologic consequences of ABO-mismatched transplantations, or immunosuppression. Decisions concerning blood transfusions during any of these times may compromise the outcome of an otherwise successful transplantation. Years after an HPC transplantation, and even during clinical remission, recipients may continue to be immunosuppressed and may have critically important, special transfusion requirements. Without a thorough understanding of these special requirements, provision of compatible blood components may be delayed and often urgent transfusion needs prohibit appropriate consultation with the patient's transplantation specialist. To optimize the relevance of issues and communication between clinical hematologists, transplantation physicians, and transfusion medicine physicians, the data and opinions presented in this review are organized by sequence of patient presentation, namely, before, during, and after transplantation. PMID:18583566

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A.; Hicks, Susan Elaine

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows, to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system, even when processing data at a lower security level, and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves, poses significant challenges for the use of shared infrastructure in HPC environments. This report details the current state of the art in reconfigurable network enclaving through Software Defined Networking (SDN) and Network Function Virtualization (NFV) and their applicability to secure enclaves in HPC environments. SDN and NFV methods are built on a foundation of system-wide virtualization. Their purpose is straightforward: the system administrator can deploy networks that are more amenable to customer needs and at the same time achieve increased scalability, making it easier to increase overall capacity as needed without negatively affecting functionality. The network administration of both the server system and the virtual subsystems is simplified, allowing control of the infrastructure through well-defined APIs (Application Programming Interfaces). While SDN and NFV technologies offer significant promise in meeting these goals, they also provide the ability to address a significant component of the multi-tenant challenge in HPC environments, namely resource isolation. Traditional HPC systems are built upon scalable high-performance networking technologies designed to meet specific application requirements, and dynamic isolation of resources within these environments has remained difficult to achieve. SDN and NFV methodology provides relevant concepts and open, standards-based APIs that isolate compute and storage resources within an otherwise common networking infrastructure. Additionally, the integration of these networking APIs within larger system frameworks such as OpenStack provides the tools necessary to establish isolated enclaves dynamically, allowing the benefits of HPC while providing a controlled security structure surrounding these systems.

  15. High-capacity adsorption of Cr(VI) from aqueous solution using a hierarchical porous carbon obtained from pig bone.

    PubMed

    Wei, Shaochen; Li, Dongtian; Huang, Zhe; Huang, Yaqin; Wang, Feng

    2013-04-01

    A hierarchical porous carbon obtained from pig bone (HPC) was utilized as the adsorbent for removal of Cr(VI) from aqueous solution. The effects of solution pH value, concentration of Cr(VI), and adsorption temperature on the removal of Cr(VI) were investigated. The experimental data for the HPC fitted the Langmuir isotherm well, and the adsorption kinetics followed a pseudo-second-order model. Compared with a commercial activated carbon adsorbent (Norit CGP), the HPC showed a high adsorption capacity for Cr(VI). The maximum Cr(VI) adsorption capacity of the HPC was 398.40 mg/g at pH 2. Desorption experiments indicated that part of the Cr(VI) was reduced to Cr(III) on the adsorbent surface. Regeneration tests showed that the adsorption capacity of the HPC still reached 92.70 mg/g even after the fifth adsorption cycle. Copyright © 2013 Elsevier Ltd. All rights reserved.
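
    For reference, the Langmuir isotherm and pseudo-second-order kinetic model cited above have the following standard forms (q_e is the equilibrium uptake, C_e the equilibrium concentration, q_max and K_L the Langmuir constants, q_t the uptake at time t, and k_2 the rate constant; these are general relations, not values from this study):

        q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e}
        \qquad
        \frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}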

  16. The clinical phenotype of hereditary versus sporadic prostate cancer: HPC definition revisited.

    PubMed

    Cremers, Ruben G; Aben, Katja K; van Oort, Inge M; Sedelaar, J P Michiel; Vasen, Hans F; Vermeulen, Sita H; Kiemeney, Lambertus A

    2016-07-01

    The definition of hereditary prostate cancer (HPC) is based on family history and age at onset. Intuitively, HPC is a serious subtype of prostate cancer, but there are only limited data on the clinical phenotype of HPC. Here, we aimed to compare the prognosis of HPC to the sporadic form of prostate cancer (SPC). HPC patients were identified through a national registry of HPC families in the Netherlands, selecting patients diagnosed from the year 2000 onward (n = 324). SPC patients were identified from the Netherlands Cancer Registry (NCR) between 2003 and 2006 for a population-based study into the genetic susceptibility of PC (n = 1,664). Detailed clinical data were collected by NCR registrars using a standardized registration form. Follow-up extended to the end of 2013. Differences between the groups were evaluated by cross-tabulations and tested for statistical significance while accounting for familial dependency of observations by GEE. Differences in progression-free and overall survival were evaluated using χ² testing with GEE in a proportional-hazards model. HPC patients were on average 3 years younger at diagnosis, had lower PSA values, lower Gleason scores, and more often locally confined disease. Of the HPC patients, 35% had high-risk disease (NICE criteria) versus 51% of the SPC patients. HPC patients were less often treated with active surveillance. Kaplan-Meier 5-year progression-free survival after radical prostatectomy was comparable for HPC (78%) and SPC (74%; P = 0.30). The 5-year overall survival was 85% (95%CI 81-89%) for HPC versus 80% (95%CI 78-82%) for SPC (P = 0.03). HPC has a favorable clinical phenotype, but patients more often underwent radical treatment. The major limitation of HPC is the absence of a genetics-based definition of HPC, which may lead to over-diagnosis of PC in men with a family history of prostate cancer. The HPC definition should, therefore, be re-evaluated, aiming at a reduction of over-diagnosis and overtreatment among men with multiple relatives diagnosed with PC. Prostate 76:897-904, 2016. © 2016 The Authors. The Prostate published by Wiley Periodicals, Inc.

  17. ALDH1 is an immunohistochemical diagnostic marker for solitary fibrous tumours and haemangiopericytomas of the meninges emerging from gene profiling study

    PubMed Central

    2013-01-01

    Background Solitary Fibrous Tumours (SFT) and haemangiopericytomas (HPC) are rare meningeal tumours that have to be distinguished from meningiomas and more rarely from synovial sarcomas. We recently found that ALDH1A1 was overexpressed in SFT and HPC as compared to soft tissue sarcomas. Using whole-genome DNA microarrays, we defined the gene expression profiles of 16 SFT/HPC (9 HPC and 7 SFT). Expression profiles were compared to publicly available expression profiles of additional SFT or HPC, meningiomas and synovial sarcomas. We also performed an immunohistochemical (IHC) study with anti-ALDH1 and anti-CD34 antibodies on Tissue Micro-Arrays including 38 SFT (25 meningeal and 13 extrameningeal), 55 meningeal haemangiopericytomas (24 grade II, 31 grade III), 163 meningiomas (86 grade I, 62 grade II, 15 grade III) and 98 genetically confirmed synovial sarcomas. Results ALDH1A1 gene was overexpressed in SFT/HPC, as compared to meningiomas and synovial sarcomas. These findings were confirmed at the protein level. 84% of the SFT and 85.4% of the HPC were positive with anti-ALDH1 antibody, while only 7.1% of synovial sarcomas and 1.2% of meningiomas showed consistent expression. Positivity was usually more diffuse in SFT/HPC compared to other tumours with more than 50% of tumour cells immunostained in 32% of SFT and 50.8% of HPC. ALDH1 was a sensitive and specific marker for the diagnosis of SFT (SE = 84%, SP = 98.8%) and HPC (SE = 84.5%, SP = 98.7%) of the meninges. In association with CD34, ALDH1 expression had a specificity and positive predictive value of 100%. Conclusion We show that ALDH1, a stem cell marker, is an accurate diagnostic marker for SFT and HPC, which improves the diagnostic value of CD34. ALDH1 could also be a new therapeutic target for these tumours which are not sensitive to conventional chemotherapy. PMID:24252471

  18. Connecting to HPC VPN | High-Performance Computing | NREL

    Science.gov Websites

    and password will match your NREL network account login/password. From OS X or Linux, open a terminal. Open a Remote Desktop connection using server name WINHPC02 (this is the login node).

  19. Structural monitoring of Rigolets Pass Bridge : LTRC technical summary report 437.

    DOT National Transportation Integrated Search

    2009-09-01

    The Louisiana Department of Transportation and Development (LADOTD) has been gradually introducing high performance concrete (HPC) into their bridge construction programs. The Rigolets Pass Bridge is a 62-span bridge with a total length of 5,489 ...

  20. Fatigue and shear behavior of HPC bulb tee girders : LTRC technical summary report.

    DOT National Transportation Integrated Search

    2008-04-01

    The objectives of the research were (1) to provide assurance that full size, deep prestressed concrete girders made with HPC would perform satisfactorily under flexural fatigue, static shear, and static flexural loading conditions; (2) to determine i...

  1. RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chokchai "Box" Leangsuksun

    2011-05-31

    Our project is a multi-institutional research effort that adopts the interplay of RELIABILITY, AVAILABILITY, and SERVICEABILITY (RAS) aspects for solving resilience issues in high-end scientific computing on the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.

  2. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially increasing data volumes at NCI. Traditional HPC and data environments are still made available in a way that flexibly provides the tools, services and supporting software systems on these new petascale infrastructures. But to enable the research to take place at this scale, the data, metadata and software now need to evolve together - creating a new integrated high performance infrastructure. The new infrastructure at NCI currently supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. One of the challenges for NCI has been to support existing techniques and methods, while carefully preparing the underlying infrastructure for the transition needed for the next class of Data-intensive Science. In doing so, a flexible range of techniques and software can be made available for application across the corpus of data collections available, providing a new infrastructure for future interdisciplinary research.

  3. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both a geospatial raster database management system and a raster data processing platform, from a domain-specific perspective as well as from a computing point of view. It also discusses the need for tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global-scale, high-performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model and supports image compression, data manipulation, general and spatial indices, content- and context-based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  4. Assessment of current cybersecurity practices in the public domain : cyber indications and warnings domain.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamlet, Jason R.; Keliiaa, Curtis M.

    This report assesses current public domain cyber security practices with respect to cyber indications and warnings. It describes cybersecurity industry and government activities, including cybersecurity tools, methods, practices, and international and government-wide initiatives known to be impacting current practice. Of particular note are the U.S. Government's Trusted Internet Connection (TIC) and 'Einstein' programs, which are serving to consolidate the Government's internet access points and to provide some capability to monitor and mitigate cyber attacks. Next, this report catalogs activities undertaken by various industry and government entities. In addition, it assesses the benchmarks of HPC capability and other HPC attributes that may lend themselves to assist in the solution of this problem. This report draws few conclusions, as it is intended to assess current practice in preparation for future work; however, no explicit references to HPC usage for the purpose of analyzing cyber infrastructure in near real time were found in current practice. This report and a related report, SAND2010-4766, National Cyber Defense High Performance Computing and Analysis: Concepts, Planning and Roadmap, are intended to provoke discussion throughout a broad audience about developing a cohesive HPC-centric solution to wide-area cybersecurity problems.

  5. Low-viscosity hydroxypropylcellulose (HPC) grades SL and SSL: versatile pharmaceutical polymers for dissolution enhancement, controlled release, and pharmaceutical processing.

    PubMed

    Sarode, Ashish; Wang, Peng; Cote, Catherine; Worthen, David R

    2013-03-01

    Hydroxypropylcellulose (HPC)-SL and -SSL, low-viscosity hydroxypropylcellulose polymers, are versatile pharmaceutical excipients. The utility of HPC polymers was assessed for both dissolution enhancement and sustained release of pharmaceutical drugs using various processing techniques. The BCS class II drugs carbamazepine (CBZ), hydrochlorthiazide, and phenytoin (PHT) were hot melt mixed (HMM) with various polymers. PHT formulations produced by solvent evaporation (SE) and ball milling (BM) were prepared using HPC-SSL. HMM formulations of BCS class I chlorpheniramine maleate (CPM) were prepared using HPC-SL and -SSL. These solid dispersions (SDs) manufactured using different processes were evaluated for amorphous transformation and dissolution characteristics. Drug degradation because of HMM processing was also assessed. Amorphous conversion using HMM could be achieved only for relatively low-melting CBZ and CPM. SE and BM did not produce amorphous SDs of PHT using HPC-SSL. Chemical stability of all the drugs was maintained using HPC during the HMM process. Dissolution enhancement was observed in HPC-based HMMs and compared well to other polymers. The dissolution enhancement of PHT was in the order of SE>BM>HMM>physical mixtures, as compared to the pure drug, perhaps due to more intimate mixing that occurred during SE and BM than in HMM. Dissolution of CPM could be significantly sustained in simulated gastric and intestinal fluids using HPC polymers. These studies revealed that low-viscosity HPC-SL and -SSL can be employed to produce chemically stable SDs of poorly as well as highly water-soluble drugs using various pharmaceutical processes in order to control drug dissolution.

  6. 75 FR 67253 - Airworthiness Directives; Pratt & Whitney (PW) Models PW4074 and PW4077 Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-02

    ... high-pressure compressor (HPC) disks, part number (P/N) 55H615, installed. This proposed AD would... & Whitney (PW) PW4074 and PW4077 turbofan engines with 15th stage high-pressure compressor (HPC) disks, part...

  7. 77 FR 10952 - Airworthiness Directives; CFM International S.A. Model CFM56 Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-24

    .../N) and serial number (SN) high-pressure compressor (HPC) 4-9 spools installed. In Table 1 of the AD, the HPC 4-9 spool SN GWN05AMO in the 2nd column of the Table is incorrect. This document corrects that...), currently requires removing certain HPC 4-9 spools listed by P/N and SN in the AD. As published, in Table 1...

  8. The impact of α-Lipoic acid on cell viability and expression of nephrin and ZNF580 in normal human podocytes.

    PubMed

    Leppert, Ulrike; Gillespie, Allan; Orphal, Miriam; Böhme, Karen; Plum, Claudia; Nagorsen, Kaj; Berkholz, Janine; Kreutz, Reinhold; Eisenreich, Andreas

    2017-09-05

    Human podocytes (hPC) are essential for maintaining normal kidney function, and dysfunction or loss of hPC plays a pivotal role in the manifestation and progression of chronic kidney diseases, including diabetic nephropathy. Previously, α-Lipoic acid (α-LA), a licensed drug for the treatment of diabetic neuropathy, was shown to exhibit protective effects on diabetic nephropathy in vivo. However, the effect of α-LA on hPC under non-diabetic conditions is unknown. Therefore, we analyzed the impact of α-LA on cell viability and the expression of nephrin and zinc finger protein 580 (ZNF580) in normal hPC in vitro. Protein analyses were done via Western blot techniques. Cell viability was determined using a functional assay. hPC viability was dynamically modulated by α-LA stimulation in a concentration-dependent manner. This was associated with reduced nephrin and ZNF580 expression and increased nephrin phosphorylation in normal hPC. Moreover, α-LA reduced nephrin and ZNF580 protein expression via inhibition of nuclear factor 'kappa-light-chain-enhancer' of activated B-cells (NF-κB). These data demonstrate that low α-LA concentrations had no negative influence on hPC viability, whereas high α-LA concentrations induced cytotoxic effects on normal hPC and reduced nephrin and ZNF580 expression via NF-κB inhibition. These data provide the first information about potential cytotoxic effects of α-LA on hPC under non-diabetic conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Leveraging the Power of High Performance Computing for Next Generation Sequencing Data Analysis: Tricks and Twists from a High Throughput Exome Workflow

    PubMed Central

    Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter

    2015-01-01

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438
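
    The paper stresses the stability and robustness of automated high-throughput workflows on shared HPC systems; one generic robustness measure of that kind, shown purely as an illustration and not as the authors' implementation, is to retry pipeline steps that fail transiently before aborting the whole workflow. The command in the usage comment is hypothetical.

        # Generic sketch: retry a shell-based workflow step a few times
        # before declaring the pipeline stage failed.
        import subprocess
        import time

        def run_with_retries(cmd, attempts=3, wait_s=60):
            for attempt in range(1, attempts + 1):
                result = subprocess.run(cmd, capture_output=True, text=True)
                if result.returncode == 0:
                    return result.stdout
                time.sleep(wait_s)          # back off before retrying
            raise RuntimeError(f"step failed after {attempts} attempts: {cmd}")

        # run_with_retries(["bwa", "mem", "ref.fa", "sample.fastq"])  # illustrative only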

  10. New Challenges of the Computation of Multiple Sequence Alignments in the High-Throughput Era (2010 JGI/ANL HPC Workshop)

    ScienceCinema

    Notredame, Cedric

    2018-05-02

    Cedric Notredame from the Centre for Genomic Regulation gives a presentation on New Challenges of the Computation of Multiple Sequence Alignments in the High-Throughput Era at the JGI/Argonne HPC Workshop on January 26, 2010.

  11. Towards Cloud-based Asynchronous Elasticity for Iterative HPC Applications

    NASA Astrophysics Data System (ADS)

    da Rosa Righi, Rodrigo; Facco Rodrigues, Vinicius; André da Costa, Cristiano; Kreutz, Diego; Heiss, Hans-Ulrich

    2015-10-01

    Elasticity is one of the key features of cloud computing. It allows applications to dynamically scale computing and storage resources, avoiding over- and under-provisioning. In high performance computing (HPC), elasticity initiatives are normally modeled to handle bag-of-tasks or key-value applications through a load balancer and a loosely coupled set of virtual machine (VM) instances. In the joint field of Message Passing Interface (MPI) and tightly coupled HPC applications, we observe the need to rewrite source code, to have prior knowledge of the application, and/or to use stop-reconfigure-and-go approaches to address cloud elasticity. Besides, there are problems related to how to profit from this new feature in the HPC scope, since in MPI 2.0 applications the programmers need to handle communicators by themselves, and a sudden consolidation of a VM, together with its process, can compromise the entire execution. To address these issues, we propose a PaaS-based elasticity model, named AutoElastic. It acts as a middleware that allows iterative HPC applications to take advantage of dynamic resource provisioning of cloud infrastructures without any major modification. AutoElastic provides a new concept denoted here as asynchronous elasticity, i.e., it provides a framework to allow applications to either increase or decrease their computing resources without blocking the current execution. The feasibility of AutoElastic is demonstrated through a prototype that runs a CPU-bound numerical integration application on top of the OpenNebula middleware. The results showed a saving of about 3 minutes for each scaling-out operation, emphasizing the contribution of the new concept in contexts where seconds are precious.
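
    A minimal sketch of the asynchronous-elasticity idea (monitoring load and requesting resources without blocking the running computation) is shown below; the thresholds, monitoring probe, and provisioning call are all hypothetical placeholders, not AutoElastic's interface.

        # Conceptual sketch of asynchronous elasticity: a monitor thread asks
        # the cloud for more (or fewer) VMs while the application keeps running.
        import threading
        import time
        import random

        UPPER, LOWER = 0.9, 0.3          # hypothetical CPU-load thresholds

        def current_load():
            return random.random()       # placeholder for a real monitoring probe

        def request_scale(direction):
            print(f"[elasticity] requesting scale-{direction} (non-blocking)")

        def elasticity_monitor(stop, interval_s=5):
            while not stop.is_set():
                load = current_load()
                if load > UPPER:
                    request_scale("out")
                elif load < LOWER:
                    request_scale("in")
                time.sleep(interval_s)

        if __name__ == "__main__":
            stop = threading.Event()
            threading.Thread(target=elasticity_monitor, args=(stop,), daemon=True).start()
            time.sleep(12)               # stands in for the iterative computation
            stop.set()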

  12. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    NASA Astrophysics Data System (ADS)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH - a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies - a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method - it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.

  13. KITTEN Lightweight Kernel 0.1 Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.

  14. Climate Science Performance, Data and Productivity on Titan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, Benjamin W; Worley, Patrick H; Gaddis, Abigail L

    2015-01-01

    Climate Science models are flagship codes for the largest of high performance computing (HPC) resources, both in visibility, with the newly launched Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) effort, and in terms of significant fractions of system usage. The performance of the DOE ACME model is captured with application-level timers and examined through a sizeable run archive. Performance and variability of compute, queue time and ancillary services are examined. As Climate Science advances in the use of HPC resources, there has been an increase in the human and data systems required to achieve program goals. A description of current workflow processes (hardware, software, human) and planned automation of the workflow, along with historical and projected usage of data in motion and data at rest, is detailed. The combination of these two topics motivates a description of future systems requirements for DOE Climate Modeling efforts, focusing on the growth of data storage and the network and disk bandwidth required to handle data at an acceptable rate.

  15. A case of metastatic haemangiopericytoma to the thyroid gland: Case report and literature review

    PubMed Central

    PROIETTI, AGNESE; SARTORI, CHIARA; TORREGROSSA, LIBORIO; VITTI, PAOLO; AGHABABYAN, ALEKSANDR; FREGOLI, LORENZO; MICCOLI, PAOLO; BASOLO, FULVIO

    2012-01-01

    Haemangiopericytoma (HPC) is a mesenchymal neoplasm accounting for a minority of all vascular tumours. HPC mostly arises in the lower extremities and the retroperitoneum, while the head and neck area is the third most common site. The majority of HPCs are histologically benign. However, a small percentage possess atypical features, such as a high mitotic rate, high cellularity and foci of necrosis. We report a case of classical abdominal HPC that presented 7 years after the first surgical resection with thyroid metastases of malignant HPC. Microscopic examination revealed multiple hypercellular nodules with an infiltrative growth pattern. These nodules consisted of tightly packed fusiform or spindle-shaped cells with nuclear polymorphism and an increased mitotic rate. The tumour cells exhibited a marked expression of CD34. Cells were arranged around a prominent vascular network, occasionally with a ‘staghorn’ configuration. The results of this study support and confirm the theory that HPC is a rare neoplasm with unpredictable behaviour, as largely debated in the international literature. Therefore, this study emphasized the importance of applying strict diagnostic criteria in making the most appropriate diagnosis. PMID:22783428

  16. Quantifying effectiveness of failure prediction and response in HPC systems : methodology and example.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre

    2010-06-01

    Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.
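
    To make the cost-benefit question concrete, here is a back-of-the-envelope sketch (not the paper's methodology) comparing expected lost work per failure for interval checkpointing versus prediction-triggered action, under an assumed predictor recall and illustrative costs.

```python
# Back-of-the-envelope sketch: expected lost work per failure with
# interval-driven vs. prediction-driven checkpoints, under an assumed
# predictor recall. All numbers are illustrative, not measurements.
def expected_loss_per_failure(interval_h, recall, proactive_cost_h):
    # Without prediction: on average half a checkpoint interval is lost.
    baseline = interval_h / 2.0
    # With prediction: predicted failures (fraction = recall) lose only the
    # proactive action cost (migration/checkpoint); missed ones lose as before.
    predicted = recall * proactive_cost_h + (1.0 - recall) * baseline
    return baseline, predicted


if __name__ == "__main__":
    base, pred = expected_loss_per_failure(interval_h=4.0, recall=0.6,
                                           proactive_cost_h=0.25)
    print(f"lost work/failure: baseline {base:.2f} h, with predictor {pred:.2f} h")
```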

  17. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Transport Protocol (Transmission Control Protocol/User Datagram Protocol [TCP/UDP]) Analysis

    DTIC Science & Technology

    2015-09-01

    Subject terms: tactical networks, data reduction, high-performance computing, data analysis, big data.

  18. Accessible high performance computing solutions for near real-time image processing for time critical applications

    NASA Astrophysics Data System (ADS)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming commonplace, and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index, which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA-enabled GPU workstation. The reference platform is a dual-CPU, quad-core workstation, and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring the various hardware solutions and the related software coding effort are presented.
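
    For readers unfamiliar with GLCM texture measures, the sketch below computes a co-occurrence contrast statistic in a moving window with plain NumPy; it illustrates only the per-pixel workload, not the PANTEX implementation (which uses anisotropic, rotation-invariant statistics).

```python
# Illustrative sketch of a grey-level co-occurrence matrix (GLCM) contrast
# measure in a moving window -- the kind of per-pixel texture workload that
# maps well to blade servers or GPUs.
import numpy as np


def glcm_contrast(window, levels=16, dx=1, dy=0):
    # Quantise the window, count co-occurring grey-level pairs at the offset
    # (dx, dy), then compute the contrast statistic sum((i - j)^2 * p(i, j)).
    if window.max() == 0:
        return 0.0
    q = np.floor(window / window.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    p = glcm / glcm.sum()
    i, j = np.indices((levels, levels))
    return float(((i - j) ** 2 * p).sum())


def texture_map(image, win=9):
    half = win // 2
    out = np.zeros_like(image, dtype=float)
    for r in range(half, image.shape[0] - half):
        for c in range(half, image.shape[1] - half):
            out[r, c] = glcm_contrast(image[r - half:r + half + 1,
                                            c - half:c + half + 1])
    return out


if __name__ == "__main__":
    img = np.random.randint(0, 255, (64, 64)).astype(float)
    print("mean contrast:", texture_map(img).mean())
```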

  19. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE PAGES

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    2018-04-17

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically-rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.
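
    The object storage layer described above is not reproduced here; as a point of reference, the sketch below shows the kind of MPI-IO/HDF5 baseline the authors compare against, with each rank writing its own hyperslab through parallel h5py (requires an MPI-enabled h5py build; the file and dataset names are illustrative).

```python
# Sketch of a collective parallel HDF5 baseline: each MPI rank writes its own
# disjoint slab of a shared dataset. Run under mpiexec with an MPI-built h5py.
from mpi4py import MPI
import h5py
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
local_n = 1_000_000                      # particles owned by this rank

with h5py.File("particles.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("x", (size * local_n,), dtype="f4")
    # Hyperslab owned by this rank; ranks write disjoint, contiguous regions.
    start = rank * local_n
    dset[start:start + local_n] = np.random.rand(local_n).astype("f4")
```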

  20. I/O load balancing for big data HPC applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, Arnab K.; Goyal, Arpit; Wang, Feiyi

    High Performance Computing (HPC) big data problems require efficient distributed storage systems. However, at scale, such storage systems often experience load imbalance and resource contention due to two factors: the bursty nature of scientific application I/O, and the complex I/O path, which lacks centralized arbitration and control. For example, the extant Lustre parallel file system, which supports many HPC centers, comprises numerous components connected via custom network topologies, and serves varying demands of a large number of users and applications. Consequently, some storage servers can be more loaded than others, which creates bottlenecks and reduces overall application I/O performance. Existing solutions typically focus on per-application load balancing, and thus are not as effective given their lack of a global view of the system. In this paper, we propose a data-driven approach to load balance the I/O servers at scale, targeted at Lustre deployments. To this end, we design a global mapper on the Lustre Metadata Server, which gathers runtime statistics from key storage components on the I/O path, and applies Markov chain modeling and a minimum-cost maximum-flow algorithm to decide where data should be placed. Evaluation using a realistic system simulator and a real setup shows that our approach yields better load balancing, which in turn can improve end-to-end performance.
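
    The paper's Markov-chain model and Lustre integration are not shown here; the toy sketch below illustrates only the minimum-cost maximum-flow formulation for placement, using networkx with made-up targets, capacities, and load-based costs.

```python
# Toy sketch: assign new data units to storage targets with a minimum-cost
# maximum-flow formulation, where edge costs grow with a target's current
# load so lightly loaded servers are preferred.
import networkx as nx

requests = ["req0", "req1", "req2", "req3"]       # units of new data to place
targets = {"ost0": 10, "ost1": 60, "ost2": 30}    # current load per target (%)

G = nx.DiGraph()
for r in requests:
    G.add_edge("src", r, capacity=1, weight=0)
for t, load in targets.items():
    G.add_edge(t, "sink", capacity=2, weight=0)   # per-target placement cap
    for r in requests:
        G.add_edge(r, t, capacity=1, weight=load) # cost ~ current load

flow = nx.max_flow_min_cost(G, "src", "sink")
for r in requests:
    chosen = [t for t, f in flow[r].items() if f > 0]
    print(r, "->", chosen[0])
```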

  1. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically-rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.

  2. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Time Tagging the Data

    DTIC Science & Technology

    2015-09-01

    The process described in this report made use of posttest processing techniques to provide packet-level time tagging with an accuracy close to 3 µs relative to Coordinated Universal Time (UTC) for each set of test records.

  3. Clearing your Desk! Software and Data Services for Collaborative Web Based GIS Analysis

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Gichamo, T.; Yildirim, A. A.; Liu, Y.

    2015-12-01

    Can your desktop computer crunch the large GIS datasets that are becoming increasingly common across the geosciences? Do you have access to or the know-how to take advantage of advanced high performance computing (HPC) capability? Web based cyberinfrastructure takes work off your desk or laptop computer and onto infrastructure or "cloud" based data and processing servers. This talk will describe the HydroShare collaborative environment and web based services being developed to support the sharing and processing of hydrologic data and models. HydroShare supports the upload, storage, and sharing of a broad class of hydrologic data including time series, geographic features and raster datasets, multidimensional space-time data, and other structured collections of data. Web service tools and a Python client library provide researchers with access to HPC resources without requiring them to become HPC experts. This reduces the time and effort spent in finding and organizing the data required to prepare the inputs for hydrologic models and facilitates the management of online data and execution of models on HPC systems. This presentation will illustrate the use of web based data and computation services from both the browser and desktop client software. These web-based services implement the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation, generation of hydrology-based terrain information, and preparation of hydrologic model inputs. They allow users to develop scripts on their desktop computer that call analytical functions that are executed completely in the cloud, on HPC resources using input datasets stored in the cloud, without installing specialized software, learning how to use HPC, or transferring large datasets back to the user's desktop. These cases serve as examples for how this approach can be extended to other models to enhance the use of web and data services in the geosciences.

  4. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected, large-scale, high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will involve the further integration and analysis of this data across the social sciences to facilitate understanding of impacts across the societal domain, including timely analysis to more accurately predict and forecast future climate and environmental state.

  5. High Performance Computing Operations Review Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cupps, Kimberly C.

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  6. FLAME: A platform for high performance computing of complex systems, applied for three case studies

    DOE PAGES

    Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...

    2011-01-01

    FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are often hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine; FLAME overcomes both. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.

  7. Self-service for software development projects and HPC activities

    NASA Astrophysics Data System (ADS)

    Husejko, M.; Høimyr, N.; Gonzalez, A.; Koloventzos, G.; Asbury, D.; Trzcinska, A.; Agtzidis, I.; Botrel, G.; Otto, J.

    2014-05-01

    This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions both by users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user-facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source development such as Sourceforge, GitHub and others. Furthermore, the contribution will discuss recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.

  8. Effect of cooking methods on selected physicochemical and nutritional properties of barlotto bean, chickpea, faba bean, and white kidney bean.

    PubMed

    Güzel, Demet; Sayar, Sedat

    2012-02-01

    The effects of atmospheric pressure cooking (APC) and high-pressure cooking (HPC) on the physicochemical and nutritional properties of barlotto bean, chickpea, faba bean, and white kidney bean were investigated. The hardness of the legumes cooked by APC or HPC was not statistically different (P > 0.05). APC resulted in a higher percentage of seed coat splits than HPC. Both cooking methods decreased the Hunter "L" value significantly (P < 0.05). The "a" and "b" values of dark-colored seeds decreased after cooking, while these values tended to increase for the light-colored seeds. The total amounts of solids lost from legume seeds were higher after HPC compared with APC. Rapidly digestible starch (RDS) percentages increased considerably after both cooking methods. High-pressure-cooked legumes had higher levels of resistant starch (RS) but lower levels of slowly digestible starch (SDS) than the atmospheric-pressure-cooked legumes.

  9. Scaling GDL for Multi-cores to Process Planck HFI Beams Monte Carlo on HPC

    NASA Astrophysics Data System (ADS)

    Coulais, A.; Schellens, M.; Duvert, G.; Park, J.; Arabas, S.; Erard, S.; Roudier, G.; Hivon, E.; Mottet, S.; Laurent, B.; Pinter, M.; Kasradze, N.; Ayad, M.

    2014-05-01

    After reviewing the major progress made in GDL (now at 0.9.4) on performance and plotting capabilities since the ADASS XXI paper (Coulais et al. 2012), we detail how a large code for Planck HFI beams Monte Carlo was successfully transposed from IDL to GDL on HPC.

  10. When to Renew Software Licences at HPC Centres? A Mathematical Analysis

    NASA Astrophysics Data System (ADS)

    Baolai, Ge; MacIsaac, Allan B.

    2010-11-01

    In this paper we study a common problem faced by many high performance computing (HPC) centres: when and how to renew commercial software licences. Software vendors often sell perpetual licences along with forward update and support contracts at an additional, annual cost. Every year or so, software support personnel and the budget units of HPC centres are required to decide whether or not to renew such support, and usually such decisions are made intuitively. A continuing support contract can, however, be costly. One might therefore want a rational answer to the question of whether the option for a renewal should be exercised and when. In an attempt to study this problem within a market framework, we present the mathematical problem derived for the day-to-day operation of a hypothetical HPC centre that charges for the use of software packages. In the mathematical model, we assume that the uncertainty comes from the demand, i.e., the number of users using the packages, as well as the price. Further, we assume that the availability of up-to-date software versions may also affect the demand. We develop a renewal strategy that aims to maximize the expected profit from the use of the software under consideration. The derived problem involves a decision tree, which constitutes a numerical procedure that can be processed in parallel.
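
    As a rough companion to the model described above (not the paper's decision-tree formulation), the sketch below Monte Carlo-estimates the expected annual profit of renewing versus not renewing under an assumed demand distribution and an assumed demand penalty for running stale software; every figure is illustrative.

```python
# Illustrative Monte Carlo sketch: expected annual profit from renewing a
# support contract vs. not, under assumed demand and pricing. All figures
# are made up for illustration.
import random

RENEWAL_COST = 20_000.0        # annual support/update contract
PRICE_PER_USE = 50.0           # what the centre charges per licensed job
DEMAND_MEAN = 600.0            # expected jobs/year with up-to-date software
STALENESS_FACTOR = 0.8         # assumed demand retained if not renewed


def expected_profit(renew, trials=100_000):
    total = 0.0
    for _ in range(trials):
        demand = random.expovariate(1.0 / DEMAND_MEAN)
        if not renew:
            demand *= STALENESS_FACTOR
        total += demand * PRICE_PER_USE - (RENEWAL_COST if renew else 0.0)
    return total / trials


if __name__ == "__main__":
    print("renew:      ", round(expected_profit(True)))
    print("don't renew:", round(expected_profit(False)))
```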

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei

    Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales and in many spectroscopic modes, and now--with the rise of multimodal acquisition systems and the associated processing capability--the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as the necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides materials scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and to manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).

  12. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor-based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences in the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.

  13. Video quality assessment using motion-compensated temporal filtering and manifold feature similarity

    PubMed Central

    Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju

    2017-01-01

    A well-performing video quality assessment (VQA) method should be consistent with the human visual system for better prediction accuracy. In this paper, we propose a VQA method using motion-compensated temporal filtering (MCTF) and manifold feature similarity. To be more specific, a group of frames (GoF) is first decomposed into a temporal high-pass component (HPC) and a temporal low-pass component (LPC) by MCTF. Following this, manifold feature learning (MFL) and phase congruency (PC) are used to predict the quality of the temporal LPC and the temporal HPC, respectively. The quality measures of the LPC and the HPC are then combined as the GoF quality. A temporal pooling strategy is subsequently used to integrate GoF qualities into an overall video quality. The proposed VQA method appropriately processes temporal information in video by MCTF and the temporal pooling strategy, and simulates human visual perception by MFL. Experiments on a publicly available video quality database showed that, in comparison with several state-of-the-art VQA methods, the proposed method achieves better consistency with subjective video quality and can predict video quality more accurately. PMID:28445489
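
    A simplified sketch of the pipeline's skeleton is given below: it splits a group of frames into temporal low-pass and high-pass components, scores each, combines them, and pools over time. The motion compensation, manifold feature learning, and phase congruency of the actual method are replaced by placeholder computations.

```python
# Simplified skeleton of the GoF-based pipeline; per-component "quality"
# functions are placeholders for the paper's MFL and PC measures.
import numpy as np


def temporal_split(gof):
    # gof: array of shape (frames, H, W). A plain temporal average stands in
    # for the motion-compensated low-pass band of the real method.
    lpc = gof.mean(axis=0)
    hpc = gof - lpc                      # residual as the high-pass component
    return lpc, hpc


def component_quality(ref, dist):
    # Placeholder similarity score in [0, 1].
    err = np.mean((ref - dist) ** 2)
    return 1.0 / (1.0 + err)


def gof_quality(ref_gof, dist_gof, w=0.5):
    r_l, r_h = temporal_split(ref_gof)
    d_l, d_h = temporal_split(dist_gof)
    return w * component_quality(r_l, d_l) + (1 - w) * component_quality(r_h, d_h)


def video_quality(ref, dist, gof_len=8):
    scores = [gof_quality(ref[i:i + gof_len], dist[i:i + gof_len])
              for i in range(0, len(ref) - gof_len + 1, gof_len)]
    return float(np.mean(scores))        # simple temporal pooling


if __name__ == "__main__":
    ref = np.random.rand(32, 48, 64)
    dist = ref + 0.05 * np.random.randn(32, 48, 64)
    print(round(video_quality(ref, dist), 3))
```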

  14. Fine-needle aspiration cytology of hemangiopericytoma: A report of five cases.

    PubMed

    Chhieng, D; Cohen, J M; Waisman, J; Fernandez, G; Cangiarella, J

    1999-08-25

    Hemangiopericytoma (HPC) is a relatively rare neoplasm, accounting for approximately 2.5% of all soft tissue tumors. Its histopathology has been well documented but to the authors' knowledge reports regarding its fine-needle aspiration (FNA) cytology rarely are encountered. In the current study the authors report the cytologic findings in FNA specimens from nine confirmed cases of HPC and attempt to correlate the cytologic features with the biologic outcomes. FNA was performed with or without radiologic guidance. Corresponding sections of tissue were reviewed in conjunction with the cytologic preparations. Nine FNAs were performed in 5 patients (3 men and 2 women) with an age range of 38-77 years (mean, 56 years). Two lesions were primary soft tissue lesions arising in the lower extremities; seven were recurrent or metastatic lesions from bone (one lesion), kidney (one lesion), pelvic fossa (one lesion), lower extremities (two lesions), trunk (one lesion), and breast (one lesion). All aspirates were cellular and were comprised of single and tightly packed clusters of oval to spindle-shaped cells aggregated around branched capillaries. Basement membrane material was observed in 6 cases (67%). The nuclei were uniform and oval, with finely granular chromatin and inconspicuous nucleoli in all cases except one. No mitotic figures or areas of necrosis were identified. A correct diagnosis of HPC was made on one primary lesion and all recurrent or metastatic lesions. HPCs show a spindle cell pattern in cytologic preparations and must be distinguished from more common spindle cell lesions. The presence of branched capillaries and abundant basement membrane material supports a diagnosis of HPC. Immunohistochemistry and electron microscopy performed on FNA samples may be helpful in the differential diagnosis. FNA is a useful and accurate tool with which to confirm recurrent or metastatic HPC; however, prediction of the biologic behavior of HPC based on cytologic features is not feasible. Cancer (Cancer Cytopathol) Copyright 1999 American Cancer Society.

  15. Evaluation of surface resistivity measurements as an alternative to the rapid chloride permeability test for quality assurance and acceptance : technical summary.

    DOT National Transportation Integrated Search

    2011-07-01

    This project investigated the use of a surface resistivity device as an indication of concrete's ability to resist chloride ion penetration for use in quality assurance (QA) and acceptance of high performance concrete (HPC). : The objectives of thi...

  16. GridKit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peles, Slaven

    2016-11-06

    GridKit is a software development kit for interfacing power systems and power grid application software with high performance computing (HPC) libraries developed at National Labs and academia. It is also intended as an interoperability layer between different numerical libraries. GridKit is not a standalone application, but comes with a suite of test examples illustrating possible usage.

  17. Bioinformatics and Astrophysics Cluster (BinAc)

    NASA Astrophysics Data System (ADS)

    Krüger, Jens; Lutz, Volker; Bartusch, Felix; Dilling, Werner; Gorska, Anna; Schäfer, Christoph; Walter, Thomas

    2017-09-01

    BinAC provides central high performance computing capacities for bioinformaticians and astrophysicists from the state of Baden-Württemberg. The bwForCluster BinAC is part of the implementation concept for scientific computing for the universities in Baden-Württemberg. Community specific support is offered through the bwHPC-C5 project.

  18. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package

    PubMed Central

    2012-01-01

    Background Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. Results In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Conclusions Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org. PMID:23281941

  19. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package.

    PubMed

    El-Kalioby, Mohamed; Abouelhoda, Mohamed; Krüger, Jan; Giegerich, Robert; Sczyrba, Alexander; Wall, Dennis P; Tonellato, Peter

    2012-01-01

    Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org.

  20. Who watches the watchers?: preventing fault in a fault tolerance library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanavige, C. D.

    The Scalable Checkpoint/Restart library (SCR) was developed and is used by researchers at Lawrence Livermore National Laboratory to provide a fast and efficient method of saving and recovering large applications during runtime on high-performance computing (HPC) systems. Though SCR protects other programs, up until June 2017, nothing was actively protecting SCR. The goal of this project was to automate the building and testing of this library on the varying HPC architectures on which it is used. Our methods centered around the use of a continuous integration tool called Bamboo that allowed automation agents to be installed on the HPC systems themselves. These agents allowed us to establish a new way to automate and customize the allocation of resources and the running of tests with CMake's unit testing framework, CTest, as well as integration testing scripts through an HPC package manager called Spack. These methods provided a parallel environment in which to test the more complex features of SCR. As a result, SCR is now automatically built and tested on several HPC architectures any time changes are made by developers to the library's source code. The results of these tests are then communicated back to the developers for immediate feedback, allowing them to fix SCR functionality that may have broken. Hours of developers' time are now being saved from the tedious process of manually testing and debugging, which saves money and allows the SCR project team to focus their efforts on development. Thus, HPC system users can use SCR in conjunction with their own applications to efficiently and effectively checkpoint and restart as needed with the assurance that SCR itself is functioning properly.

  1. The Effect of Apatinib on the Metabolism of Carvedilol Both in vitro and in vivo.

    PubMed

    Lin, Dan; Wang, Zhe; Li, Junwei; Wang, Li; Wang, Shuanghu; Hu, Guo-Xin; Liu, Xinshe

    2016-01-01

    In light of the growing number of cancer survivors, the incidence of cardiovascular complications in these patients has also increased, yet the effect of apatinib on the pharmacokinetics of the cardioprotective drug carvedilol in rats or humans is still unknown. The aim of the present work was to study the impact of apatinib on the metabolism of carvedilol both in vitro and in vivo. A specific and sensitive ultra-performance liquid chromatography tandem mass spectrometry method was applied to determine the concentrations of carvedilol and its metabolites (4'-hydroxyphenyl carvedilol [4'-HPC], 5'-hydroxyphenyl carvedilol [5'-HPC] and o-desmethyl carvedilol [o-DMC]). The inhibition ratios in human liver microsomes were 10.28, 10.89 and 5.94% for 4'-HPC, 5'-HPC and o-DMC, respectively, while in rat liver microsomes they were 3.22, 1.58 and 1.81%, respectively. The in vitro data from rat microsomes were consistent with the in vivo data, in which the formation of 4'-HPC and 5'-HPC was inhibited more strongly than in the control group. Our study showed that apatinib could significantly inhibit the formation of carvedilol metabolites in both human and rat liver microsomes. It is recommended that the effect of apatinib on the metabolism of carvedilol should be noted and that carvedilol plasma concentration should be monitored. © 2015 S. Karger AG, Basel.

  2. Collective input/output under memory constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Yin; Chen, Yong; Zhuang, Yu

    2014-12-18

    Compared with current high-performance computing (HPC) systems, exascale systems are expected to have much less memory per node, which can significantly reduce necessary collective input/output (I/O) performance. In this study, we introduce a memory-conscious collective I/O strategy that takes into account memory capacity and bandwidth constraints. The new strategy restricts aggregation data traffic within disjointed subgroups, coordinates I/O accesses in intranode and internode layers, and determines I/O aggregators at run time considering memory consumption among processes. We have prototyped the design and evaluated it with commonly used benchmarks to verify its potential. The evaluation results demonstrate that this strategy holds promise in mitigating the memory pressure, alleviating the contention for memory bandwidth, and improving the I/O performance for projected extreme-scale systems. Given the importance of supporting increasingly data-intensive workloads and projected memory constraints on increasingly larger scale HPC systems, this new memory-conscious collective I/O can have a significant positive impact on scientific discovery productivity.
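
    As an illustration of the aggregator-selection idea only (not the paper's implementation), the sketch below picks one I/O aggregator per node-level subgroup, preferring the process with the most memory headroom; the ranks, node mapping, and budgets are made up.

```python
# Sketch: choose one I/O aggregator per node-level subgroup, preferring
# processes whose current memory use leaves the most headroom for buffers.
def choose_aggregators(procs, node_of, mem_used_mb, mem_budget_mb):
    """procs: list of ranks; node_of: rank -> node id;
    mem_used_mb: rank -> current usage (MB); mem_budget_mb: per-rank capacity."""
    groups = {}
    for r in procs:
        groups.setdefault(node_of[r], []).append(r)
    aggregators = {}
    for node, ranks in groups.items():
        # Candidates must still fit an aggregation buffer; pick the emptiest.
        fitting = [r for r in ranks if mem_budget_mb - mem_used_mb[r] > 0]
        aggregators[node] = min(fitting or ranks, key=lambda r: mem_used_mb[r])
    return aggregators


if __name__ == "__main__":
    ranks = list(range(8))
    node_of = {r: r // 4 for r in ranks}                  # 4 ranks per node
    usage = [900, 300, 750, 500, 200, 880, 640, 310]
    mem_used = {r: usage[r] for r in ranks}
    print(choose_aggregators(ranks, node_of, mem_used, mem_budget_mb=1024))
```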

  3. MemAxes: Visualization and Analytics for Characterizing Complex Memory Performance Behaviors.

    PubMed

    Gimenez, Alfredo; Gamblin, Todd; Jusufi, Ilir; Bhatele, Abhinav; Schulz, Martin; Bremer, Peer-Timo; Hamann, Bernd

    2018-07-01

    Memory performance is often a major bottleneck for high-performance computing (HPC) applications. Deepening memory hierarchies, complex memory management, and non-uniform access times have made memory performance behavior difficult to characterize, and users require novel, sophisticated tools to analyze and optimize this aspect of their codes. Existing tools target only specific factors of memory performance, such as hardware layout, allocations, or access instructions. However, today's tools do not suffice to characterize the complex relationships between these factors. Further, they require advanced expertise to be used effectively. We present MemAxes, a tool based on a novel approach for analytic-driven visualization of memory performance data. MemAxes uniquely allows users to analyze the different aspects related to memory performance by providing multiple visual contexts for a centralized dataset. We define mappings of sampled memory access data to new and existing visual metaphors, each of which enables a user to perform different analysis tasks. We present methods to guide user interaction by scoring subsets of the data based on known performance problems. This scoring is used to provide visual cues and automatically extract clusters of interest. We designed MemAxes in collaboration with experts in HPC and demonstrate its effectiveness in case studies.

  4. Peregrine Transition from CentOS6 to CentOS7 | High-Performance Computing |

    Science.gov Websites

    Users should consider them primarily as examples, which they can copy and modify for their own use with HPC environments. This can permit one-step access to pre-existing complex software stacks. This is not a highly suggested mechanism, but might serve for one-time needs.

  5. Meningeal hemangiopericytoma and solitary fibrous tumors carry the NAB2-STAT6 fusion and can be diagnosed by nuclear expression of STAT6 protein.

    PubMed

    Schweizer, Leonille; Koelsche, Christian; Sahm, Felix; Piro, Rosario M; Capper, David; Reuss, David E; Pusch, Stefan; Habel, Antje; Meyer, Jochen; Göck, Tanja; Jones, David T W; Mawrin, Christian; Schittenhelm, Jens; Becker, Albert; Heim, Stephanie; Simon, Matthias; Herold-Mende, Christel; Mechtersheimer, Gunhild; Paulus, Werner; König, Rainer; Wiestler, Otmar D; Pfister, Stefan M; von Deimling, Andreas

    2013-05-01

    Non-central nervous system hemangiopericytoma (HPC) and solitary fibrous tumor (SFT) are considered by pathologists as two variants of a single tumor entity now subsumed under the entity SFT. Recent detection of frequent NAB2-STAT6 fusions in both, HPC and SFT, provided additional support for this view. On the other hand, current neuropathological practice still distinguishes between HPC and SFT. The present study set out to identify genes involved in the formation of meningeal HPC. We performed exome sequencing and detected the NAB2-STAT6 fusion in DNA of 8/10 meningeal HPC thereby providing evidence of close relationship of these tumors with peripheral SFT. Due to the considerable effort required for exome sequencing, we sought to explore surrogate markers for the NAB2-STAT6 fusion protein. We adopted the Duolink proximity ligation assay and demonstrated the presence of NAB2-STAT6 fusion protein in 17/17 HPC and the absence in 15/15 meningiomas. More practical, presence of the NAB2-STAT6 fusion protein resulted in a strong nuclear signal in STAT6 immunohistochemistry. The nuclear reallocation of STAT6 was detected in 35/37 meningeal HPC and 25/25 meningeal SFT but not in 87 meningiomas representing the most important differential diagnosis. Tissues not harboring the NAB2-STAT6 fusion protein presented with nuclear expression of NAB2 and cytoplasmic expression of STAT6 proteins. In conclusion, we provide strong evidence for meningeal HPC and SFT to constitute variants of a single entity which is defined by NAB2-STAT6 fusion. In addition, we demonstrate that this fusion can be rapidly detected by STAT6 immunohistochemistry which shows a consistent nuclear reallocation. This immunohistochemical assay may prove valuable for the differentiation of HPC and SFT from other mesenchymal neoplasms.

  6. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    PubMed

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS intermediates the access to registered tools by providing front-end and back-end web services. Programmers can install applications in HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Registered BOWS applications can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run in HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
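
    The sketch below illustrates the back-end polling pattern described above in a purely hypothetical form; the service URL, JSON fields, and tool invocation are placeholders, not the actual BOWS API.

```python
# Hypothetical back-end worker: poll a central service for new jobs, run a
# registered tool, and post the result back. Endpoint and fields are made up.
import subprocess
import time

import requests

SERVICE = "https://example.org/bows-backend"     # placeholder endpoint


def poll_forever(tool_cmd, interval_s=30):
    while True:
        job = requests.get(f"{SERVICE}/next-job", timeout=10).json()
        if job:
            out = subprocess.run(tool_cmd + job.get("args", []),
                                 capture_output=True, text=True)
            requests.post(f"{SERVICE}/results/{job['id']}",
                          json={"stdout": out.stdout, "rc": out.returncode},
                          timeout=10)
        time.sleep(interval_s)


if __name__ == "__main__":
    poll_forever(["blastp", "-help"])   # illustrative tool invocation
```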

  7. Final Report Extreme Computing and U.S. Competitiveness DOE Award. DE-FG02-11ER26087/DE-SC0008764

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mustain, Christopher J.

    The Council has acted on each of the grant deliverables during the funding period. The deliverables are: (1) convening the Council’s High Performance Computing Advisory Committee (HPCAC) on a bi-annual basis; (2) broadening public awareness of high performance computing (HPC) and exascale developments; (3) assessing the industrial applications of extreme computing; and (4) establishing a policy and business case for an exascale economy.

  8. Towards real-time remote processing of laparoscopic video

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivering of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real-time is essential for performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
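
    A quick check of the figures quoted above, showing how the per-frame size and frame rate translate into the sustained throughput and per-frame latency budget (decimal megabytes assumed):

```python
# Back-of-the-envelope budget for real-time remote frame processing.
frame_mb = 11.9          # MB per video frame (from the abstract)
fps = 30                 # frames per second

throughput_mb_s = frame_mb * fps                  # ~357 MB/s, close to the 360 MB/s cited
round_trip_budget_ms = 1000.0 / fps               # ~33.3 ms to process and return a frame
link_gbps_needed = throughput_mb_s * 8 / 1000.0   # ~2.9 Gb/s sustained (one direction)

print(f"{throughput_mb_s:.0f} MB/s, {round_trip_budget_ms:.1f} ms per frame, "
      f"~{link_gbps_needed:.1f} Gb/s link")
```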

  9. Innovative HPC architectures for the study of planetary plasma environments

    NASA Astrophysics Data System (ADS)

    Amaya, Jorge; Wolf, Anna; Lembège, Bertrand; Zitz, Anke; Alvarez, Damian; Lapenta, Giovanni

    2016-04-01

    DEEP-ER is a European Commission-funded project that develops a new type of High Performance Computer architecture. The revolutionary system is currently used by KU Leuven to study the effects of the solar wind on the global environments of the Earth and Mercury. The new architecture combines the versatility of Intel Xeon computing nodes with the power of the upcoming Intel Xeon Phi accelerators. Contrary to classical heterogeneous HPC architectures, where it is customary to find CPUs and accelerators in the same computing nodes, in the DEEP-ER system CPU nodes are grouped together (the Cluster) independently from the accelerator nodes (the Booster). The system is equipped with a state-of-the-art interconnection network, highly scalable and fast I/O, and a failure-recovery resiliency system. The final objective of the project is to introduce a scalable system that can be used to create the next generation of exascale supercomputers. The code iPic3D from KU Leuven is being adapted to this new architecture. This particle-in-cell code can now perform the computation of the electromagnetic fields on the Cluster while the particles are moved on the Booster side. Using fast and scalable Xeon Phi accelerators in the Booster, we can introduce many more particles per cell in the simulation than is possible in the current generation of HPC systems, allowing fully kinetic plasmas to be calculated with very low interpolation noise. The system will be used to perform fully kinetic, low-noise, 3D simulations of the interaction of the solar wind with the magnetospheres of the Earth and Mercury. Preliminary simulations have been performed in other HPC centers in order to compare the results on different systems. In this presentation we show the complexity of the plasma flow around the planets, including the development of hydrodynamic instabilities at the flanks, the presence of the collision-less shock, the magnetosheath, the magnetopause, reconnection zones, the formation of the plasma sheet and the magnetotail, and the variation of ion/electron plasma flows when crossing these frontiers. The simulations also give access to detailed information about the particle dynamics and their velocity distributions at locations that can be used for comparison with satellite data.

  10. High-pressure coolant effect on the surface integrity of machining titanium alloy Ti-6Al-4V: a review

    NASA Astrophysics Data System (ADS)

    Liu, Wentao; Liu, Zhanqiang

    2018-03-01

    Machinability improvement of titanium alloy Ti-6Al-4V is challenging in academic and industrial applications owing to its low thermal conductivity, low elasticity modulus and high chemical affinity at high temperatures. The surface integrity of titanium alloy Ti-6Al-4V is prominent in estimating the quality of machined components. The surface topography (surface defects and surface roughness) and the residual stress induced by machining Ti-6Al-4V play pivotal roles in the sustainability of Ti-6Al-4V components. High-pressure coolant (HPC) is a potential choice for meeting the requirements for the manufacture and application of Ti-6Al-4V. This paper reviews the progress towards improving Ti-6Al-4V surface integrity under HPC. Various studies of surface integrity characteristics have been reported. In particular, surface roughness, surface defects, residual stress as well as work hardening are investigated in order to evaluate machined surface quality. Several coolant parameters (including coolant type, coolant pressure and the injection position) deserve investigation to provide guidance for a satisfactory machined surface. The review also provides a clear roadmap for applications of HPC in machining Ti-6Al-4V. Experimental studies and analyses are reviewed to better understand surface integrity under the HPC machining process. A distinct discussion is presented regarding the limitations of, and prospects for, machining Ti-6Al-4V under HPC.

  11. Effect of crospovidone and hydroxypropyl cellulose on carbamazepine in high-dose tablet formulation.

    PubMed

    Flicker, Felicia; Betz, Gabriele

    2012-06-01

    The aim of this study was to develop a high-dose tablet formulation of the poorly soluble carbamazepine (CBZ) with sufficient tablet hardness and immediate drug release. A further aim was to investigate the influence of various commercial CBZ raw materials on the optimized tablet formulation. Hydroxypropyl cellulose (HPC-SL) was selected as a dry binder and crospovidone (CrosPVP) as a superdisintegrant. A direct-compaction tablet formulation of 70% CBZ was optimized by a 3² full factorial design with two input variables, HPC (0-10%) and CrosPVP (0-5%). Response variables included disintegration time, amount of drug released at 15 and 60 min, and tablet hardness, all analyzed according to USP 31. Increasing HPC-SL together with CrosPVP not only increased tablet hardness but also reduced disintegration time. Optimal conditions were achieved in the range of 5-9% HPC and 3-5% CrosPVP, where tablet properties were at least 70 N tablet hardness, less than 1 min disintegration, and within the USP requirements for drug release. When the optimized formulation was tested with four different commercial CBZ samples, their variability was still observed. Nonetheless, all formulations conformed to the USP specifications. With the excipients CrosPVP and HPC-SL, an immediate-release tablet formulation was successfully developed for high-dose CBZ from various commercial sources.
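
    The experimental design itself is easy to reconstruct; the sketch below enumerates the nine runs of a 3² full factorial in the two stated factors, with mid-levels assumed at the range midpoints and responses left to the laboratory data:

```python
# Nine runs of a 3^2 full factorial design in HPC-SL (0-10%) and crospovidone
# (0-5%); mid-levels at range midpoints are an assumption for illustration.
from itertools import product

hpc_levels = [0.0, 5.0, 10.0]       # % HPC-SL
crospvp_levels = [0.0, 2.5, 5.0]    # % crospovidone

design = list(product(hpc_levels, crospvp_levels))
for run, (hpc, crospvp) in enumerate(design, start=1):
    print(f"run {run}: HPC-SL {hpc:>4}%  CrosPVP {crospvp:>3}%")
```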

  12. OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics

    NASA Astrophysics Data System (ADS)

    Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.

    2014-12-01

    OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures to provide efficient Web-based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data, higher order data products, as well as user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allows us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high-performance disk storage (SSD) for the hot areas and less expensive slower disk for the cold ones, thereby optimizing price-to-performance. From a compute perspective, OT is looking at cloud-based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute-intensive workloads like parallel computation of hydrologic routing on high-resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. With a growing user base and maturing scientific user community come new requests for algorithms and processing capabilities. To address this demand, OT is developing an extensible service-based architecture for integrating community-developed software. This "pluggable" approach to Web service deployment will enable new processing and analysis tools to run collocated with OT hosted data.
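
    The hot/cold tiering idea can be pictured with a toy access-count classifier. This is a sketch only: the tile identifiers, counts and threshold are invented, and OT's actual metrics pipeline is certainly more involved.

      #include <map>
      #include <string>
      #include <vector>
      #include <cstdio>

      int main() {
        // Access counts per dataset tile, as would be accumulated from request logs (toy data).
        std::map<std::string, long> accesses = {
          {"tileA", 12000}, {"tileB", 35}, {"tileC", 870}, {"tileD", 3}};
        const long hot_threshold = 500;  // illustrative cutoff between hot and cold

        std::vector<std::string> ssd_tier, hdd_tier;
        for (const auto& [tile, count] : accesses)
          (count >= hot_threshold ? ssd_tier : hdd_tier).push_back(tile);

        for (const auto& t : ssd_tier) std::printf("hot  -> fast SSD tier: %s\n", t.c_str());
        for (const auto& t : hdd_tier) std::printf("cold -> slower disk tier: %s\n", t.c_str());
      }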

  13. Hierarchical porous carbon/MnO2 hybrids as supercapacitor electrodes.

    PubMed

    Lee, Min Eui; Yun, Young Soo; Jin, Hyoung-Joon

    2014-12-01

    Hybrid electrodes of hierarchical porous carbon (HPC) and manganese oxide (MnO2) were synthesized using a fast surface redox reaction of potassium permanganate under facile immersion methods. The HPC/MnO2 hybrids had a number of micropores and macropores and the MnO2 nanoparticles acted as a pseudocapacitive material. The synergistic effects of electric double-layer capacitor (EDLC)-induced capacitance and pseudocapacitance brought about a better electrochemical performance of the HPC/MnO2 hybrid electrodes compared to that obtained with a single component. The hybrids showed a specific capacitance of 228 F g(-1) and good cycle stability over 1000 cycles.

  14. Repeated hematopoietic stem and progenitor cell mobilization without depletion of the bone marrow stem and progenitor cell pool in mice after repeated administration of recombinant murine G-CSF.

    PubMed

    de Kruijf, Evert-Jan F M; van Pel, Melissa; Hagoort, Henny; Kruysdijk, Donnée; Molineux, Graham; Willemze, Roel; Fibbe, Willem E

    2007-05-01

    Administration of recombinant-human G-CSF (rhG-CSF) is highly efficient in mobilizing hematopoietic stem and progenitor cells (HSC/HPC) from the bone marrow (BM) toward the peripheral blood. This study was designed to investigate whether repeated G-CSF-induced HSC/HPC mobilization in mice could lead to a depletion of the bone marrow HSC/HPC pool with subsequent loss of mobilizing capacity. To test this hypothesis, Balb/c mice were treated with a maximum of 12 repeated 5-day cycles of either 10 microg rhG-CSF/day or 0.25 microg rmG-CSF/day. Repeated administration of rhG-CSF led to strong inhibition of HSC/HPC mobilization toward the peripheral blood and spleen after >4 cycles because of the induction of anti-rhG-CSF antibodies. In contrast, after repeated administration of rmG-CSF, HSC/HPC mobilizing capacity remained intact for up to 12 cycles. The number of CFU-GM per femur did not significantly change for up to 12 cycles. We conclude that repeated administration of G-CSF does not lead to depletion of the bone marrow HSC/HPC pool.

  15. Measurement of the Rheological Properties of High Performance Concrete: State of the Art Report

    PubMed Central

    Ferraris, Chiara F.

    1999-01-01

    The rheological or flow properties of concrete in general, and of high performance concrete (HPC) in particular, are important because many factors such as ease of placement, consolidation, durability, and strength depend on the flow properties. Concrete that is not properly consolidated may have defects, such as honeycombs, air voids, and aggregate segregation. Such an important performance attribute has triggered the design of numerous test methods. Generally, the flow behavior of concrete approximates that of a Bingham fluid. Therefore, at least two parameters, yield stress and viscosity, are necessary to characterize the flow. Nevertheless, most methods measure only one parameter. Predictions of the flow properties of concrete from its composition or from the properties of its components are not easy. No general model exists, although some attempts have been made. This paper gives an overview of the flow properties of a fluid or a suspension, followed by a critical review of the most commonly used concrete rheology tests. Particular attention is given to tests that could be used for HPC. Tentative definitions of terms such as workability, consistency, and rheological parameters are provided. An overview of the most promising tests and models for cement paste is given.
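
    For concreteness, the Bingham description referred to above is conventionally written as

      \tau = \tau_0 + \mu \, \dot{\gamma}   for \tau > \tau_0,

    where \tau is the shear stress, \dot{\gamma} the shear rate, \tau_0 the yield stress and \mu the plastic viscosity; the material does not flow for \tau \le \tau_0. These are the two parameters the abstract says a complete characterization requires.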

  16. A Framework for Debugging Geoscience Projects in a High Performance Computing Environment

    NASA Astrophysics Data System (ADS)

    Baxter, C.; Matott, L.

    2012-12-01

    High performance computing (HPC) infrastructure has become ubiquitous in today's world with the emergence of commercial cloud computing and academic supercomputing centers. Teams of geoscientists, hydrologists and engineers can take advantage of this infrastructure to undertake large research projects - for example, linking one or more site-specific environmental models with soft computing algorithms, such as heuristic global search procedures, to perform parameter estimation and predictive uncertainty analysis, and/or design least-cost remediation systems. However, the size, complexity and distributed nature of these projects can make identifying failures in the associated numerical experiments using conventional ad-hoc approaches both time-consuming and ineffective. To address these problems a multi-tiered debugging framework has been developed. The framework allows for quickly isolating and remedying a number of potential experimental failures, including: failures in the HPC scheduler; bugs in the soft computing code; bugs in the modeling code; and permissions and access control errors. The utility of the framework is demonstrated via application to a series of over 200,000 numerical experiments involving a suite of 5 heuristic global search algorithms and 15 mathematical test functions serving as cheap analogues for the simulation-based optimization of pump-and-treat subsurface remediation systems.

  17. Adaptation of a Multi-Block Structured Solver for Effective Use in a Hybrid CPU/GPU Massively Parallel Environment

    NASA Astrophysics Data System (ADS)

    Gutzwiller, David; Gontier, Mathieu; Demeulenaere, Alain

    2014-11-01

    Multi-Block structured solvers hold many advantages over their unstructured counterparts, such as a smaller memory footprint and efficient serial performance. Historically, multi-block structured solvers have not been easily adapted for use in a High Performance Computing (HPC) environment, and the recent trend towards hybrid GPU/CPU architectures has further complicated the situation. This paper will elaborate on developments and innovations applied to the NUMECA FINE/Turbo solver that have allowed near-linear scalability with real-world problems on over 250 hybrid CPU/GPU cluster nodes. Discussion will focus on the implementation of virtual partitioning and load balancing algorithms using a novel meta-block concept. This implementation is transparent to the user, allowing all pre- and post-processing steps to be performed using a simple, unpartitioned grid topology. Additional discussion will elaborate on developments that have improved parallel performance, including fully parallel I/O with the ADIOS API and the GPU porting of the computationally heavy CPUBooster convergence acceleration module.
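
    The virtual partitioning and load balancing described above can be sketched generically; this is not the FINE/Turbo meta-block algorithm, and the block sizes, size cap and rank count below are invented for illustration. Oversized structured blocks are split into smaller pieces, which are then assigned greedily, largest first, to the least-loaded rank.

      #include <algorithm>
      #include <queue>
      #include <utility>
      #include <vector>
      #include <cstdio>

      struct MetaBlock { long cells; int origin_block; };

      int main() {
        std::vector<long> blocks = {800000, 120000, 95000, 400000};  // cells per original block (toy)
        const long max_cells = 200000;                               // illustrative meta-block size cap

        // 1. Virtual partitioning: split any block larger than the cap into pieces.
        std::vector<MetaBlock> meta;
        for (int b = 0; b < (int)blocks.size(); ++b) {
          long remaining = blocks[b];
          while (remaining > 0) {
            long piece = std::min(remaining, max_cells);
            meta.push_back({piece, b});
            remaining -= piece;
          }
        }

        // 2. Load balancing: largest meta-block first, always to the currently least-loaded rank.
        std::sort(meta.begin(), meta.end(),
                  [](const MetaBlock& a, const MetaBlock& b) { return a.cells > b.cells; });
        const int nranks = 4;
        using Load = std::pair<long, int>;  // (cells assigned so far, rank id)
        std::priority_queue<Load, std::vector<Load>, std::greater<Load>> ranks;
        for (int r = 0; r < nranks; ++r) ranks.push({0, r});
        for (const auto& m : meta) {
          auto [load, r] = ranks.top();
          ranks.pop();
          std::printf("meta-block of %ld cells (from block %d) -> rank %d\n",
                      m.cells, m.origin_block, r);
          ranks.push({load + m.cells, r});
        }
      }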

  18. Allocating Tactical High-Performance Computer (HPC) Resources to Offloaded Computation in Battlefield Scenarios

    DTIC Science & Technology

    2013-12-01

    The authors present a Computing on Dissemination with predictable contacts (pCoD) algorithm, since it is impossible to reserve task execution time in advance. (The remainder of the record is an acronym-list fragment: CoD, Computing on Dissemination; pCoD, CoD with predictable contacts; upCoD, CoD with unpredictable contacts; DAG, Directed Acyclic Graph; TTL, Time-to-live; Computing While Charging.)

  19. Running R Statistical Computing Environment Software on the Peregrine

    Science.gov Websites

    R is a collaborative project that supports the development of new statistical methodologies and enjoys a large user base; it provides natural language support but runs in an English locale on this system. The page describes running the R statistical computing environment software on the Peregrine system and points to the CRAN task view for High Performance Computing for programming paradigms that better leverage modern HPC systems; please consult the distribution for further details.

  20. Data Retention Policy | High-Performance Computing | NREL

    Science.gov Websites

    This page summarizes the NREL HPC Data Retention Policy. File storage areas on Peregrine and Gyrfalcon are either user-centric or project-centric and may be purged to reclaim storage; special arrangements can be made for permanent storage, if needed. For user-centric areas, the retention period extends to 3 months after the last project ends, and during this retention period the user may still log in.

  1. Peregrine Software Toolchains | High-Performance Computing | NREL

    Science.gov Websites

    One of the toolchains is an open-source alternative against which many technical applications are natively developed and tested. The Portland Group (PGI) C/C++ and Fortran compilers are partially supported: they are not fully supported, but are available to the HPC community, and the PGI Accelerator compilers include NVIDIA GPU support.

  2. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Packet-Level Analysis

    DTIC Science & Technology

    2015-09-01

    (The record consists of report-documentation-form residue only. Performing organizations: Technical and Project Engineering, LLC and QED Systems, LLC, Alexandria, VA; sponsoring/monitoring organization: US Army Research Laboratory, ATTN: RDRL-CIH-C, Aberdeen Proving Ground, MD 21005.)

  3. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing and Modernization Program (HPCMP) and the NASA Advanced Supercomputing Division (NAS), a study is conducted to assess the role of supercomputers in the computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web-based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  4. PerSEUS: Ultra-Low-Power High Performance Computing for Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Doxas, I.; Andreou, A.; Lyon, J.; Angelopoulos, V.; Lu, S.; Pritchett, P. L.

    2017-12-01

    The Peta-op SupErcomputing Unconventional System (PerSEUS) project aims to explore the use of ultra-low-power, mixed-signal, unconventional computational elements developed by Johns Hopkins University (JHU) for high-performance scientific computing (HPC), and to demonstrate that capability on both fluid and particle plasma codes. We will describe the JHU Mixed-signal Unconventional Supercomputing Elements (MUSE) and report initial results for the Lyon-Fedder-Mobarry (LFM) global magnetospheric MHD code and a UCLA general-purpose relativistic Particle-In-Cell (PIC) code.

  5. SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes

    NASA Astrophysics Data System (ADS)

    Homann, Holger; Laenen, Francois

    2018-03-01

    The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they carry, such as an identity, a position, and other properties. There are, generally speaking, two different possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (a realization of the structure) represents one particle, and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays, each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx, which has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques such as template meta-programming, which allow code to be generated automatically for user-defined heterogeneous data structures.
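
    The AoS/SoA distinction is easy to show in code. The sketch below is a generic illustration, not the SoAx library interface; the particle fields are reduced to an identity and a position.

      #include <cstddef>
      #include <vector>

      // Array of Structures: one struct per particle, particles stored contiguously.
      struct ParticleAoS { long id; double x, y, z; };
      using AoS = std::vector<ParticleAoS>;

      // Structure of Arrays: one contiguous array per particle property.
      struct SoA {
        std::vector<long> id;
        std::vector<double> x, y, z;
        void resize(std::size_t n) { id.resize(n); x.resize(n); y.resize(n); z.resize(n); }
      };

      // A streaming update that touches only x: with SoA the loop reads one
      // contiguous array and vectorizes easily, while with AoS it strides over
      // whole structs and wastes memory bandwidth.
      void push_x(SoA& p, double dt, double vx) {
        for (std::size_t i = 0; i < p.x.size(); ++i) p.x[i] += vx * dt;
      }
      void push_x(AoS& p, double dt, double vx) {
        for (ParticleAoS& q : p) q.x += vx * dt;
      }

    Libraries in the spirit of SoAx aim to keep the SoA layout internally while exposing an AoS-like, per-particle view to the user.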

  6. Assessment of Drinking Water Quality from Bottled Water Coolers

    PubMed Central

    FARHADKHANI, Marzieh; NIKAEEN, Mahnaz; AKBARI ADERGANI, Behrouz; HATAMZADEH, Maryam; NABAVI, Bibi Fatemeh; HASSANZADEH, Akbar

    2014-01-01

    Abstract Background Drinking water quality can deteriorate through microbial and toxic chemical contamination during transport, storage and handling before use by the consumer. This study was conducted to evaluate the microbial and physicochemical quality of drinking water from bottled water coolers. Methods A total of 64 water samples, over a 5-month period in 2012-2013, were collected from free-standing bottled water coolers and water taps in Isfahan. Water samples were analyzed for heterotrophic plate count (HPC), temperature, pH, residual chlorine, turbidity, electrical conductivity (EC) and total organic carbon (TOC). Identification of predominant bacteria was also performed by sequence analysis of 16S rDNA. Results The mean HPC of water coolers was determined to be 38864 CFU/ml, which exceeded the acceptable level for drinking water in 62% of analyzed samples. The HPC from the water coolers was also found to be significantly (P < 0.05) higher than that of the tap waters. The statistical analysis showed no significant difference between the values of pH, EC, turbidity and TOC in water coolers and tap waters. According to sequence analysis, eleven species of bacteria were identified. Conclusion A high HPC is indicative of microbial water quality deterioration in water coolers. The presence of some opportunistic pathogens in water coolers, furthermore, is a concern from a public health point of view. The results highlight the importance of a periodic disinfection procedure and monitoring system for water coolers in order to keep the level of microbial contamination under control. PMID:26060769

  7. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    NASA Astrophysics Data System (ADS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-09-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.

  8. Lightweight Provenance Service for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Chen, Yong; Carns, Philip

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

  9. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    PubMed

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed, as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.
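
    As a reference point for the case study mentioned above, the kernel being compared across platforms is classical dynamic-programming alignment. A minimal serial Smith-Waterman scoring sketch follows; the scoring parameters and test strings are illustrative, and production aligners add traceback, affine gaps and heavy parallelization.

      #include <algorithm>
      #include <cstdio>
      #include <string>
      #include <vector>

      // Smith-Waterman local-alignment score with a linear gap penalty.
      int smith_waterman(const std::string& a, const std::string& b,
                         int match = 2, int mismatch = -1, int gap = -2) {
        std::vector<std::vector<int>> H(a.size() + 1, std::vector<int>(b.size() + 1, 0));
        int best = 0;
        for (std::size_t i = 1; i <= a.size(); ++i)
          for (std::size_t j = 1; j <= b.size(); ++j) {
            int diag = H[i - 1][j - 1] + (a[i - 1] == b[j - 1] ? match : mismatch);
            H[i][j] = std::max({0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap});
            best = std::max(best, H[i][j]);
          }
        return best;
      }

      int main() {
        std::printf("local alignment score: %d\n", smith_waterman("ACACACTA", "AGCACACA"));
      }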

  10. Implementation of the NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for CFD applications.

  11. Implementation of BT, SP, LU, and FT of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Schultz, Matthew; Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of Java features make it an attractive but debatable choice for High Performance Computing. We have implemented benchmarks working on a single structured grid (BT, SP, LU and FT) in Java. The performance and scalability of the Java code show that significant improvements in Java compiler technology and in Java thread implementations are necessary for Java to compete with Fortran in HPC applications.

  12. Radiation-Induced Chemical Reactions in Hydrogel of Hydroxypropyl Cellulose (HPC): A Pulse Radiolysis Study.

    PubMed

    Yamashita, Shinichi; Ma, Jun; Marignier, Jean-Louis; Hiroki, Akihiro; Taguchi, Mitsumasa; Mostafavi, Mehran; Katsumura, Yosuke

    2016-12-01

    We performed studies on pulse radiolysis of highly transparent and shape-stable hydrogels of hydroxypropyl cellulose (HPC) that were prepared using a radiation-crosslinking technique. Several fundamental aspects of radiation-induced chemical reactions in the hydrogels were investigated. With radiation doses less than 1 kGy, degradation of the HPC matrix was not observed. The rate constants of the HPC composing the matrix with the two water decomposition radicals [hydroxyl radical (•OH) and hydrated electron (eaq(-))] in the gels were determined to be 4.5 × 10(9) and 1.8 × 10(7) M(-1) s(-1), respectively. Direct ionization of HPC in the matrix slightly increased the initial yield of eaq(-), but the additionally produced amount of eaq(-) disappeared immediately, within 200 ps, indicating fast recombination of eaq(-) with hole radicals on HPC or on surrounding hydration water molecules. Reactions of eaq(-) with nitrous oxide (N2O) and nitromethane (CH3NO2) were also examined. Decay of eaq(-) due to scavenging by N2O and CH3NO2 was slower in hydrogels than in aqueous solutions in both cases, showing slower diffusion of the reactants in the gel matrix. The decrease in the decay rate was more pronounced for N2O than for CH3NO2, revealing lower solubility of N2O in gel than in water. It is known that in viscous solvents, such as ethylene glycol, CH3NO2 exhibits a transient effect, which is a fast reaction over the contact distance of the reactants and occurs without diffusion of the reactants. However, such an effect was not observed in the hydrogel used in the current study. In addition, the initial yield of eaq(-), which is affected by the amount of the scavenged precursor of eaq(-), in hydrogel containing N2O was slightly higher than that in water containing N2O, and the same tendency was found for CH3NO2.

  13. Position Paper - pFLogger: The Parallel Fortran Logging framework for HPC Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Cruz, Carlos A.

    2017-01-01

    In the context of high performance computing (HPC), software investments in support of text-based diagnostics, which monitor a running application, are typically limited compared to those for other types of IO. Examples of such diagnostics include reiteration of configuration parameters, progress indicators, simple metrics (e.g., mass conservation, convergence of solvers, etc.), and timers. To some degree, this difference in priority is justifiable as other forms of output are the primary products of a scientific model and, due to their large data volume, much more likely to be a significant performance concern. In contrast, text-based diagnostic content is generally not shared beyond the individual or group running an application and is most often used to troubleshoot when something goes wrong. We suggest that a more systematic approach enabled by a logging facility (or logger) similar to those routinely used by many communities would provide significant value to complex scientific applications. In the context of high-performance computing, an appropriate logger would provide specialized support for distributed and shared-memory parallelism and have low performance overhead. In this paper, we present our prototype implementation of pFlogger, a parallel Fortran-based logging framework, and assess its suitability for use in a complex scientific application.
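
    The following is not pFlogger's API, but a minimal C++ sketch of the kind of rank-aware, severity-filtered logging such a framework provides; the levels, the rank-0 filtering rule and the messages are illustrative assumptions.

      #include <cstdio>
      #include <string>

      enum class Level { Debug = 0, Info = 1, Warning = 2, Error = 3 };

      // Toy parallel-aware logger: messages below the threshold are dropped cheaply,
      // and routine progress messages are emitted only by rank 0 to avoid N-rank spam.
      class Logger {
        int rank_;
        Level threshold_;
      public:
        Logger(int rank, Level threshold) : rank_(rank), threshold_(threshold) {}
        void log(Level lvl, const std::string& msg) const {
          if (lvl < threshold_) return;
          if (lvl == Level::Info && rank_ != 0) return;
          std::printf("[rank %d] %s\n", rank_, msg.c_str());
        }
      };

      int main() {
        Logger log(/*rank=*/0, Level::Info);
        log.log(Level::Debug, "not shown: below threshold");
        log.log(Level::Info, "solver converged: residual 1.0e-8");  // illustrative message
      }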

  15. Transcriptome mining of immune-related genes in the muricid snail Concholepas concholepas.

    PubMed

    Détrée, Camille; López-Landavery, Edgar; Gallardo-Escárate, Cristian; Lafarga-De la Cruz, Fabiola

    2017-12-01

    The population of the Chilean endemic marine gastropod Concholepas concholepas, locally called "loco", has dramatically decreased in the past 50 years as a result of intense activity of local fisheries and the high environmental variability observed along the Chilean coast, including episodes of hypoxia, changes in sea surface temperature, ocean acidification and diseases. In this study, we set out to explore the molecular basis of the capacity of C. concholepas to cope with biotic stressors such as exposure to the pathogenic bacterium Vibrio anguillarum. Here, 454 pyrosequencing was conducted and 61 transcripts related to the immune response in this muricid species were identified. Among these, the expression of six genes (CcNFκβ, CcIκβ, CcLITAF, CcTLR, CcCas8 and CcCath) involved in the regulation of inflammatory, apoptotic and immune processes upon stimuli was evaluated during the first 33 h post challenge (hpc). The results showed that CcTLR, CcCas8 and CcCath have an initial response at 4 hpc, evidencing an up-regulation from 4 to 24 hpc. Notably, the response of CcNFκβ occurred 2 h later, with a statistically significant up-regulation at 6 hpc and 10 hpc. Furthermore, the challenge with V. anguillarum induced a statistically significant down-regulation of CcIκβ between 2 and 10 hpc as well as a down-regulation of CcLITAF between 2 and 4 hpc, followed in both cases by an up-regulation between 24 and 33 hpc. This work describes the first transcriptomic effort to characterize the immune response of C. concholepas and constitutes a valuable transcriptomic resource for future efforts to develop sustainable aquaculture and conservation tools for this endemic marine snail species. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. OCCAM: a flexible, multi-purpose and extendable HPC cluster

    NASA Astrophysics Data System (ADS)

    Aldinucci, M.; Bagnasco, S.; Lusso, S.; Pasteris, P.; Rabellino, S.; Vallero, S.

    2017-10-01

    The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multipurpose flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Sezione di Torino of the Istituto Nazionale di Fisica Nucleare. It is aimed at providing a flexible, reconfigurable and extendable infrastructure to cater to a wide range of different scientific computing use cases, including ones from solid-state chemistry, high-energy physics, computer science, big data analytics, computational biology, genomics and many others. Furthermore, it will serve as a platform for R&D activities on computational technologies themselves, with topics ranging from GPU acceleration to Cloud Computing technologies. A heterogeneous and reconfigurable system like this poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affect methods and means to allocate, manage, optimize, bill, monitor VMs, containers, virtual farms, jobs, interactive bare-metal sessions, etc. This work describes some of the use cases that prompted the design and construction of the HPC cluster, its architecture and resource provisioning model, along with a first characterization of its performance by some synthetic benchmark tools and a few realistic use-case tests.

  17. Global Simulation of Bioenergy Crop Productivity: Analytical Framework and Case Study for Switchgrass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Shujiang; Kline, Keith L; Nair, S. Surendran

    A global energy crop productivity model that provides geospatially explicit quantitative details on biomass potential and factors affecting sustainability would be useful, but does not exist now. This study describes a modeling platform capable of meeting many challenges associated with global-scale agro-ecosystem modeling. We designed an analytical framework for bioenergy crops consisting of six major components: (i) standardized natural resources datasets, (ii) global field-trial data and crop management practices, (iii) simulation units and management scenarios, (iv) model calibration and validation, (v) high-performance computing (HPC) simulation, and (vi) simulation output processing and analysis. The HPC-Environmental Policy Integrated Climate (HPC-EPIC) model simulated a perennial bioenergy crop, switchgrass (Panicum virgatum L.), estimating feedstock production potentials and effects across the globe. This modeling platform can assess soil C sequestration, net greenhouse gas (GHG) emissions, nonpoint source pollution (e.g., nutrient and pesticide loss), and energy exchange with the atmosphere. It can be expanded to include additional bioenergy crops (e.g., miscanthus, energy cane, and agave) and food crops under different management scenarios. The platform and switchgrass field-trial dataset are available to support global analysis of biomass feedstock production potential and corresponding metrics of sustainability.

  18. Preparation and characterization of sustained-release rotigotine film-forming gel.

    PubMed

    Li, Xiang; Zhang, Renyu; Liang, Rongcai; Liu, Wei; Wang, Chenhui; Su, Zhengxing; Sun, Fengying; Li, Youxin

    2014-01-02

    The aim of this study was to develop a film-forming gel formulation of rotigotine with hydroxypropyl cellulose (HPC) and Carbomer 934. To optimize this formulation, we applied the response surface analysis technique and evaluated the gel's pharmacokinetic properties. The factors chosen for the factorial design were the concentration of rotigotine, the proportion of HPC to Carbomer 934, and the concentration of ST-Elastomer 10. Each factor was varied over three levels: low, medium and high. The gel formulation was evaluated and optimized according to its accumulated permeation rate (Flux) through Franz-type diffusion cells. A pharmacokinetic study of the rotigotine gel was performed with rabbits. The Flux of the optimized formulation, which contained 3% rotigotine and 7% ST-Elastomer 10 with an optimal HPC:Carbomer 934 ratio of 5:1, reached the maximum (199.17 μg/cm(2)). The bioavailability of the optimized formulation compared with intravenous administration was approximately 20%. A film-forming gel of rotigotine was successfully developed using the response surface analysis technique. The results of this study may be helpful in finding an optimum formulation for transdermal delivery of a drug. The product may improve patients' compliance and provide better efficacy. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes, respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC Applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  20. High Energy Physics Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and High Energy Physics, June 10-12, 2015, Bethesda, Maryland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Roser, Robert; Gerber, Richard

    The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greater — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

  1. Using the Eclipse Parallel Tools Platform to Assist Earth Science Model Development and Optimization on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Alameda, J. C.

    2011-12-01

    Development and optimization of computational science models on high performance computers, and, with the advent of ubiquitous multicore processor systems, on practically every system, has been accomplished with basic software tools: typically command-line-based compilers, debuggers and performance tools that have not changed substantially from the days of serial and early vector computers. However, model complexity, including the complexity added by modern message passing libraries such as MPI, and the need for hybrid code models (such as OpenMP and MPI) to be able to take full advantage of high performance computers with an increasing core count per shared memory node, has made development and optimization of such codes an increasingly arduous task. Additional architectural developments, such as many-core processors, only complicate the situation further. In this paper, we describe how our NSF-funded project, "SI2-SSI: A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform" (WHPC), seeks to improve the Eclipse Parallel Tools Platform, an environment designed to support scientific code development targeted at a diverse set of high performance computing systems. Our WHPC project takes an application-centric view to improving Eclipse PTP. We are using a set of scientific applications, each with its own challenges, both to drive improvements to the applications themselves and to understand shortcomings in Eclipse PTP from an application developer's perspective, which informs the list of improvements we seek to make. We are also partnering with performance tool providers to drive higher quality performance tool integration. We have partnered with the Cactus group at Louisiana State University to improve Eclipse's ability to work with computational frameworks and extremely complex build systems, as well as to develop educational materials to incorporate into computational science and engineering codes. Finally, we are partnering with the lead PTP developers at IBM to ensure we are as effective as possible within the Eclipse community development. We are also conducting training and outreach to our user community, including conference BOF sessions, monthly user calls, and an annual user meeting, so that we can best inform the improvements we make to Eclipse PTP. With these activities we endeavor to encourage the use of modern software engineering practices, as enabled through the Eclipse IDE, with computational science and engineering applications. These practices include proper use of source code repositories, tracking and rectifying issues, measuring and monitoring code performance changes against both optimizations and ever-changing software stacks and configurations on HPC systems, and ultimately encouraging development and maintenance of testing suites: things that have become commonplace in many software endeavors but have lagged in the development of science applications. We view the challenge posed by the increased complexity of both HPC systems and science applications as demanding better software engineering methods, preferably enabled by modern tools such as Eclipse PTP, to help the computational science community thrive as the HPC landscape evolves.

  2. Temperature dependencies of Henry’s law constants for different plant sesquiterpenes

    PubMed Central

    Copolovici, Lucian; Niinemets, Ülo

    2018-01-01

    Sesquiterpenes are plant-produced hydrocarbons with important ecological functions in plant-to-plant and plant-to-insect communication, but due to their high reactivity they can also play a significant role in atmospheric chemistry. So far, there is little information of gas/liquid phase partition coefficients (Henry’s law constants) and their temperature dependencies for sesquiterpenes, but this information is needed for quantitative simulation of the release of sesquiterpenes from plants and modeling atmospheric reactions in different phases. In this study, we estimated Henry’s law constants (Hpc) and their temperature responses for 12 key plant sesquiterpenes with varying structure (aliphatic, mono-, bi- and tricyclic sesquiterpenes). At 25 °C, Henry’s law constants varied 1.4-fold among different sesquiterpenes, and the values were within the range previously observed for monocyclic monoterpenes. Hpc of sesquiterpenes exhibited a high rate of increase, on average ca. 1.5-fold with a 10 °C increase in temperature (Q10). The values of Q10 varied 1.2-fold among different sesquiterpenes. Overall, these data demonstrate moderately high variation in Hpc values and Hpc temperature responses among different sesquiterpenes. We argue that these variations can importantly alter the emission kinetics of sesquiterpenes from plants. PMID:26291755
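
    A compact way to state the Q10 behaviour reported above, assuming the usual exponential scaling with 25 °C taken as the reference temperature, is

      H_{pc}(T) = H_{pc}(T_{ref}) \, Q_{10}^{(T - T_{ref})/10},   T_{ref} = 25 °C,

    so a Q10 of about 1.5 means Hpc grows by roughly 50% for every 10 °C of warming.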

  3. Performance of hybrid programming models for multiscale cardiac simulations: preparing for petascale computation.

    PubMed

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-10-01

    Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems.
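
    A minimal hybrid MPI+OpenMP sketch of the pattern evaluated above is given below. It is illustrative only, with a toy reduction standing in for the physiology kernels: each MPI process computes a partial sum with its OpenMP threads, and the partial sums are combined with MPI_Reduce.

      #include <mpi.h>
      #include <omp.h>
      #include <cstdio>

      int main(int argc, char** argv) {
        int provided = 0;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long n = 1000000;  // work items per rank (toy problem size)
        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (long i = 0; i < n; ++i)
          local += 1.0 / (1.0 + rank * n + i);  // stand-in for a per-cell physiology kernel

        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) std::printf("global sum over %d ranks = %f\n", size, global);
        MPI_Finalize();
      }

    Threads handle the shared-memory work inside each node while MPI handles inter-node communication, which is the compromise between performance and code complexity that the study highlights.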

  5. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGES

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.

  6. Playable Serious Games for Studying and Programming Computational STEM and Informatics Applications of Distributed and Parallel Computer Architectures

    ERIC Educational Resources Information Center

    Amenyo, John-Thones

    2012-01-01

    Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

  7. Peregrine System User Basics | High-Performance Computing | NREL

    Science.gov Websites

    The page explains connecting to peregrine.hpc.nrel.gov or to one of the login nodes and gives example commands to access Peregrine from a Linux or Mac OS X system, with user-specific information indicated by enclosing it in brackets < > (for example, a truncated "$ ssh -Y ..." command). A code example asks the user to create a file called hello.F90 containing a minimal Fortran program beginning "program hello" and writing a message to unit 6.

  8. Factors predicting haematopoietic recovery in patients undergoing autologous transplantation: 11-year experience from a single centre.

    PubMed

    Bai, Lijun; Xia, Wei; Wong, Kelly; Reid, Cassandra; Ward, Christopher; Greenwood, Matthew

    2014-10-01

    Engraftment outcomes following autologous transplantation correlate poorly with infused stem cell number. We evaluated 446 consecutive patients who underwent autologous transplantation at our centre between 2001 and 2012. The impact of pre-transplant and collection factors together with CD34(+) dosing ranges on engraftment, hospital length of stay (LOS) and survival endpoints was assessed in order to identify factors which might be optimized to improve outcomes for patients undergoing autologous transplantation using haemopoietic progenitor cells-apheresis (HPC-A). Infused CD34(+) cell dose correlated with platelet but not neutrophil recovery. Time to platelet engraftment was significantly delayed in those receiving low versus medium or high CD34(+) doses. Non-remission status was associated with slower neutrophil and platelet recovery. Increasing neutrophil contamination of HPC-A was strongly associated with slower neutrophil recovery, with an infused neutrophil dose/kg recipient body weight ≥3 × 10(8)/kg having a significant impact on time to neutrophil engraftment (p = 0.001). Higher neutrophil doses/kg in HPC-A were associated with days of granulocyte colony-stimulating factor (G-CSF) use, HPC-A volumes >500 ml and higher NCC in HPC-A. High infused neutrophil dose/kg and age >65 years were associated with longer hospital LOS (p = 0.002 and 0.011, respectively). Only age, disease and disease status predicted disease-free survival (DFS) and overall survival (OS) in our cohort (p < 0.005). Non-relapse mortality was not affected by a low dose of CD34(+) (<2 × 10(6)/kg). In conclusion, our study shows that CD34(+) remains a useful and convenient marker for assessing haemopoietic stem cell content and overall engraftment capacity post-transplant. Neutrophil contamination of HPC-A appears to be a key factor delaying neutrophil recovery. Steps to minimize the degree of neutrophil contamination in the HPC-A product may be associated with more rapid neutrophil engraftment and reduced hospital LOS.

  9. Combining high-resolution gross domestic product data with home and personal care product market research data to generate a subnational emission inventory for Asia.

    PubMed

    Hodges, Juliet Elizabeth Natasha; Vamshi, Raghu; Holmes, Christopher; Rowson, Matthew; Miah, Taqmina; Price, Oliver Richard

    2014-04-01

    Environmental risk assessment of chemicals is reliant on good estimates of product usage information and robust exposure models. Over the past 20 to 30 years, much progress has been made with the development of exposure models that simulate the transport and distribution of chemicals in the environment. However, little progress has been made in our ability to estimate chemical emissions from home and personal care (HPC) products. In this project, we have developed an approach to estimate a subnational emission inventory of chemical ingredients used in HPC products for 12 Asian countries including Bangladesh, Cambodia, China, India, Indonesia, Laos, Malaysia, Pakistan, Philippines, Sri Lanka, Thailand, and Vietnam (Asia-12). To develop this inventory, we have coupled a 1 km grid of per capita gross domestic product (GDP) estimates with market research data on HPC product sales. We explore the necessity of accounting for a population's ability to purchase HPC products in determining their subnational distribution in regions where wealth is not uniform. The implications of using high-resolution data on inter- and intracountry subnational emission estimates for a range of hypothetical and actual HPC product types were explored. It was demonstrated that for low-value products (<500 US$ per capita/annum required to purchase the product) the maximum deviation from baseline (emission distributed via population) is less than a factor of 3 and would not result in significant differences in chemical risk assessments. However, for other product types (>500 US$ per capita/annum required to purchase the product) the implications for emissions being assigned to subnational regions can vary by several orders of magnitude. The implications of this on conducting national or regional level risk assessments may be significant. Further work is needed to explore the implications of this variability in HPC emissions to enable the HPC industry and/or governments to advance risk-based chemical management policies in emerging markets. © 2013 SETAC.
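
    The affordability-threshold idea can be illustrated with a small allocation sketch. This is not the paper's method: the grid values, the 500 US$ threshold and the population weighting are assumptions made only to show the mechanics.

      #include <cstddef>
      #include <cstdio>
      #include <vector>

      struct Cell { double population; double gdp_per_capita; };  // one grid cell

      // Distribute a national emission total over grid cells, counting only the
      // population whose per-capita GDP exceeds an affordability threshold.
      std::vector<double> allocate(const std::vector<Cell>& cells, double national_emission,
                                   double affordability_threshold) {
        double purchasing_pop = 0.0;
        for (const Cell& c : cells)
          if (c.gdp_per_capita >= affordability_threshold) purchasing_pop += c.population;
        std::vector<double> emission(cells.size(), 0.0);
        if (purchasing_pop <= 0.0) return emission;
        for (std::size_t i = 0; i < cells.size(); ++i)
          if (cells[i].gdp_per_capita >= affordability_threshold)
            emission[i] = national_emission * cells[i].population / purchasing_pop;
        return emission;
      }

      int main() {
        std::vector<Cell> grid = {{1000, 300}, {5000, 800}, {2000, 1500}};  // toy cells
        std::vector<double> e = allocate(grid, 100.0, 500.0);               // 100 t, 500 US$ cutoff
        for (std::size_t i = 0; i < e.size(); ++i) std::printf("cell %zu: %.2f t\n", i, e[i]);
      }

    For a low-value product the threshold excludes almost no one and the result collapses to a population-weighted allocation, which is consistent with the small deviations the abstract reports for low-value products.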

  10. Using NERSC High-Performance Computing (HPC) systems for high-energy nuclear physics applications with ALICE

    NASA Astrophysics Data System (ADS)

    Fasel, Markus

    2016-10-01

    High-Performance Computing Systems are powerful tools tailored to support large-scale applications that rely on low-latency inter-process communications to run efficiently. By design, these systems often impose constraints on application workflows, such as limited external network connectivity and whole node scheduling, that make more general-purpose computing tasks, such as those commonly found in high-energy nuclear physics applications, more difficult to carry out. In this work, we present a tool designed to simplify access to such complicated environments by handling the common tasks of job submission, software management, and local data management, in a framework that is easily adaptable to the specific requirements of various computing systems. The tool, initially constructed to process stand-alone ALICE simulations for detector and software development, was successfully deployed on the NERSC computing systems, Carver, Hopper and Edison, and is being configured to provide access to the next generation NERSC system, Cori. In this report, we describe the tool and discuss our experience running ALICE applications on NERSC HPC systems. The discussion will include our initial benchmarks of Cori compared to other systems and our attempts to leverage the new capabilities offered with Cori to support data-intensive applications, with a future goal of full integration of such systems into ALICE grid operations.

  11. 75 FR 14379 - Airworthiness Directives; Rolls-Royce Deutschland Ltd & Co KG (RRD) Models Tay 620-15, Tay 650-15...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-25

    Following a review of operational data of the Tay 651-54 engine, it has been found that the actual stress levels in the Tay 651-54 engine High Pressure Compressor (HPC) stages 1, 3, 6, 7 and 12 ...

  12. 76 FR 41144 - Airworthiness Directives; Pratt & Whitney Corp. (PW) JT9D-7R4H1 Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ...-7R4H1 turbofan engines. This proposed AD would require removing certain high-pressure compressor (HPC...) Applicability Pratt & Whitney Corp (PW) JT9D-7R4H1 turbofan engines with a high-pressure compressor (HPC) shaft... the rear shaft. These engines have the highest-thrust rating of the JT9D models, and were operating in...

  13. Workload Characterization of a Leadership Class Storage Cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Gunasekaran, Raghul; Shipman, Galen M

    2010-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, with over 250,000 compute cores in total, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution.
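
    As a minimal sketch of the statistical modeling mentioned here, the snippet below fits the shape parameter of a Pareto distribution to request inter-arrival times by maximum likelihood and draws a synthetic workload from the fitted model; the data are invented for illustration and are not the Spider traces.

        import numpy as np

        def fit_pareto_shape(samples, x_min):
            """Maximum-likelihood estimate of the Pareto shape parameter for
            samples bounded below by x_min."""
            samples = np.asarray(samples, dtype=float)
            return len(samples) / np.sum(np.log(samples / x_min))

        def sample_pareto(alpha, x_min, size, seed=None):
            """Draw synthetic values from a classical Pareto(alpha, x_min)."""
            rng = np.random.default_rng(seed)
            return x_min * (1.0 + rng.pareto(alpha, size))

        # Invented inter-arrival times (seconds) standing in for observed traces.
        observed = [0.11, 0.15, 0.32, 0.12, 0.58, 0.21, 0.14, 0.44, 0.19, 0.25]
        alpha_hat = fit_pareto_shape(observed, x_min=0.1)
        synthetic = sample_pareto(alpha_hat, x_min=0.1, size=5, seed=0)
        print(f"fitted shape alpha = {alpha_hat:.2f}")
        print("synthetic inter-arrivals:", np.round(synthetic, 3))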

  14. Preparation and Characterization of All-Biomass Soy Protein Isolate-Based Films Enhanced by Epoxy Castor Oil Acid Sodium and Hydroxypropyl Cellulose

    PubMed Central

    Wang, La; Li, Jianzhang; Zhang, Shifeng; Shi, Junyou

    2016-01-01

    All-biomass soy protein-based films were prepared using soy protein isolate (SPI), glycerol, hydroxypropyl cellulose (HPC) and epoxy castor oil acid sodium (ECOS). The effect of the incorporated HPC and ECOS on the properties of the SPI film was investigated. The experimental results showed that the tensile strength of the resultant films increased from 2.84 MPa (control) to 4.04 MPa and the elongation at break increased by 22.7% when the SPI was modified with 2% HPC and 10% ECOS. The increased tensile strength resulted from the reaction between the ECOS and SPI, which was confirmed by attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR), scanning electron microscopy (SEM) and X-ray diffraction analysis (XRD). It was found that ECOS and HPC effectively improved the performance of SPI-based films, which can provide a new method for preparing environmentally-friendly polymer films for a number of commercial applications. PMID:28773320

  15. Preparation and Characterization of All-Biomass Soy Protein Isolate-Based Films Enhanced by Epoxy Castor Oil Acid Sodium and Hydroxypropyl Cellulose.

    PubMed

    Wang, La; Li, Jianzhang; Zhang, Shifeng; Shi, Junyou

    2016-03-15

    All-biomass soy protein-based films were prepared using soy protein isolate (SPI), glycerol, hydroxypropyl cellulose (HPC) and epoxy castor oil acid sodium (ECOS). The effect of the incorporated HPC and ECOS on the properties of the SPI film was investigated. The experimental results showed that the tensile strength of the resultant films increased from 2.84 MPa (control) to 4.04 MPa and the elongation at break increased by 22.7% when the SPI was modified with 2% HPC and 10% ECOS. The increased tensile strength resulted from the reaction between the ECOS and SPI, which was confirmed by attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR), scanning electron microscopy (SEM) and X-ray diffraction analysis (XRD). It was found that ECOS and HPC effectively improved the performance of SPI-based films, which can provide a new method for preparing environmentally-friendly polymer films for a number of commercial applications.

  16. Understanding I/O workload characteristics of a Peta-scale storage system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Gunasekaran, Raghul

    2015-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, with over 250,000 compute cores in total, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for the peta-scale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.

  17. On Parallelizing Single Dynamic Simulation Using HPC Techniques and APIs of Commercial Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diao, Ruisheng; Jin, Shuangshuang; Howell, Frederic

    Time-domain simulations are heavily used in today’s planning and operation practices to assess power system transient stability and post-transient voltage/frequency profiles following severe contingencies, to comply with industry standards. Because of increased modeling complexity, state-of-the-art commercial packages are several times slower than real time when completing a dynamic simulation of a large-scale model. With the growing stochastic behavior introduced by emerging technologies, the power industry has seen a growing need for performing security assessment in real time. This paper presents a parallel implementation framework to speed up a single dynamic simulation by leveraging the existing stability model library in commercial tools through their application programming interfaces (APIs). Several high performance computing (HPC) techniques are explored, such as parallelizing the calculation of generator current injection, identifying fast linear solvers for the network solution, and parallelizing data outputs when interacting with the APIs of the commercial package TSAT. The proposed method has been tested on a WECC planning base case with detailed synchronous generator models and exhibits outstanding scalable performance with sufficient accuracy.
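
    A toy sketch of one of the ideas named above, evaluating generator current injections concurrently before the (serial) network solution, is shown below. The generator model and numbers are invented for illustration and do not reflect TSAT or its API.

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def generator_current(args):
            """Toy generator model: current injection from an internal EMF
            behind a transient reactance (illustrative numbers only)."""
            emf, delta, xdp, v_bus = args
            e_internal = emf * np.exp(1j * delta)
            return (e_internal - v_bus) / (1j * xdp)

        def injections_parallel(gen_states, bus_voltages, workers=4):
            """Evaluate all generator injections concurrently; in a real
            simulator a sparse network solution would follow serially."""
            tasks = [(e, d, x, bus_voltages[b]) for (e, d, x, b) in gen_states]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(generator_current, tasks))

        if __name__ == "__main__":
            v_bus = np.array([1.0 + 0.0j, 0.98 - 0.02j])
            gens = [(1.05, 0.3, 0.25, 0), (1.02, 0.1, 0.30, 1)]  # (EMF, angle, x'd, bus)
            print(injections_parallel(gens, v_bus))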

  18. Software Issues in High-Performance Computing and a Framework for the Development of HPC Applications

    DTIC Science & Technology

    1995-01-01

    possible to determine communication points. For this version, a C program spawning Posix threads and using semaphores to synchronize would have to...performance such as the time required for network communication and synchronization as well as issues of asynchrony and memory hierarchy. For example...enhances reusability. Process (or task) parallel computations can also be succinctly expressed with a small set of process creation and synchronization

  19. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvements in Java compiler technology and in the Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  20. The Use of High Performance Computing (HPC) to Strengthen the Development of Army Systems

    DTIC Science & Technology

    2011-11-01

    accurately predicting the supersonic Magnus effect about spinning cones, ogive-cylinders, and boat-tailed afterbodies. This work led to the successful...successful computer model of the proposed product or system, one can then build prototypes on the computer and study the effects on the performance of...needed. The NRC report discusses the requirements for effective use of such computing power. One needs “models, algorithms, software, hardware

  1. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report aimed to design the architecture of, and measure the performance of, a parallel computing environment for Monte Carlo simulation in particle therapy planning, using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed a speed approximately 28 times faster than that of a single-threaded architecture, combined with improved stability. A study of methods for optimizing the system operations also indicated lower cost.
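
    As a minimal illustration of this kind of embarrassingly parallel Monte Carlo workload, the sketch below spreads particle histories across worker processes and averages a toy score; the 'dose' model is invented and is not the clinical code used in the study.

        import random
        from multiprocessing import Pool

        def simulate_batch(args):
            """Score a toy 'deposited dose' for a batch of particle histories."""
            n_histories, seed = args
            rng = random.Random(seed)
            dose = 0.0
            for _ in range(n_histories):
                # Toy physics: exponential path length, capped energy deposition.
                dose += min(1.0, rng.expovariate(2.0))
            return dose

        def run_parallel(total_histories, workers=4):
            per_worker = total_histories // workers
            batches = [(per_worker, seed) for seed in range(workers)]
            with Pool(workers) as pool:
                return sum(pool.map(simulate_batch, batches)) / total_histories

        if __name__ == "__main__":
            print("mean toy dose per history:", round(run_parallel(400_000), 4))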

  2. Spindle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-04-04

    Spindle is software infrastructure that solves file system scalability problems associated with starting dynamically linked applications in HPC environments. When an HPC application starts up thousands of processes at once, and those processes simultaneously access a shared file system to look for shared libraries, it can cause significant performance problems for both the application and other users. Spindle scalably coordinates the distribution of shared libraries to an application to avoid hammering the shared file system.

  3. DYRK1A Is a Regulator of S-Phase Entry in Hepatic Progenitor Cells.

    PubMed

    Kruitwagen, Hedwig S; Westendorp, Bart; Viebahn, Cornelia S; Post, Krista; van Wolferen, Monique E; Oosterhoff, Loes A; Egan, David A; Delabar, Jean-Maurice; Toussaint, Mathilda J; Schotanus, Baukje A; de Bruin, Alain; Rothuizen, Jan; Penning, Louis C; Spee, Bart

    2018-01-15

    Hepatic progenitor cells (HPCs) are adult liver stem cells that act as a second line of defense in liver regeneration. They are normally quiescent, but in case of severe liver damage, HPC proliferation is triggered by external activation mechanisms from their niche. Although several important proproliferative mechanisms have been described, it is not known which key intracellular regulators govern the switch between HPC quiescence and the active cell cycle. We performed a high-throughput kinome small interfering RNA (siRNA) screen in HepaRG cells, an HPC-like cell line, and evaluated the effect on proliferation with a 5-ethynyl-2'-deoxyuridine (EdU) incorporation assay. One hit increased the percentage of EdU-positive cells after knockdown: dual specificity tyrosine phosphorylation regulated kinase 1A (DYRK1A). Upon DYRK1A silencing, the percentages of EdU- and phosphorylated histone H3 (pH3)-positive cells were increased, yet total cell numbers were not, possibly because of a subsequent delay in cell cycle progression. This phenotype was confirmed with chemical inhibition of DYRK1A using harmine and with primary HPCs cultured as liver organoids. DYRK1A inhibition impaired Dimerization Partner, RB-like, E2F, and multivulva class B (DREAM) complex formation in HPCs and abolished its transcriptional repression of cell cycle progression. To further analyze DYRK1A function in HPC proliferation, liver organoid cultures were established from mBACtgDyrk1A mice, which harbor one extra copy of the murine Dyrk1a gene (Dyrk+++). Dyrk+++ organoids had both a reduced percentage of EdU-positive cells and reduced proliferation compared with wild-type organoids. This study provides evidence for an essential role of DYRK1A as a balanced regulator of S-phase entry in HPCs. An exact gene dosage is crucial, as both DYRK1A deficiency and overexpression affect HPC cell cycle progression.

  4. A ``Cyber Wind Facility'' for HPC Wind Turbine Field Experiments

    NASA Astrophysics Data System (ADS)

    Brasseur, James; Paterson, Eric; Schmitz, Sven; Campbell, Robert; Vijayakumar, Ganesh; Lavely, Adam; Jayaraman, Balaji; Nandi, Tarak; Jha, Pankaj; Dunbar, Alex; Motta-Mena, Javier; Craven, Brent; Haupt, Sue

    2013-03-01

    The Penn State ``Cyber Wind Facility'' (CWF) is a high-fidelity multi-scale high performance computing (HPC) environment in which ``cyber field experiments'' are designed and ``cyber data'' collected from wind turbines operating within the atmospheric boundary layer (ABL) environment. Conceptually the ``facility'' is akin to a high-tech wind tunnel with a controlled physical environment, but unlike a wind tunnel it replicates commercial-scale wind turbines operating in the field and forced by true atmospheric turbulence with controlled stability state. The CWF is built from state-of-the-art geometry and grid design, high-accuracy numerical methods, and high-resolution simulation strategies that blend unsteady RANS near the surface with high-fidelity large-eddy simulation (LES) in the separated boundary layer, blade, and rotor wake regions, embedded within high-resolution LES of the ABL. CWF experiments complement physical field facility experiments, which can capture wider ranges of meteorological events but offer minimal control over the environment and only small numbers of sensors at low spatial resolution. I shall report on the first CWF experiments aimed at dynamical interactions between ABL turbulence and space-time wind turbine loadings. Supported by DOE and NSF.

  5. High-energy supercapacitors based on hierarchical porous carbon with an ultrahigh ion-accessible surface area in ionic liquid electrolytes

    NASA Astrophysics Data System (ADS)

    Zhong, Hui; Xu, Fei; Li, Zenghui; Fu, Ruowen; Wu, Dingcai

    2013-05-01

    A very important yet really challenging issue to address is how to greatly increase the energy density of supercapacitors to approach or even exceed those of batteries without sacrificing the power density. Herein we report the fabrication of a new class of ultrahigh surface area hierarchical porous carbon (UHSA-HPC) based on the pore formation and widening of polystyrene-derived HPC by KOH activation, and highlight its superior ability for energy storage in supercapacitors with ionic liquid (IL) as electrolyte. The UHSA-HPC with a surface area of more than 3000 m2 g-1 shows an extremely high energy density, i.e., 118 W h kg-1 at a power density of 100 W kg-1. This is ascribed to its unique hierarchical nanonetwork structure with a large number of small-sized nanopores for IL storage and an ideal meso-/macroporous network for IL transfer.

  6. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next-generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
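
    A rough sketch of how memory content similarity might be quantified offline is given below: it hashes fixed-size 'pages' of a byte buffer and reports the fraction that duplicate some other page. This is only an illustration of the underlying idea, not the runtime proposed in the paper.

        import hashlib
        from collections import Counter

        def duplicate_page_fraction(buffer, page_size=4096):
            """Fraction of pages whose content is identical to at least one
            other page in the buffer (a crude similarity measure)."""
            pages = [buffer[i:i + page_size] for i in range(0, len(buffer), page_size)]
            counts = Counter(hashlib.sha1(p).digest() for p in pages)
            duplicated = sum(c for c in counts.values() if c > 1)
            return duplicated / len(pages) if pages else 0.0

        if __name__ == "__main__":
            # Zero-initialized regions are a common source of identical pages.
            snapshot = bytes(4096 * 10) + bytes(range(256)) * 16
            print(f"duplicate pages: {duplicate_page_fraction(snapshot):.2%}")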

  7. Unmet needs for analyzing biological big data: A survey of 704 NSF principal investigators.

    PubMed

    Barone, Lindsay; Williams, Jason; Micklos, David

    2017-10-01

    In a 2016 survey of 704 National Science Foundation (NSF) Biological Sciences Directorate principal investigators (BIO PIs), nearly 90% indicated they are currently or will soon be analyzing large data sets. BIO PIs considered a range of computational needs important to their work, including high performance computing (HPC), bioinformatics support, multistep workflows, updated analysis software, and the ability to store, share, and publish data. Previous studies in the United States and Canada emphasized infrastructure needs. However, BIO PIs said the most pressing unmet needs are training in data integration, data management, and scaling analyses for HPC-acknowledging that data science skills will be required to build a deeper understanding of life. This portends a growing data knowledge gap in biology and challenges institutions and funding agencies to redouble their support for computational training in biology.

  8. Multifocal hepatic cystic mass as first manifestation of metastatic spinal hemangiopericytoma

    PubMed Central

    Balibrea, José M.; Rovira-Argelagués, Montserrat; Otero-Piñeiro, Ana M.; Julián, Juan F.; Carrato, Cristina; Navinés, Jordi; Sánchez, M. Carmen; Fernández-Llamazares, Jaime

    2012-01-01

    INTRODUCTION Hemangiopericytomas (HPCs) are rare vascular tumors with a high malignant potential. Hepatic metastases from HPC are very infrequent and usually show a distinctive solid aspect with a surrounding pseudocapsule. PRESENTATION OF CASE A 37-year-old man with a previous medical history of recurrent spinal hemangiopericytoma presented with a 9 cm × 7 cm cystic hepatic mass detected on follow-up. Contrast-enhanced US and MRI confirmed the presence of the lesion, showing mixed (solid and cystic) content. Parasitic and viral serology, serum tumor markers (CEA, CA 19-9, CA 125, AFP), upper and lower endoscopy, and general laboratory tests were normal, and an extended left lobectomy was performed. Histopathologic study confirmed the diagnosis of multifocal metastatic hemangiopericytoma with moderate CD-34, CD-99 and Bcl-2 positivity after immunohistochemical staining. After 1 year of follow-up the patient does not present any evidence of abdominal recurrence, but a skull base recurrence has been detected. DISCUSSION Liver metastases from spinal HPC are uncommon and do not usually have a cystic appearance, so radiologic diagnosis can be challenging. Even in a patient with previously diagnosed HPC, a cystic liver mass in a young patient makes it necessary to rule out a number of much more frequent benign and malignant diagnoses before metastatic disease can be confirmed. CONCLUSION The presence of a cystic hepatic mass makes it mandatory to rule out a number of neoplasms other than metastatic HPC before a definitive diagnosis is made. In addition to local radiotherapy and antiangiogenic agents, surgery can be useful to treat liver dissemination. PMID:23103627

  9. Unified Performance and Power Modeling of Scientific Workloads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shuaiwen; Barker, Kevin J.; Kerbyson, Darren J.

    2013-11-17

    It is expected that scientific applications executing on future large-scale HPC systems must be optimized not only in terms of performance, but also in terms of power consumption. As power and energy become increasingly constrained resources, researchers and developers must have access to tools that allow for accurate prediction of both performance and power consumption. Reasoning about performance and power consumption in concert will be critical for achieving maximum utilization of limited resources on future HPC systems. To this end, we present a unified performance and power model for the Nek-Bone mini-application developed as part of the DOE's CESAR Exascale Co-Design Center. Our models consider the impact of computation, point-to-point communication, and collective communication.
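
    Such a unified model can be written schematically as runtime = computation + point-to-point + collective terms, with energy as power multiplied by time. The sketch below is a generic analytical toy with made-up coefficients; it is not the Nek-Bone model from the paper.

        import math

        def runtime_model(n_elems, ranks, t_flop=2e-9, t_msg=5e-6, t_coll=20e-6):
            """Toy runtime (seconds): local compute + halo exchange + allreduce."""
            compute = n_elems / ranks * t_flop              # work split across ranks
            p2p = 6 * t_msg                                 # nearest-neighbour exchange
            collective = t_coll * math.log2(max(ranks, 2))  # tree-based allreduce
            return compute + p2p + collective

        def energy_model(n_elems, ranks, watts_per_rank=100.0):
            """Toy energy (joules): aggregate power times modeled runtime."""
            return ranks * watts_per_rank * runtime_model(n_elems, ranks)

        for ranks in (64, 256, 1024):
            print(ranks, round(runtime_model(10**9, ranks), 4), "s,",
                  round(energy_model(10**9, ranks), 1), "J")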

  10. NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report

    NASA Technical Reports Server (NTRS)

    Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ

    2013-01-01

    The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities

  11. Evaluation of FPGA to PC feedback loop

    NASA Astrophysics Data System (ADS)

    Linczuk, Pawel; Zabolotny, Wojciech M.; Wojenski, Andrzej; Krawczyk, Rafal D.; Pozniak, Krzysztof T.; Chernyshova, Maryna; Czarski, Tomasz; Gaska, Michal; Kasprowicz, Grzegorz; Kowalska-Strzeciwilk, Ewa; Malinowski, Karol

    2017-08-01

    The paper presents an evaluation study of the performance of a data transmission subsystem which can be used in High Energy Physics (HEP) and other High-Performance Computing (HPC) systems. The test environment consisted of a Xilinx Artix-7 FPGA and a server-grade PC connected via a PCIe 4x Gen2 bus. The DMA engine was based on the Xilinx DMA for PCI Express Subsystem controlled by a modified Xilinx XDMA kernel driver. The research is focused on the influence of the system configuration on the achievable throughput and latency of data transfer.

  12. Comparison of High Performance Network Options: EDR InfiniBand vs.100Gb RDMA Capable Ethernet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kachelmeier, Luke Anthony; Van Wig, Faith Virginia; Erickson, Kari Natania

    These are the slides for a presentation at the HPC Mini Showcase comparing two high performance network options: EDR InfiniBand and 100Gb RDMA-capable Ethernet. The comparison concluded that both options show good potential in the direct measurements; that 100Gb technology is still new and not standardized, so deployment is complex for either option; that equipment from different companies is not necessarily compatible; and that to get 100Gb/s, all components must come from a single vendor.

  13. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Wasserman, Harvey

    2014-04-30

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final review in the second round, which covered the six Office of Science program offices. This report is the result of that review.

  14. Evolving Storage and Cyber Infrastructure at the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen; Duffy, Daniel; Spear, Carrie; Sinno, Scott; Vaughan, Garrison; Bowen, Michael

    2018-01-01

    This talk will describe recent developments at the NASA Center for Climate Simulation (NCCS), which is funded by NASA's Science Mission Directorate and supports the specialized data storage and computational needs of weather, ocean, and climate researchers, as well as astrophysicists, heliophysicists, and planetary scientists. To meet requirements for higher-resolution, higher-fidelity simulations, the NCCS is augmenting its High Performance Computing (HPC) and storage/retrieval environment. As the petabytes of model and observational data grow, the NCCS is broadening its data services offerings and deploying and expanding virtualization resources for high-performance analytics.

  15. Use of Massive Parallel Computing Libraries in the Context of Global Gravity Field Determination from Satellite Data

    NASA Astrophysics Data System (ADS)

    Brockmann, J. M.; Schuh, W.-D.

    2011-07-01

    The estimation of the global Earth's gravity field, parametrized as a finite spherical harmonic series, is computationally demanding. The computational effort depends on the one hand on the maximum resolution of the spherical harmonic expansion (i.e., the number of parameters to be estimated) and on the other hand on the number of observations (several million for, e.g., observations from the GOCE satellite mission). To cope with these computational demands, massively parallel software based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclically distributed on a processor grid composed of a large number of (distributed memory) computers. Using this set of standard HPC libraries has the benefit that, once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
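
    The 2D block-cyclic layout that ScaLAPACK/PBLAS require can be illustrated in a few lines: each matrix block is assigned to a position on a process grid by taking its block indices modulo the grid dimensions. The sketch below shows only this owner map (with zero source offsets); it is not a ScaLAPACK call.

        def block_cyclic_owner(i, j, nb, prow, pcol):
            """(process-row, process-column) owning global entry (i, j) for
            block size nb on a prow x pcol grid, 2D block-cyclic layout."""
            return (i // nb) % prow, (j // nb) % pcol

        def owner_map(n, nb, prow, pcol):
            """Owner of every nb x nb block of an n x n matrix."""
            nblocks = (n + nb - 1) // nb
            return [[block_cyclic_owner(bi * nb, bj * nb, nb, prow, pcol)
                     for bj in range(nblocks)] for bi in range(nblocks)]

        # 8x8 matrix, 2x2 blocks, distributed over a 2x2 process grid.
        for row in owner_map(8, 2, 2, 2):
            print(row)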

  16. Parallel Application Performance on Two Generations of Intel Xeon HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Christopher H.; Long, Hai; Sides, Scott

    2015-10-15

    Two next-generation node configurations hosting the Haswell microarchitecture were tested with a suite of microbenchmarks and application examples, and compared with a current Ivy Bridge production node on NREL's Peregrine high-performance computing cluster. A primary conclusion from this study is that the additional cores are of little value to individual task performance: limitations to application parallelism, or resource contention among concurrently running but independent tasks, limit effective utilization of these added cores. Hyperthreading generally impacts throughput negatively, but can improve performance in the absence of detailed attention to runtime workflow configuration. The observations offer some guidance to procurement of future HPC systems at NREL. First, raw core count must be balanced with available resources, particularly memory bandwidth; balance-of-system will determine value more than processor capability alone. Second, hyperthreading continues to be largely irrelevant to the workloads that are commonly seen, and were tested here, at NREL. Finally, perhaps the most impactful enhancement to productivity might occur through enabling multiple concurrent jobs per node. Given the right type and size of workload, more may be achieved by doing many slow things at once than by doing fast things in order.

  17. Petascale computation performance of lightweight multiscale cardiac models using hybrid programming models.

    PubMed

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-01-01

    Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI, in contrast to our results using complex physiological models. Thus, with regard to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase, as will HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster-than-real-time multiscale cardiac simulations on these systems using hybrid programming models.
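
    A minimal sketch of the hybrid pattern discussed here, message passing between processes with threading inside each process, is shown below. It assumes the mpi4py package is available and computes a trivial global sum rather than a cardiac model.

        # Run with e.g.: mpiexec -n 4 python hybrid_sketch.py
        from concurrent.futures import ThreadPoolExecutor
        from mpi4py import MPI

        def local_work(chunk):
            """Stand-in for per-thread computation (e.g., a block of cells)."""
            return sum(x * x for x in chunk)

        def main(n=1_000_000, threads=4):
            comm = MPI.COMM_WORLD
            rank, size = comm.Get_rank(), comm.Get_size()
            # Each MPI rank owns a slice of the global index range...
            lo, hi = rank * n // size, (rank + 1) * n // size
            indices = range(lo, hi)
            # ...and splits its slice across threads (the OpenMP-like level).
            chunks = [indices[i::threads] for i in range(threads)]
            with ThreadPoolExecutor(max_workers=threads) as pool:
                local = sum(pool.map(local_work, chunks))
            total = comm.allreduce(local, op=MPI.SUM)
            if rank == 0:
                print("global sum of squares:", total)

        if __name__ == "__main__":
            main()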

  18. HPC Insights, Fall 2011

    DTIC Science & Technology

    2011-01-01

    Simulating Satellite Tracking Using Parallel Computing By Andrew Lindstrom, University of Hawaii at Hilo — Mentors: Carl Holmberg, Maui High Performance...RDECOM) and his management team, RDECOM Deputy Director Gary Martin; ARL Director John Miller; Communications-Electronics Research, Development...Saves Resources By Mike Knowles, ARL DSRC Site Lead, Lockheed Martin mode instead of full power down. The first phase of the EAS effort is an attempt

  19. AHPCRC - Army High Performance Computing Research Center

    DTIC Science & Technology

    2008-01-01

    University) Birds and insects use complex flapping and twisting wing motions to maneuver, hover, avoid obstacles, and maintain or regain their...vehicles for use in sensing, surveillance, and wireless communications. HPC simulations examine plunging, pitching, and twisting motions of aeroelastic...wings, to optimize the amplitudes and frequencies of flapping and twisting motions for the maximum amount of thrust. Several methods of calculation

  20. Using ANSYS Fluent on the Peregrine System | High-Performance Computing |

    Science.gov Websites

    This web page describes two ways to run ANSYS CFD interactively on NREL HPC systems, depending on whether graphics rendering is critical. Remote graphics performance can be quite low (e.g., windows take a long time to come up); for small tasks, it may help to enable SSH compression (in PuTTY, go to Category/Connection/SSH and check the "enable compression" box).

  1. Influence of casting conditions on durability and structural performance of HPC-AR : optimization of self-consolidating concrete to guarantee homogeneity during casting of long structural elements : final report.

    DOT National Transportation Integrated Search

    2017-05-01

    This report is a summary of the research done on dynamic segregation of self-consolidating concrete (SCC) including the casting of pre-stressed beams at Coreslab Structures. SCC is a highly flowable concrete that spreads into place with little to no ...

  2. SharP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkata, Manjunath Gorentla; Aderholdt, William F

    Pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in system architecture is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, such a system typically has a high-performing network and a compute accelerator. This system architecture is not only effective for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or this convergence. SharP provides a programming abstraction to address this problem. The abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.

  3. Hemangiopericytoma arising from the wall of the urinary bladder.

    PubMed

    Kibar, Y; Uzar, A I; Erdemir, F; Ozcan, A; Coban, H; Seckin, B

    2006-01-01

    Hemangiopericytoma (HPC) arising from within the urinary bladder is exceptionally rare. A 45-year-old man with left groin pain, vague suprapubic discomfort, and urinary frequency was admitted to our clinic. Pelvic tomography revealed a tumor in the bladder wall measuring 4 x 3 cm that was not clearly distinct from the lower abdominal wall. Partial cystectomy was performed and the histopathological examination confirmed the hemangiopericytoma. Three thousand rad of external beam irradiation was administered after the operation. Partial cystectomy and adjuvant radiotherapy may be a simple and effective alternative operation for patients with HPC.

  4. Hierarchical parallelisation of functional renormalisation group calculations - hp-fRG

    NASA Astrophysics Data System (ADS)

    Rohe, Daniel

    2016-10-01

    The functional renormalisation group (fRG) has evolved into a versatile tool in condensed matter theory for studying important aspects of correlated electron systems. Practical applications of the method often involve a high numerical effort, motivating the question of how far High Performance Computing (HPC) can leverage the approach. In this work we report on a multi-level parallelisation of the underlying computational machinery and show that this can speed up the code by several orders of magnitude. This in turn can extend the applicability of the method to otherwise inaccessible cases. We exploit three levels of parallelisation: distributed computing by means of Message Passing (MPI), shared-memory computing using OpenMP, and vectorisation by means of SIMD units (single-instruction-multiple-data). Results are provided for two distinct High Performance Computing (HPC) platforms, namely the IBM-based BlueGene/Q system JUQUEEN and an Intel Sandy-Bridge-based development cluster. We discuss how certain issues and obstacles were overcome in the course of adapting the code. Most importantly, we conclude that this vast improvement can actually be accomplished by introducing only moderate changes to the code, such that this strategy may serve as a guideline for other researchers looking to improve the efficiency of their own codes.

  5. On stress-state optimization in steel-concrete composite structures

    NASA Astrophysics Data System (ADS)

    Brauns, J.; Skadins, U.

    2017-10-01

    The plastic resistance of a concrete-filled column is commonly given as the sum of the component resistances, taking into account the effect of confinement. The stress state in a composite column is determined by taking into account the non-linear dependence of the modulus of elasticity and Poisson’s ratio on the stress level in the concrete core. The effect of confinement occurs at a high stress level, when the structural steel acts in tension and the concrete in lateral compression. The stress state of a composite beam is determined taking into account its non-linear dependence on the position of the neutral axis. In order to improve the stress state of a composite element and increase the safety of the construction, steel and concrete of appropriate strength have to be used. The safety of highly stressed composite structures can be achieved by using high-performance concrete (HPC). In this study, stress analysis of a composite column and beam is performed with the purpose of obtaining the maximum load-bearing capacity and enhancing the safety of the structure by using components with the appropriate strength and by taking into account the composite action. The effect of HPC on the stress state and load-carrying capacity of composite elements is analysed.

  6. Near Real-Time Probabilistic Damage Diagnosis Using Surrogate Modeling and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Zubair, Mohammad; Ranjan, Desh

    2017-01-01

    This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.
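
    To make the surrogate-plus-sampling idea concrete, the sketch below runs a basic Metropolis sampler against a cheap surrogate of a strain response. The quadratic surrogate, noise level, and measurement are invented for illustration; they are not the finite element models or the reformulated samplers of the paper.

        import math
        import random

        def surrogate_strain(crack_length):
            """Cheap stand-in for a finite element strain prediction (invented)."""
            return 1.0 + 0.8 * crack_length + 0.3 * crack_length ** 2

        def log_posterior(crack_length, measured, sigma=0.05):
            if not 0.0 <= crack_length <= 2.0:      # uniform prior bounds
                return -math.inf
            resid = measured - surrogate_strain(crack_length)
            return -0.5 * (resid / sigma) ** 2      # Gaussian likelihood

        def metropolis(measured, n_steps=20000, step=0.05, seed=1):
            rng = random.Random(seed)
            x, lp = 1.0, log_posterior(1.0, measured)
            samples = []
            for _ in range(n_steps):
                proposal = x + rng.gauss(0.0, step)
                lp_prop = log_posterior(proposal, measured)
                if rng.random() < math.exp(min(0.0, lp_prop - lp)):  # accept/reject
                    x, lp = proposal, lp_prop
                samples.append(x)
            return samples

        if __name__ == "__main__":
            chain = metropolis(measured=2.1)
            burn = chain[len(chain) // 2:]          # discard first half as burn-in
            print("posterior mean crack length:", round(sum(burn) / len(burn), 3))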

  7. Hippocampal-medial prefrontal circuit supports memory updating during learning and post-encoding rest

    PubMed Central

    Schlichting, Margaret L.; Preston, Alison R.

    2015-01-01

    Learning occurs in the context of existing memories. Encountering new information that relates to prior knowledge may trigger integration, whereby established memories are updated to incorporate new content. Here, we provide a critical test of recent theories suggesting hippocampal (HPC) and medial prefrontal (MPFC) involvement in integration, both during and immediately following encoding. Human participants with established memories for a set of initial (AB) associations underwent fMRI scanning during passive rest and encoding of new related (BC) and unrelated (XY) pairs. We show that HPC-MPFC functional coupling during learning was more predictive of trial-by-trial memory for associations related to prior knowledge relative to unrelated associations. Moreover, the degree to which HPC-MPFC functional coupling was enhanced following overlapping encoding was related to memory integration behavior across participants. We observed a dissociation between anterior and posterior MPFC, with integration signatures during post-encoding rest specifically in the posterior subregion. These results highlight the persistence of integration signatures into post-encoding periods, indicating continued processing of interrelated memories during rest. We also interrogated the coherence of white matter tracts to assess the hypothesis that integration behavior would be related to the integrity of the underlying anatomical pathways. Consistent with our predictions, more coherent HPC-MPFC white matter structure was associated with better performance across participants. This HPC-MPFC circuit also interacted with content-sensitive visual cortex during learning and rest, consistent with reinstatement of prior knowledge to enable updating. These results show that the HPC-MPFC circuit supports on- and offline integration of new content into memory. PMID:26608407

  8. CyberShake: Running Seismic Hazard Workflows on Distributed HPC Resources

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Graves, R. W.; Gill, D.; Olsen, K. B.; Milner, K. R.; Yu, J.; Jordan, T. H.

    2013-12-01

    As part of its program of earthquake system science research, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by simulating a tensor-valued wavefield of Strain Green Tensors, and then using seismic reciprocity to calculate synthetic seismograms for about 415,000 events per site of interest. These seismograms are processed to compute ground motion intensity measures, which are then combined with probabilities from an earthquake rupture forecast to produce a site-specific hazard curve. Seismic hazard curves for hundreds of sites in a region can be used to calculate a seismic hazard map, representing the seismic hazard for a region. We present a recently completed PSHA study in which we calculated four CyberShake seismic hazard maps for the Southern California area to compare how CyberShake hazard results are affected by different SGT computational codes (AWP-ODC and AWP-RWG) and different community velocity models (Community Velocity Model - SCEC (CVM-S4) v11.11 and Community Velocity Model - Harvard (CVM-H) v11.9). We present our approach to running workflow applications on distributed HPC resources, including systems without support for remote job submission. We show how our approach extends the benefits of scientific workflows, such as job and data management, to large-scale applications on Track 1 and Leadership class open-science HPC resources. We used our distributed workflow approach to perform CyberShake Study 13.4 on two new NSF open-science HPC computing resources, Blue Waters and Stampede, executing over 470 million tasks to calculate physics-based hazard curves for 286 locations in the Southern California region. For each location, we calculated seismic hazard curves with two different community velocity models and two different SGT codes, resulting in over 1100 hazard curves. We will report on the performance of this CyberShake study, four times larger than previous studies. Additionally, we will examine the challenges we face applying these workflow techniques to additional open-science HPC systems and discuss whether our workflow solutions continue to provide value to our large-scale PSHA calculations.
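
    The hazard-curve step described above can be sketched schematically: the annual rate of exceeding an intensity level is the sum over ruptures of the rupture rate times the probability that its ground motion exceeds that level. The snippet below assumes a lognormal ground-motion distribution and invented rupture rates; it is a generic PSHA illustration, not the CyberShake codes.

        import math

        def exceedance_rate(im_level, ruptures):
            """Annual rate of exceeding im_level, given (rate, median, ln-sigma)
            tuples with lognormal ground motion per rupture."""
            rate = 0.0
            for annual_rate, median, log_std in ruptures:
                z = (math.log(im_level) - math.log(median)) / log_std
                p_exceed = 0.5 * math.erfc(z / math.sqrt(2.0))
                rate += annual_rate * p_exceed
            return rate

        # Invented rupture forecast: (rate per year, median SA in g, ln-sigma).
        forecast = [(0.01, 0.15, 0.6), (0.002, 0.40, 0.55), (0.0005, 0.90, 0.5)]
        for level in (0.1, 0.2, 0.5, 1.0):
            print(f"SA > {level:.1f} g: {exceedance_rate(level, forecast):.2e} /yr")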

  9. Infiltrating sulfur into a highly porous carbon sphere as cathode material for lithium–sulfur batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Xiaohui; Kim, Dul-Sun; Ahn, Hyo-Jun

    2014-10-15

    Highlights: • A highly porous carbon (HPC) with regular spherical morphology was synthesized. • Sulfur/HPC composites were prepared by the melt–diffusion method. • Sulfur/HPC composites showed improved cyclability and long-term cycle life. - Abstract: A sulfur composite material with a highly porous carbon sphere as the conducting container was prepared. The highly porous carbon sphere was easily synthesized with a resorcinol–formaldehyde precursor as the carbon source. The morphology of the carbon was observed with field emission scanning electron microscopy and transmission electron microscopy, which showed a well-defined spherical shape. Brunauer–Emmett–Teller analysis indicated that it possesses a high specific surface area of 1563 m² g⁻¹ and a total pore volume of 2.66 cm³ g⁻¹ with a bimodal pore size distribution, which allow high sulfur loading and easy transportation of lithium ions. Sulfur–carbon composites with varied sulfur contents were prepared by the melt–diffusion method, and lithium–sulfur cells with the sulfur composites showed improved cyclability and long-term cycle life.

  10. Genetic heterogeneity in Finnish hereditary prostate cancer using ordered subset analysis

    PubMed Central

    Simpson, Claire L; Cropp, Cheryl D; Wahlfors, Tiina; George, Asha; Jones, MaryPat S; Harper, Ursula; Ponciano-Jackson, Damaris; Tammela, Teuvo; Schleutker, Johanna; Bailey-Wilson, Joan E

    2013-01-01

    Prostate cancer (PrCa) is the most common male cancer in developed countries and the second most common cause of cancer death after lung cancer. We recently reported a genome-wide linkage scan in 69 Finnish hereditary PrCa (HPC) families, which replicated the HPC9 locus on 17q21-q22 and identified a locus on 2q37. The aim of this study was to identify additional loci linked to HPC. Here we used ordered subset analysis (OSA), conditioned on nonparametric linkage to these loci, to detect loci linked to HPC in subsets of families but not in the overall sample. We analyzed the families based on their evidence for linkage to chromosome 2, chromosome 17, and a maximum score using the strongest evidence of linkage from either of the two loci. Significant linkage to a 5-cM linkage interval with a peak OSA nonparametric allele-sharing LOD score of 4.876 on Xq26.3-q27 (ΔLOD=3.193, empirical P=0.009) was observed in a subset of 41 families weakly linked to 2q37, overlapping the HPCX1 locus. Two peaks that were novel to the analysis combining linkage evidence from both primary loci were identified: 18q12.1-q12.2 (OSA LOD=2.541, ΔLOD=1.651, P=0.03) and 22q11.1-q11.21 (OSA LOD=2.395, ΔLOD=2.36, P=0.006), which is close to HPC6. Using OSA allows us to find additional loci linked to HPC in subsets of families, and underlines the complex genetic heterogeneity of HPC even in highly aggregated families. PMID:22948022

  11. Equivalent cardioprotection induced by ischemic and hypoxic preconditioning.

    PubMed

    Xiang, Xujin; Lin, Haixia; Liu, Jin; Duan, Zeyan

    2013-04-01

    We aimed to compare cardioprotection induced by various hypoxic preconditioning (HPC) and ischemic preconditioning (IPC) protocols. Isolated rat hearts were randomly divided into 7 groups (n = 7 per group) and received 3 or 5 cycles of 3-minute ischemia or hypoxia followed by 3-minute reperfusion (IPC33 or HPC33 or IPC53 or HPC53 group), 3 cycles of 5-minute ischemia or hypoxia followed by 5-minute reperfusion (IPC35 group or HPC35 group), or 30-minute perfusion (ischemic/reperfusion group), respectively. Then all the hearts were subjected to 50-minute ischemia and 120-minute reperfusion. Cardiac function, infarct size, and coronary flow rate (CFR) were evaluated. Recovery of cardiac function and CFR in IPC35, HPC35, and HPC53 groups was significantly improved as compared with I/R group (p < 0.01). There were no significant differences in cardiac function parameters between IPC35 and HPC35 groups. Consistently, infarct size was significantly reduced in IPC35, HPC35, and HPC53 groups compared with ischemic/reperfusion group. Multiple-cycle short duration HPC exerted cardioprotection, which was as powerful as that of IPC.

  12. A population-based analysis of Head and Neck hemangiopericytoma.

    PubMed

    Shaigany, Kevin; Fang, Christina H; Patel, Tapan D; Park, Richard Chan; Baredes, Soly; Eloy, Jean Anderson

    2016-03-01

    Hemangiopericytomas (HPC) are tumors that arise from pericytes. Hemangiopericytomas of the head and neck are rare and occur both extracranially and intracranially. This study analyzes the demographics, clinicopathologic features, treatment modalities, and survival characteristics of extracranial head and neck hemangiopericytomas (HN-HPC) and compares them to HPCs at other body sites (Other-HPC). The Surveillance, Epidemiology, and End Results (SEER) database (1973-2012) was queried for HN-HPC (121 cases) and Other-HPC (510 cases). Data were analyzed comparatively with respect to various demographic and clinicopathologic factors. Disease-specific survival (DSS) was analyzed using the Kaplan-Meier model. There was no significant difference in age at time of diagnosis between HN-HPC and Other-HPC. Head and neck HPC was most commonly located in the connective and soft tissue (18.4%), followed by the nasal cavity and paranasal sinuses (8.5%). Head and neck HPCs were smaller than Other-HPC (P < 0.0001) and more likely to be of a lower histologic grade (P < 0.0097). The primary treatment modality for HN-HPC was surgery alone, used in 55.8% of cases. The 5-, 10-, and 20-year DSS for HN-HPC were 84.0%, 79.4%, and 69.4%, respectively. Higher histologic grade and the presence of distant metastases were poor prognostic factors for HN-HPC. Head and neck HPCs are rare tumors. This study represents the largest series of HN-HPCs to date. Surgery alone is the primary treatment modality for HN-HPC, with a favorable prognosis. Adjuvant radiotherapy does not appear to confer a survival benefit for any body site. Laryngoscope, 126:643-650, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  13. Investigation of vasculogenic mimicry in intracranial hemangiopericytoma.

    PubMed

    Zhang, Zhen; Han, Yun; Zhang, Keke; Teng, Liangzhu

    2011-01-01

    Vasculogenic mimicry (VM) has increasingly been recognized as a form of angiogenesis. Previous studies have shown that the existence of VM is associated with poor clinical prognosis in certain malignant tumors. However, whether VM is present and clinically significant in intracranial hemangiopericytoma (HPC) is unknown. The present study was therefore designed to examine the expression of VM in intracranial HPC and its correlation with matrix metalloprotease-2 (MMP-2) and vascular endothelial growth factor (VEGF). A total of 17 intracranial HPC samples, along with complete clinical and pathological data, were collected for our study. Immunohistochemistry was performed to stain tissue sections for CD34, periodic acid-Schiff, VEGF and MMP-2. The levels of VEGF and MMP-2 were compared between tumor samples with and without VM. The results showed that VM existed in 12 of 17 (70.6%) intracranial HPC samples. The presence of VM in tumors was associated with tumor recurrence (P<0.05) and expression of MMP-2 (P<0.05). However, there was no difference in the expression of VEGF between groups with and without VM.

  14. HPC-Microgels: New Look at Structure and Dynamics

    NASA Astrophysics Data System (ADS)

    McKenna, John; Streletzky, Kiril; Mohieddine, Rami

    2006-10-01

    Issues remain unresolved in targeted chemotherapy, including an inability to effectively target cancerous tissue, the loss of low molecular weight medicines to the RES system, the high cytotoxicity of currently used drug carriers, and the inability to control the release of medicines upon arrival at the target. Hydroxypropyl cellulose (HPC) microgels may be able to surmount these obstacles. HPC is a high molecular weight polymer with low cytotoxicity and a critical temperature around 41 °C. We cross-linked HPC polymer chains to produce microgel nanoparticles and studied their structure and dynamics using dynamic light scattering spectroscopy. The complex nature of the fluid and the large size distribution of the particles render the typical characterization algorithm, CONTIN, ineffective and inconsistent. Instead, the particle spectra have been fit to a sum of stretched exponentials. Each term offers three parameters for analysis and represents a single mode. The results of this analysis show that the microgels undergo a multi- to uni-modal transition around 41 °C. The CONTIN size distribution analysis shows similar results, but with much less consistency and resolution. During the phase transition the microgel particles are found to actually shrink. This property might be particularly useful for controlled drug delivery and release.
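
    A minimal sketch of the fitting approach described here is shown below: a single stretched-exponential (Kohlrausch-Williams-Watts) mode is fit to a synthetic correlation function with scipy; in the study a sum of such terms was used, and the data here are synthetic stand-ins for a DLS measurement.

        import numpy as np
        from scipy.optimize import curve_fit

        def stretched_exp(t, amplitude, tau, beta):
            """Single stretched-exponential (KWW) relaxation mode."""
            return amplitude * np.exp(-(t / tau) ** beta)

        # Synthetic correlation data standing in for a DLS measurement.
        t = np.logspace(-6, 0, 80)                      # delay times (s)
        rng = np.random.default_rng(0)
        g1 = stretched_exp(t, 1.0, 1e-3, 0.8) + rng.normal(0.0, 0.01, t.size)

        popt, _ = curve_fit(stretched_exp, t, g1,
                            p0=[1.0, 1e-4, 1.0],
                            bounds=([0.0, 1e-8, 0.1], [2.0, 1.0, 1.0]))
        amplitude, tau, beta = popt
        print(f"amplitude={amplitude:.3f}  tau={tau:.2e} s  beta={beta:.2f}")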

  15. Experience in Reconstructing the PT-60-90 Turbine by Reconditioning Heat Treatment of the High-Pressure Cylinder Shell

    NASA Astrophysics Data System (ADS)

    Ermolaev, V. V.; Zhuchenko, L. A.; Lyubimov, A. A.; Gladshtein, V. I.; Kremer, V. L.

    2018-06-01

    Experience in reconstructing the PT-60-90 turbine at Salavatskaya CHPP after more than 350,000 h of operation is described. In the course of the reconstruction, the life of the turbine was restored, its economic efficiency was increased, the process extraction of 1.27-1.57 MPa was changed to uncontrolled extraction, and an additional extraction of 3.43 MPa was arranged. The high-pressure cylinder (HPC) shell was restored by reconditioning heat treatment (RHT), and the rotor was replaced by a new, modernized one. To select the optimal conditions for the reconditioning heat treatment of the HPC shell of the PT-60-90 turbine, manufactured from 20CrMoPL grade steel, the results of previously conducted tests of shell metal of the same grade were integrated. The heat treatment was carried out on modernized furnace equipment using instruments and methods for controlling the temperature and the heating and cooling rates. Detailed nondestructive inspection of the upper and lower HPC halves was performed. The locations, distribution, sizes, and types of the defects were identified. The detected defects and austenitic build-ups were removed, welded with pearlite electrodes, examined, and subjected to heat treatment (tempering). The actual heat treatment conditions were analyzed and, based on the obtained data on the mechanical properties of the metal, the tempering temperature and time were specified. A complete investigation of the metal of both HPC halves was conducted prior to the reconditioning heat treatment. The reliability of the metal of the cylinder shell after RHT was evaluated by its mechanical properties, such as tensile strength, critical ductile-to-brittle transition temperature (crack resistance), and stress-rupture strength. It was established that, after RHT, characteristics of the metal such as yield strength, ultimate strength, elongation per unit length, contraction ratio, hardness, and impact toughness improved significantly and, on the whole, the quality of the metal met the requirements of the normative documentation for newly manufactured castings. The heat resistance of the metal of the cylinder shell after RHT also increased, which can ensure the operation of the HPC shell for more than 200,000 h provided that the recommendations for regular inspections of its condition are followed.

  16. Activity of the anterior cingulate cortex and ventral hippocampus underlie increases in contextual fear generalization.

    PubMed

    Cullen, Patrick K; Gilman, T Lee; Winiecki, Patrick; Riccio, David C; Jasnow, Aaron M

    2015-10-01

    Memories for context become less specific with time resulting in animals generalizing fear from training contexts to novel contexts. Though much attention has been given to the neural structures that underlie the long-term consolidation of a context fear memory, very little is known about the mechanisms responsible for the increase in fear generalization that occurs as the memory ages. Here, we examine the neural pattern of activation underlying the expression of a generalized context fear memory in male C57BL/6J mice. Animals were context fear conditioned and tested for fear in either the training context or a novel context at recent and remote time points. Animals were sacrificed and fluorescent in situ hybridization was performed to assay neural activation. Our results demonstrate activity of the prelimbic, infralimbic, and anterior cingulate (ACC) cortices as well as the ventral hippocampus (vHPC) underlie expression of a generalized fear memory. To verify the involvement of the ACC and vHPC in the expression of a generalized fear memory, animals were context fear conditioned and infused with 4% lidocaine into the ACC, dHPC, or vHPC prior to retrieval to temporarily inactivate these structures. The results demonstrate that activity of the ACC and vHPC is required for the expression of a generalized fear memory, as inactivation of these regions returned the memory to a contextually precise form. Current theories of time-dependent generalization of contextual memories do not predict involvement of the vHPC. Our data suggest a novel role of this region in generalized memory, which should be incorporated into current theories of time-dependent memory generalization. We also show that the dorsal hippocampus plays a prolonged role in contextually precise memories. Our findings suggest a possible interaction between the ACC and vHPC controls the expression of fear generalization. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Computational challenges in atomic, molecular and optical physics.

    PubMed

    Taylor, Kenneth T

    2002-06-15

    Six challenges are discussed. These are the laser-driven helium atom; the laser-driven hydrogen molecule and hydrogen molecular ion; electron scattering (with ionization) from one-electron atoms; the vibrational and rotational structure of molecules such as H3+ and water at their dissociation limits; laser-heated clusters; and quantum degeneracy and Bose-Einstein condensation. The first four concern fundamental few-body systems where use of high-performance computing (HPC) is currently making possible accurate modelling from first principles. This leads to reliable predictions and support for laboratory experiment as well as true understanding of the dynamics. Important aspects of these challenges addressable only via a terascale facility are set out. Such a facility makes the last two challenges in the above list meaningfully accessible for the first time, and the scientific interest together with the prospective role for HPC in these is emphasized.

  18. Constructive Engineering of Simulations

    NASA Technical Reports Server (NTRS)

    Snyder, Daniel R.; Barsness, Brendan

    2011-01-01

    Joint experimentation that investigates sensor optimization, re-tasking and management has far-reaching implications for Department of Defense, Interagency and multinational partners. An adaptation of traditional human-in-the-loop (HITL) Modeling and Simulation (M&S) was one approach used to generate the findings necessary to derive and support these implications. Here, an entity-based simulation was re-engineered to run on USJFCOM's High Performance Computer (HPC). The HPC was used to support the vast number of constructive runs necessary to produce statistically significant data in a timely manner. Then, from the resulting sensitivity analysis, event designers blended the necessary visualization and decision-making components into a synthetic environment for the HITL simulation trials. These trials focused on areas where human decision making had the greatest impact on the sensor investigations. Thus, this paper discusses how re-engineering existing M&S for constructive applications can positively influence the design of an associated HITL experiment.

  19. Template Interfaces for Agile Parallel Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilberto Z.

    Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.
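    The template idea can be illustrated with a minimal sketch. This is not the actual Tigres API; the sequence and parallel helpers below are hypothetical stand-ins that only show how common computation patterns (a pipeline of steps, a data-parallel fan-out) can be packaged as reusable building blocks.

      # Illustrative sketch only -- not the real Tigres API. The helpers below are
      # hypothetical and simply demonstrate the "template" idea.
      from concurrent.futures import ProcessPoolExecutor

      def sequence(value, *tasks):
          """Template 1: run tasks one after another, feeding each output to the next."""
          for task in tasks:
              value = task(value)
          return value

      def parallel(items, task, workers=4):
          """Template 2: apply the same task to many inputs concurrently (fan-out)."""
          with ProcessPoolExecutor(max_workers=workers) as pool:
              return list(pool.map(task, items))

      def clean(x):
          return x * 2          # stand-in "preprocessing" step

      def analyze(x):
          return x + 1          # stand-in "analysis" step

      def pipeline(x):
          return sequence(x, clean, analyze)

      if __name__ == "__main__":
          print(parallel(range(8), pipeline))   # [1, 3, 5, 7, 9, 11, 13, 15]

    The same composition could, in principle, be executed by a desktop process pool or by a batch scheduler at an HPC center, which is the portability point the abstract makes.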

  20. Experience Paper: Software Engineering and Community Codes Track in ATPESC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubey, Anshu; Riley, Katherine M.

    Argonne Training Program in Extreme Scale Computing (ATPESC) was started by the Argonne National Laboratory with the objective of expanding the ranks of better prepared users of high performance computing (HPC) machines. One of the unique aspects of the program was the inclusion of a software engineering and community codes track. The inclusion was motivated by the observation that projects with a good scientific and software process were better able to meet their scientific goals. In this paper we present our experience of running the software track from the beginning of the program until now. We discuss the motivations, the reception, and the evolution of the track over the years. We welcome discussion and input from the community to enhance the track in ATPESC, and also to facilitate inclusion of similar tracks in other HPC-oriented training programs.

  1. The OptIPuter microscopy demonstrator: enabling science through a transatlantic lightpath

    PubMed Central

    Ellisman, M.; Hutton, T.; Kirkland, A.; Lin, A.; Lin, C.; Molina, T.; Peltier, S.; Singh, R.; Tang, K.; Trefethen, A.E.; Wallom, D.C.H.; Xiong, X.

    2009-01-01

    The OptIPuter microscopy demonstrator project has been designed to enable concurrent and remote usage of world-class electron microscopes located in Oxford and San Diego. The project has constructed a network consisting of microscopes and computational and data resources that are all connected by a dedicated network infrastructure using the UK Lightpath and US Starlight systems. Key science drivers include examples from both materials and biological science. The resulting system is now a permanent link between the Oxford and San Diego microscopy centres. This will form the basis of further projects between the sites and expansion of the types of systems that can be remotely controlled, including optical, as well as electron, microscopy. Other improvements will include the updating of the Microsoft cluster software to the high performance computing (HPC) server 2008, which includes the HPC basic profile implementation that will enable the development of interoperable clients. PMID:19487201

  2. The OptIPuter microscopy demonstrator: enabling science through a transatlantic lightpath.

    PubMed

    Ellisman, M; Hutton, T; Kirkland, A; Lin, A; Lin, C; Molina, T; Peltier, S; Singh, R; Tang, K; Trefethen, A E; Wallom, D C H; Xiong, X

    2009-07-13

    The OptIPuter microscopy demonstrator project has been designed to enable concurrent and remote usage of world-class electron microscopes located in Oxford and San Diego. The project has constructed a network consisting of microscopes and computational and data resources that are all connected by a dedicated network infrastructure using the UK Lightpath and US Starlight systems. Key science drivers include examples from both materials and biological science. The resulting system is now a permanent link between the Oxford and San Diego microscopy centres. This will form the basis of further projects between the sites and expansion of the types of systems that can be remotely controlled, including optical, as well as electron, microscopy. Other improvements will include the updating of the Microsoft cluster software to the high performance computing (HPC) server 2008, which includes the HPC basic profile implementation that will enable the development of interoperable clients.

  3. Hydrogen postconditioning promotes survival of rat retinal ganglion cells against ischemia/reperfusion injury through the PI3K/Akt pathway.

    PubMed

    Wu, Jiangchun; Wang, Ruobing; Yang, Dianxu; Tang, Wenbin; Chen, Zeli; Sun, Qinglei; Liu, Lin; Zang, Rongyu

    2018-01-22

    Retinal ischemia/reperfusion injury (IRI) plays a crucial role in the pathophysiology of various ocular diseases. Our previous study has shown that postconditioning with inhaled hydrogen (H2) (HPC) can protect retinal ganglion cells (RGCs) in a rat model of retinal IRI. The present study aims to investigate potential mechanisms underlying HPC-induced protection. Retinal IRI was performed on the right eyes of rats and was followed by inhalation of 67% H2 mixed with 33% oxygen immediately after ischemia for 1 h daily for one week. RGC density was assessed using haematoxylin and eosin (HE) staining, retrograde labelling with cholera toxin beta (CTB), and TUNEL staining. Visual function was assessed using flash visual evoked potentials (FVEP) and pupillary light reflex (PLR). Phosphorylated Akt was analysed by RT-PCR and western blot. The results showed that administration of HPC significantly inhibited the apoptosis of RGCs and protected visual function. Simultaneously, HPC treatment markedly increased the phosphorylation of Akt. Blockade of PI3K activity by an inhibitor (LY294002) dramatically abolished its anti-apoptotic effect and lowered both visual function and Akt phosphorylation levels. Taken together, our results demonstrate that HPC appears to confer neuroprotection against retinal IRI via the PI3K/Akt pathway. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    PubMed

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

    Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently require high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or on distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs performed about 14.4-15.9 times faster, while Unphased jobs performed 1.1-18.6 times faster, compared to the accumulated computation duration. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.

  5. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs

    PubMed Central

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-01-01

    Background Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently require high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or on distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Results Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs performed about 14.4–15.9 times faster, while Unphased jobs performed 1.1–18.6 times faster, compared to the accumulated computation duration. Conclusion Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance. PMID:18541045
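    To give a feel for what "exhaustive window" analysis entails computationally, the sketch below enumerates both consecutive windows and arbitrary locus combinations for a marker set of the size used in the study; each window would then map to one independent FBAT or Unphased run that a batch scheduler (for example, Grid Engine array jobs) can farm out. The window-width and subset-size limits are hypothetical, not taken from the paper.

      # Enumerate candidate haplotype windows. Limits (max_width, max_size) are
      # illustrative assumptions, not the study's actual settings.
      from itertools import combinations

      def consecutive_windows(n_loci, max_width):
          """All windows of adjacent loci up to a maximum width."""
          for width in range(1, max_width + 1):
              for start in range(n_loci - width + 1):
                  yield tuple(range(start, start + width))

      def combinational_windows(n_loci, max_size):
          """All subsets of loci up to a maximum size (grows combinatorially)."""
          for size in range(1, max_size + 1):
              yield from combinations(range(n_loci), size)

      if __name__ == "__main__":
          n = 26                                    # loci, as in the dataset above
          cons = list(consecutive_windows(n, 5))
          comb = list(combinational_windows(n, 3))
          print(len(cons), "consecutive windows;", len(comb), "combinational windows")
          # Each window corresponds to one independent FBAT/Unphased job, which is
          # why a non-parallel package still parallelizes well under a queue scheduler.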

  6. BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations

    NASA Astrophysics Data System (ADS)

    Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I.; Strydis, Christos

    2017-12-01

    Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload’s performance characteristics. Main results. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. Significance. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.

  7. BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations.

    PubMed

    Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I; Strydis, Christos

    2017-12-01

    The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload's performance characteristics. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.
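    The per-run back-end selection described above can be sketched as a simple dispatch rule. The thresholds, back-end labels, and the Experiment structure below are invented for illustration; they are not BrainFrame's actual selection logic, and the PyNN integration is not reproduced.

      # Hypothetical sketch of per-simulation back-end selection. Thresholds and
      # back-end names are assumptions made for illustration only.
      from dataclasses import dataclass

      @dataclass
      class Experiment:
          n_neurons: int
          connectivity: float      # fraction of possible connections, 0..1

      def choose_backend(exp: Experiment) -> str:
          dense = exp.connectivity > 0.5
          large = exp.n_neurons > 10_000
          if large and dense:
              return "dataflow-engine"    # regular, dense workloads
          if large:
              return "gpu"                # many neurons, sparse coupling
          return "many-core-cpu"          # small or irregular instances

      for exp in [Experiment(500, 0.1), Experiment(50_000, 0.05), Experiment(50_000, 0.9)]:
          print(exp, "->", choose_backend(exp))

    The point of the sketch is only that network size and connectivity density, the two workload characteristics highlighted in the abstract, are enough to make different accelerators preferable for different runs.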

  8. Establishing Linux Clusters for High-Performance Computing (HPC) at NPS

    DTIC Science & Technology

    2004-09-01

    ... results of generating an md5sum for the Area51 roll. All the file information is available. This number can be checked against the number that the vendor provides for the particular piece of software. ... The given md5sum for the Area51 roll from the download site.

  9. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 3, Issue 1

    DTIC Science & Technology

    2011-01-01

    Multiscale Modeling of Materials. The rotating reflector antenna associated with airport traffic control systems is giving way in some applications to a newer technology called the phased-array antenna system (sometimes called a beamformer). ... batteries and phased-array antennas. Power and efficiency studies evaluate on-board HPC systems and advanced image processing applications.

  10. Selecting a Benchmark Suite to Profile High-Performance Computing (HPC) Machines

    DTIC Science & Technology

    2014-11-01

    ... architectures. Machines now contain central processing units (CPUs), graphics processing units (GPUs), and Many Integrated Core (MIC) architectures ... evaluate the feasibility and applicability of a new architecture just released to the market. Researchers are often unsure how available resources will ... Having a suite of programs running on different architectures, such as GPUs, MICs, and CPUs, adds complexity and technical challenges.

  11. Continuous Security and Configuration Monitoring of HPC Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Lomeli, H. D.; Bertsch, A. D.; Fox, D. M.

    Continuous security and configuration monitoring of information systems has been a time-consuming and laborious task for system administrators at the High Performance Computing (HPC) center. Prior to this project, system administrators had to manually check the settings of thousands of nodes, which required a significant number of hours, rendering the old process ineffective and inefficient. This paper explains the application of Splunk Enterprise, a software agent, and a reporting tool in the development of a user application interface to track and report on critical system updates and security compliance status of HPC clusters. In conjunction with other configuration management systems, the reporting tool is to provide continuous situational awareness to system administrators of the compliance state of information systems. Our approach consisted of the development, testing, and deployment of an agent to collect any arbitrary information across a massively distributed computing center, and organize that information into a human-readable format. Using Splunk Enterprise, this raw data was then gathered into a central repository and indexed for search, analysis, and correlation. Following acquisition and accumulation, the reporting tool generated and presented actionable information by filtering the data according to command line parameters passed at run time. Preliminary data showed results for over six thousand nodes. Further research and expansion of this tool could lead to the development of a series of agents to gather and report critical system parameters. However, in order to make use of the flexibility and resourcefulness of the reporting tool, the agent must conform to specifications set forth in this paper. This project has simplified the way system administrators gather, analyze, and report on the configuration and security state of HPC clusters, maintaining ongoing situational awareness. Rather than querying each cluster independently, compliance checking can be managed from one central location.
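    A minimal sketch of the per-node collection side is shown below, assuming the common pattern of emitting one JSON event per line so that an indexer such as Splunk can ingest it. The fields, output path, and format are hypothetical; this is not the LLNL agent or its actual specification.

      # Hypothetical per-node collection agent. Paths, fields, and the output
      # format are illustrative assumptions, not the real agent's specification.
      import json, platform, socket, time

      def collect_node_state():
          """Gather a few arbitrary settings from this node."""
          return {
              "host": socket.gethostname(),
              "kernel": platform.release(),
              "timestamp": int(time.time()),
              # A real agent would add package versions, sshd settings, mount
              # options, and other compliance-relevant configuration here.
          }

      def emit(record, path="/tmp/node_state.jsonl"):
          """Append one JSON event per line for downstream indexing and search."""
          with open(path, "a") as fh:
              fh.write(json.dumps(record) + "\n")

      if __name__ == "__main__":
          emit(collect_node_state())

    Run periodically on every node, records like these can then be filtered centrally by host, kernel version, or any other field, which is the "query once from one location" workflow the abstract describes.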

  12. The Livermore Brain: Massive Deep Learning Networks Enabled by High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Barry Y.

    The proliferation of inexpensive sensor technologies like the ubiquitous digital image sensors has resulted in the collection and sharing of vast amounts of unsorted and unexploited raw data. Companies and governments who are able to collect and make sense of large datasets to help them make better decisions more rapidly will have a competitive advantage in the information era. Machine Learning technologies play a critical role for automating the data understanding process; however, to be maximally effective, useful intermediate representations of the data are required. These representations or “features” are transformations of the raw data into a form where patterns are more easily recognized. Recent breakthroughs in Deep Learning have made it possible to learn these features from large amounts of labeled data. The focus of this project is to develop and extend Deep Learning algorithms for learning features from vast amounts of unlabeled data and to develop the HPC neural network training platform to support the training of massive network models. This LDRD project succeeded in developing new unsupervised feature learning algorithms for images and video and created a scalable neural network training toolkit for HPC. Additionally, this LDRD helped create the world’s largest freely-available image and video dataset supporting open multimedia research and used this dataset for training our deep neural networks. This research helped LLNL capture several work-for-others (WFO) projects, attract new talent, and establish collaborations with leading academic and commercial partners. Finally, this project demonstrated the successful training of the largest unsupervised image neural network using HPC resources and helped establish LLNL leadership at the intersection of Machine Learning and HPC research.

  13. Mini-Ckpts: Surviving OS Failures in Persistent Memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiala, David; Mueller, Frank; Ferreira, Kurt Brian

    Concern is growing in the high-performance computing (HPC) community on the reliability of future extreme-scale systems. Current efforts have focused on application fault-tolerance rather than the operating system (OS), despite the fact that recent studies have suggested that failures in OS memory are more likely. The OS is critical to a system's correct and efficient operation of the node and processes it governs -- and in HPC also for any other nodes a parallelized application runs on and communicates with: Any single node failure generally forces all processes of this application to terminate due to tight communication in HPC. Therefore, the OS itself must be capable of tolerating failures. In this work, we introduce mini-ckpts, a framework which enables application survival despite the occurrence of a fatal OS failure or crash. Mini-ckpts achieves this tolerance by ensuring that the critical data describing a process is preserved in persistent memory prior to the failure. Following the failure, the OS is rejuvenated via a warm reboot and the application continues execution effectively making the failure and restart transparent. The mini-ckpts rejuvenation and recovery process is measured to take between three to six seconds and has a failure-free overhead of between 3-5% for a number of key HPC workloads. In contrast to current fault-tolerance methods, this work ensures that the operating and runtime system can continue in the presence of faults. This is a much finer-grained and dynamic method of fault-tolerance than the current, coarse-grained, application-centric methods. Handling faults at this level has the potential to greatly reduce overheads and enables mitigation of additional fault scenarios.

  14. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing. The PRIMA Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malony, Allen D.; Wolf, Felix G.

    2014-01-31

    The growing number of cores provided by today’s high-end computing systems present substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data – even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these objectives: (1) refactor TAU and Scalasca performance system components for core code sharing and (2) integrate TAU and Scalasca functionality through data interfaces, formats, and utilities. As presented in this report, the project has completed these goals. In addition to shared technical advances, the groups have worked to engage with users through application performance engineering and tools training. In this regard, the project benefits from the close interactions the teams have with national laboratories in the United States and Germany. We have also sought to enhance our interactions through joint tutorials and outreach. UO has become a member of the Virtual Institute of High-Productivity Supercomputing (VI-HPS) established by the Helmholtz Association of German Research Centres as a center of excellence, focusing on HPC tools for diagnosing programming errors and optimizing performance. UO and FZJ have conducted several VI-HPS training activities together within the past three years.

  15. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing: the PRIMA Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malony, Allen D.; Wolf, Felix G.

    2014-01-31

    The growing number of cores provided by today’s high-end computing systems present substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data – even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these objectives: (1) refactor TAU and Scalasca performance system components for core code sharing and (2) integrate TAU and Scalasca functionality through data interfaces, formats, and utilities. As presented in this report, the project has completed these goals. In addition to shared technical advances, the groups have worked to engage with users through application performance engineering and tools training. In this regard, the project benefits from the close interactions the teams have with national laboratories in the United States and Germany. We have also sought to enhance our interactions through joint tutorials and outreach. UO has become a member of the Virtual Institute of High-Productivity Supercomputing (VI-HPS) established by the Helmholtz Association of German Research Centres as a center of excellence, focusing on HPC tools for diagnosing programming errors and optimizing performance. UO and FZJ have conducted several VI-HPS training activities together within the past three years.

  16. "Cor Occidere": a novel strategy of targeting the tumor core by radiosurgery in a radio- and chemo-resistant intracranial hemangiopericytoma.

    PubMed

    Li, You Quan; Chua, Eu Tiong; Chua, Kevin L M; Chua, Melvin L K

    2018-02-01

    Intracranial hemangiopericytomas (HPC) are chemotherapy- and radiotherapy (RT)-resistant. Here, we report on a novel stereotactic radiosurgery (SRS) technique, "Cor Occidere" (Latin), as a potential strategy for overcoming the radioresistance of HPC. A 36-year-old female presented to our clinic for consideration of a third course of RT for her recurrent cavernous sinus HPC, following previous cranial RT 13 and 5 years prior and a failed 9-month trial of bevacizumab/temozolomide. The tumor-adjacent brain stem and carotid artery risked substantial damage given the cumulative RT doses to these organs. We therefore designed an SRS plan targeting only the tumor core with a 16 Gy single fraction. Despite underdosing the tumor margin, we achieved stable disease over 25 months, contrasting with her responses to systemic therapies. Achieving tumor control despite a suboptimal treatment that utilized high-dose ablation of the tumor core suggests novel biological mechanisms to overcome the radioresistance of HPC.

  17. Impact of the early detection of esophageal neoplasms in hypopharyngeal cancer patients treated with concurrent chemoradiotherapy.

    PubMed

    Watanabe, Shigenobu; Ogino, Ichiro; Inayama, Yoshiaki; Sugiura, Madoka; Sakuma, Yasunori; Kokawa, Atsushi; Kunisaki, Chikara; Inoue, Tomio

    2017-04-01

    We examined the risk factors and prognostic factors for synchronous esophageal neoplasia (SEN) by comparing the characteristics of hypopharyngeal cancer (HPC) patients with and without SEN. We examined 183 patients who were treated with definitive radiotherapy for HPC. Lugol chromoendoscopy screening of the esophagus was performed in all patients before chemoradiotherapy. Thirty-six patients had SEN, 49 patients died of HPC and two died of esophageal cancer. The patients with SEN exhibited significantly higher alcohol consumption than those without SEN (P = 0.018). The 5-year overall survival (OS) rate of the 36 patients with SEN was lower than that of the other patients (36.2% vs 63.4%, P = 0.006). The SEN patients exhibited significantly shorter HPC cause-specific survival than the other patients (P = 0.039). Both the OS (P = 0.005) and the HPC cause-specific survival (P = 0.026) of the patients with SEN were significantly shorter than those of the patients without SEN in multivariate analysis. Category 4/T1 stage esophageal cancer was treated with concurrent chemoradiotherapy (CCRT), endoscopic treatment or chemotherapy. The 5-year survival rates for esophageal cancer recurrence for CCRT, endoscopic treatment and chemotherapy were 71.5, 43.7 and 0%, respectively. The median (range) survival time (months) of CCRT, endoscopic treatment and chemotherapy was 22.7 (7.5-90.6), 46.44 (17.3-136.7) and 7.98 (3.72-22.8), respectively. Advanced HPC patients with SEN might have a poorer prognosis than those without SEN even when the esophageal cancer is detected early and managed appropriately. © 2014 Wiley Publishing Asia Pty Ltd.

  18. High-energy supercapacitors based on hierarchical porous carbon with an ultrahigh ion-accessible surface area in ionic liquid electrolytes.

    PubMed

    Zhong, Hui; Xu, Fei; Li, Zenghui; Fu, Ruowen; Wu, Dingcai

    2013-06-07

    A very important yet challenging issue to address is how to greatly increase the energy density of supercapacitors to approach or even exceed that of batteries without sacrificing power density. Herein we report the fabrication of a new class of ultrahigh surface area hierarchical porous carbon (UHSA-HPC), based on the pore formation and widening of polystyrene-derived HPC by KOH activation, and highlight its superior ability for energy storage in supercapacitors with an ionic liquid (IL) as the electrolyte. The UHSA-HPC, with a surface area of more than 3000 m² g⁻¹, shows an extremely high energy density of 118 W h kg⁻¹ at a power density of 100 W kg⁻¹. This is ascribed to its unique hierarchical nanonetwork structure, with a large number of small-sized nanopores for IL storage and an ideal meso-/macroporous network for IL transfer.

  19. An asynchronous traversal engine for graph-based rich metadata management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Carns, Philip; Ross, Robert B.

    Rich metadata in high-performance computing (HPC) systems contains extended information about users, jobs, data files, and their relationships. Property graphs are a promising data model to represent heterogeneous rich metadata flexibly. Specifically, a property graph can use vertices to represent different entities and edges to record the relationships between vertices with unique annotations. The high-volume HPC use case, with millions of entities and relationships, naturally requires an out-of-core distributed property graph database, which must support live updates (to ingest production information in real time), low-latency point queries (for frequent metadata operations such as permission checking), and large-scale traversals (for provenance data mining). Among these needs, large-scale property graph traversals are particularly challenging for distributed graph storage systems. Most existing graph systems implement a "level synchronous" breadth-first search algorithm that relies on global synchronization in each traversal step. This performs well in many problem domains; but a rich metadata management system is characterized by imbalanced graphs, long traversal lengths, and concurrent workloads, each of which has the potential to introduce or exacerbate stragglers (i.e., abnormally slow steps or servers in a graph traversal) that lead to low overall throughput for synchronous traversal algorithms. Previous research indicated that the straggler problem can be mitigated by using asynchronous traversal algorithms, and many graph-processing frameworks have successfully demonstrated this approach. Such systems require the graph to be loaded into a separate batch-processing framework instead of being iteratively accessed, however. In this work, we investigate a general asynchronous graph traversal engine that can operate atop a rich metadata graph in its native format. We outline a traversal-aware query language and key optimizations (traversal-affiliate caching and execution merging) necessary for efficient performance. We further explore the effect of different graph partitioning strategies on the traversal performance for both synchronous and asynchronous traversal engines. Our experiments show that the asynchronous graph traversal engine is more efficient than its synchronous counterpart in the case of HPC rich metadata processing, where more servers are involved and larger traversals are needed. Furthermore, the asynchronous traversal engine is more adaptive to different graph partitioning strategies.

  20. An asynchronous traversal engine for graph-based rich metadata management

    DOE PAGES

    Dai, Dong; Carns, Philip; Ross, Robert B.; ...

    2016-06-23

    Rich metadata in high-performance computing (HPC) systems contains extended information about users, jobs, data files, and their relationships. Property graphs are a promising data model to represent heterogeneous rich metadata flexibly. Specifically, a property graph can use vertices to represent different entities and edges to record the relationships between vertices with unique annotations. The high-volume HPC use case, with millions of entities and relationships, naturally requires an out-of-core distributed property graph database, which must support live updates (to ingest production information in real time), low-latency point queries (for frequent metadata operations such as permission checking), and large-scale traversals (for provenance data mining). Among these needs, large-scale property graph traversals are particularly challenging for distributed graph storage systems. Most existing graph systems implement a "level synchronous" breadth-first search algorithm that relies on global synchronization in each traversal step. This performs well in many problem domains; but a rich metadata management system is characterized by imbalanced graphs, long traversal lengths, and concurrent workloads, each of which has the potential to introduce or exacerbate stragglers (i.e., abnormally slow steps or servers in a graph traversal) that lead to low overall throughput for synchronous traversal algorithms. Previous research indicated that the straggler problem can be mitigated by using asynchronous traversal algorithms, and many graph-processing frameworks have successfully demonstrated this approach. Such systems require the graph to be loaded into a separate batch-processing framework instead of being iteratively accessed, however. In this work, we investigate a general asynchronous graph traversal engine that can operate atop a rich metadata graph in its native format. We outline a traversal-aware query language and key optimizations (traversal-affiliate caching and execution merging) necessary for efficient performance. We further explore the effect of different graph partitioning strategies on the traversal performance for both synchronous and asynchronous traversal engines. Our experiments show that the asynchronous graph traversal engine is more efficient than its synchronous counterpart in the case of HPC rich metadata processing, where more servers are involved and larger traversals are needed. Furthermore, the asynchronous traversal engine is more adaptive to different graph partitioning strategies.
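    The scheduling difference between level-synchronous and asynchronous traversal can be sketched on a toy in-memory graph. This single-process example with hypothetical vertex names only illustrates where the per-level barrier sits; the distributed engine, query language, and optimizations described above are not reproduced. In a distributed setting, the barrier at the end of each level in the first routine is exactly where stragglers stall the whole traversal, while the second routine simply keeps pulling discovered vertices from a work queue.

      # Toy contrast between a level-synchronous BFS and a barrier-free traversal.
      # Vertex names are hypothetical; this is not the paper's engine.
      from collections import deque

      GRAPH = {
          "job1": ["file_a", "file_b"],
          "file_a": ["user_x"],
          "file_b": ["user_x", "job2"],
          "job2": ["file_c"],
          "user_x": [], "file_c": [],
      }

      def bfs_level_synchronous(start):
          """Process the frontier level by level; every level ends in a barrier."""
          visited, frontier, order = {start}, [start], []
          while frontier:
              next_frontier = []
              for v in frontier:                  # all of level k ...
                  order.append(v)
                  for w in GRAPH[v]:
                      if w not in visited:
                          visited.add(w)
                          next_frontier.append(w)
              frontier = next_frontier            # ... before any of level k+1
          return order

      def traverse_asynchronous(start):
          """Process vertices as soon as they are discovered; no per-level barrier."""
          visited, queue, order = {start}, deque([start]), []
          while queue:
              v = queue.popleft()
              order.append(v)
              for w in GRAPH[v]:
                  if w not in visited:
                      visited.add(w)
                      queue.append(w)
          return order

      print(bfs_level_synchronous("job1"))
      print(traverse_asynchronous("job1"))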

  1. Analysis of hematopoietic recovery after autologous transplantation as method of quality control for long-term progenitor cell cryopreservation.

    PubMed

    Pavlů, J; Auner, H W; Szydlo, R M; Sevillano, B; Palani, R; O'Boyle, F; Chaidos, A; Jakob, C; Kanfer, E; MacDonald, D; Milojkovic, D; Rahemtulla, A; Bradshaw, A; Olavarria, E; Apperley, J F; Pello, O M

    2017-12-01

    Hematopoietic precursor cells (HPC) are able to restore hematopoiesis after high-dose chemotherapy, and their cryopreservation is routinely employed prior to autologous hematopoietic cell transplantation (AHCT). Although previous studies showed the feasibility of long-term HPC storage, concerns remain about possible negative effects on their potency. To study the effects of long-term cryopreservation, we compared time to neutrophil and platelet recovery in 50 patients receiving two AHCT for multiple myeloma at least 2 years apart between 2006 and 2016, using HPC obtained from one mobilization and collection attempt before the first transplant. This product was divided into equivalent fractions allowing a minimum of 2 × 10⁶ CD34+ cells/kg recipient's weight. One fraction was used for the first transplant after a median storage of 60 days (range, 17-165) and another fraction was used, after a median storage of 1448 days (range, 849-3510), at the second AHCT. Neutrophil recovery occurred at 14 days (median; range, 11-21) after the first and 13 days (10-20) after the second AHCT. Platelets recovered at a median of 16 days after both procedures. Considering other factors, such as disease status, conditioning and HPC dose, these single-institution data demonstrated no reduction in the potency of HPC after long-term storage.

  2. The role of the hippocampus in approach-avoidance conflict decision-making: Evidence from rodent and human studies.

    PubMed

    Ito, Rutsuko; Lee, Andy C H

    2016-10-15

    The hippocampus (HPC) has been traditionally considered to subserve mnemonic processing and spatial cognition. Over the past decade, however, there has been increasing interest in its contributions to processes beyond these two domains. One question is whether the HPC plays an important role in decision-making under conditions of high approach-avoidance conflict, a scenario that arises when a goal stimulus is simultaneously associated with reward and punishment. This idea has its origins in rodent work conducted in the 1950s and 1960s, and has recently experienced a resurgence of interest in the literature. In this review, we will first provide an overview of classic rodent lesion data that first suggested a role for the HPC in approach-avoidance conflict processing and then proceed to describe a wide range of more recent evidence from studies conducted in rodents and humans. We will demonstrate that there is substantial, converging cross-species evidence to support the idea that the HPC, in particular the ventral (in rodents)/anterior (in humans) portion, contributes to approach-avoidance conflict decision making. Furthermore, we suggest that the seemingly disparate functions of the HPC (e.g. memory, spatial cognition, conflict processing) need not be mutually exclusive. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Managing Scientific Software Complexity with Bocca and CCA

    DOE PAGES

    Allan, Benjamin A.; Norris, Boyana; Elwasif, Wael R.; ...

    2008-01-01

    In high-performance scientific software development, the emphasis is often on short time to first solution. Even when the development of new components mostly reuses existing components or libraries and only small amounts of new code must be created, dealing with the component glue code and software build processes to obtain complete applications is still tedious and error-prone. Component-based software meant to reduce complexity at the application level increases complexity to the extent that the user must learn and remember the interfaces and conventions of the component model itself. To address these needs, we introduce Bocca, the first tool to enable application developers to perform rapid component prototyping while maintaining robust software-engineering practices suitable to HPC environments. Bocca provides project management and a comprehensive build environment for creating and managing applications composed of Common Component Architecture components. Of critical importance for high-performance computing (HPC) applications, Bocca is designed to operate in a language-agnostic way, simultaneously handling components written in any of the languages commonly used in scientific applications: C, C++, Fortran, Python and Java. Bocca automates the tasks related to the component glue code, freeing the user to focus on the scientific aspects of the application. Bocca embraces the philosophy pioneered by Ruby on Rails for web applications: start with something that works, and evolve it to the user's purpose.
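    The component-and-port idea that Bocca manages can be illustrated with a minimal sketch. The interface, class, and wiring names below are invented for illustration; this is not Bocca-generated glue code and not the actual CCA (SIDL/Babel) interfaces, only the general pattern of a "provides" port being connected to a "uses" port by a framework.

      # Hypothetical sketch of the component/port pattern; names are assumptions.
      from abc import ABC, abstractmethod

      class IntegratorPort(ABC):
          """A 'provides' port: the public interface a component offers."""
          @abstractmethod
          def integrate(self, f, a, b, n): ...

      class MidpointIntegrator(IntegratorPort):
          def integrate(self, f, a, b, n):
              h = (b - a) / n
              return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

      class Driver:
          """A component with a 'uses' port, wired to a provider by the framework."""
          def __init__(self):
              self.integrator = None
          def connect(self, port: IntegratorPort):
              self.integrator = port
          def run(self):
              return self.integrator.integrate(lambda x: x * x, 0.0, 1.0, 1000)

      driver = Driver()
      driver.connect(MidpointIntegrator())    # the glue step a tool can automate
      print(driver.run())                     # ~0.3333

    In a multi-language setting, the connect step and the interface declarations are exactly the repetitive glue that the abstract says Bocca generates and maintains for the developer.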

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Im, Eun-Jin; Ibrahim, Khaled Z.

    The next decade of high-performance computing (HPC) systems will see a rapid evolution and divergence of multi- and manycore architectures as power and cooling constraints limit increases in microprocessor clock speeds. Understanding efficient optimization methodologies on diverse multicore designs in the context of demanding numerical methods is one of the greatest challenges faced today by the HPC community. In this paper, we examine the efficient multicore optimization of GTC, a petascale gyrokinetic toroidal fusion code for studying plasma microturbulence in tokamak devices. For GTC’s key computational components (charge deposition and particle push), we explore efficient parallelization strategies across a broad range of emerging multicore designs, including the recently-released Intel Nehalem-EX, the AMD Opteron Istanbul, and the highly multithreaded Sun UltraSparc T2+. We also present the first study on tuning gyrokinetic particle-in-cell (PIC) algorithms for graphics processors, using the NVIDIA C2050 (Fermi). Our work discusses several novel optimization approaches for gyrokinetic PIC, including mixed-precision computation, particle binning and decomposition strategies, grid replication, SIMDized atomic floating-point operations, and effective GPU texture memory utilization. Overall, we achieve significant performance improvements of 1.3–4.7× on these complex PIC kernels, despite the inherent challenges of data dependency and locality. Finally, our work also points to several architectural and programming features that could significantly enhance PIC performance and productivity on next-generation architectures.
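    The two kernels named above can be sketched on a simplified one-dimensional grid: linear-weighting charge deposition (a scatter-add, which is why atomic operations matter) and a particle push, with a binning step that sorts particles by cell to improve locality. This NumPy sketch is not GTC's gyrokinetic code; the grid size, particle count, and time step are arbitrary assumptions.

      # Simplified 1-D particle-in-cell sketch of charge deposition and particle
      # push. Illustrative only; parameters are assumptions, not GTC's.
      import numpy as np

      rng = np.random.default_rng(0)
      n_cells, n_part, dt = 64, 10_000, 0.1
      x = rng.uniform(0, n_cells, n_part)        # particle positions (grid units)
      v = rng.normal(0, 1, n_part)               # particle velocities

      def bin_particles(x, v):
          """Sort particles by cell index so deposition touches memory contiguously."""
          order = np.argsort(x.astype(int), kind="stable")
          return x[order], v[order]

      def deposit_charge(x):
          """Linear (cloud-in-cell) weighting of unit charges onto the grid."""
          rho = np.zeros(n_cells)
          cell = x.astype(int)
          frac = x - cell
          np.add.at(rho, cell % n_cells, 1.0 - frac)     # atomic-style scatter-add
          np.add.at(rho, (cell + 1) % n_cells, frac)
          return rho

      def push(x, v, efield):
          """Advance particles using the field sampled at their cells."""
          v = v + dt * efield[x.astype(int) % n_cells]
          return (x + dt * v) % n_cells, v

      x, v = bin_particles(x, v)
      rho = deposit_charge(x)
      x, v = push(x, v, efield=np.zeros(n_cells))
      print(rho.sum())        # total deposited charge equals the particle count

    The scatter-add in deposit_charge is the data-dependency hot spot: when many threads deposit into the same cell, the update must be made atomic or privatized, which is the motivation for the binning, grid-replication, and SIMDized atomic strategies discussed in the abstract.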

  5. DoD HPC Insights Fall 2016: A publication of the Department of Defense High Performance Computing Modernization Program

    DTIC Science & Technology

    2016-09-01

    HPCMP will continue to be a key resource in solving challenging problems for the Department of Defense. High-Fidelity Simulations of ... laser interactions. The group had studied plasma expansion experimentally, but this wasn't sufficient to understand the problem. Feister adapted and ... focused on increasing the efficiency of jet turbine engines and extending aircraft flight ranges by changing the shape (articulation) of the turbine ...

  6. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Voice Call Analysis

    DTIC Science & Technology

    2015-09-01

    Contents include Voice Packet Flow: SIP, Session Description Protocol (SDP), and RTP; Voice Data Analysis; Call Analysis; and Call Metrics. The analysis processing is designed for a general VoIP system architecture based on Session Initiation Protocol (SIP) for negotiating call sessions and ... employs Skinny Client Control Protocol for network communication between the phone and the local CallManager (e.g., for each dialed digit), SIP ...

  7. Proteomic analysis of cPKCβII-interacting proteins involved in HPC-induced neuroprotection against cerebral ischemia of mice.

    PubMed

    Bu, Xiangning; Zhang, Nan; Yang, Xuan; Liu, Yanyan; Du, Jianli; Liang, Jing; Xu, Qunyuan; Li, Junfa

    2011-04-01

    Hypoxic preconditioning (HPC) initiates intracellular signaling pathway to provide protection against subsequent cerebral ischemic injuries, and its mechanism may provide molecular targets for therapy in stroke. According to our study of conventional protein kinase C βII (cPKCβII) activation in HPC, the role of cPKCβII in HPC-induced neuroprotection and its interacting proteins were determined in this study. The autohypoxia-induced HPC and middle cerebral artery occlusion (MCAO)-induced cerebral ischemia mouse models were prepared as reported. We found that HPC reduced 6 h MCAO-induced neurological deficits, infarct volume, edema ratio and cell apoptosis in peri-infarct region (penumbra), but cPKCβII inhibitors Go6983 and LY333531 blocked HPC-induced neuroprotection. Proteomic analysis revealed that the expression of four proteins in cytosol and eight proteins in particulate fraction changed significantly among 49 identified cPKCβII-interacting proteins in cortex of HPC mice. In addition, HPC could inhibit the decrease of phosphorylated collapsin response mediator protein-2 (CRMP-2) level and increase of CRMP-2 breakdown product. TAT-CRMP-2 peptide, which prevents the cleavage of endogenous CRMP-2, could inhibit CRMP-2 dephosphorylation and proteolysis as well as the infarct volume of 6 h MCAO mice. This study is the first to report multiple cPKCβII-interacting proteins in HPC mouse brain and the role of cPKCβII-CRMP-2 in HPC-induced neuroprotection against early stages of ischemic injuries in mice. © 2011 The Authors. Journal of Neurochemistry © 2011 International Society for Neurochemistry.

  8. UHPC for Blast and Ballistic Protection, Explosion Testing and Composition Optimization

    NASA Astrophysics Data System (ADS)

    Bibora, P.; Drdlová, M.; Prachař, V.; Sviták, O.

    2017-10-01

    The realization of high performance concrete resistant to detonation is the aim and expected outcome of the presented project, which is oriented toward the development of construction materials for larger objects such as protective walls and bunkers. The use of high-strength concrete (HSC / HPC, "high strength / performance concrete") and high-fiber-reinforced concrete (UHPC / UHPFC, "Ultra High Performance Fiber Reinforced Concrete") seems optimal for this purpose. The paper describes the research phase of the project, in which we focused on the selection of specific raw materials and chemical additives, including determining the most suitable type and amount of distributed fiber reinforcement. The composition of the UHPC was optimized during laboratory manufacture of test specimens to obtain the desired physical-mechanical properties of the developed high performance concretes. In connection with the laboratory testing, explosion field tests of UHPC specimens were performed and the explosion resistance of laboratory-produced UHPC testing boards was investigated.

  9. DOD HPC Insights. Spring 2012

    DTIC Science & Technology

    2012-04-01

    A publication of the Department of Defense High Performance Computing Modernization Program.

  10. Rebuilding the NAVSEA Early Stage Ship Design Environment

    DTIC Science & Technology

    2010-04-01

    ... rules-of-thumb to base these crucial decisions upon. With High Performance Computing (HPC) as an enabler, the vision is to explore all downstream ... the results of the analysis back into LEAPS. Another software development worthy of discussion here is Intelligent Ship Arrangements (ISA), which ... constraints and rules set by the users ahead of time. When used in a systematic and stochastic way, and when integrated using LEAPS, having this ...

  11. A Heterogeneous High-Performance System for Computational and Computer Science

    DTIC Science & Technology

    2016-11-15

    ... a team of research faculty from the departments of computer science and natural science at Bowie State University. The supercomputer is not only to ... accelerated HPC systems. The supercomputer is also ideal for the research conducted in the Department of Natural Science, as research faculty work on ...

  12. Visualization Development of the Ballistic Threat Geospatial Optimization

    DTIC Science & Technology

    2015-07-01

    ... topographic globes, Keyhole Markup Language (KML), and Collada files. World Wind gives the user the ability to import 3-D models and navigate ... present. After the first-person view window is closed, the images stored in memory are then converted to a QuickTime movie (.MOV). The video will be ...

  13. Computational Aspects of Data Assimilation and the ESMF

    NASA Technical Reports Server (NTRS)

    daSilva, A.

    2003-01-01

    The scientific challenge of developing advanced data assimilation applications is a daunting task. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computer (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders. The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical, and to some extent the cultural, aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focusing on those capabilities useful for developing advanced data assimilation applications.

  14. Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.

    PubMed

    Trudgian, David C; Mirzaei, Hamid

    2012-12-07

    We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.

  15. HPC AND GRID COMPUTING FOR INTEGRATIVE BIOMEDICAL RESEARCH

    PubMed Central

    Kurc, Tahsin; Hastings, Shannon; Kumar, Vijay; Langella, Stephen; Sharma, Ashish; Pan, Tony; Oster, Scott; Ervin, David; Permar, Justin; Narayanan, Sivaramakrishnan; Gil, Yolanda; Deelman, Ewa; Hall, Mary; Saltz, Joel

    2010-01-01

    Integrative biomedical research projects query, analyze, and integrate many different data types and make use of datasets obtained from measurements or simulations of structure and function at multiple biological scales. With the increasing availability of high-throughput and high-resolution instruments, integrative biomedical research imposes many challenging requirements on software middleware systems. In this paper, we look at some of these requirements using example research pattern templates. We then discuss how middleware systems, which incorporate Grid and high-performance computing, could be employed to address the requirements. PMID:20107625

  16. 75 FR 59067 - Airworthiness Directives; International Aero Engines AG V2500-A1, V2522-A5, V2524-A5, V2525-D5...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-27

    ... plated nuts attaching the HPC stage 3 to 8 drum to the HPC stage 9 to 12 drum, removal of silver residue... AD, removal from service of the fully silver plated nuts attaching the HPC stage 3 to 8 drum to the...

  17. CI-WATER HPC Model: Cyberinfrastructure to Advance High Performance Water Resources Modeling in the Intermountain Western U.S

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Lai, W.; Douglas, C. C.; Miller, S. N.; Zhang, Y.

    2012-12-01

    The CI-WATER project is a cooperative effort between the Utah and Wyoming EPSCoR jurisdictions, and is funded through a cooperative agreement with the U.S. National Science Foundation EPSCoR. The CI-WATER project is acquiring hardware and developing software cyberinfrastructure (CI) to enhance accessibility of High Performance Computing for water resources modeling in the Western U.S. One of the components of the project is development of a large-scale, high-resolution, physically-based, data-driven, integrated computational water resources model, which we call the CI-WATER HPC model. The objective of this model development is to enable evaluation of integrated system behavior to guide and support water system planning and management by individual users, cities, or states. The model is first being tested in the Green River basin of Wyoming, which is the largest tributary to the Colorado River. The model will ultimately be applied to simulate the entire Upper Colorado River basin for hydrological studies, watershed management, economic analysis, as well as evaluation of potential changes in environmental policy and law, population, land use, and climate. In addition to hydrologically important processes simulated in many hydrological models, the CI-WATER HPC model will emphasize anthropogenic influences such as land use change, water resources infrastructure, irrigation practices, trans-basin diversions, and urban/suburban development. The model operates on an unstructured mesh, employing adaptive mesh at grid sizes as small as 10 m as needed, particularly in high elevation snow melt regions. Data for the model are derived from remote sensing sources, atmospheric models and geophysical techniques. Monte-Carlo techniques and ensemble Kalman filtering methodologies are employed for data assimilation. The model includes application programming interface (API) standards to allow easy substitution of alternative process-level simulation routines, and to provide post-processing, visualization, and communication of massive amounts of output. The open-source CI-WATER model represents a significant advance in water resources modeling, and will be useful to water managers, planners, resource economists, and the hydrologic research community in general.
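
    The ensemble Kalman filter mentioned above reduces to a few lines of linear algebra. The sketch below is a minimal NumPy implementation of a stochastic EnKF analysis step with a linear observation operator; it is illustrative only, uses made-up array sizes, and is not taken from the CI-WATER code base.

        import numpy as np

        def enkf_update(X, y, H, R, rng):
            """Stochastic ensemble Kalman filter analysis step (linear observation operator).
            X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
            H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance."""
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
            P = A @ A.T / (n_ens - 1)                       # sample forecast covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
            # perturb the observations so the analysis ensemble keeps the correct spread
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
            return X + K @ (Y - H @ X)                      # analysis ensemble

        # toy usage: a 100-member ensemble of a 3-variable state, observing the first variable
        rng = np.random.default_rng(0)
        X = rng.normal(size=(3, 100))
        Xa = enkf_update(X, np.array([0.5]), np.array([[1.0, 0.0, 0.0]]), np.array([[0.1]]), rng)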

  18. Human Adipose-derived Stem Cells Ameliorate Cigarette Smoke-induced Murine Myelosuppression via TSG-6

    PubMed Central

    Xie, Jie; Broxmeyer, Hal E.; Feng, Dongni; Schweitzer, Kelly S.; Yi, Ru; Cook, Todd G.; Chitteti, Brahmananda R.; Barwinska, Daria; Traktuev, Dmitry O.; Van Demark, Mary J.; Justice, Matthew J.; Ou, Xuan; Srour, Edward F.; Prockop, Darwin J.; Petrache, Irina; March, Keith L.

    2015-01-01

    Objective Bone marrow-derived hematopoietic stem and progenitor cells (HSC/HPC) are critical to homeostasis and tissue repair. The aims of this study were to delineate the myelotoxicity of cigarette smoking (CS) in a murine model, to explore human adipose-derived stem cells (hASC) as a novel approach to mitigate this toxicity, and to identify key mediating factors for ASC activities. Methods C57BL/6 mice were exposed to CS with or without i.v. injection of regular or siRNA-transfected hASC. For in vitro experiments, cigarette smoke extract (CSE) was used to mimic the toxicity of CS exposure. Analyses of bone marrow hematopoietic progenitor cells (HPC) were performed by both flow cytometry and colony forming unit assays. Results In this study, we demonstrate that as few as three days of CS exposure result in marked cycling arrest and diminished clonogenic capacity of HPC, followed by depletion of phenotypically-defined HSC/HPC. Intravenous injection of hASC substantially ameliorated both acute and chronic CS-induced myelosuppression. This effect was specifically dependent on the anti-inflammatory factor TSG-6, which is induced from xenografted hASC, primarily located in the lung and capable of responding to host inflammatory signals. Gene expression analysis within bone marrow HSC/HPC revealed several specific signaling molecules altered by CS and normalized by hASC. Conclusion Our results suggest that systemic administration of hASC or TSG-6 may be novel approaches to reverse cigarette smoking-induced myelosuppression. PMID:25329668

  19. Development of Evaluation Indicators for Hospice and Palliative Care Professionals Training Programs in Korea.

    PubMed

    Kang, Jina; Park, Kyoung-Ok

    2017-01-01

    The importance of training for Hospice and Palliative Care (HPC) professionals has been increasing with the systemization of HPC in Korea. Hence, the need and importance of training quality for HPC professionals are growing. This study evaluated the construct validity and reliability of the Evaluation Indicators for standard Hospice and Palliative Care Training (EIHPCT) program. As a framework to develop evaluation indicators, an invented theoretical model combining Stufflebeam's CIPP (Context-Input-Process-Product) evaluation model with PRECEDE-PROCEED model was used. To verify the construct validity of the EIHPCT program, a structured survey was performed with 169 professionals who were the HPC training program administrators, trainers, and trainees. To examine the validity of the areas of the EIHPCT program, exploratory factor analysis and confirmatory factor analysis were conducted. First, in the exploratory factor analysis, the indicators with factor loadings above 0.4 were chosen as desirable items, and some cross-loaded items that loaded at 0.4 or higher on two or more factors were adjusted as the higher factor. Second, the model fit of the modified EIHPCT program was quite good in the confirmatory factor analysis (Goodness-of-Fit Index > 0.70, Comparative Fit Index > 0.80, Normed Fit Index > 0.80, Root Mean square of Residuals < 0.05). The modified model of the EIHPCT comprised 4 areas, 13 subdomains, and 61 indicators. The evaluation indicators of the modified model will be valuable references for improving the HPC professional training program.
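
    As a small illustration of the indicator-screening step described above (retaining items whose factor loadings reach 0.4), the sketch below fits a four-factor model with scikit-learn and flags weak items. The response matrix, factor count, and threshold are assumptions chosen for demonstration; the study's actual analysis software is not identified here.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        # hypothetical response matrix: 169 respondents x 61 candidate indicators
        rng = np.random.default_rng(0)
        X = rng.normal(size=(169, 61))

        fa = FactorAnalysis(n_components=4, random_state=0).fit(X)
        loadings = fa.components_.T                  # shape: (n_items, n_factors)

        keep = np.abs(loadings).max(axis=1) >= 0.4   # keep items with a loading of 0.4 or higher
        print(f"retained {keep.sum()} of {keep.size} candidate indicators")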

  20. Hippocampal damage causes retrograde but not anterograde memory loss for context fear discrimination in rats.

    PubMed

    Lee, Justin Q; Sutherland, Robert J; McDonald, Robert J

    2017-09-01

    There is a substantial body of evidence that the hippocampus (HPC) plays an essential role in context discrimination in rodents. Studies reporting anterograde amnesia (AA) used repeated, alternating, distributed conditioning and extinction sessions to measure context fear discrimination. In addition, there is uncertainty about the extent of damage to the HPC. Here, we induced conditioned fear prior to discrimination tests and rats sustained extensive, quantified pre- or post-training HPC damage. Unlike previous work, we found that extensive HPC damage spares context discrimination; we observed no AA. There must be a non-HPC system that can acquire long-term memories that support context fear discrimination. Post-training HPC damage caused retrograde amnesia (RA) for context discrimination, even when rats are fear conditioned for multiple sessions. We discuss the implications of these findings for understanding the role of HPC in long-term memory. © 2017 Wiley Periodicals, Inc.

  1. HPC Software Stack Testing Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garvey, Cormac

    The HPC Software stack testing framework (hpcswtest) is used in the INL Scientific Computing Department to test the basic sanity and integrity of the HPC Software stack (Compilers, MPI, Numerical libraries and Applications) and to quickly discover hard failures; as a by-product, it also indirectly checks the HPC infrastructure (network, PBS and licensing servers).
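
    A stack-sanity check of this kind boils down to running a small probe for each tool and reporting pass or fail. The sketch below is a hypothetical, much-simplified illustration of that idea, not the hpcswtest code itself; the probe commands are assumptions and would normally be submitted through the batch scheduler rather than run locally.

        import subprocess

        # Hypothetical probes; real checks would compile and run small MPI/library test codes.
        CHECKS = {
            "gcc":      ["gcc", "--version"],
            "gfortran": ["gfortran", "--version"],
            "mpicc":    ["mpicc", "--version"],
        }

        def run_check(cmd):
            """Return True if the probe command exits cleanly, i.e. the tool is present and sane."""
            try:
                subprocess.run(cmd, check=True, capture_output=True, timeout=30)
                return True
            except (OSError, subprocess.CalledProcessError, subprocess.TimeoutExpired):
                return False

        for name, cmd in CHECKS.items():
            print(f"{name:10s} {'ok' if run_check(cmd) else 'FAILED'}")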

  2. Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing

    NASA Astrophysics Data System (ADS)

    Tang, Jingyin; Matyas, Corene J.

    2018-02-01

    Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility of the market-dominating ArcGIS software stack and Linux operating system. This manuscript details a cross-platform geospatial library "arc4nix" to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on the remote server and other functions to run in the native Python environment. It uses functional programming and meta-programming to dynamically construct Python code containing the actual geospatial calculations, send it to a server and retrieve the results. Arc4nix allows users to employ their arcpy-based scripts in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks using multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arcpy scales linearly in a distributed environment. Arc4nix is open-source software.
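
    The decoupled client-server pattern described above can be illustrated with a few lines of standard-library Python: the client serializes the intended geoprocessing call and a server that hosts the GIS software executes it. The endpoint and function names below are hypothetical and do not reflect arc4nix's real API.

        import json
        from urllib import request

        SERVER = "http://localhost:8000/run"   # hypothetical execution endpoint

        def remote_call(func_name, *args, **kwargs):
            """Serialize a geoprocessing call, execute it server-side, return the JSON result."""
            payload = json.dumps({"func": func_name, "args": args, "kwargs": kwargs}).encode()
            req = request.Request(SERVER, data=payload,
                                  headers={"Content-Type": "application/json"})
            with request.urlopen(req) as resp:
                return json.load(resp)

        # client code keeps an arcpy-like call style while the work runs remotely, e.g.:
        # remote_call("Clip_analysis", "storms.shp", "state_boundary.shp", "out.shp")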

  3. Scalability improvements to NRLMOL for DFT calculations of large molecules

    NASA Astrophysics Data System (ADS)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For the electronic structure calculations, the memory and computation time are proportional to the number of atoms. Memory requirements for these calculations scale as N^2, where N is the number of atoms. While the recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and poor scalability of the electronic structure code hinder efficient usage of these platforms. This thesis will present some developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and the use of linear algebra using sparse and distributed matrices. These developments, along with other related developments, now allow ground state density functional calculations using up to 25,000 basis functions and the excited state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability will be presented.
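
    The memory argument behind the sparse-storage work can be made concrete: dense double-precision storage of an N x N matrix costs 8N^2 bytes, while compressed sparse row (CSR) storage costs on the order of 12-16 bytes per non-zero entry. The SciPy sketch below is illustrative only (NRLMOL itself is Fortran) and uses an arbitrary matrix size and fill fraction.

        from scipy import sparse

        n = 20_000                                    # number of basis functions (illustrative)
        S = sparse.random(n, n, density=0.001, format="csr", random_state=1)

        dense_bytes = n * n * 8                       # would-be dense double-precision storage
        csr_bytes = S.data.nbytes + S.indices.nbytes + S.indptr.nbytes
        print(f"dense: {dense_bytes / 1e9:.1f} GB")   # 3.2 GB
        print(f"CSR:   {csr_bytes / 1e6:.1f} MB")     # a few MB at 0.1% fill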

  4. Tuning HDF5 for Lustre File Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howison, Mark; Koziol, Quincey; Knaak, David

    2010-09-24

    HDF5 is a cross-platform parallel I/O library that is used by a wide variety of HPC applications for the flexibility of its hierarchical object-database representation of scientific data. We describe our recent work to optimize the performance of the HDF5 and MPI-IO libraries for the Lustre parallel file system. We selected three different HPC applications to represent the diverse range of I/O requirements, and measured their performance on three different systems to demonstrate the robustness of our optimizations across different file system configurations and to validate our optimization strategy. We demonstrate that the combined optimizations improve HDF5 parallel I/O performance by up to 33 times, in some cases running close to the achievable peak performance of the underlying file system, and demonstrate scalable performance up to 40,960-way concurrency.
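
    For readers unfamiliar with parallel HDF5, the sketch below shows the basic collective-write pattern that such tuning builds on, using h5py over MPI. It assumes an MPI-enabled HDF5/h5py build; the array sizes are arbitrary, and the paper's Lustre-specific optimizations (stripe-aligned chunking, tuned MPI-IO hints) are not reproduced here.

        # Minimal parallel-HDF5 write pattern (requires h5py built against MPI-enabled HDF5).
        from mpi4py import MPI
        import h5py
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.rank, comm.size
        n_local = 1_000_000                   # elements written by each rank (arbitrary)

        # Lustre striping is typically configured on the output directory beforehand
        # (e.g., with the lfs setstripe utility); values are site-specific.
        with h5py.File("output.h5", "w", driver="mpio", comm=comm) as f:
            dset = f.create_dataset("data", (size * n_local,), dtype="f8")
            local = np.full(n_local, rank, dtype="f8")
            with dset.collective:             # request collective MPI-IO for the write
                dset[rank * n_local:(rank + 1) * n_local] = local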

  5. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrate a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model has been developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although it is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.

  6. Create full-scale predictive economic models on ROI and innovation with performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, Earl C.; Conway, Steve

    The U.S. Department of Energy (DOE), the world's largest buyer and user of supercomputers, awarded IDC Research, Inc. a grant to create two macroeconomic models capable of quantifying, respectively, financial and non-financial (innovation) returns on investments in HPC resources. Following a 2013 pilot study in which we created the models and tested them on about 200 real-world HPC cases, DOE authorized us to conduct a full-out, three-year grant study to collect and measure many more examples, a process that would also subject the methodology to further testing and validation. A secondary, "stretch" goal of the full-out study was to advance the methodology from association toward (but not all the way to) causation, by eliminating the effects of some of the other factors that might be contributing, along with HPC investments, to the returns produced in the investigated projects.

  7. A general microchip surface modification approach using a spin-coated polymer resist film doped with hydroxypropyl cellulose.

    PubMed

    Sun, Xiuhua; Yang, Weichun; Geng, Yanli; Woolley, Adam T

    2009-04-07

    We have developed a simple and effective method for surface modification of polymer microchips by entrapping hydroxypropyl cellulose (HPC) in a spin-coated thin film on the surface. Poly(methyl methacrylate-8.5-methacrylic acid), a widely available commercial resist formulation, was utilized as a matrix for dissolving HPC and providing adherence to native polymer surfaces. Various amounts of HPC (0.1-2.0%) dissolved in the copolymer and spun on polymer surfaces were evaluated. The modified surfaces were characterized by contact angle measurement, X-ray photoelectron spectroscopy and atomic force microscopy. The developed method was applied on both poly(methyl methacrylate) and cyclic olefin copolymer microchips. A fluorescently labeled myoglobin digest, binary protein mixture, and human serum sample were all separated in these surface-modified polymer microdevices. Our work exhibits an easy and reliable way to achieve favorable biomolecular separation performance in polymer microchips.

  8. A general microchip surface modification approach using a spin-coated polymer resist film doped with hydroxypropyl cellulose

    PubMed Central

    Sun, Xiuhua; Yang, Weichun; Geng, Yanli; Woolley, Adam T.

    2009-01-01

    We have developed a simple and effective method for surface modification of polymer microchips by entrapping hydroxypropyl cellulose (HPC) in a spin-coated thin film on the surface. Poly(methyl methacrylate-8.5-methacrylic acid), a widely available commercial resist formulation, was utilized as a matrix for dissolving HPC and providing adherence to native polymer surfaces. Various amounts of HPC (0.1–2.0%) dissolved in the copolymer and spun on polymer surfaces were evaluated. The modified surfaces were characterized by contact angle measurement, X-ray photoelectron spectroscopy and atomic force microscopy. The developed method was applied on both poly(methyl methacrylate) and cyclic olefin copolymer microchips. A fluorescently labeled myoglobin digest, binary protein mixture, and human serum sample were all separated in these surface-modified polymer microdevices. Our work exhibits an easy and reliable way to achieve favorable biomolecular separation performance in polymer microchips. PMID:19294306

  9. Sacro-anterior haemangiopericytoma: a case report

    PubMed Central

    Ge, Xiu-Hong; Liu, Shuai-Shuai; Shan, Hu-Sheng; Wang, Zhi-Min; Li, Qian-Wen

    2014-01-01

    Haemangiopericytoma (HPC) is a rare vascular tumor with borderline malignancy, considerable histological variability, and unpredictable clinical and biological behavior. HPC can present a diagnostic challenge because of its indeterminate clinical, radiological, and pathological features. HPC generally presents in adulthood and is equally frequent in both sexes. HPC can arise in any site in the body as a slowly growing and painless mass. The precise cell type origin of HPC is uncertain. One third of HPCs occur in the head and neck areas. Exceptional cases of hemangioblastoma arising outside the head and neck areas have been reported, but little is known about their clinicopathologic and immunohistochemical features. This study reports on a case of a large sacro-anterior HPC in a 65-year-old male. PMID:25009757

  10. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  11. CD271 Defines a Stem Cell-Like Population in Hypopharyngeal Cancer

    PubMed Central

    Imai, Takayuki; Tamai, Keiichi; Oizumi, Sayuri; Oyama, Kyoko; Yamaguchi, Kazunori; Sato, Ikuro; Satoh, Kennichi; Matsuura, Kazuto; Saijo, Shigeru; Sugamura, Kazuo; Tanaka, Nobuyuki

    2013-01-01

    Cancer stem cells contribute to the malignant phenotypes of a variety of cancers, but markers to identify human hypopharyngeal cancer (HPC) stem cells remain poorly understood. Here, we report that the CD271+ population sorted from xenotransplanted HPCs possesses an enhanced tumor-initiating capability in immunodeficient mice. Tumors generated from the CD271+ cells contained both CD271+ and CD271− cells, indicating that the population could undergo differentiation. Immunohistological analyses of the tumors revealed that the CD271+ cells localized to a perivascular niche near CD34+ vasculature, to invasive fronts, and to the basal layer. In accordance with these characteristics, a stemness marker, Nanog, and matrix metalloproteinases (MMPs), which are implicated in cancer invasion, were significantly up-regulated in the CD271+ compared to the CD271− cell population. Furthermore, using primary HPC specimens, we demonstrated that high CD271 expression was correlated with a poor prognosis for patients. Taken together, our findings indicate that CD271 is a novel marker for HPC stem-like cells and for HPC prognosis. PMID:23626764

  12. Sepsis reconsidered: Identifying novel metrics for behavioral landscape characterization with a high-performance computing implementation of an agent-based model.

    PubMed

    Cockrell, Chase; An, Gary

    2017-10-07

    Sepsis affects nearly 1 million people in the United States per year, has a mortality rate of 28-50% and requires more than $20 billion a year in hospital costs. Over a quarter century of research has not yielded a single reliable diagnostic test or a directed therapeutic agent for sepsis. Central to this insufficiency is the fact that sepsis remains a clinical/physiological diagnosis representing a multitude of molecularly heterogeneous pathological trajectories. Advances in computational capabilities offered by High Performance Computing (HPC) platforms call for an evolution in the investigation of sepsis to attempt to define the boundaries of traditional research (bench, clinical and computational) through the use of computational proxy models. We present a novel investigatory and analytical approach, derived from how HPC resources and simulation are used in the physical sciences, to identify the epistemic boundary conditions of the study of clinical sepsis via the use of a proxy agent-based model of systemic inflammation. Current predictive models for sepsis use correlative methods that are limited by patient heterogeneity and data sparseness. We address this issue by using an HPC version of a system-level validated agent-based model of sepsis, the Innate Immune Response ABM (IIRBM), as a proxy system in order to identify boundary conditions for the possible behavioral space for sepsis. We then apply advanced analysis derived from the study of Random Dynamical Systems (RDS) to identify novel means for characterizing system behavior and providing insight into the tractability of traditional investigatory methods. The behavior space of the IIRABM was examined by simulating over 70 million sepsis patients for up to 90 days in a sweep across the following parameters: cardio-respiratory-metabolic resilience; microbial invasiveness; microbial toxigenesis; and degree of nosocomial exposure. In addition to using established methods for describing parameter space, we developed two novel methods for characterizing the behavior of a RDS: Probabilistic Basins of Attraction (PBoA) and Stochastic Trajectory Analysis (STA). Computationally generated behavioral landscapes demonstrated attractor structures around stochastic regions of behavior that could be described in a complementary fashion through use of PBoA and STA. The stochasticity of the boundaries of the attractors highlights the challenge for correlative attempts to characterize and classify clinical sepsis. HPC simulations of models like the IIRABM can be used to generate approximations of the behavior space of sepsis to both establish "boundaries of futility" with respect to existing investigatory approaches and apply system engineering principles to investigate the general dynamic properties of sepsis to provide a pathway for developing control strategies. The issues that bedevil the study and treatment of sepsis, namely clinical data sparseness and inadequate experimental sampling of system behavior space, are fundamental to nearly all biomedical research, manifesting in the "Crisis of Reproducibility" at all levels. 
HPC-augmented simulation-based research offers an investigatory strategy more consistent with that seen in the physical sciences (which combine experiment, theory and simulation), and an opportunity to utilize the leading advances in HPC, namely deep machine learning and evolutionary computing, to form the basis of an iterative scientific process to meet the full promise of Precision Medicine (right drug, right patient, right time). Copyright © 2017. Published by Elsevier Ltd.
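
    Operationally, a behavioral landscape of this kind comes from sweeping a handful of parameters over an enormous number of simulated patients. The sketch below is a generic parameter-sweep driver; run_patient() is a stand-in stub (the IIRABM itself is not reproduced here), and the grid resolution is arbitrary.

        from itertools import product
        from multiprocessing import Pool

        def run_patient(params):
            """Stand-in for one simulated patient; a real driver would invoke the ABM here."""
            resilience, invasiveness, toxigenesis, exposure = params
            return ("outcome-placeholder", params)

        if __name__ == "__main__":
            grid = product(range(5), repeat=4)        # 5 levels per parameter (illustrative)
            with Pool() as pool:
                for outcome, params in pool.imap_unordered(run_patient, grid, chunksize=16):
                    pass                              # aggregate outcomes into the landscape here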

  13. High throughput computing: a solution for scientific analysis

    USGS Publications Warehouse

    O'Donnell, M.

    2011-01-01

    handle job failures due to hardware, software, or network interruptions (obviating the need to manually resubmit the job after each stoppage); be affordable; and most importantly, allow us to complete very large, complex analyses that otherwise would not even be possible. In short, we envisioned a job-management system that would take advantage of unused FORT CPUs within a local area network (LAN) to effectively distribute and run highly complex analytical processes. What we found was a solution that uses High Throughput Computing (HTC) and High Performance Computing (HPC) systems to do exactly that (Figure 1).

  14. Distributed Accounting on the Grid

    NASA Technical Reports Server (NTRS)

    Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.

    2001-01-01

    By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.

  15. Differential Age-Related Changes in Structural Covariance Networks of Human Anterior and Posterior Hippocampus.

    PubMed

    Li, Xinwei; Li, Qiongling; Wang, Xuetong; Li, Deyu; Li, Shuyu

    2018-01-01

    The hippocampus plays an important role in memory function relying on information interaction between distributed brain areas. The hippocampus can be divided into the anterior and posterior sections with different structure and function along its long axis. The aim of this study is to investigate the effects of normal aging on the structural covariance of the anterior hippocampus (aHPC) and the posterior hippocampus (pHPC). In this study, 240 healthy subjects aged 18-89 years were selected and subdivided into young (18-23 years), middle-aged (30-58 years), and older (61-89 years) groups. The aHPC and pHPC were divided based on the location of the uncal apex in MNI space. Then, the structural covariance networks were constructed by examining their covariance in gray matter volumes with other brain regions. Finally, the influence of age on the structural covariance of these hippocampal sections was explored. We found that the aHPC and pHPC had different structural covariance patterns, but both of them were associated with the medial temporal lobe and insula. Moreover, both increased and decreased covariances were found with the aHPC but only increased covariance was found with the pHPC with age (p < 0.05, family-wise error corrected). These decreased connections occurred within the default mode network, while the increased connectivity mainly occurred in other memory systems that differ from the hippocampus. This study reveals different age-related influences on the structural networks of the aHPC and pHPC, providing an essential insight into the mechanisms of the hippocampus in normal aging.
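
    At its core, structural covariance of this kind is an across-subject correlation between a seed region's gray matter volume and every other region's volume. The sketch below computes that seed-based correlation on synthetic data; the study's voxel-wise, covariate-adjusted analysis and statistical thresholding are not reproduced.

        import numpy as np

        def structural_covariance(volumes, seed_col=0):
            """Pearson correlation of a seed region's volume with every region, across subjects.
            volumes: (n_subjects, n_regions) gray-matter volumes; column seed_col is the seed."""
            seed = volumes[:, seed_col]
            v = volumes - volumes.mean(axis=0)
            s = seed - seed.mean()
            cov = v.T @ s / (len(s) - 1)
            return cov / (volumes.std(axis=0, ddof=1) * seed.std(ddof=1))

        # synthetic example: 240 subjects, 10 regions, seed = region 0 (e.g., an aHPC proxy)
        rng = np.random.default_rng(0)
        print(structural_covariance(rng.normal(size=(240, 10)))[:5])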

  16. Gut vagal sensory signaling regulates hippocampus function through multi-order pathways.

    PubMed

    Suarez, Andrea N; Hsu, Ted M; Liu, Clarissa M; Noble, Emily E; Cortella, Alyssa M; Nakamoto, Emily M; Hahn, Joel D; de Lartigue, Guillaume; Kanoski, Scott E

    2018-06-05

    The vagus nerve is the primary means of neural communication between the gastrointestinal (GI) tract and the brain. Vagally mediated GI signals activate the hippocampus (HPC), a brain region classically linked with memory function. However, the endogenous relevance of GI-derived vagal HPC communication is unknown. Here we utilize a saporin (SAP)-based lesioning procedure to reveal that selective GI vagal sensory/afferent ablation in rats impairs HPC-dependent episodic and spatial memory, effects associated with reduced HPC neurotrophic and neurogenesis markers. To determine the neural pathways connecting the gut to the HPC, we utilize monosynaptic and multisynaptic virus-based tracing methods to identify the medial septum as a relay connecting the medial nucleus tractus solitarius (where GI vagal afferents synapse) to dorsal HPC glutamatergic neurons. We conclude that endogenous GI-derived vagal sensory signaling promotes HPC-dependent memory function via a multi-order brainstem-septal pathway, thereby identifying a previously unknown role for the gut-brain axis in memory control.

  17. Using CyberShake Workflows to Manage Big Seismic Hazard Data on Large-Scale Open-Science HPC Resources

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2015-12-01

    The CyberShake computational platform, developed by the Southern California Earthquake Center (SCEC), is an integrated collection of scientific software and middleware that performs 3D physics-based probabilistic seismic hazard analysis (PSHA) for Southern California. CyberShake integrates large-scale and high-throughput research codes to produce probabilistic seismic hazard curves for individual locations of interest and hazard maps for an entire region. A recent CyberShake calculation produced about 500,000 two-component seismograms for each of 336 locations, resulting in over 300 million synthetic seismograms in a Los Angeles-area probabilistic seismic hazard model. CyberShake calculations require a series of scientific software programs. Early computational stages produce data used as inputs by later stages, so we describe CyberShake calculations using a workflow definition language. Scientific workflow tools automate and manage the input and output data and enable remote job execution on large-scale HPC systems. To satisfy the requests of broad impact users of CyberShake data, such as seismologists, utility companies, and building code engineers, we successfully completed CyberShake Study 15.4 in April and May 2015, calculating a 1 Hz urban seismic hazard map for Los Angeles. We distributed the calculation between the NSF Track 1 system NCSA Blue Waters, the DOE Leadership-class system OLCF Titan, and USC's Center for High Performance Computing. This study ran for over 5 weeks, burning about 1.1 million node-hours and producing over half a petabyte of data. The CyberShake Study 15.4 results doubled the maximum simulated seismic frequency from 0.5 Hz to 1.0 Hz as compared to previous studies, representing a factor of 16 increase in computational complexity. We will describe how our workflow tools supported splitting the calculation across multiple systems. We will explain how we modified CyberShake software components, including GPU implementations and migrating from file-based communication to MPI messaging, to greatly reduce the I/O demands and node-hour requirements of CyberShake. We will also present performance metrics from CyberShake Study 15.4, and discuss challenges that producers of Big Data on open-science HPC resources face moving forward.
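
    The workflow idea, in which later stages consume earlier stages' outputs and a scheduler resolves the ordering, can be sketched with Python's standard library. The stage names below are simplified placeholders; production CyberShake workflows are defined for dedicated workflow tools and submit jobs to remote HPC systems rather than calling local functions.

        from graphlib import TopologicalSorter   # Python 3.9+

        def sgt_simulation():
            print("compute strain Green tensors")

        def seismogram_synthesis():
            print("synthesize seismograms from the SGTs")

        def hazard_curves():
            print("compute probabilistic hazard curves")

        # each stage maps to the set of stages whose outputs it consumes
        STAGES = {
            sgt_simulation: set(),
            seismogram_synthesis: {sgt_simulation},
            hazard_curves: {seismogram_synthesis},
        }

        for stage in TopologicalSorter(STAGES).static_order():
            stage()   # a workflow manager would submit these as batch jobs instead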

  18. Hierarchically Porous Graphitic Carbon with Simultaneously High Surface Area and Colossal Pore Volume Engineered via Ice Templating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estevez, Luis; Prabhakaran, Venkateshkumar; Garcia, Adam L.

    Developing hierarchical porous carbon (HPC) materials with competing textural characteristics such as surface area and pore volume in one material is difficult to accomplish, particularly for an atomically ordered (graphitic) carbon. Herein we describe a synthesis strategy to engineer tunable hierarchically porous carbon (HPC) materials across micro-, meso- and macroporous length scales, allowing the fabrication of a graphitic HPC with both very high surface area (> 2500 m²/g) and pore volume (>10 cm³/g), the combination of which has not been seen previously. The mesopore volume alone for these materials is up to 7.91 cm³/g, the highest ever reported. The unique material was explored for use as a supercapacitor electrode and for oil adsorption, two applications that require textural properties that are typically exclusive to one another. This design scheme for HPCs can be utilized in broad applications, including electrochemical systems such as batteries and supercapacitors, sorbents, and catalyst supports.

  19. The Barcelona Hospital Clínic therapeutic apheresis database.

    PubMed

    Cid, Joan; Carbassé, Gloria; Cid-Caballero, Marc; López-Púa, Yolanda; Alba, Cristina; Perea, Dolores; Lozano, Miguel

    2017-09-22

    A therapeutic apheresis (TA) database helps to increase knowledge about the indications and types of apheresis procedures that are performed in clinical practice. The objective of the present report was to describe the type and number of TA procedures that were performed at our institution in a 10-year period, from 2007 to 2016. The TA electronic database was created by transferring patient data from electronic medical records and consultation forms into a Microsoft Access database developed exclusively for this purpose. Since 2007, prospective data from every TA procedure were entered in the database. A total of 5940 TA procedures were performed: 3762 (63.3%) plasma exchange (PE) procedures, 1096 (18.5%) hematopoietic progenitor cell (HPC) collections, and 1082 (18.2%) TA procedures other than PEs and HPC collections. The overall trend for the time-period was a progressive increase in the total number of TA procedures performed each year (from 483 TA procedures in 2007 to 822 in 2016). The tracking trend of each procedure during the 10-year period was different: the number of PE and other types of TA procedures increased 22% and 2818%, respectively, and the number of HPC collections decreased 28%. The TA database helped us to increase our knowledge about the various indications and types of TA procedures that were performed in our current practice. We also believe that this database could serve as a model that other institutions can use to track service metrics. © 2017 Wiley Periodicals, Inc.

  20. DCDM1: Lessons Learned from the World's Most Energy Efficient Data Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sickinger, David E; Van Geet, Otto D; Carter, Thomas

    This presentation discusses the holistic approach to design the world's most energy-efficient data center, which is located at the U.S. Department of Energy National Renewable Energy Laboratory (NREL). This high-performance computing (HPC) data center has achieved a trailing twelve-month average power usage effectiveness (PUE) of 1.04 and features a chiller-less design, component-level warm-water liquid cooling, and waste heat capture and reuse. We provide details of the demonstrated PUE and energy reuse effectiveness (ERE) and lessons learned during four years of production operation. Recent efforts to dramatically reduce the water footprint will also be discussed. Johnson Controls partnered with NREL and Sandia National Laboratories to deploy a thermosyphon cooler (TSC) as a test bed at NREL's HPC data center that resulted in a 50% reduction in water usage during the first year of operation. The Thermosyphon Cooler Hybrid System (TCHS) integrates the control of a dry heat rejection device with an open cooling tower.
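
    For reference, PUE and ERE reduce to simple ratios: PUE is total facility energy divided by IT equipment energy, and ERE additionally subtracts reused energy (such as recaptured waste heat) from the numerator. The numbers in the sketch below are illustrative only, not NREL's measured values.

        def pue(total_facility_kwh, it_kwh):
            """Power usage effectiveness: total facility energy / IT equipment energy."""
            return total_facility_kwh / it_kwh

        def ere(total_facility_kwh, it_kwh, reused_kwh):
            """Energy reuse effectiveness: (total facility energy - reused energy) / IT energy."""
            return (total_facility_kwh - reused_kwh) / it_kwh

        # illustrative numbers: a PUE of 1.04 means only 4% overhead beyond the IT load
        print(pue(1040.0, 1000.0))          # 1.04
        print(ere(1040.0, 1000.0, 200.0))   # 0.84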

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, Karla

    Although the high-performance computing (HPC) community increasingly embraces object-oriented programming (OOP), most HPC OOP projects employ the C++ programming language. Until recently, Fortran programmers interested in mining the benefits of OOP had to emulate OOP in Fortran 90/95. The advent of widespread compiler support for Fortran 2003 now facilitates explicitly constructing object-oriented class hierarchies via inheritance and leveraging related class behaviors such as dynamic polymorphism. Although C++ allows a class to inherit from multiple parent classes, Fortran and several other OOP languages restrict or prohibit explicit multiple inheritance relationships in order to circumvent several pitfalls associated with them. Nonetheless, what appears as an intrinsic feature in one language can be modeled as a user-constructed design pattern in another language. The present paper demonstrates how to apply the facade structural design pattern to support a multiple inheritance class relationship in Fortran 2003. As a result, the design unleashes the power of the associated class relationships for modeling complicated data structures yet avoids the ambiguities that plague some multiple inheritance scenarios.
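
    The essence of the facade approach, inheriting from one parent while delegating to a second "parent" held as a component, can be shown in a few lines. The Python sketch below is only a language-neutral illustration of that idea; the paper's actual Fortran 2003 implementation is not reproduced here.

        class Writer:
            def write(self, msg):
                print(f"writing: {msg}")

        class Timer:
            def elapsed(self):
                return 0.0          # stub

        class TimedWriter(Writer):              # single inheritance from one parent ...
            def __init__(self):
                self._timer = Timer()           # ... composition stands in for the second parent
            def elapsed(self):
                return self._timer.elapsed()    # facade: forward calls to the wrapped component

        tw = TimedWriter()
        tw.write("hello")
        print(tw.elapsed())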

  2. Influence of Eco-Friendly Mineral Additives on Early Age Compressive Strength and Temperature Development of High-Performance Concrete

    NASA Astrophysics Data System (ADS)

    Kaszynska, Maria; Skibicki, Szymon

    2017-12-01

    High-performance concrete (HPC), which contains increased amounts of both higher-grade cement and pozzolanic additives, generates more hydration heat than ordinary concrete. Prolonged periods of elevated temperature influence the rate of the hydration process, in turn affecting the development of early-age strength and subsequent mechanical properties. The purpose of the presented research is to determine the relationship between the kinetics of the heat generation process and the compressive strength of early-age high performance concrete. All mixes were based on the Portland Cement CEM I 52.5 with between 7.5% and 15% of the cement mass replaced by silica fume or metakaolin. Two water/binder ratios characteristic for HPC, w/b = 0.2 and w/b = 0.3, were chosen. A superplasticizer was used to maintain a 20-50 mm slump. Compressive strength was determined at 8h, 24h, 3, 7 and 28 days on 10x10x10 cm specimens that were cured in a calorimeter at a constant temperature of T = 20°C. The temperature inside the concrete was monitored continuously for 7 days. The study determined that the early-age strength (t<24h) of concrete with reactive mineral additives is lower than that of concrete without them. This is clearly visible for concretes with metakaolin, which had the lowest compressive strength in early stages of hardening. The amount of the superplasticizer significantly influenced the early-age compressive strength of concrete. Concretes with additives reached the maximum temperature later than the concretes without them.

  3. 2011 Computation Directorate Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, D L

    2012-04-11

    From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile, far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products.
In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global market place by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.

  4. The influence of loading on the corrosion of steel in cracked ordinary Portland cement and high performance concretes

    NASA Astrophysics Data System (ADS)

    Jaffer, Shahzma Jafferali

    Most studies that have examined chloride-induced corrosion of steel in concrete have focused on sound concrete. However, reinforced concrete is seldom uncracked and very few studies have investigated the influence of cracked concrete on rebar corrosion. Furthermore, the studies that have examined the relationship between cracks and corrosion have focused on unloaded or statically loaded cracks. However, in practice, reinforced concrete structures (e.g. bridges) are often dynamically loaded. Hence, the cracks in such structures open and close which could influence the corrosion of the reinforcing steel. Consequently, the objectives of this project were (i) to examine the effect of different types of loading on the corrosion of reinforcing steel, (ii) the influence of concrete mixture design on the corrosion behaviour and (iii) to provide data that can be used in service-life modelling of cracked reinforced concretes. In this project, cracked reinforced concrete beams made with ordinary Portland cement concrete (OPCC) and high performance concrete (HPC) were subjected to no load, static loading and dynamic loading. They were immersed in salt solution to just above the crack level at their mid-point for two weeks out of every four (wet cycle) and, for the remaining two weeks, were left in ambient laboratory conditions to dry (dry cycle). The wet cycle led to three conditions of exposure for each beam: (i) the non-submerged region, (ii) the sound, submerged region and (iii) the cracked mid-section, which was also immersed in the solution. Linear polarization resistance and galvanostatic pulse techniques were used to monitor the corrosion in the three regions. Potentiodynamic polarization, electrochemical current noise and concrete electrical resistance measurements were also performed. These measurements illustrated that (i) rebar corroded faster at cracks than in sound concrete, (ii) HPC was more protective towards the rebar than OPCC even at cracks and (iii) there was a minor effect of the type of loading on rebar corrosion within the period of the project. These measurements also highlighted the problems associated with corrosion measurements, for example, identifying the actual corroding area and the influence of the length of rebar. The numbers of cracks and crack-widths in each beam were measured after the beam's initial exposure to salt solution and, again, after the final corrosion measurements. HPC beams had more cracks than the OPCC. Also, final measurements illustrated increased crack-widths in dynamically loaded beams, regardless of the concrete type. The cracks in both statically and dynamically loaded OPCC and HPC beams bifurcated at the rebar level and propagated parallel to the rebar. This project also examined the extent of corrosion on the rebars and the distribution of corrosion products in the concrete and on the concrete walls of the cracks. Corrosion occurred only at cracks in the concrete and was spread over a larger area on the rebars in HPC than those in OPCC. The damage due to corrosion was superficial in HPC and crater-like in OPCC. Regardless of the concrete type, there was a larger distribution of corrosion products on the crack walls of the dynamically loaded beams. Corrosion products diffused into the cement paste and the paste-aggregate interface in OPCC but remained in the crack in HPC. The most voluminous corrosion product identified was ferric hydroxide. 
Elemental analysis of mill-scale on rebar which was not embedded in concrete or exposed to chlorides was compared to that of the bars that had been embedded in uncontaminated concrete and in cracked concrete exposed to chlorides. In uncontaminated concrete, mill-scale absorbed calcium and silicon. At a crack, a layer, composed of a mixture of cement paste and corrosion products, developed between the mill-scale and the substrate steel. Based on the results, it was concluded that (i) corrosion occurred on the rebar only at cracks in the concrete, (ii) corrosion was initiated at the cracks immediately upon exposure to salt solution, (iii) the type of loading had a minor influence on the corrosion rates of reinforcing steel and (iv) the use of polarized area led to a significant underestimation of the current density at the crack. It is recommended that the effect of cover-depth on (i) the time to initiation of corrosion and (ii) the corrosion current density in cracked concrete be investigated.

  5. Nuclear β-Catenin Expression is Frequent in Sinonasal Hemangiopericytoma and Its Mimics.

    PubMed

    Jo, Vickie Y; Fletcher, Christopher D M

    2017-06-01

    Sinonasal hemangiopericytoma (HPC) is a tumor showing pericytic myoid differentiation and which arises in the nasal cavity and paranasal sinuses. CTNNB1 mutations appear to be a consistent aberration in sinonasal HPC, and nuclear expression of β-catenin has been reported. Our aim was to evaluate the frequency of β-catenin expression in sinonasal HPC and its histologic mimics in the upper aerodigestive tract. Cases were retrieved from the surgical pathology and consultation files. Immunohistochemical staining for β-catenin was performed on 50 soft tissue tumors arising in the sinonasal tract or oral cavity, and nuclear staining was recorded semiquantitatively by extent and intensity. Nuclear reactivity for β-catenin was present in 19/20 cases of sinonasal HPC; 17 showed moderate-to-strong multifocal or diffuse staining, and 2 had moderate focal nuclear reactivity. All solitary fibrous tumors (SFT) (10/10) showed focal-to-multifocal nuclear staining, varying from weak to strong in intensity. Most cases of synovial sarcoma (9/10) showed nuclear β-catenin expression in the spindle cell component, ranging from focal-weak to strong-multifocal. No cases of myopericytoma (0/10) showed any nuclear β-catenin expression. β-catenin expression is prevalent in sinonasal HPC, but is also frequent in SFT and synovial sarcoma. Our findings indicate that β-catenin is not a useful diagnostic tool in the evaluation of spindle cell tumors with a prominent hemangiopericytoma-like vasculature in the sinonasal tract and oral cavity, and that definitive diagnosis relies on the use of a broader immunohistochemical panel.

  6. The importance of binder moisture content in Metformin HCL high-dose formulations prepared by moist aqueous granulation (MAG).

    PubMed

    Takasaki, Hiroshi; Yonemochi, Etsuo; Ito, Masanori; Wada, Koichi; Terada, Katsuhide

    2015-01-01

    The aim of this study was to evaluate binders to improve the flowability of granulates and compactibility of Metformin HCL (Met) using the moist aqueous granulation (MAG) process. The effect of the binder moisture content on granulate and tablet quality was also evaluated. Vinylpyrrolidone-vinyl acetate copolymer (Kollidon VA64 fine: VA64), polyvidone (Povidone K12: PVP), hydroxypropyl cellulose (HPC SSL SF: HPC) and hydroxypropyl methylcellulose (Methocel E5 LV: HPMC) were evaluated as binders. These granulates, except for HPMC, had a lower yield pressure than Met active pharmaceutical ingredient (API). HPMC Met was not sufficiently granulated with low water volume. No problems were observed with the VA64 Met granulates during the tableting process. However, HPC Met granulates had a bowl-forming tendency, and PVP Met granulates had the tendency to stick during the tableting process. These bowl-forming and sticking tendencies may have been due to the low moisture absorbency of HPC and the high volume of bound water of PVP, respectively. VA64 Met granulates had the highest ambient moisture content (bulk water, bound water) and moisture absorbency. It was concluded that the type of binder used for the Met MAG process has an impact on granulate flow and compactibility, as well as moisture absorbency and maintenance of moisture balance.

  7. The importance of binder moisture content in Metformin HCL high-dose formulations prepared by moist aqueous granulation (MAG)

    PubMed Central

    Takasaki, Hiroshi; Yonemochi, Etsuo; Ito, Masanori; Wada, Koichi; Terada, Katsuhide

    2015-01-01

    The aim of this study was to evaluate binders to improve the flowability of granulates and compactibility of Metformin HCL (Met) using the moist aqueous granulation (MAG) process. The effect of the binder moisture content on granulate and tablet quality was also evaluated. Vinylpyrrolidone–vinyl acetate copolymer (Kollidon VA64 fine: VA64), polyvidone (Povidone K12: PVP), hydroxypropyl cellulose (HPC SSL SF: HPC) and hydroxypropyl methylcellulose (Methocel E5 LV: HPMC) were evaluated as binders. These granulates, except for HPMC, had a lower yield pressure than Met active pharmaceutical ingredient (API). HPMC Met was not sufficiently granulated with low water volume. No problems were observed with the VA64 Met granulates during the tableting process. However, HPC Met granulates had a bowl-forming tendency, and PVP Met granulates had the tendency to stick during the tableting process. These bowl-forming and sticking tendencies may have been due to the low moisture absorbency of HPC and the high volume of bound water of PVP, respectively. VA64 Met granulates had the highest ambient moisture content (bulk water, bound water) and moisture absorbency. It was concluded that the type of binder used for the Met MAG process has an impact on granulate flow and compactibility, as well as moisture absorbency and maintenance of moisture balance. PMID:26779418

  8. Triclosan resistant bacteria in sewage effluent and cross-resistance to antibiotics.

    PubMed

    Coetzee, I; Bezuidenhout, C C; Bezuidenhout, J J

    2017-09-01

    The purpose of this study was to identify triclosan tolerant heterotrophic plate count (HPC) bacteria from sewage effluent and to determine cross-resistance to antibiotics. R2 agar supplemented with triclosan was utilised to isolate triclosan resistant bacteria, and 16S rRNA gene sequencing was conducted to identify the isolates. Minimum inhibitory concentrations (MICs) of the organisms were determined at selected concentrations of triclosan, and cross-resistance testing against various antibiotics was performed. High-performance liquid chromatography was conducted to quantify levels of triclosan in sewage water. Forty-four HPC isolates were obtained and identified as belonging to five main genera, namely Bacillus, Pseudomonas, Enterococcus, Brevibacillus and Paenibacillus. MIC values of these isolates ranged from 0.125 mg/L to >1 mg/L of triclosan, while combinations of antimicrobials indicated either synergism or antagonism. Levels of triclosan within the wastewater treatment plant (WWTP) ranged between 0.026 and 1.488 ppb. Triclosan concentrations were reduced by the WWTP, but small concentrations enter receiving freshwater bodies. The results presented indicate that these levels are sufficient to maintain triclosan resistant bacteria under controlled conditions. Further studies are thus needed into the impact of this scenario on such natural receiving water bodies.

  9. RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models.

    PubMed

    Stamatakis, Alexandros

    2006-11-01

    RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 up to 6722 taxa shows that RAxML requires at least 5.6 times less main memory and yields better trees in similar times than the best competing program (GARLI) on datasets up to 2500 taxa. On datasets ≥4000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date containing 25,057 (1463 bp) and 2182 (51,089 bp) taxa, respectively. icwww.epfl.ch/~stamatak
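
    To make the coarse-grained MPI parallelization concrete, the sketch below is a hypothetical mpi4py driver that farms independent bootstrap replicates out to MPI ranks and gathers the results on rank 0. The replicate count, seeding scheme, and the run_ml_inference placeholder are illustrative assumptions, not RAxML's actual interface.

    # Hypothetical sketch: distribute independent bootstrap replicates over MPI ranks,
    # mirroring the coarse-grained parallelism used for multiple bootstraps.
    from mpi4py import MPI
    import random

    def run_ml_inference(replicate_id, seed):
        # Placeholder for one maximum-likelihood tree search on a bootstrap-resampled
        # alignment; returns a (replicate_id, log-likelihood) pair.
        rng = random.Random(seed)
        return replicate_id, -10000.0 * rng.random()

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    n_replicates = 100  # assumed number of bootstrap replicates

    # Round-robin assignment: rank r handles replicates r, r+size, r+2*size, ...
    local_results = [run_ml_inference(i, seed=1000 + i)
                     for i in range(rank, n_replicates, size)]

    # Gather per-rank results on rank 0, e.g. for consensus-tree construction.
    all_results = comm.gather(local_results, root=0)
    if rank == 0:
        flat = [r for chunk in all_results for r in chunk]
        print(f"collected {len(flat)} bootstrap replicates")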

  10. Gyrokinetic particle-in-cell optimization on emerging multi- and manycore platforms

    DOE PAGES

    Madduri, Kamesh; Im, Eun-Jin; Ibrahim, Khaled Z.; ...

    2011-03-02

    The next decade of high-performance computing (HPC) systems will see a rapid evolution and divergence of multi- and manycore architectures as power and cooling constraints limit increases in microprocessor clock speeds. Understanding efficient optimization methodologies on diverse multicore designs in the context of demanding numerical methods is one of the greatest challenges faced today by the HPC community. In this paper, we examine the efficient multicore optimization of GTC, a petascale gyrokinetic toroidal fusion code for studying plasma microturbulence in tokamak devices. For GTC’s key computational components (charge deposition and particle push), we explore efficient parallelization strategies across a broad range of emerging multicore designs, including the recently-released Intel Nehalem-EX, the AMD Opteron Istanbul, and the highly multithreaded Sun UltraSparc T2+. We also present the first study on tuning gyrokinetic particle-in-cell (PIC) algorithms for graphics processors, using the NVIDIA C2050 (Fermi). Our work discusses several novel optimization approaches for gyrokinetic PIC, including mixed-precision computation, particle binning and decomposition strategies, grid replication, SIMDized atomic floating-point operations, and effective GPU texture memory utilization. Overall, we achieve significant performance improvements of 1.3–4.7× on these complex PIC kernels, despite the inherent challenges of data dependency and locality. Finally, our work also points to several architectural and programming features that could significantly enhance PIC performance and productivity on next-generation architectures.
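
    To see why charge deposition is a data-dependency and locality challenge, the NumPy sketch below scatters particle charge onto a 1D periodic grid with linear weighting; the scatter-add it relies on is exactly the conflict that particle binning, grid replication, and SIMDized atomic updates are meant to manage on multicore and GPU hardware. The grid size, particle count, and weighting scheme are illustrative assumptions, not details of GTC.

    # Minimal 1D charge-deposition sketch (linear weighting) on an assumed periodic grid.
    import numpy as np

    n_grid, n_particles = 64, 10000          # assumed sizes
    rng = np.random.default_rng(0)
    x = rng.uniform(0, n_grid, n_particles)  # particle positions in grid units
    q = np.ones(n_particles)                 # unit charges

    left = np.floor(x).astype(int) % n_grid  # index of the cell to the left
    frac = x - np.floor(x)                   # fractional distance to that cell

    rho = np.zeros(n_grid)
    # np.add.at is an unbuffered scatter-add that accumulates repeated indices correctly;
    # a naive rho[left] += ... would silently drop contributions that hit the same cell,
    # which is the race a threaded implementation must resolve with atomics or binning.
    np.add.at(rho, left, q * (1.0 - frac))
    np.add.at(rho, (left + 1) % n_grid, q * frac)

    print(rho.sum(), q.sum())  # total deposited charge matches total particle charge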

  11. Using Formal Grammars to Predict I/O Behaviors in HPC: The Omnisc'IO Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorier, Matthieu; Ibrahim, Shadi; Antoniu, Gabriel

    2016-08-01

    The increasing gap between the computation performance of post-petascale machines and the performance of their I/O subsystem has motivated many I/O optimizations including prefetching, caching, and scheduling. In order to further improve these techniques, modeling and predicting spatial and temporal I/O patterns of HPC applications as they run has become crucial. In this paper we present Omnisc'IO, an approach that builds a grammar-based model of the I/O behavior of HPC applications and uses it to predict when future I/O operations will occur, and where and how much data will be accessed. To infer grammars, Omnisc'IO is based on StarSequitur, a novel algorithm extending Nevill-Manning's Sequitur algorithm. Omnisc'IO is transparently integrated into the POSIX and MPI I/O stacks and does not require any modification in applications or higher-level I/O libraries. It works without any prior knowledge of the application and converges to accurate predictions of any N future I/O operations within a couple of iterations. Its implementation is efficient in both computation time and memory footprint.
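
    As a much-simplified illustration of pattern-based I/O prediction (deliberately not the StarSequitur algorithm itself, which infers hierarchical grammar rules), the toy model below only tracks digram frequencies over a symbolized I/O trace and predicts the most likely next operation from the current one.

    # Toy next-operation predictor for a symbolized I/O trace; a simplified stand-in
    # for the grammar model used by Omnisc'IO, shown only to convey the prediction idea.
    from collections import defaultdict, Counter

    class DigramPredictor:
        def __init__(self):
            self.follow = defaultdict(Counter)  # symbol -> Counter of observed successors

        def observe(self, trace):
            for a, b in zip(trace, trace[1:]):
                self.follow[a][b] += 1

        def predict(self, current):
            successors = self.follow.get(current)
            if not successors:
                return None
            return successors.most_common(1)[0][0]

    # Symbols might encode (operation, size-class) pairs extracted from POSIX/MPI-IO calls.
    trace = ["open", "read_1MB", "read_1MB", "write_4KB", "read_1MB", "write_4KB", "close"]
    model = DigramPredictor()
    model.observe(trace)
    print(model.predict("read_1MB"))  # -> "write_4KB", the most frequent successor seen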

  12. Conceptual Design of a Two Spool Compressor for the NASA Large Civil Tilt Rotor Engine

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Thurman, Douglas R.

    2010-01-01

    This paper focuses on the conceptual design of a two-spool compressor for the NASA Large Civil Tilt Rotor engine, which has a design-point pressure ratio goal of 30:1 and an inlet weight flow of 30.0 lbm/sec. The compressor notional design requirements of pressure ratio and of the low-pressure compressor (LPC) and high-pressure compressor (HPC) work split were based on a previous engine system study to meet the mission requirements of the NASA Subsonic Rotary Wing Project's Large Civil Tilt Rotor vehicle concept. Three mean line compressor design and flow analysis codes were utilized for the conceptual design of a two-spool compressor configuration. This study assesses the technical challenges of designing various compressor configuration options to meet the given engine cycle results. In the process of sizing, the technical challenges of the compressor became apparent as the aerodynamics were taken into consideration. Mechanical constraints such as maximum rotor tip speeds and conceptual sizing of rotor disks and shafts were also considered. The rotor clearance-to-span ratio in the last stage of the LPC is 1.5% and in the last stage of the HPC is 2.8%. Four different configurations to meet the HPC requirements were studied, ranging from a single-stage centrifugal, to two axi-centrifugal arrangements, to all axial stages. Challenges of the HPC design include the high exit temperature (1,560 °R), which could limit the maximum allowable peripheral tip speed for centrifugals and is dependent on material selection. The mean line design also defined the flow path geometry of the axial and centrifugal compressor stages, the rotor and stator vane angles, the velocity components, and the flow conditions at the leading and trailing edges of each blade row at the hub, mean, and tip. A mean line compressor analysis code was used to estimate the compressor performance maps at off-design speeds and to determine the variable-geometry reset schedules of the inlet guide vane and variable stators required for the transonic stages to be aerodynamically matched with high efficiency and acceptable stall margins, based on user-specified maximum levels of rotor diffusion factor and relative velocity ratio.
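
    A quick back-of-the-envelope check shows how an exit temperature of that order follows from the cycle numbers: the sketch below applies the standard isentropic compression relation with an assumed adiabatic efficiency of 0.85 and an assumed 5:1 x 6:1 LPC/HPC pressure-ratio split (neither value is taken from the paper), which lands the HPC exit temperature near the 1,560 °R figure quoted above.

    # Rough estimate of compressor exit temperature from pressure ratio.
    # Inlet temperature, efficiency, and the LPC/HPC split are illustrative assumptions,
    # not values from the mean line design codes.
    gamma = 1.4                 # ratio of specific heats for air
    T_in = 518.7                # sea-level standard inlet temperature, deg R
    eta_c = 0.85                # assumed adiabatic (isentropic) efficiency
    pr_overall = 30.0           # overall design pressure ratio from the study
    pr_lpc, pr_hpc = 5.0, 6.0   # one assumed work split (5 x 6 = 30)
    assert abs(pr_lpc * pr_hpc - pr_overall) < 1e-9

    def exit_temperature(T0, pr, eta):
        """T_exit = T0 * (1 + (pr**((gamma-1)/gamma) - 1) / eta)."""
        return T0 * (1.0 + (pr ** ((gamma - 1.0) / gamma) - 1.0) / eta)

    T_lpc_exit = exit_temperature(T_in, pr_lpc, eta_c)
    T_hpc_exit = exit_temperature(T_lpc_exit, pr_hpc, eta_c)
    print(f"LPC exit ~{T_lpc_exit:.0f} R, HPC exit ~{T_hpc_exit:.0f} R")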

  13. AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin

    2018-01-01

    In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.
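
    For readers unfamiliar with sparse correspondence identification, the generic OpenCV sketch below detects ORB keypoints in two images and matches their binary descriptors. This is not AutoCNet's API, only an illustration of the keypoint/descriptor matching stage that precedes bundle adjustment; the image file names are placeholders.

    # Generic sparse correspondence sketch between two images using OpenCV (illustrative only).
    import cv2

    img1 = cv2.imread("image_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
    img2 = cv2.imread("image_b.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=5000)
    kp1, desc1 = orb.detectAndCompute(img1, None)
    kp2, desc2 = orb.detectAndCompute(img2, None)

    # Hamming distance suits ORB's binary descriptors; crossCheck keeps only mutually
    # best matches, a simple outlier filter before any geometric refinement.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)

    print(f"{len(matches)} candidate correspondences")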

  14. Examination of Calcium Silicate Cements with Low-Viscosity Methyl Cellulose or Hydroxypropyl Cellulose Additive.

    PubMed

    Baba, Toshiaki; Tsujimoto, Yasuhisa

    2016-01-01

    The purpose of this study was to improve the handling properties of calcium silicate cements (CSCs) such as mineral trioxide aggregate (MTA) cement. The flow, working time, and setting time of CSCs of different compositions containing a low-viscosity methyl cellulose (MC) or hydroxypropyl cellulose (HPC) additive were examined according to ISO 6876-2012; calcium ion release analysis was also conducted. The tested materials were MTA, low-heat Portland cement (LPC) containing 20% fine-particle zirconium oxide (ZO group), and LPC containing zirconium oxide plus 2 wt% low-viscosity MC (MC group) or HPC (HPC group). The MC and HPC groups exhibited significantly higher flow values and setting times than the other groups (p < 0.05). Additionally, flow values of these groups were higher than the ISO 6876-2012 reference values, and working times exceeded 10 min. Calcium ion release was retarded in the ZO, MC, and HPC groups compared with MTA, and calcium ion concentrations were lower in the MC and HPC groups than in the ZO group. When low-viscosity MC or HPC was added, the composition of the CSCs changed such that they fulfilled the requirements for use as a root canal sealer. Calcium ion release by CSCs was affected by changing the CSC composition via the addition of MC or HPC.

  15. Examination of Calcium Silicate Cements with Low-Viscosity Methyl Cellulose or Hydroxypropyl Cellulose Additive

    PubMed Central

    Tsujimoto, Yasuhisa

    2016-01-01

    The purpose of this study was to improve the handling properties of calcium silicate cements (CSCs) such as mineral trioxide aggregate (MTA) cement. The flow, working time, and setting time of CSCs of different compositions containing a low-viscosity methyl cellulose (MC) or hydroxypropyl cellulose (HPC) additive were examined according to ISO 6876-2012; calcium ion release analysis was also conducted. The tested materials were MTA, low-heat Portland cement (LPC) containing 20% fine-particle zirconium oxide (ZO group), and LPC containing zirconium oxide plus 2 wt% low-viscosity MC (MC group) or HPC (HPC group). The MC and HPC groups exhibited significantly higher flow values and setting times than the other groups (p < 0.05). Additionally, flow values of these groups were higher than the ISO 6876-2012 reference values, and working times exceeded 10 min. Calcium ion release was retarded in the ZO, MC, and HPC groups compared with MTA, and calcium ion concentrations were lower in the MC and HPC groups than in the ZO group. When low-viscosity MC or HPC was added, the composition of the CSCs changed such that they fulfilled the requirements for use as a root canal sealer. Calcium ion release by CSCs was affected by changing the CSC composition via the addition of MC or HPC. PMID:27981048

  16. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model that simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on high-performance computing (HPC) systems. The recently developed WRF-Hydro modeling system provides a significant advance in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files (GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL) and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations to obtain good modeling performance. A parameter calibration tool dedicated to automated calibration and uncertainty estimation for the WRF-Hydro model would therefore be of significant convenience to the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis package PEST, enabling it to run on HPC systems under the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest; the sensitivities and uncertainties are analyzed using the customized PEST tool we developed.
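
    As a greatly simplified stand-in for the PEST-driven workflow, the sketch below calibrates a single scaling factor against observed streamflow by minimizing RMSE. The parameter name OVROUGHRTFAC is taken from the abstract, while the model wrapper, the observations, and the optimizer choice are illustrative assumptions.

    # Simplified calibration sketch: tune one WRF-Hydro-style scaling factor by
    # minimizing RMSE against observations. run_hydro_model() is a placeholder for
    # launching the real model (e.g., via a SLURM/PBS job) and reading back streamflow.
    import numpy as np
    from scipy.optimize import minimize_scalar

    observed = np.array([12.0, 35.0, 80.0, 60.0, 30.0])  # assumed observed flows (m^3/s)

    def run_hydro_model(ovroughrtfac):
        # Placeholder "model": a synthetic response to the roughness scaling factor.
        base = np.array([10.0, 30.0, 90.0, 65.0, 28.0])
        return base * (1.0 + 0.1 * (ovroughrtfac - 1.0))

    def rmse(ovroughrtfac):
        simulated = run_hydro_model(ovroughrtfac)
        return float(np.sqrt(np.mean((simulated - observed) ** 2)))

    result = minimize_scalar(rmse, bounds=(0.1, 5.0), method="bounded")
    print(f"calibrated OVROUGHRTFAC ~ {result.x:.3f}, RMSE = {result.fun:.2f}")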

  17. HPC simulations of shock front evolution for a study of the shock precursor decay in a submicron thick nanocrystalline aluminum

    NASA Astrophysics Data System (ADS)

    Valisetty, R.; Rajendran, A.; Agarwal, G.; Dongare, A.; Ianni, J.; Namburu, R.

    2018-07-01

    The Hugoniot elastic limit (HEL, or shock precursor) decay phenomenon was investigated under a uniaxial strain condition, in a plate-on-plate impact configuration, using large-scale molecular dynamics (MD) high performance computing (HPC) simulations on a multi-billion-atom, 5000 Å thick nanocrystalline aluminum (nc-Al) system with an average grain size of 1000 Å and at five impact velocities ranging from 0.7 to 1.5 km s⁻¹. The averaged stress and strain distributions were obtained in the shock fronts’ travel direction using a material conserving atom slicing method. The loading paths, in terms of the Rayleigh lines experienced by the atom system in the evolving shock fronts, exhibited a strong dependency on the shock stress levels. This dependency decreased as the impact velocity increased from 0.7 to 1.5 km s⁻¹. By combining the HELs from the MD results with plate impact experimental data, the precursor decay for nc-Al was predicted over the nano-to-macroscale thickness range. The evolving shock fronts were characterized in terms of parameters such as the shock front thickness, shock rise time and strain rate. The MD results were further analyzed using a crystal analysis algorithm and a twin dislocation identification method to obtain the densities of the atomistic defects evolving behind the shock fronts. High-fidelity large-scale HPC simulation results showed that certain dislocation partials strongly influenced the elastic–plastic transition response across the HELs. The twinning dislocations increased by more than a factor of 10 during the transition and remained constant under further shock compression.
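
    The slice-averaging step can be pictured as binning atoms along the shock propagation direction and averaging a per-atom quantity within each bin; the NumPy sketch below does this for a synthetic per-atom stress array. The bin width, system length, and data are illustrative assumptions, not the authors' material-conserving slicing scheme.

    # Sketch of profile extraction along the shock direction: bin atoms by their x
    # coordinate and average a per-atom quantity (e.g., virial stress) in each bin.
    import numpy as np

    rng = np.random.default_rng(1)
    n_atoms = 200_000
    length_A = 5000.0                         # slab thickness along x, in angstroms
    x = rng.uniform(0.0, length_A, n_atoms)   # atom x positions
    # Synthetic per-atom stress: a "shocked" region near the impact face, plus noise (GPa).
    sigma_xx = np.where(x < 2000.0, -8.0, 0.0) + rng.normal(0.0, 0.5, n_atoms)

    bin_width = 50.0                          # angstroms per slice
    edges = np.arange(0.0, length_A + bin_width, bin_width)
    idx = np.digitize(x, edges) - 1

    counts = np.bincount(idx, minlength=len(edges) - 1)
    sums = np.bincount(idx, weights=sigma_xx, minlength=len(edges) - 1)
    profile = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)

    print(profile[:5], profile[-5:])          # shocked region ~ -8 GPa, unshocked ~ 0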

  18. Hair cortisol and progesterone detection in dairy cattle: interrelation with physiological status and milk production.

    PubMed

    Tallo-Parra, O; Carbajal, A; Monclús, L; Manteca, X; Lopez-Bejar, M

    2018-07-01

    Hair cortisol concentrations (HCCs) and hair progesterone concentrations (HPCs) allow long-term retrospective monitoring of steroid levels. However, there are still gaps in the knowledge of the mechanisms of steroid deposition in hair and its potential application in dairy cattle research. This study aimed to evaluate the potential uses of hair steroid determinations by studying the interrelations between HCC, HPC, physiological data from cows, and their milk production and quality. Cortisol and progesterone concentrations were analyzed in hair from 101 milking Holstein Friesian cows on a commercial farm. Physiological data were obtained from the 60 d prior to hair collection. Production data were also obtained for the month of hair collection and the preceding month, as well as at 124 d after hair sampling. Significant but weak correlations were found between HCC and HPC (r = 0.25, P < 0.0001) and between HPC and age (r = 0.06, P = 0.0133). High HCC were associated with low milk yields in the 2 months prior to hair sampling (P = 0.0396) and during the whole lactation (P < 0.0001). High HCC were also related to high somatic cell count (P = 0.0241). No effect of HCC on fat or protein content was detected. No significant correlations were detected between hair steroid concentrations and pregnancy status, days of gestation, parturition category (primiparous vs multiparous), number of lactations or days in milk. The relationship between physiological variables and HCC or HPC could depend on the duration of the time period over which hair accumulates hormones. Steroid concentrations in hair show high variability between individuals but provide a useful and practical tool for long-term steroid monitoring in dairy cattle welfare and production research.

  19. Python and HPC for High Energy Physics Data Analyses

    DOE PAGES

    Sehrish, S.; Kowalkowski, J.; Paterno, M.; ...

    2017-01-01

    High-level abstractions in Python that can utilize computing hardware well seem to be an attractive option for writing data reduction and analysis tasks. In this paper, we explore the features available in Python which are useful and efficient for end-user analysis in High Energy Physics (HEP). A typical vertical slice of an HEP data analysis is somewhat fragmented: the state of the reduction/analysis process must be saved at certain stages to allow for selective reprocessing of only parts of a generally time-consuming workflow. Also, algorithms tend to be modular because of the heterogeneous nature of most detectors and the need to analyze different parts of the detector separately before combining the information. This fragmentation causes difficulties for interactive data analysis, and as data sets increase in size and complexity (O(10) TiB for a “small” neutrino experiment to the O(10) PiB currently held by the CMS experiment at the LHC), data analysis methods traditional to the field must evolve to make optimum use of emerging HPC technologies and platforms. Mainstream big data tools, while suggesting a direction in terms of what can be done if an entire data set can be available across a system and analysed with high-level programming abstractions, are not designed with either scientific computing generally, or modern HPC platform features in particular, such as data caching levels, in mind. Our example HPC use case is a search for a new elementary particle which might explain the phenomenon known as “Dark Matter”. Here, using data from the CMS detector, we will use HDF5 as our input data format, and MPI with Python to implement our use case.
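
    A minimal version of the HDF5-plus-MPI pattern described above might look like the sketch below: each rank opens the file independently with h5py, reads a contiguous slice of one dataset, applies a selection cut, and the per-rank counts are reduced to rank 0. The file name, dataset path, and cut value are placeholder assumptions.

    # Minimal HDF5 + MPI sketch: block decomposition of an event dataset across ranks.
    from mpi4py import MPI
    import h5py
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    with h5py.File("events.h5", "r") as f:            # placeholder file name
        dset = f["events/missing_et"]                  # placeholder dataset path
        n = dset.shape[0]
        # Contiguous block decomposition of the event index range across ranks.
        start = rank * n // size
        stop = (rank + 1) * n // size
        local = dset[start:stop]

    local_selected = int(np.count_nonzero(local > 200.0))  # assumed selection cut
    total_selected = comm.reduce(local_selected, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"events passing cut: {total_selected} of {n}")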

  20. Lateral transport of solutes in microfluidic channels using electrochemically generated gradients in redox-active surfactants.

    PubMed

    Liu, Xiaoyang; Abbott, Nicholas L

    2011-04-15

    We report principles for a continuous flow process that can separate solutes based on a driving force for selective transport that is generated by a lateral concentration gradient of a redox-active surfactant across a microfluidic channel. Microfluidic channels fabricated with gold electrodes lining each vertical wall were used to electrochemically generate concentration gradients of the redox-active surfactant 11-ferrocenylundecyl-trimethylammonium bromide (FTMA) in a direction perpendicular to the flow. The interactions of three solutes (a hydrophobic dye, 1-phenylazo-2-naphthylamine (yellow AB), an amphiphilic molecule, 2-(4,4-difluoro-5,7-dimethyl-4-bora-3a,4a-diaza-s-indacene-3-pentanoyl)-1-hexadecanoyl-sn-glycero-3-phosphocholine (BODIPY C(5)-HPC), and an organic salt, 1-methylpyridinium-3-sulfonate (MPS)) with the lateral gradients in surfactant/micelle concentration were shown to drive the formation of solute-specific concentration gradients. Two distinct physical mechanisms were identified to lead to the solute concentration gradients: solubilization of solutes by micelles and differential adsorption of the solutes onto the walls of the microchannels in the presence of the surfactant concentration gradient. These two mechanisms were used to demonstrate delipidation of a mixture of BODIPY C(5)-HPC (lipid) and MPS and purification of BODIPY C(5)-HPC from a mixture of BODIPY C(5)-HPC and yellow AB. Overall, the results of this study demonstrate that lateral concentration gradients of redox-active surfactants formed within microfluidic channels can be used to transport solutes across the microfluidic channels in a solute-dependent manner. The approach employs electrical potentials (<1 V) that are sufficiently small to avoid electrolysis of water, can be performed in solutions having high ionic strength (>0.1 M), and offers the basis of continuous processes for the purification or separation of solutes in microscale systems.
