Sample records for llnl hpc linux

  1. Linux containers for fun and profit in HPC

    DOE PAGES

    Priedhorsky, Reid; Randles, Timothy C.

    2017-10-01

    This article outlines options for user-defined software stacks from an HPC perspective. Here, we argue that a lightweight approach based on Linux containers is most suitable for HPC centers because it provides the best balance between maximizing service of user needs and minimizing risks. We also discuss how containers work and several implementations, including Charliecloud, our own open-source solution developed at Los Alamos.

  2. Linux containers for fun and profit in HPC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priedhorsky, Reid; Randles, Timothy C.

    This article outlines options for user-defined software stacks from an HPC perspective. Here, we argue that a lightweight approach based on Linux containers is most suitable for HPC centers because it provides the best balance between maximizing service of user needs and minimizing risks. We also discuss how containers work and several implementations, including Charliecloud, our own open-source solution developed at Los Alamos.

  3. STAR Data Reconstruction at NERSC/Cori, an adaptable Docker container approach for HPC

    NASA Astrophysics Data System (ADS)

    Mustafa, Mustafa; Balewski, Jan; Lauret, Jérôme; Porter, Jefferson; Canon, Shane; Gerhardt, Lisa; Hajdu, Levente; Lukascsyk, Mark

    2017-10-01

    As HPC facilities grow their resources, adapting classic HEP/NP workflows becomes a necessity. Linux containers may very well offer a way to lower the bar to exploiting such resources and, at the same time, help collaborations reach vast elastic resources on such facilities and address their massive current and future data processing challenges. In this proceeding, we showcase the STAR data reconstruction workflow on the Cori HPC system at NERSC. STAR software is packaged in a Docker image and runs at Cori in Shifter containers. We highlight two of the typical end-to-end optimization challenges for such pipelines: 1) the data transfer rate over ESnet after optimizing end points, and 2) scalable deployment of the conditions database in an HPC environment. Our tests demonstrate equally efficient data processing workflows on Cori/HPC, comparable to standard Linux clusters.
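
    A minimal sketch of the launch pattern this abstract describes: a Slurm batch script names the Docker image and wraps the application command with Shifter. The --image directive and the shifter wrapper follow NERSC's documented Shifter usage; the image name, resource requests, and the root4star command line are illustrative assumptions, not the STAR production configuration.

import subprocess
from pathlib import Path

# Slurm batch script: the --image directive tells Shifter which Docker image
# to load; each srun task then starts inside that container.
batch_script = """#!/bin/bash
#SBATCH --image=docker:myrepo/star-reco:latest
#SBATCH --nodes=2
#SBATCH --time=04:00:00

srun -n 64 shifter root4star -b -q reco.C
"""

job_file = Path("star_reco.sh")
job_file.write_text(batch_script)

# sbatch prints "Submitted batch job <id>" on success.
result = subprocess.run(["sbatch", str(job_file)], capture_output=True, text=True)
print(result.stdout.strip() or result.stderr.strip())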

  4. 2011 Computation Directorate Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, D L

    2012-04-11

    From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile, far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products.
    In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global marketplace by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.

  5. CDAC Student Report: Summary of LLNL Internship

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herriman, Jane E.

    Multiple objectives motivated me to apply for an internship at LLNL: I wanted to experience the work environment at a national lab, to learn about research and job opportunities at LLNL in particular, and to gain greater experience with code development, particularly within the realm of high performance computing (HPC). This summer I was selected to participate in LLNL's Computational Chemistry and Material Science Summer Institute (CCMS). CCMS is a 10 week program hosted by the Quantum Simulations group leader, Dr. Eric Schwegler. CCMS connects graduate students to mentors at LLNL involved in similar research and provides weekly seminars on a broad array of topics from within chemistry and materials science. Dr. Xavier Andrade and Dr. Erik Draeger served as my co-mentors over the summer, and Dr. Andrade continues to mentor me now that CCMS has concluded. Dr. Andrade is a member of the Quantum Simulations group within the Physical and Life Sciences at LLNL, and Dr. Draeger leads the HPC group within the Center for Applied Scientific Computing (CASC). The two have worked together to develop Qb@ll, an open-source first principles molecular dynamics code that was the platform for my summer research project.

  6. Multiple Independent File Parallel I/O with HDF5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M. C.

    2016-07-13

    The HDF5 library has supported the I/O requirements of HPC codes at Lawrence Livermore National Laboratory (LLNL) since the late 1990s. In particular, HDF5 used in the Multiple Independent File (MIF) parallel I/O paradigm has supported LLNL codes' scalable I/O requirements and has recently been gainfully used at scales as large as O(10^6) parallel tasks.
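
    To make the MIF pattern concrete, here is a small sketch of the idea using mpi4py and a serial build of h5py: N ranks share M files, and ranks within a file group write one at a time by passing a baton. The file names, group count, and dataset layout are illustrative assumptions, not the I/O layer of any LLNL code.

from mpi4py import MPI
import h5py
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

NFILES = 4                                     # M files shared by N ranks
group = rank % NFILES                          # which file this rank writes to
fname = f"mif_part_{group:03d}.h5"
data = np.full(1024, rank, dtype=np.float64)   # stand-in for per-rank state

# Ranks sharing a file take turns: wait for a baton from the previous group
# member, write, then pass the baton on. The first member of each group
# creates the file; later members append to it.
prev, nxt = rank - NFILES, rank + NFILES
if prev >= 0:
    comm.recv(source=prev, tag=group)

mode = "w" if rank < NFILES else "a"
with h5py.File(fname, mode) as f:
    f.create_dataset(f"rank_{rank:05d}", data=data)

if nxt < size:
    comm.send(None, dest=nxt, tag=group)

    Launched with something like mpirun -np 16 python mif_sketch.py, this produces four files, each holding datasets from a quarter of the ranks, which is the essence of MIF: far fewer files than ranks, but no global lock on a single shared file.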

  7. Linux VPN Set Up | High-Performance Computing | NREL

    Science.gov Websites

    Describes two methods to connect to NREL's HPC systems via the HPC VPN: one using a simple command line and a second using the NetworkManager GUI, substituting your own UserID for the one shown in the example image. Example settings: Connection name: hpcvpn; Gateway: hpcvpn.nrel.gov.

  8. Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines

    NASA Astrophysics Data System (ADS)

    Ivanovic, Pavle; Richter, Harald

    2018-01-01

    High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem provided shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature because it is indispensable for HPC and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI, the standard HPC communication library. Additionally, MPICH was transparently modified by us to get ivshmem included, resulting in a three to ten times performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a single-sided MPICH communication mechanism, with our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.
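
    For readers unfamiliar with the MPI_PUT mechanism the authors wrap, here is a minimal one-sided communication sketch in mpi4py: each rank exposes a memory window and its neighbor writes into it directly, with fences marking the access epoch. This only illustrates standard MPI_Put semantics; it does not reproduce the paper's ivshmem transport or modified MPICH.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 8
recv_buf = np.zeros(N, dtype=np.float64)    # memory exposed to other ranks
win = MPI.Win.Create(recv_buf, comm=comm)   # one window per rank

send_buf = np.full(N, float(rank), dtype=np.float64)
target = (rank + 1) % size

win.Fence()                             # open the access epoch on all ranks
win.Put(send_buf, target_rank=target)   # one-sided write into the neighbor's window
win.Fence()                             # close the epoch; remote data is now visible

print(f"rank {rank} received data from rank {(rank - 1) % size}: {recv_buf[0]}")
win.Free()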

  9. Strengthening LLNL Missions through Laboratory Directed Research and Development in High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willis, D. K.

    2016-12-01

    High performance computing (HPC) has been a defining strength of Lawrence Livermore National Laboratory (LLNL) since its founding. Livermore scientists have designed and used some of the world’s most powerful computers to drive breakthroughs in nearly every mission area. Today, the Laboratory is recognized as a world leader in the application of HPC to complex science, technology, and engineering challenges. Most importantly, HPC has been integral to the National Nuclear Security Administration’s (NNSA’s) Stockpile Stewardship Program—designed to ensure the safety, security, and reliability of our nuclear deterrent without nuclear testing. A critical factor behind Lawrence Livermore’s preeminence in HPC is the ongoing investments made by the Laboratory Directed Research and Development (LDRD) Program in cutting-edge concepts to enable efficient utilization of these powerful machines. Congress established the LDRD Program in 1991 to maintain the technical vitality of the Department of Energy (DOE) national laboratories. Since then, LDRD has been, and continues to be, an essential tool for exploring anticipated needs that lie beyond the planning horizon of our programs and for attracting the next generation of talented visionaries. Through LDRD, Livermore researchers can examine future challenges, propose and explore innovative solutions, and deliver creative approaches to support our missions. The present scientific and technical strengths of the Laboratory are, in large part, a product of past LDRD investments in HPC. Here, we provide seven examples of LDRD projects from the past decade that have played a critical role in building LLNL’s HPC, computer science, mathematics, and data science research capabilities, and describe how they have impacted LLNL’s mission.

  10. David Whiteside | NREL

    Science.gov Websites

    David Whiteside, HPC System Administrator. David.Whiteside@nrel.gov | 303-275-3943. David has over 10 years of experience with Linux administration and a strong background in system ...

  11. birgHPC: creating instant computing clusters for bioinformatics and molecular dynamics.

    PubMed

    Chew, Teong Han; Joyce-Tan, Kwee Hong; Akma, Farizuwana; Shamsir, Mohd Shahir

    2011-05-01

    birgHPC, a bootable Linux Live CD, has been developed to create high-performance clusters for bioinformatics and molecular dynamics studies using any Local Area Network (LAN)-networked computers. birgHPC features automated hardware and slot detection and provides a simple job submission interface. The latest versions of GROMACS, NAMD, mpiBLAST and ClustalW-MPI can be run in parallel by simply booting the birgHPC CD or flash drive from the head node, which immediately positions the rest of the PCs on the network as computing nodes. Thus, a temporary, affordable, scalable and high-performance computing environment can be built by non-computing-based researchers using low-cost commodity hardware. The birgHPC Live CD and relevant user guide are available for free at http://birg1.fbb.utm.my/birghpc.

  12. Connecting to HPC VPN | High-Performance Computing | NREL

    Science.gov Websites

    Your username and password will match your NREL network account login/password. From OS X or Linux, open a terminal. Open a Remote Desktop connection using server name WINHPC02 (this is the login node).

  13. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    PubMed

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

    Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently require high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs performed about 14.4-15.9 times faster, while Unphased jobs performed 1.1-18.6 times faster compared to the accumulated computation duration. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.
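
    A back-of-the-envelope sketch of why this analysis parallelizes so naturally: each window of loci becomes one independent FBAT or Unphased job, and the combinational case grows combinatorially. Only the 26-locus count comes from the abstract; the window-size cap below is an illustrative assumption.

from itertools import combinations

N_LOCI = 26          # as in the study's dataset
MAX_WINDOW = 5       # assumed cap on window size for this sketch

# Consecutive windows: every contiguous run of 1..MAX_WINDOW loci.
consecutive = [tuple(range(start, start + width))
               for width in range(1, MAX_WINDOW + 1)
               for start in range(N_LOCI - width + 1)]

# Combinational windows: every subset of 1..MAX_WINDOW loci, contiguous or not.
combinational = [combo
                 for width in range(1, MAX_WINDOW + 1)
                 for combo in combinations(range(N_LOCI), width)]

print(f"consecutive windows  : {len(consecutive)}")
print(f"combinational windows: {len(combinational)}")
# Each window is an independent job, so a scheduler (here, Grid Engine on a
# Rocks cluster) can fan them out across compute nodes.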

  14. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs

    PubMed Central

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-01-01

    Background Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently require high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Results Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs performed about 14.4–15.9 times faster, while Unphased jobs performed 1.1–18.6 times faster compared to the accumulated computation duration. Conclusion Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance. PMID:18541045

  15. DOVIS: an implementation for high-throughput virtual screening using AutoDock.

    PubMed

    Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques

    2008-02-27

    Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to in silico screen millions of compounds in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to use this software in a Linux cluster platform.

  16. HARE: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mckie, Jim

    2012-01-09

    This report documents the results of work done over a 6-year period under the FAST-OS program. The first effort was called Right-Weight Kernels (RWK) and was concerned with improving measurements of OS noise so it could be treated quantitatively; and evaluating the use of two operating systems, Linux and Plan 9, on HPC systems and determining how these operating systems needed to be extended or changed for HPC, while still retaining their general-purpose nature. The second program, HARE, explored the creation of alternative runtime models, building on RWK. All of the HARE work was done on Plan 9. The HARE researchers were mindful of the very good Linux and LWK work being done at other labs and saw no need to recreate it. Even given this limited funding, the two efforts had outsized impact: • Helped Cray decide to use Linux, instead of a custom kernel, and provided the tools needed to make Linux perform well • Created a successor operating system to Plan 9, NIX, which has been taken in by Bell Labs for further development • Created a standard system measurement tool, Fixed Time Quantum or FTQ, which is widely used for measuring operating system impact on applications • Spurred the use of the 9p protocol in several organizations, including IBM • Built software in use at many companies, including IBM, Cray, and Google • Spurred the creation of alternative runtimes for use on HPC systems • Demonstrated that, with proper modifications, a general-purpose operating system can provide communications up to 3 times as effective as user-level libraries. Open source was a key part of this work. The code developed for this project is in wide use and available in many places. The core Blue Gene code is available at https://bitbucket.org/ericvh/hare. We describe details of these impacts in the following sections. The rest of this report is organized as follows: First, we describe commercial impact; next, we describe the FTQ benchmark and its impact in more detail; operating systems and runtime research follows; we discuss infrastructure software; and close with a description of the new NIX operating system, future work, and conclusions.
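
    Since FTQ is mentioned but not explained, here is a minimal sketch of its core idea: count how much fixed work completes in each fixed-length time quantum, and read dips in the counts as OS interference ("noise"). The quantum length and sample count are illustrative; this is not the actual FTQ benchmark code.

import time

QUANTUM = 0.001          # 1 ms quantum
N_QUANTA = 2000          # ~2 seconds of sampling

counts = []
for _ in range(N_QUANTA):
    end = time.perf_counter() + QUANTUM
    work = 0
    while time.perf_counter() < end:
        work += 1        # the fixed "unit of work" being counted
    counts.append(work)

mean = sum(counts) / len(counts)
worst = min(counts)
print(f"mean work per quantum : {mean:.0f}")
print(f"worst quantum         : {worst} ({100 * (1 - worst / mean):.1f}% lost to noise)")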

  17. The Livermore Brain: Massive Deep Learning Networks Enabled by High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Barry Y.

    The proliferation of inexpensive sensor technologies like the ubiquitous digital image sensors has resulted in the collection and sharing of vast amounts of unsorted and unexploited raw data. Companies and governments who are able to collect and make sense of large datasets to help them make better decisions more rapidly will have a competitive advantage in the information era. Machine Learning technologies play a critical role for automating the data understanding process; however, to be maximally effective, useful intermediate representations of the data are required. These representations or “features” are transformations of the raw data into a form where patterns are more easily recognized. Recent breakthroughs in Deep Learning have made it possible to learn these features from large amounts of labeled data. The focus of this project is to develop and extend Deep Learning algorithms for learning features from vast amounts of unlabeled data and to develop the HPC neural network training platform to support the training of massive network models. This LDRD project succeeded in developing new unsupervised feature learning algorithms for images and video and created a scalable neural network training toolkit for HPC. Additionally, this LDRD helped create the world’s largest freely-available image and video dataset supporting open multimedia research and used this dataset for training our deep neural networks. This research helped LLNL capture several work-for-others (WFO) projects, attract new talent, and establish collaborations with leading academic and commercial partners. Finally, this project demonstrated the successful training of the largest unsupervised image neural network using HPC resources and helped establish LLNL leadership at the intersection of Machine Learning and HPC research.

  18. A Framework for Adaptable Operating and Runtime Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterling, Thomas

    The emergence of new classes of HPC systems where performance improvement is enabled by Moore’s Law for technology is manifest through multi-core-based architectures including specialized GPU structures. Operating systems were originally designed for control of uniprocessor systems. By the 1980s multiprogramming, virtual memory, and network interconnection were integral services incorporated as part of most modern computers. HPC operating systems were primarily derivatives of the Unix model with Linux dominating the Top-500 list. The use of Linux for commodity clusters was first pioneered by the NASA Beowulf Project. However, the rapid increase in number of cores to achieve performance gain through technology advances has exposed the limitations of POSIX general-purpose operating systems in scaling and efficiency. This project was undertaken through the leadership of Sandia National Laboratories and in partnership with the University of New Mexico to investigate the alternative of composable lightweight kernels on scalable HPC architectures to achieve superior performance for a wide range of applications. The use of composable operating systems is intended to provide a minimalist set of services specifically required by a given application to preclude overheads and operational uncertainties (“OS noise”) that have been demonstrated to degrade efficiency and operational consistency. This project was undertaken as an exploration to investigate possible strategies and methods for composable lightweight kernel operating systems towards support for extreme scale systems.

  19. Low latency network and distributed storage for next generation HPC systems: the ExaNeSt project

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Pisani, F.; Simula, F.; Vicini, P.; Navaridas, J.; Chaix, F.; Chrysos, N.; Katevenis, M.; Papaeustathiou, V.

    2017-10-01

    With processor architecture evolution, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended the reach of HPC from its roots in modelling and simulation of complex physical systems to a broader range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, the near future HPC systems can be envisioned as composed of millions of low-power computing cores, densely packed — meaning cooling by appropriate technology — with a tightly interconnected, low latency and high performance network and equipped with a distributed storage architecture. Each of these features — dense packing, distributed storage and high performance interconnect — represents a challenge, made all the harder by the need to solve them at the same time. These challenges lie as stumbling blocks along the road towards Exascale-class systems; the ExaNeSt project acknowledges them and tasks itself with investigating ways around them.

  20. Peregrine System User Basics | High-Performance Computing | NREL

    Science.gov Websites

    Describes connecting to peregrine.hpc.nrel.gov or to one of the login nodes, with example commands to access Peregrine from a Linux or Mac OS X system (e.g., $ ssh -Y), a sample exercise that creates a file called hello.F90 containing a simple Fortran program, and the convention that placeholder information is indicated by enclosing it in brackets < >.

  1. KITTEN Lightweight Kernel 0.1 Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general purpose OS kernels.

  2. Level-2 Milestone 5213. CTS-1 Contract Award Completed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leininger, Matt

    2015-09-24

    This report documents the fact that the first commodity technology (CT) system contract award, CTS-1, has been completed. The description of the milestone is: Based on Tri-Lab CTS-1 process and review, LLNL successfully awards the procurement for the next-generation Tri-Lab Linux CTS-1. The milestone completion criterion is: Signed contract. The milestone was completed on September 24, 2015.

  3. Establishing Linux Clusters for High-Performance Computing (HPC) at NPS

    DTIC Science & Technology

    2004-09-01

    Excerpt from the report's table of contents and figure captions: covers the Intel and Area51 Rocks rolls, including generating an md5sum for the Area51 roll and checking it against the number the vendor provides for that particular piece of software on the download site (Figure 22).

  4. Improving Block-level Efficiency with scsi-mq

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caldwell, Blake A

    2015-01-01

    Current generation solid-state storage devices are exposing new bottlenecks in the SCSI and block layers of the Linux kernel, where IO throughput is limited by lock contention, inefficient interrupt handling, and poor memory locality. To address these limitations, the Linux kernel block layer underwent a major rewrite with the blk-mq project to move from a single request queue to a multi-queue model. The Linux SCSI subsystem rework to make use of this new model, known as scsi-mq, has been merged into the Linux kernel and work is underway for dm-multipath support in the upcoming Linux 4.0 kernel. These pieces were necessary to make use of the multi-queue block layer in a Lustre parallel filesystem with high availability requirements. We undertook adding support of the 3.18 kernel to Lustre with scsi-mq and dm-multipath patches to evaluate the potential of these efficiency improvements. In this paper we evaluate the block-level performance of scsi-mq with backing storage hardware representative of an HPC-targeted Lustre filesystem. Our findings show that SCSI write request latency is reduced by as much as 13.6%. Additionally, when profiling the CPU usage of our prototype Lustre filesystem, we found that CPU idle time increased by a factor of 7 with Linux 3.18 and blk-mq as compared to a standard 2.6.32 Linux kernel. Our findings demonstrate increased efficiency of the multi-queue block layer even with disk-based caching storage arrays used in existing parallel filesystems.
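
    The latency figures above come from kernel-level instrumentation; a rough user-space analogue is to time small synchronous writes and report percentiles, as sketched below. This exercises the whole I/O path rather than isolating scsi-mq, so treat it only as a coarse probe; the target file path and I/O size are illustrative assumptions.

import os
import time
import statistics

PATH = "/tmp/latency_probe.dat"
IO_SIZE = 4096          # one 4 KiB write per sample
SAMPLES = 1000

latencies = []
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
block = b"\0" * IO_SIZE
for _ in range(SAMPLES):
    t0 = time.perf_counter()
    os.write(fd, block)
    os.fsync(fd)        # force the request down through the block layer
    latencies.append((time.perf_counter() - t0) * 1e6)
os.close(fd)
os.unlink(PATH)

latencies.sort()
print(f"median write+fsync latency: {statistics.median(latencies):.1f} us")
print(f"99th percentile           : {latencies[int(0.99 * SAMPLES)]:.1f} us")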

  5. GOTCHA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poliakoff, David; Legendre, Matt

    2017-03-29

    GOTCHA is a runtime API for intercepting function calls between shared libraries. It is intended to be used by HPC tools (i.e., performance analysis tools like Open/SpeedShop, HPCToolkit, TAU, etc.). These tools can use GOTCHA to intercept interesting functions, such as MPI functions, and collect performance metrics about those functions. We intend for this to be open-source software that gets adopted by other open-source tools that are used at LLNL.
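
    GOTCHA itself is a C API that patches symbol bindings between shared libraries; the Python sketch below is only a conceptual analogue of the same interposition idea, wrapping an "interesting" function so a tool can count calls and accumulate time while still forwarding to the original.

import functools
import time

call_stats = {}  # function name -> [call count, total seconds]

def intercept(func):
    """Wrap func so every call is timed, then forwarded unchanged."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            stats = call_stats.setdefault(func.__name__, [0, 0.0])
            stats[0] += 1
            stats[1] += time.perf_counter() - t0
    return wrapper

# Example target: pretend this is an MPI call a performance tool cares about.
def mpi_allreduce(buf):
    time.sleep(0.001)       # stand-in for communication cost
    return sum(buf)

mpi_allreduce = intercept(mpi_allreduce)   # rebind the name, like a GOT patch

mpi_allreduce([1, 2, 3])
mpi_allreduce([4, 5, 6])
for name, (calls, secs) in call_stats.items():
    print(f"{name}: {calls} calls, {secs * 1000:.2f} ms total")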

  6. Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture

    NASA Astrophysics Data System (ADS)

    Glosli, James

    2013-03-01

    With the stall in clock cycle improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts on a processor. The multi-core evolution has been expressed in both symmetric multiprocessor (SMP) architectures and CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98304 node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a Molecular Dynamics code called ddcMD. This is a code developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid. This is a code that was recently designed and developed at LLNL to exploit the fine scale parallelism of BG/Q's SMP architecture. Through the lenses of these efforts I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  7. Charliecloud: Unprivileged containers for user-defined software stacks in HPC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priedhorsky, Reid; Randles, Timothy C.

    Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in less than 500 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.
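
    A hedged sketch of the runtime step this abstract describes, driven from Python: unpack a flattened image and run a command in it with no privileged operations or daemons. The ch-tar2dir and ch-run command names follow Charliecloud's documented workflow, but the tarball path, unpack location, and resulting image directory name are illustrative assumptions that vary by Charliecloud version.

import subprocess

TARBALL = "/tmp/hello.tar.gz"   # image exported from Docker beforehand (assumed)
IMG_ROOT = "/var/tmp"           # where the unpacked image tree will live

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Unpack the image tarball into a plain directory tree.
run(["ch-tar2dir", TARBALL, IMG_ROOT])

# Enter the image via user and mount namespaces (no setuid helper, no daemon)
# and run an ordinary command inside it.
run(["ch-run", f"{IMG_ROOT}/hello", "--", "cat", "/etc/os-release"])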

  8. Cross Domain Deterrence: Livermore Technical Report, 2014-2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, Peter D.; Bahney, Ben; Matarazzo, Celeste

    2016-08-03

    Lawrence Livermore National Laboratory (LLNL) is an original collaborator on the project titled “Deterring Complex Threats: The Effects of Asymmetry, Interdependence, and Multi-polarity on International Strategy,” (CDD Project) led by the UC Institute on Global Conflict and Cooperation at UCSD under PIs Jon Lindsay and Erik Gartzke, and funded through the DoD Minerva Research Initiative. In addition to participating in workshops and facilitating interaction among UC social scientists, LLNL is leading the computational modeling effort and assisting with empirical case studies to probe the viability of analytic, modeling and data analysis concepts. This report summarizes LLNL work on the CDD Project to date, primarily in Project Years 1-2, corresponding to Federal fiscal year 2015. LLNL brings two unique domains of expertise to bear on this Project: (1) access to scientific expertise on the technical dimensions of emerging threat technology, and (2) high performance computing (HPC) expertise, required for analyzing the complexity of bargaining interactions in the envisioned threat models. In addition, we have a small group of researchers trained as social scientists who are intimately familiar with the International Relations research. We find that pairing simulation scientists, who are typically trained in computer science, with domain experts, social scientists in this case, is the most effective route to developing powerful new simulation tools capable of representing domain concepts accurately and answering challenging questions in the field.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    East, D. R.; Sexton, J.

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and IBM TJ Watson Research Center to research, assess feasibility and develop an implementation plan for a High Performance Computing Innovation Center (HPCIC) in the Livermore Valley Open Campus (LVOC). The ultimate goal of this work was to help advance the State of California and U.S. commercial competitiveness in the arena of High Performance Computing (HPC) by accelerating the adoption of computational science solutions, consistent with recent DOE strategy directives. The desired result of this CRADA was a well-researched, carefully analyzed market evaluation that would identify those firms in core sectors of the US economy seeking to adopt or expand their use of HPC to become more competitive globally, and to define how those firms could be helped by the HPCIC with IBM as an integral partner.

  10. Improving Memory Error Handling Using Linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

    As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process-oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
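
    The kernel mechanism this work builds on is exposed through sysfs; the sketch below asks Linux to take one memory block offline (root required). The block number is illustrative, and page-granularity soft-offlining of individual faulty addresses uses a related interface not shown here.

from pathlib import Path

BLOCK = 32  # illustrative memory block number
state = Path(f"/sys/devices/system/memory/memory{BLOCK}/state")

print("current state:", state.read_text().strip())
try:
    state.write_text("offline")          # ask the kernel to evacuate the block
    print("new state    :", state.read_text().strip())
except OSError as err:
    # Offlining fails without root or if the block holds unmovable kernel pages.
    print("offline request rejected:", err)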

  11. Coupled hydro-meteorological modelling on a HPC platform for high-resolution extreme weather impact study

    NASA Astrophysics Data System (ADS)

    Zhu, Dehua; Echendu, Shirley; Xuan, Yunqing; Webster, Mike; Cluckie, Ian

    2016-11-01

    Impact-focused studies of extreme weather require coupling of accurate simulations of weather and climate systems and impact-measuring hydrological models which themselves demand larger computer resources. In this paper, we present a preliminary analysis of a high-performance computing (HPC)-based hydrological modelling approach, which is aimed at utilizing and maximizing HPC power resources, to support the study on extreme weather impact due to climate change. Here, four case studies are presented through implementation on the HPC Wales platform of the UK mesoscale meteorological Unified Model (UM) with high-resolution simulation suite UKV, alongside a Linux-based hydrological model, Hydrological Predictions for the Environment (HYPE). The results of this study suggest that the coupled hydro-meteorological model was still able to capture the major flood peaks, compared with the conventional gauge- or radar-driven forecast, but with the added value of much extended forecast lead time. The high-resolution rainfall estimation produced by the UKV performs similarly to radar rainfall products in the first 2-3 days of tested flood events, but the uncertainties particularly increased as the forecast horizon goes beyond 3 days. This study takes a step forward to identify how the online mode approach can be used, where both the numerical weather prediction and the hydrological model are executed, either simultaneously or on the same hardware infrastructure, so that more effective interaction and communication can be achieved and maintained between the models. The concluding observation, however, is that running the entire system on a reasonably powerful HPC platform does not yet allow for real-time simulations, even without the most complex and demanding data simulation part.

  12. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputing needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single node performance of our solution using KVM on a Cray is very efficient with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  13. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    NASA Astrophysics Data System (ADS)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also non-typical users who may want to use the models such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, from which the models are implemented, can be restrictive with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are designed for Linux operating systems (OS), the arrival of the WindowsHPC 2008 OS provides the opportunity to evaluate the use of a new platform on which to develop and port climate and earth science models. In particular, we are evaluating Microsoft's Visual Studio Integrated Developer Environment to determine its appropriateness for the climate modeling community. In the initial phases of this project, we have ported GEOS-5, WRF, GISS ModelE, and GFS to Linux on a CX1 and are in the process of porting WRF and ModelE to WindowsHPC 2008. Initial tests on the CX1 Linux OS indicate favorable comparisons in terms of performance and consistency of scientific results when compared with experiments executed on NASA high end systems. As in the past, NASA's large clusters will continue to be an important part of our objectives. We envision a seamless environment in which an investigator performs model development and testing on a desktop system and can seamlessly transfer execution to supercomputer clusters for production.

  14. Design and Implementation of a Scalable Membership Service for Supercomputer Resiliency-Aware Runtime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tock, Yoav; Mandler, Benjamin; Moreira, Jose

    2013-01-01

    As HPC systems and applications get bigger and more complex, we are approaching an era in which resiliency and run-time elasticity concerns become paramount. We offer a building block for an alternative resiliency approach in which computations will be able to make progress while components fail, in addition to enabling a dynamic set of nodes throughout a computation lifetime. The core of our solution is a hierarchical scalable membership service providing eventual consistency semantics. An attribute replication service is used for hierarchy organization, and is exposed to external applications. Our solution is based on P2P technologies and provides resiliency and elastic runtime support at ultra large scales. Resulting middleware is general purpose while exploiting HPC platform unique features and architecture. We have implemented and tested this system on BlueGene/P with Linux, and using worst-case analysis, evaluated the service scalability as effective for up to 1M nodes.

  15. Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing

    NASA Astrophysics Data System (ADS)

    Tang, Jingyin; Matyas, Corene J.

    2018-02-01

    Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility of the market-dominating ArcGIS software stack and Linux operating system. This manuscript details a cross-platform geospatial library "arc4nix" to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on the remote server and other functions to run on the native Python environment. It uses functional programming and meta-programming language to dynamically construct Python codes containing actual geospatial calculations, send them to a server and retrieve results. Arc4nix allows users to employ their arcpy-based script in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks using multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arcpy scales linearly in a distributed environment. Arc4nix is open-source software.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    KURTZER, GREGORY; MURIKI, KRISHNA

    Singularity is a container solution designed to facilitate mobility of compute across systems and HPC infrastructures. It does this by creating minimal containers that are defined by a specfile and files from the host system are used to build the container. The resulting container can then be launched by any Linux computer with Singularity installed regardless of whether the programs inside the container are present on the target system, or if they are a different version, or even incompatible versions. Singularity achieves extreme portability without sacrificing usability thus solving the need of mobility of compute. Singularity containers can be executed within a normal/standard command line process flow.

  17. Using SW4 for 3D Simulations of Earthquake Strong Ground Motions: Application to Near-Field Strong Motion, Building Response, Basin Edge Generated Waves and Earthquakes in the San Francisco Bay Area

    NASA Astrophysics Data System (ADS)

    Rodgers, A. J.; Pitarka, A.; Petersson, N. A.; Sjogreen, B.; McCallen, D.; Miah, M.

    2016-12-01

    Simulation of earthquake ground motions is becoming more widely used due to improvements of numerical methods, development of ever more efficient computer programs (codes), and growth in and access to High-Performance Computing (HPC). We report on how SW4 can be used for accurate and efficient simulations of earthquake strong motions. SW4 is an anelastic finite difference code based on a fourth order summation-by-parts displacement formulation. It is parallelized and can run on one or many processors. SW4 has many desirable features for seismic strong motion simulation: incorporation of surface topography; automatic mesh generation; mesh refinement; attenuation and supergrid boundary conditions. It also has several ways to introduce 3D models and sources (including Standard Rupture Format for extended sources). We are using SW4 to simulate strong ground motions for several applications. We are performing parametric studies of near-fault motions from moderate earthquakes to investigate basin-edge-generated waves, and from large earthquakes to provide motions for engineers studying building response. We show that 3D propagation near basin edges can generate significant amplifications relative to 1D analysis. SW4 is also being used to model earthquakes in the San Francisco Bay Area. This includes modeling moderate (M3.5-5) events to evaluate the United States Geological Survey's 3D model of regional structure as well as strong motions from the 2014 South Napa earthquake and possible large scenario events. Recently SW4 was built on a Commodity Technology Systems-1 (CTS-1) machine at LLNL, one of the new systems for capacity computing at the DOE national labs. We find SW4 scales well and runs faster on these systems compared to the previous generation of Linux clusters.

  18. GEOS. User Tutorials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Pengchen; Settgast, Randolph R.; Johnson, Scott M.

    2014-12-17

    GEOS is a massively parallel, multi-physics simulation application utilizing high performance computing (HPC) to address subsurface reservoir stimulation activities with the goal of optimizing current operations and evaluating innovative stimulation methods. GEOS enables coupling of different solvers associated with the various physical processes occurring during reservoir stimulation in unique and sophisticated ways, adapted to various geologic settings, materials and stimulation methods. Developed at the Lawrence Livermore National Laboratory (LLNL) as a part of a Laboratory-Directed Research and Development (LDRD) Strategic Initiative (SI) project, GEOS represents the culmination of a multi-year ongoing code development and improvement effort that has leveraged existing code capabilities and staff expertise to design new computational geosciences software.

  19. WImpiBLAST: web interface for mpiBLAST to help biologists perform large-scale annotation using high performance computing.

    PubMed

    Sharma, Parichit; Mantri, Shrikant S

    2014-01-01

    The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design decisions, describe workflows and provide a detailed analysis.
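
    To illustrate the kind of job WImpiBLAST creates and submits through Torque, here is a hedged sketch that writes a PBS script and submits it with qsub. The resource requests, database and file names, and the mpiblast options (which mirror legacy blastall flags) are assumptions for illustration, not WImpiBLAST's actual generated output.

import subprocess
from pathlib import Path

# A Torque/PBS batch script: request nodes, then launch mpiBLAST with MPI.
script = """#!/bin/bash
#PBS -N mpiblast_annotation
#PBS -l nodes=4:ppn=8
#PBS -l walltime=04:00:00
#PBS -j oe

cd $PBS_O_WORKDIR
mpiexec -n 32 mpiblast -p blastp -d nr -i proteins.fasta -o proteins_vs_nr.txt
"""

job_file = Path("mpiblast.pbs")
job_file.write_text(script)

# qsub prints the new job identifier on success.
result = subprocess.run(["qsub", str(job_file)], capture_output=True, text=True)
print("submitted job:", result.stdout.strip() or result.stderr.strip())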

  20. WImpiBLAST: Web Interface for mpiBLAST to Help Biologists Perform Large-Scale Annotation Using High Performance Computing

    PubMed Central

    Sharma, Parichit; Mantri, Shrikant S.

    2014-01-01

    The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design decisions, describe workflows and provide a detailed analysis. PMID:24979410

  1. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.

    PubMed

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).
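
    The claim above that identical prediction scores were obtained on every environment suggests a simple automated check. The sketch below is not part of OpenMOLE or CARE; the directory layout and file names are assumptions. It gathers the scores written by each re-execution and reports any environment that diverges from the reference.

      # Minimal sketch of a cross-environment reproducibility check: verify that the
      # prediction scores produced by re-executing the same packaged pipeline on
      # several hosts are identical. Directory layout and file names are assumptions.
      import json
      from pathlib import Path

      def load_scores(result_dir):
          """Each environment is assumed to write its scores to <env>/scores.json."""
          return {
              p.parent.name: json.loads(p.read_text())
              for p in Path(result_dir).glob("*/scores.json")
          }

      def check_identical(scores_by_env):
          envs = sorted(scores_by_env)
          reference = scores_by_env[envs[0]]
          mismatches = [env for env in envs[1:] if scores_by_env[env] != reference]
          return envs[0], mismatches

      if __name__ == "__main__":
          scores = load_scores("results")   # e.g. results/local_cluster, results/egi, ...
          ref_env, bad = check_identical(scores)
          if bad:
              print(f"Results differ from {ref_env} in: {', '.join(bad)}")
          else:
              print(f"All environments reproduce the scores obtained on {ref_env}")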

  2. A Fault-Oblivious Extreme-Scale Execution Environment (FOX)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Hensbergen, Eric; Speight, William; Xenidis, Jimi

    IBM Research’s contribution to the Fault Oblivious Extreme-scale Execution Environment (FOX) revolved around three core research deliverables: • collaboration with Boston University around the Kittyhawk cloud infrastructure, which both enabled a development and deployment platform for the project team and provided a fault-injection testbed to evaluate prototypes • operating systems research focused on exploring role-based operating system technologies through collaboration with Sandia National Labs on the NIX research operating system and collaboration with the broader IBM Research community around a hybrid operating system model which became known as FusedOS • IBM Research also participated in an advisory capacity with the Boston University SESA project, the core of which was derived from the K42 operating system research project funded in part by DARPA’s HPCS program. Both of these contributions were built on a foundation of previous operating systems research funding by the Department of Energy’s FastOS Program. Through the course of the X-stack funding we were able to develop prototypes, deploy them on production clusters at scale, and make them available to other researchers. As newer hardware, in the form of BlueGene/Q, came online, we were able to port the prototypes to the new hardware and release the source code for the resulting prototypes as open source to the community. In addition to the open source code for the Kittyhawk and NIX prototypes, we were able to bring the BlueGene/Q Linux patches up to a more recent kernel and contribute them for inclusion by the broader Linux community. The lasting impact of the IBM Research work on FOX can be seen in its effect on the shift of IBM’s approach to HPC operating systems from Linux and Compute Node Kernels to role-based approaches as prototyped by the NIX and FusedOS work. This impact can be seen beyond IBM in follow-on ideas being incorporated into the proposals for the Exascale Operating Systems/Runtime program.

  3. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System

    PubMed Central

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C.; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI). PMID:28381997

  4. RAJA Performance Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornung, Richard D.; Jones, Holger E.

    The RAJA Performance Suite is designed to evaluate performance of the RAJA performance portability library on a wide variety of important high performance computing (HPC) algorithmic kernels. These kernels assess compiler optimizations and various parallel programming model backends accessible through RAJA, such as OpenMP, CUDA, etc. The initial version of the suite contains 25 computational kernels, each of which appears in 6 variants: Baseline Sequential, RAJA Sequential, Baseline OpenMP, RAJA OpenMP, Baseline CUDA, RAJA CUDA. All variants of each kernel perform essentially the same mathematical operations and the loop body code for each kernel is identical across all variants. There are a few kernels, such as those that contain reduction operations, that require CUDA-specific coding for their CUDA variants. Actual computer instructions executed and how they run in parallel differ depending on the parallel programming model backend used and which optimizations are performed by the compiler used to build the Performance Suite executable. The Suite will be used primarily by RAJA developers to perform regular assessments of RAJA performance across a range of hardware platforms and compilers as RAJA features are being developed. It will also be used by LLNL hardware and software vendor partners for defining requirements for future computing platform procurements and acceptance testing. In particular, the RAJA Performance Suite will be used for compiler acceptance testing of the upcoming CORAL Sierra machine (initial LLNL delivery expected in late 2017/early 2018) and the CORAL-2 procurement. The Suite will also be used to generate concise source code reproducers of compiler and runtime issues we uncover so that we may provide them to relevant vendors to be fixed.

  5. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter

    2015-12-01

    AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.
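
    The two mechanisms described above, copy-on-write page sharing after fork and a shared event queue feeding idle workers, can be illustrated with a small Python sketch. This is a toy stand-in, not Athena/AthenaMP code; the "geometry" data and the event payloads are made up.

      # Toy sketch of fork/copy-on-write sharing plus a shared event queue; not ATLAS code.
      import multiprocessing as mp
      import os

      GEOMETRY = list(range(5_000_000))   # stand-in for large read-only detector data,
                                          # shared copy-on-write after fork (no pickling)

      def worker(event_queue, results):
          while True:
              event = event_queue.get()
              if event is None:           # sentinel: no more events
                  break
              # "Reconstruction": trivially combine the event id with the shared data
              results.put((os.getpid(), event, GEOMETRY[event % len(GEOMETRY)]))

      if __name__ == "__main__":
          mp.set_start_method("fork")     # rely on Linux copy-on-write page sharing
          events, results = mp.Queue(), mp.Queue()
          workers = [mp.Process(target=worker, args=(events, results)) for _ in range(4)]
          for w in workers:
              w.start()
          for event_id in range(100):     # shared event queue: workers pull when idle
              events.put(event_id)
          for _ in workers:
              events.put(None)
          for w in workers:
              w.join()
          print("processed", results.qsize(), "events")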

  6. Hierarchically porous carbon/polyaniline hybrid for use in supercapacitors.

    PubMed

    Joo, Min Jae; Yun, Young Soo; Jin, Hyoung-Joon

    2014-12-01

    A hierarchically porous carbon (HPC)/polyaniline (PANI) hybrid electrode was prepared by the polymerization of PANI on the surface of the HPC via rapid-mixing polymerization. The surface morphologies and chemical composition of the HPC/PANI hybrid electrode were characterized using transmission electron microscopy and X-ray photoelectron spectroscopy (XPS), respectively. The surface morphologies and XPS results for the HPC, PANI and HPC/PANI hybrids indicate that PANI is coated on the surface of HPC in the HPC/PANI hybrids, which show two different nitrogen groups: a benzenoid amine (-NH-) peak and a positively charged nitrogen (N+) peak. The electrochemical performances of the HPC/PANI hybrids were analyzed by performing cyclic voltammetry and galvanostatic charge-discharge tests. The HPC/PANI hybrids showed a better specific capacitance (222 F/g) than HPC (111 F/g) because of the effect of pseudocapacitive behavior. In addition, good cycle stability was maintained over 1000 cycles.
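
    For readers unfamiliar with how figures such as the 222 F/g quoted above are obtained, specific capacitance is commonly estimated from a galvanostatic discharge curve as C = I·Δt/(m·ΔV). The short sketch below applies that formula; the current, mass, and voltage window are made-up example values, not data from this study.

      # Hedged illustration of deriving specific capacitance from a constant-current
      # discharge: C = I * dt / (m * dV). Inputs below are made-up example values.
      def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
          """Specific capacitance in F/g from a galvanostatic discharge."""
          return current_a * discharge_time_s / (mass_g * voltage_window_v)

      if __name__ == "__main__":
          # Example: 1 mA discharge over 222 s, 1 mg of active material, 1.0 V window
          print(f"{specific_capacitance(1e-3, 222.0, 1e-3, 1.0):.0f} F/g")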

  7. A Business Case Study of Open Source Software

    DTIC Science & Technology

    2001-07-01

    [Extraction fragment: a table of Linux distributors and vendors, including LinuxPPC (www.linuxppc.com), MandrakeSoft Linux-Mandrake (www.linux-mandrake.com/en/), CLE Project (cle.linux.org.tw/CLE/e_index.shtml), Red Hat, Coyote Linux (www2.vortech.net/coyte/coyte.htm), MNIS (www.mnis.fr), Data-Portal (www.data-portal.com), Mr O's Linux Emporium (www.ouin.com), and DLX Linux (www.wu...); followed by Figure 11, "Worldwide New Linux Shipments (Client and Server)," Source: IDC, 2000, in Section 3.2.2 Market.]

  8. Power-Time Curve Comparison between Weightlifting Derivatives

    PubMed Central

    Suchomel, Timothy J.; Sole, Christopher J.

    2017-01-01

    This study examined the power production differences between weightlifting derivatives through a comparison of power-time (P-t) curves. Thirteen resistance-trained males performed hang power clean (HPC), jump shrug (JS), and hang high pull (HHP) repetitions at relative loads of 30%, 45%, 65%, and 80% of their one repetition maximum (1RM) HPC. Relative peak power (PPRel), work (WRel), and P-t curves were compared. The JS produced greater PPRel than the HPC (p < 0.001, d = 2.53) and the HHP (p < 0.001, d = 2.14). In addition, the HHP PPRel was statistically greater than the HPC (p = 0.008, d = 0.80). Similarly, the JS produced greater WRel compared to the HPC (p < 0.001, d = 1.89) and HHP (p < 0.001, d = 1.42). Furthermore, HHP WRel was statistically greater than the HPC (p = 0.003, d = 0.73). The P-t profiles of each exercise were similar during the first 80-85% of the movement; however, during the final 15-20% of the movement the P-t profile of the JS was found to be greater than the HPC and HHP. The JS produced greater PPRel and WRel compared to the HPC and HHP with large effect size differences. The HHP produced greater PPRel and WRel than the HPC with moderate effect size differences. The JS and HHP produced markedly different P-t profiles in the final 15-20% of the movement compared to the HPC. Thus, these exercises may be superior methods of training to enhance PPRel. The greatest differences in PPRel between the JS and HHP and the HPC occurred at lighter loads, suggesting that loads of 30-45% 1RM HPC may provide the best training stimulus when using the JS and HHP. In contrast, loads ranging from 65-80% 1RM HPC may provide an optimal stimulus for power production during the HPC. Key points: The JS and HHP exercises produced greater relative peak power and relative work compared to the HPC. Although the power-time curves were similar during the first 80-85% of the movement, the JS and HHP possessed unique power-time characteristics during the final 15-20% of the movement compared to the HPC. The JS and HHP may be effectively implemented to train peak power characteristics, especially using loads ranging from 30-45% of an individual’s 1RM HPC. The HPC may be best implemented using loads ranging from 65-80% of an individual’s 1RM HPC. PMID:28912659

  9. Ventral, but not dorsal, hippocampus inactivation impairs reward memory expression and retrieval in contexts defined by proximal cues.

    PubMed

    Riaz, Sadia; Schumacher, Anett; Sivagurunathan, Seyon; Van Der Meer, Matthijs; Ito, Rutsuko

    2017-07-01

    The hippocampus (HPC) has been widely implicated in the contextual control of appetitive and aversive conditioning. However, whole hippocampal lesions do not invariably impair all forms of contextual processing, as in the case of complex biconditional context discrimination, leading to contention over the exact nature of the contribution of the HPC in contextual processing. Moreover, the increasingly well-established functional dissociation between the dorsal (dHPC) and ventral (vHPC) subregions of the HPC has been largely overlooked in the existing literature on hippocampal-based contextual memory processing in appetitively motivated tasks. Thus, the present study sought to investigate the individual roles of the dHPC and the vHPC in contextual biconditional discrimination (CBD) performance and memory retrieval. To this end, we examined the effects of transient post-acquisition pharmacological inactivation (using a combination of the GABA-A and GABA-B receptor agonists muscimol and baclofen) of functionally distinct subregions of the HPC (CA1/CA3 subfields of the dHPC and vHPC) on CBD memory retrieval. Additional behavioral assays including novelty preference, light-dark box and locomotor activity test were also performed to confirm that the respective sites of inactivation were functionally silent. We observed robust deficits in CBD performance and memory retrieval following inactivation of the vHPC, but not the dHPC. Our data provide novel insight into the differential roles of the ventral and dorsal HPC in reward contextual processing, under conditions in which the context is defined by proximal cues. © 2017 Wiley Periodicals, Inc.

  10. Equivalent cardioprotection induced by ischemic and hypoxic preconditioning.

    PubMed

    Xiang, Xujin; Lin, Haixia; Liu, Jin; Duan, Zeyan

    2013-04-01

    We aimed to compare cardioprotection induced by various hypoxic preconditioning (HPC) and ischemic preconditioning (IPC) protocols. Isolated rat hearts were randomly divided into 7 groups (n = 7 per group) and received 3 or 5 cycles of 3-minute ischemia or hypoxia followed by 3-minute reperfusion (IPC33 or HPC33 or IPC53 or HPC53 group), 3 cycles of 5-minute ischemia or hypoxia followed by 5-minute reperfusion (IPC35 group or HPC35 group), or 30-minute perfusion (ischemic/reperfusion group), respectively. Then all the hearts were subjected to 50-minute ischemia and 120-minute reperfusion. Cardiac function, infarct size, and coronary flow rate (CFR) were evaluated. Recovery of cardiac function and CFR in IPC35, HPC35, and HPC53 groups was significantly improved as compared with I/R group (p < 0.01). There were no significant differences in cardiac function parameters between IPC35 and HPC35 groups. Consistently, infarct size was significantly reduced in IPC35, HPC35, and HPC53 groups compared with ischemic/reperfusion group. Multiple-cycle short duration HPC exerted cardioprotection, which was as powerful as that of IPC. Georg Thieme Verlag KG Stuttgart · New York.

  11. HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters

    NASA Astrophysics Data System (ADS)

    Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge

    2015-12-01

    In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was focused on verifying the functionalities of Windows HPC, its performance, support of commercial tools and the integration with the users' work environment. We describe constraints imposed by the way the CERN Data Centre is operated, licensing for engineering tools, and the scalability and behaviour of the HPC engineering applications used at CERN. We will present an initial set of requirements, which were created based on the above constraints and requests from the CERN engineering user community. We will explain how we have configured Windows HPC clusters to provide job scheduling functionalities required to support the CERN engineering user community, quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we will present several performance tests we carried out to verify Windows HPC performance and scalability.

  12. A population-based analysis of Head and Neck hemangiopericytoma.

    PubMed

    Shaigany, Kevin; Fang, Christina H; Patel, Tapan D; Park, Richard Chan; Baredes, Soly; Eloy, Jean Anderson

    2016-03-01

    Hemangiopericytomas (HPC) are tumors that arise from pericytes. Hemangiopericytomas of the head and neck are rare and occur both extracranially and intracranially. This study analyzes the demographic, clinicopathologic, treatment modalities, and survival characteristics of extracranial head and neck hemangiopericytomas (HN-HPC) and compares them to HPCs at other body sites (Other-HPC). The Surveillance, Epidemiology, and End Results (SEER) database (1973-2012) was queried for HN-HPC (121 cases) and Other-HPC (510 cases). Data were analyzed comparatively with respect to various demographic and clinicopathologic factors. Disease-specific survival (DSS) was analyzed using the Kaplan-Meier model. There was no significant difference in age at time of diagnosis between HN-HPC and Other-HPC. Head and neck HPC was most commonly located in the connective and soft tissue (18.4%), followed by the nasal cavity and paranasal sinuses (8.5%). Head and neck HPCs were smaller than Other-HPC (P < 0.0001) and more likely to be of a lower histologic grade (P < 0.0097). The primary treatment modality for HN-HPC was surgery alone, used in 55.8% of cases. The 5-, 10-, and 20-year DSS for HN-HPC were 84.0%, 79.4%, and 69.4%, respectively. Higher histologic grade and the presence of distant metastases were poor prognostic factors for HN-HPC. Head and neck HPCs are rare tumors. This study represents the largest series of HN-HPCs to date. Surgery alone is the primary treatment modality for HN-HPC, with a favorable prognosis. Adjuvant radiotherapy does not appear to confer a survival benefit for any body site. 4. Laryngoscope, 126:643-650, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  13. Reliable High Performance Peta- and Exa-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G

    2012-04-02

    As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) as well as the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation. By synthesizing models of individual components into whole-system behavior models, my work is making it possible to automatically understand the behavior of arbitrary real-world systems to enable them to tolerate a wide range of system faults. My project is following a multi-pronged research strategy. Section II discusses my work on modeling the behavior of existing applications and systems. Section II.A discusses resilience in the face of soft faults and Section II.B looks at techniques to tolerate performance faults. Finally Section III presents an alternative approach that studies how a system should be designed from the ground up to make resilience natural and easy.
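
    As a concrete illustration of the soft faults mentioned above, and not of the project's modeling methodology, the following Python sketch flips a single bit in one intermediate term of a dot product and checks whether a duplicated run detects the corruption.

      # Toy illustration of a soft fault (bit flip) and duplication-based detection.
      import random
      import struct

      def flip_random_bit(x: float) -> float:
          """Return x with one randomly chosen bit of its IEEE-754 encoding flipped."""
          (bits,) = struct.unpack("<Q", struct.pack("<d", x))
          bits ^= 1 << random.randrange(64)
          (corrupted,) = struct.unpack("<d", struct.pack("<Q", bits))
          return corrupted

      def dot(a, b, inject_fault=False):
          total = 0.0
          for i, (x, y) in enumerate(zip(a, b)):
              term = x * y
              if inject_fault and i == len(a) // 2:   # corrupt one intermediate term
                  term = flip_random_bit(term)
              total += term
          return total

      if __name__ == "__main__":
          a = [random.random() for _ in range(1000)]
          b = [random.random() for _ in range(1000)]
          clean, faulty = dot(a, b), dot(a, b, inject_fault=True)
          # Detection by duplicated computation: rerun and compare against the clean run
          print("fault detected" if clean != faulty else "fault escaped detection (silent)")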

  14. Introduction to LINUX OS for new LINUX users - Basic Information Before Using The Kurucz Codes Under LINUX-.

    NASA Astrophysics Data System (ADS)

    Çay, M. Taşkin

    Recently the ATLAS suite (Kurucz) was ported to LINUX OS (Sbordone et al.). Those users of the suite unfamiliar with LINUX need to know some basic information to use these versions. This paper is a quick overview and introduction to LINUX OS. The reader is highly encouraged to own a book on LINUX OS for comprehensive use. Although the subjects and examples in this paper are for general use, they also help with installing and running the ATLAS suite.

  15. The clinical phenotype of hereditary versus sporadic prostate cancer: HPC definition revisited.

    PubMed

    Cremers, Ruben G; Aben, Katja K; van Oort, Inge M; Sedelaar, J P Michiel; Vasen, Hans F; Vermeulen, Sita H; Kiemeney, Lambertus A

    2016-07-01

    The definition of hereditary prostate cancer (HPC) is based on family history and age at onset. Intuitively, HPC is a serious subtype of prostate cancer but there are only limited data on the clinical phenotype of HPC. Here, we aimed to compare the prognosis of HPC to the sporadic form of prostate cancer (SPC). HPC patients were identified through a national registry of HPC families in the Netherlands, selecting patients diagnosed from the year 2000 onward (n = 324). SPC patients were identified from the Netherlands Cancer Registry (NCR) between 2003 and 2006 for a population-based study into the genetic susceptibility of PC (n = 1,664). Detailed clinical data were collected by NCR-registrars, using a standardized registration form. Follow-up extended up to the end of 2013. Differences between the groups were evaluated by cross-tabulations and tested for statistical significance while accounting for familial dependency of observations by GEE. Differences in progression-free and overall survival were evaluated using χ² testing with GEE in a proportional-hazards model. HPC patients were on average 3 years younger at diagnosis, had lower PSA values, lower Gleason scores, and more often locally confined disease. Of the HPC patients, 35% had high-risk disease (NICE-criteria) versus 51% of the SPC patients. HPC patients were less often treated with active surveillance. Kaplan-Meier 5-year progression-free survival after radical prostatectomy was comparable for HPC (78%) and SPC (74%; P = 0.30). The 5-year overall survival was 85% (95%CI 81-89%) for HPC versus 80% (95%CI 78-82%) for SPC (P = 0.03). HPC has a favorable clinical phenotype but patients more often underwent radical treatment. The major limitation of HPC is the absence of a genetics-based definition of HPC, which may lead to over-diagnosis of PC in men with a family history of prostate cancer. The HPC definition should, therefore, be re-evaluated, aiming at a reduction of over-diagnosis and overtreatment among men with multiple relatives diagnosed with PC. Prostate 76:897-904, 2016. © 2016 The Authors. The Prostate published by Wiley Periodicals, Inc.

  16. Unipro UGENE: a unified bioinformatics toolkit.

    PubMed

    Okonechnikov, Konstantin; Golosova, Olga; Fursov, Mikhail

    2012-04-15

    Unipro UGENE is a multiplatform open-source software with the main goal of assisting molecular biologists without much expertise in bioinformatics to manage, analyze and visualize their data. UGENE integrates widely used bioinformatics tools within a common user interface. The toolkit supports multiple biological data formats and allows the retrieval of data from remote data sources. It provides visualization modules for biological objects such as annotated genome sequences, Next Generation Sequencing (NGS) assembly data, multiple sequence alignments, phylogenetic trees and 3D structures. Most of the integrated algorithms are tuned for maximum performance by the usage of multithreading and special processor instructions. UGENE includes a visual environment for creating reusable workflows that can be launched on local resources or in a High Performance Computing (HPC) environment. UGENE is written in C++ using the Qt framework. The built-in plugin system and structured UGENE API make it possible to extend the toolkit with new functionality. UGENE binaries are freely available for MS Windows, Linux and Mac OS X at http://ugene.unipro.ru/download.html. UGENE code is licensed under the GPLv2; the information about the code licensing and copyright of integrated tools can be found in the LICENSE.3rd_party file provided with the source bundle.

  17. Energy and technology review

    NASA Astrophysics Data System (ADS)

    Johnson, K. C.

    1991-04-01

    This issue of Energy and Technology Review discusses the various educational programs in which Lawrence Livermore National Laboratory (LLNL) participates or sponsors. LLNL has a long history of fostering educational programs for students from kindergarten through graduate school. A goal is to enhance the teaching of science, mathematics, and technology and thereby assist educational institutions to increase the pool of scientists, engineers, and technicians. LLNL programs described include: (1) contributions to the improvement of U.S. science education; (2) the LESSON program; (3) collaborations with Bay Area Science and Technology Education; (4) project HOPES; (5) lasers and fusion energy education; (6) a curriculum on global climate change; (7) computer and technology instruction at LLNL's Science Education Center; (8) the National Education Supercomputer Program; (9) project STAR; (10) the American Indian Program; (11) LLNL programs with historically Black colleges and Universities; (12) the Undergraduate Summer Institute on Contemporary Topics in Applied Science; (13) the National Physical Science Consortium: A Fellowship Program for Minorities and Women; (14) LLNL's participation with AWU; (15) the apprenticeship programs at LLNL; and (16) the future of LLNL's educational programs. An appendix lists all of LLNL's educational programs and activities. Contacts and their respective telephone numbers are given for all these programs and activities.

  18. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    NASA Astrophysics Data System (ADS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup, no static partitioning of the cluster into a physical and virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable for other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.

  19. Frequency-specific hippocampal-prefrontal interactions during associative learning

    PubMed Central

    Brincat, Scott L.; Miller, Earl K.

    2015-01-01

    Much of our knowledge of the world depends on learning associations (e.g., face-name), for which the hippocampus (HPC) and prefrontal cortex (PFC) are critical. HPC-PFC interactions have rarely been studied in monkeys, whose cognitive/mnemonic abilities are akin to humans. Here, we show functional differences and frequency-specific interactions between HPC and PFC of monkeys learning object-pair associations, an animal model of human explicit memory. PFC spiking activity reflected learning in parallel with behavioral performance, while HPC neurons reflected feedback about whether trial-and-error guesses were correct or incorrect. Theta-band HPC-PFC synchrony was stronger after errors, was driven primarily by PFC to HPC directional influences, and decreased with learning. In contrast, alpha/beta-band synchrony was stronger after correct trials, was driven more by HPC, and increased with learning. Rapid object associative learning may occur in PFC, while HPC may guide neocortical plasticity by signaling success or failure via oscillatory synchrony in different frequency bands. PMID:25706471
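
    The band-specific synchrony measures discussed above can be approximated offline with standard spectral tools. The sketch below is illustrative only, using synthetic signals and an assumed 1 kHz sampling rate rather than the authors' data or analysis code: it estimates magnitude-squared coherence between two LFP traces and averages it over the theta and alpha/beta bands.

      # Illustrative sketch: band-averaged coherence between two synthetic LFP traces.
      import numpy as np
      from scipy.signal import coherence

      fs = 1000.0                                   # sampling rate in Hz (assumption)
      t = np.arange(0, 10, 1 / fs)
      shared = np.sin(2 * np.pi * 6 * t)            # shared 6 Hz (theta) component
      hpc_lfp = shared + 0.5 * np.random.randn(t.size)
      pfc_lfp = shared + 0.5 * np.random.randn(t.size)

      f, cxy = coherence(hpc_lfp, pfc_lfp, fs=fs, nperseg=2048)

      def band_mean(f, cxy, lo, hi):
          """Average coherence over the frequency band [lo, hi] Hz."""
          mask = (f >= lo) & (f <= hi)
          return cxy[mask].mean()

      print(f"theta (4-8 Hz) coherence:       {band_mean(f, cxy, 4, 8):.2f}")
      print(f"alpha/beta (9-30 Hz) coherence: {band_mean(f, cxy, 9, 30):.2f}")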

  20. Avi Purkayastha | NREL

    Science.gov Websites

    [Staff profile excerpt] ... Austin, from 2001 to 2007. There he was a principal in HPC applications and user support, as well as in research and development in large-scale scientific applications and different HPC systems and technologies. Research interests: HPC applications performance and optimizations; HPC systems and accelerator technologies; scientific ...

  1. AIRE-Linux

    NASA Astrophysics Data System (ADS)

    Zhou, Jianfeng; Xu, Benda; Peng, Chuan; Yang, Yang; Huo, Zhuoxi

    2015-08-01

    AIRE-Linux is a dedicated Linux system for astronomers. Modern astronomy faces two big challenges: massive volumes of observed raw data covering the whole electromagnetic spectrum, and data processing that demands professional skills beyond the abilities of an individual or even a small team. AIRE-Linux, a specially designed Linux that will be distributed to users as Virtual Machine (VM) images in Open Virtualization Format (OVF), is intended to help astronomers confront these challenges. Most astronomical software packages, such as IRAF, MIDAS, CASA, Heasoft, etc., will be integrated into AIRE-Linux. It is easy for astronomers to configure and customize the system and use just what they need. When incorporated into cloud computing platforms, AIRE-Linux will be able to handle data-intensive and computing-intensive tasks for astronomers. Currently, a Beta version of AIRE-Linux is ready for download and testing.

  2. Proteomic analysis of cPKCβII-interacting proteins involved in HPC-induced neuroprotection against cerebral ischemia of mice.

    PubMed

    Bu, Xiangning; Zhang, Nan; Yang, Xuan; Liu, Yanyan; Du, Jianli; Liang, Jing; Xu, Qunyuan; Li, Junfa

    2011-04-01

    Hypoxic preconditioning (HPC) initiates intracellular signaling pathway to provide protection against subsequent cerebral ischemic injuries, and its mechanism may provide molecular targets for therapy in stroke. According to our study of conventional protein kinase C βII (cPKCβII) activation in HPC, the role of cPKCβII in HPC-induced neuroprotection and its interacting proteins were determined in this study. The autohypoxia-induced HPC and middle cerebral artery occlusion (MCAO)-induced cerebral ischemia mouse models were prepared as reported. We found that HPC reduced 6 h MCAO-induced neurological deficits, infarct volume, edema ratio and cell apoptosis in peri-infarct region (penumbra), but cPKCβII inhibitors Go6983 and LY333531 blocked HPC-induced neuroprotection. Proteomic analysis revealed that the expression of four proteins in cytosol and eight proteins in particulate fraction changed significantly among 49 identified cPKCβII-interacting proteins in cortex of HPC mice. In addition, HPC could inhibit the decrease of phosphorylated collapsin response mediator protein-2 (CRMP-2) level and increase of CRMP-2 breakdown product. TAT-CRMP-2 peptide, which prevents the cleavage of endogenous CRMP-2, could inhibit CRMP-2 dephosphorylation and proteolysis as well as the infarct volume of 6 h MCAO mice. This study is the first to report multiple cPKCβII-interacting proteins in HPC mouse brain and the role of cPKCβII-CRMP-2 in HPC-induced neuroprotection against early stages of ischemic injuries in mice. © 2011 The Authors. Journal of Neurochemistry © 2011 International Society for Neurochemistry.

  3. Malignant melanoma slide review project: Patients from non-Kaiser hospitals in the San Francisco Bay Area. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynolds, P.

    This project was initiated in response to concerns that the observed excess of malignant melanoma among employees of Lawrence Livermore National Laboratory (LLNL) might reflect the incidence of disease diagnostically different than that observed in the general population. LLNL sponsored a slide review project, inviting leading dermatopathology experts to independently evaluate pathology slides from LLNL employees diagnosed with melanoma and those from a matched sample of Bay Area melanoma patients who did not work at the LLNL. The study objectives were to: Identify all 1969--1984 newly diagnosed cases of malignant melanoma among LLNL employees resident in the San Francisco-Oakland Metropolitan Statistical Area, and diagnosed at facilities other than Kaiser Permanente; identify a comparison series of melanoma cases also diagnosed between 1969--1984 in non-Kaiser facilities, and matched as closely as possible to the LLNL case series by gender, race, age at diagnosis, year of diagnosis, and hospital of diagnosis; obtain pathology slides for the identified (LLNL) case and (non-LLNL) comparison patients for review by the LLNL-invited panel of dermatopathology experts; and to compare the pathologic characteristics of the case and comparison melanoma patients, as recorded by the dermatopathology panel.

  4. 75 FR 59067 - Airworthiness Directives; International Aero Engines AG V2500-A1, V2522-A5, V2524-A5, V2525-D5...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-27

    [Excerpt fragment from the airworthiness directive summary] ... removal from service of the fully silver plated nuts attaching the HPC stage 3 to 8 drum to the HPC stage 9 to 12 drum, and removal of silver residue ...

  5. Hippocampal damage causes retrograde but not anterograde memory loss for context fear discrimination in rats.

    PubMed

    Lee, Justin Q; Sutherland, Robert J; McDonald, Robert J

    2017-09-01

    There is a substantial body of evidence that the hippocampus (HPC) plays an essential role in context discrimination in rodents. Studies reporting anterograde amnesia (AA) used repeated, alternating, distributed conditioning and extinction sessions to measure context fear discrimination. In addition, there is uncertainty about the extent of damage to the HPC. Here, we induced conditioned fear prior to discrimination tests, and rats sustained extensive, quantified pre- or post-training HPC damage. Unlike previous work, we found that extensive HPC damage spares context discrimination; we observed no AA. There must be a non-HPC system that can acquire long-term memories that support context fear discrimination. Post-training HPC damage caused retrograde amnesia (RA) for context discrimination, even when rats are fear conditioned for multiple sessions. We discuss the implications of these findings for understanding the role of HPC in long-term memory. © 2017 Wiley Periodicals, Inc.

  6. Self-desiccation mechanism of high-performance concrete.

    PubMed

    Yang, Quan-Bing; Zhang, Shu-Qing

    2004-12-01

    Investigations on the effects of W/C ratio and silica fume on the autogenous shrinkage and internal relative humidity of high performance concrete (HPC), together with analysis of the self-desiccation mechanisms of HPC, showed that as the W/C ratio is reduced, the autogenous shrinkage of HPC increases while its internal relative humidity decreases, and that these phenomena were amplified by the addition of silica fume. Theoretical analyses indicated that the reduction of RH in HPC was not due to a shortage of water, but to the fact that the evaporable water in HPC cannot evaporate freely. The reduction of internal relative humidity, or the so-called self-desiccation of HPC, was chiefly caused by the increase in mole concentration of soluble ions in HPC and the reduction of pore size, or the increase in the fraction of micro-pore water in the total evaporable water (T(r)/T(te) ratio).

  7. HPC Software Stack Testing Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garvey, Cormac

    The HPC Software stack testing framework (hpcswtest) is used in the INL Scientific Computing Department to test the basic sanity and integrity of the HPC Software stack (Compilers, MPI, Numerical libraries and Applications) and to quickly discover hard failures, and as a by-product it will indirectly check the HPC infrastructure (network, PBS and licensing servers).
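
    A minimal example of the kind of sanity check such a framework runs, and not the actual hpcswtest code, is to compile and launch a trivial MPI program, failing loudly if the compiler wrapper or launcher is broken. The mpicc and mpirun names are the usual wrappers and may differ per site.

      # Hedged sketch of an MPI software-stack sanity check: compile and run a tiny program.
      import subprocess
      import tempfile
      from pathlib import Path

      MPI_HELLO = r"""
      #include <mpi.h>
      #include <stdio.h>
      int main(int argc, char **argv) {
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          printf("hello from rank %d\n", rank);
          MPI_Finalize();
          return 0;
      }
      """

      def check_mpi_stack(ranks=4):
          with tempfile.TemporaryDirectory() as tmp:
              src, exe = Path(tmp) / "hello.c", Path(tmp) / "hello"
              src.write_text(MPI_HELLO)
              subprocess.run(["mpicc", str(src), "-o", str(exe)], check=True)
              out = subprocess.run(["mpirun", "-np", str(ranks), str(exe)],
                                   capture_output=True, text=True, check=True)
              assert out.stdout.count("hello from rank") == ranks, "missing ranks"
              return "MPI compiler and launcher look sane"

      if __name__ == "__main__":
          print(check_mpi_stack())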

  8. Exposure to high ambient temperatures alters embryology in rabbits

    NASA Astrophysics Data System (ADS)

    García, M. L.; Argente, M. J.

    2017-09-01

    High ambient temperatures are a determining factor in the deterioration of embryo quality and survival in mammals. The aim of this study was to evaluate the effect of heat stress on embryo development, embryonic size and size of the embryonic coats in rabbits. A total of 310 embryos from 33 females in thermal comfort zone and 264 embryos of 28 females in heat stress conditions were used in the experiment. The traits studied were ovulation rate, percentage of total embryos, percentage of normal embryos, embryo area, zona pellucida thickness and mucin coat thickness. Traits were measured at 24 and 48 h post-coitum (hpc); mucin coat thickness was only measured at 48 hpc. The embryos were classified as zygotes or two-cell embryos at 24 hpc, and 16-cells or early morulae at 48 hpc. The ovulation rate was one oocyte lower in heat stress conditions than in thermal comfort. Percentage of normal embryos was lower in heat stress conditions at 24 hpc (17.2%) and 48 hpc (13.2%). No differences in percentage of zygotes or two-cell embryos were found at 24 hpc. The embryo development and area was affected by heat stress at 48 hpc (10% higher percentage of 16-cells and 883 μm2 smaller, respectively). Zona pellucida was thicker under thermal stress at 24 hpc (1.2 μm) and 48 hpc (1.5 μm). No differences in mucin coat thickness were found. In conclusion, heat stress appears to alter embryology in rabbits.

  9. Efficient Comparison between Windows and Linux Platform Applicable in a Virtual Architectural Walkthrough Application

    NASA Astrophysics Data System (ADS)

    Thubaasini, P.; Rusnida, R.; Rohani, S. M.

    This paper describes Linux, an open source platform used to develop and run a virtual architectural walkthrough application. It proposes some qualitative reflections and observations on the nature of Linux in the context of Virtual Reality (VR) and on the most popular and important claims associated with the open source approach. The ultimate goal of this paper is to measure and evaluate the performance of Linux used to build the virtual architectural walkthrough and to develop a proof of concept based on the results obtained through this project. Besides that, this study reveals the benefits of using Linux in the field of virtual reality and presents a basic comparison and evaluation between Windows and Linux based operating systems. The Windows platform is used as a baseline to evaluate the performance of Linux. The performance of Linux is measured based on three main criteria: frame rate, image quality, and mouse motion.
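
    Of the three criteria above, frame rate is the easiest to quantify in code. The fragment below is a generic sketch rather than code from the study: it times a render loop and reports frames per second, with draw_frame() standing in for the walkthrough's actual rendering call.

      # Generic frame-rate measurement sketch; draw_frame() is a placeholder.
      import time

      def draw_frame():
          time.sleep(0.005)          # stand-in for real rendering work

      def measure_fps(duration_s=5.0):
          frames, start = 0, time.perf_counter()
          while time.perf_counter() - start < duration_s:
              draw_frame()
              frames += 1
          return frames / (time.perf_counter() - start)

      if __name__ == "__main__":
          print(f"{measure_fps():.1f} frames per second")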

  10. Sacro-anterior haemangiopericytoma: a case report

    PubMed Central

    Ge, Xiu-Hong; Liu, Shuai-Shuai; Shan, Hu-Sheng; Wang, Zhi-Min; Li, Qian-Wen

    2014-01-01

    Haemangiopericytoma (HPC) is a rare vascular tumor with borderline malignancy, considerable histological variability, and unpredictable clinical and biological behavior. HPC can present a diagnostic challenge because of its indeterminate clinical, radiological, and pathological features. HPC generally presents in adulthood and is equally frequent in both sexes. HPC can arise in any site in the body as a slowly growing and painless mass. The precise cell type origin of HPC is uncertain. One third of HPCs occur in the head and neck areas. Exceptional cases of hemangioblastoma arising outside the head and neck areas have been reported, but little is known about their clinicopathologic and immunohistochemical features. This study reports on a case of a large sacro-anterior HPC in a 65-year-old male. PMID:25009757

  11. Development of Automatic Live Linux Rebuilding System with Flexibility in Science and Engineering Education and Applying to Information Processing Education

    NASA Astrophysics Data System (ADS)

    Sonoda, Jun; Yamaki, Kota

    We have developed an automatic Live Linux rebuilding system for science and engineering education, such as information processing education, numerical analysis and so on. Our system can easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, one of the Linux distributions. It is also easy to install/uninstall packages and to enable/disable init daemons. When we rebuild a Live Linux CD using our system, the number of operations is 8, and the rebuilding time is about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have applied the rebuilt Live Linux CD in a class of information processing education in our college. From a questionnaire survey of the 43 students who used the Live Linux CD, we found that our Live Linux is useful for about 80 percent of students. From these results, we conclude that our system is able to easily and automatically rebuild a useful Live Linux in a short time.
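
    For readers unfamiliar with live-CD remastering, the steps such a rebuilding system automates typically look like the sequence below. This is a rough, hedged sketch rather than the authors' implementation: the paths, package list, and the omission of boot-loader options are assumptions, and the commands must be run as root on a Linux host.

      # Rough sketch of typical Ubuntu live-CD remastering steps (not the authors' code).
      import subprocess

      def run(*cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      def rebuild_live_linux(iso="ubuntu.iso", packages=("build-essential", "gnuplot")):
          run("mkdir", "-p", "iso-mnt", "iso-new")
          run("mount", "-o", "loop,ro", iso, "iso-mnt")              # 1. mount the ISO
          run("cp", "-a", "iso-mnt/.", "iso-new")                    # 2. copy its contents
          run("unsquashfs", "-d", "rootfs", "iso-mnt/casper/filesystem.squashfs")
          run("chroot", "rootfs", "apt-get", "update")               # 3. customize packages
          run("chroot", "rootfs", "apt-get", "install", "-y", *packages)
          run("mksquashfs", "rootfs", "iso-new/casper/filesystem.squashfs",
              "-noappend")                                           # 4. repack the root fs
          # 5. regenerate the ISO image (boot-loader options omitted for brevity)
          run("genisoimage", "-o", "custom-live.iso", "-r", "-J", "iso-new")
          run("umount", "iso-mnt")

      if __name__ == "__main__":
          rebuild_live_linux()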

  12. Summary Report of Summer 2009 NGSI Human Capital Development Efforts at Lawrence Livermore National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dougan, A; Dreicer, M; Essner, J

    2009-11-16

    In 2009, Lawrence Livermore National Laboratory (LLNL) engaged in several activities to support NA-24's Next Generation Safeguards Initiative (NGSI). This report outlines LLNL's efforts to support Human Capital Development (HCD), one of five key components of NGSI managed by Dunbar Lockwood in the Office of International Regimes and Agreements (NA-243). There were five main LLNL summer safeguards HCD efforts sponsored by NGSI: (1) A joint Monterey Institute of International Studies/Center for Nonproliferation Studies-LLNL International Safeguards Policy and Information Analysis Course; (2) A Summer Safeguards Policy Internship Program at LLNL; (3) A Training in Environmental Sample Analysis for IAEA Safeguards Internship; (4) Safeguards Technology Internships; and (5) A joint LLNL-INL Summer Safeguards Lecture Series. In this report, we provide an overview of these five initiatives, an analysis of lessons learned, an update on the NGSI FY09 post-doc, and an update on students who participated in previous NGSI-sponsored LLNL safeguards HCD efforts.

  13. Joint FAM/Line Management Assessment Report on LLNL Machine Guarding Safety Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, J. J.

    2016-07-19

    The LLNL Safety Program for Machine Guarding is implemented to comply with requirements in the ES&H Manual Document 11.2, "Hazards-General and Miscellaneous," Section 13 Machine Guarding (Rev 18, issued Dec. 15, 2015). The primary goal of this LLNL Safety Program is to ensure that LLNL operations involving machine guarding are managed so that workers, equipment and government property are adequately protected. This means that all such operations are planned and approved using the Integrated Safety Management System to provide the most cost effective and safest means available to support the LLNL mission.

  14. 2017 LLNL Nuclear Forensics Summer Internship Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavarin, Mavrik

    The Lawrence Livermore National Laboratory (LLNL) Nuclear Forensics Summer Internship Program (NFSIP) is designed to give graduate students an opportunity to come to LLNL for 8-10 weeks of hands-on research. Students conduct research under the supervision of a staff scientist, attend a weekly lecture series, interact with other students, and present their work in poster format at the end of the program. Students can also meet staff scientists one-on-one, participate in LLNL facility tours (e.g., the National Ignition Facility and Center for Accelerator Mass Spectrometry), and gain a better understanding of the various science programs at LLNL.

  15. Differential Age-Related Changes in Structural Covariance Networks of Human Anterior and Posterior Hippocampus.

    PubMed

    Li, Xinwei; Li, Qiongling; Wang, Xuetong; Li, Deyu; Li, Shuyu

    2018-01-01

    The hippocampus plays an important role in memory function relying on information interaction between distributed brain areas. The hippocampus can be divided into the anterior and posterior sections with different structure and function along its long axis. The aim of this study is to investigate the effects of normal aging on the structural covariance of the anterior hippocampus (aHPC) and the posterior hippocampus (pHPC). In this study, 240 healthy subjects aged 18-89 years were selected and subdivided into young (18-23 years), middle-aged (30-58 years), and older (61-89 years) groups. The aHPC and pHPC were divided based on the location of the uncal apex in the MNI space. Then, the structural covariance networks were constructed by examining their covariance in gray matter volumes with other brain regions. Finally, the influence of age on the structural covariance of these hippocampal sections was explored. We found that the aHPC and pHPC had different structural covariance patterns, but both of them were associated with the medial temporal lobe and insula. Moreover, both increased and decreased covariances were found with the aHPC but only increased covariance was found with the pHPC with age (p < 0.05, family-wise error corrected). These decreased connections occurred within the default mode network, while the increased connectivity mainly occurred in other memory systems that differ from the hippocampus. This study reveals different age-related influences on the structural networks of the aHPC and pHPC, providing an essential insight into the mechanisms of the hippocampus in normal aging.

  16. Low-viscosity hydroxypropylcellulose (HPC) grades SL and SSL: versatile pharmaceutical polymers for dissolution enhancement, controlled release, and pharmaceutical processing.

    PubMed

    Sarode, Ashish; Wang, Peng; Cote, Catherine; Worthen, David R

    2013-03-01

    Hydroxypropylcellulose (HPC)-SL and -SSL, low-viscosity hydroxypropylcellulose polymers, are versatile pharmaceutical excipients. The utility of HPC polymers was assessed for both dissolution enhancement and sustained release of pharmaceutical drugs using various processing techniques. The BCS class II drugs carbamazepine (CBZ), hydrochlorothiazide, and phenytoin (PHT) were hot melt mixed (HMM) with various polymers. PHT formulations produced by solvent evaporation (SE) and ball milling (BM) were prepared using HPC-SSL. HMM formulations of BCS class I chlorpheniramine maleate (CPM) were prepared using HPC-SL and -SSL. These solid dispersions (SDs) manufactured using different processes were evaluated for amorphous transformation and dissolution characteristics. Drug degradation because of HMM processing was also assessed. Amorphous conversion using HMM could be achieved only for relatively low-melting CBZ and CPM. SE and BM did not produce amorphous SDs of PHT using HPC-SSL. Chemical stability of all the drugs was maintained using HPC during the HMM process. Dissolution enhancement was observed in HPC-based HMMs and compared well to other polymers. The dissolution enhancement of PHT was in the order of SE>BM>HMM>physical mixtures, as compared to the pure drug, perhaps due to more intimate mixing that occurred during SE and BM than in HMM. Dissolution of CPM could be significantly sustained in simulated gastric and intestinal fluids using HPC polymers. These studies revealed that low-viscosity HPC-SL and -SSL can be employed to produce chemically stable SDs of poorly as well as highly water-soluble drugs using various pharmaceutical processes in order to control drug dissolution.

  17. Lawrence Livermore National Laboratory Environmental Report 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, H E; Bertoldo, N A; Campbell, C G

    The purposes of the Lawrence Livermore National Laboratory Environmental Report 2010 are to record Lawrence Livermore National Laboratory's (LLNL's) compliance with environmental standards and requirements, describe LLNL's environmental protection and remediation programs, and present the results of environmental monitoring at the two LLNL sites - the Livermore site and Site 300. The report is prepared for the U.S. Department of Energy (DOE) by LLNL's Environmental Protection Department. Submittal of the report satisfies requirements under DOE Order 231.1A, Environmental Safety and Health Reporting, and DOE Order 5400.5, Radiation Protection of the Public and Environment. The report is distributed electronically and is available at https://saer.llnl.gov/, the website for the LLNL annual environmental report. Previous LLNL annual environmental reports beginning in 1994 are also on the website. Some references in the electronic report text are underlined, which indicates that they are clickable links. Clicking on one of these links will open the related document, data workbook, or website that it refers to. The report begins with an executive summary, which provides the purpose of the report and an overview of LLNL's compliance and monitoring results. The first three chapters provide background information: Chapter 1 is an overview of the location, meteorology, and hydrogeology of the two LLNL sites; Chapter 2 is a summary of LLNL's compliance with environmental regulations; and Chapter 3 is a description of LLNL's environmental programs with an emphasis on the Environmental Management System including pollution prevention. The majority of the report covers LLNL's environmental monitoring programs and monitoring data for 2010: effluent and ambient air (Chapter 4); waters, including wastewater, storm water runoff, surface water, rain, and groundwater (Chapter 5); and terrestrial, including soil, sediment, vegetation, foodstuff, ambient radiation, and special status wildlife and plants (Chapter 6). Complete monitoring data, which are summarized in the body of the report, are provided in Appendix A. The remaining three chapters discuss the radiological impact on the public from LLNL operations (Chapter 7), LLNL's groundwater remediation program (Chapter 8), and quality assurance for the environmental monitoring programs (Chapter 9). The report uses System International units, consistent with the federal Metric Conversion Act of 1975 and Executive Order 12770, Metric Usage in Federal Government Programs (1991). For ease of comparison to environmental reports issued prior to 1991, dose values and many radiological measurements are given in both metric and U.S. customary units. A conversion table is provided in the glossary.

  18. Running ANSYS Fluent on the WinHPC System | High-Performance Computing |

    Science.gov Websites

    Instructions for running ANSYS Fluent on NREL's WinHPC system: users need a WinHPC account (if you don't have one, see WinHPC system user basics) and should check license use status before submitting a job. Fluent is started through the Fluent Launcher (Start > All Programs), and available node groups can be found from HPC Job Manager (Start > All Programs > Microsoft HPC Pack).

  19. DOE Centers of Excellence Performance Portability Meeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neely, J. R.

    2016-04-21

    Performance portability is a phrase often used, but not well understood. The DOE is deploying systems at all of the major facilities across ASCR and ASC that are forcing application developers to confront head-on the challenges of running applications across these diverse systems. With GPU-based systems at the OLCF and LLNL, and Phi-based systems landing at NERSC, ACES (LANL/SNL), and the ALCF – the issue of performance portability is confronting the DOE mission like never before. A new best practice in the DOE is to include “Centers of Excellence” with each major procurement, with a goal of focusing efforts on preparing key applications to be ready for the systems coming to each site, and engaging the vendors directly in a “shared fate” approach to ensuring success. While each COE is necessarily focused on a particular deployment, applications almost invariably must be able to run effectively across the entire DOE HPC ecosystem. This tension between optimizing performance for a particular platform, while still being able to run with acceptable performance wherever the resources are available, is the crux of the challenge we call “performance portability”. This meeting was an opportunity to bring application developers, software providers, and vendors together to discuss this challenge and begin to chart a path forward.

  20. Directional hippocampal-prefrontal interactions during working memory.

    PubMed

    Liu, Tiaotiao; Bai, Wenwen; Xia, Mi; Tian, Xin

    2018-02-15

    Working memory refers to a system that is essential for performing complex cognitive tasks such as reasoning, comprehension and learning. Evidence shows that the hippocampus (HPC) and prefrontal cortex (PFC) play important roles in working memory. The HPC-PFC interaction via theta-band oscillatory synchronization is critical for successful execution of working memory. However, whether one brain region is leading or lagging relative to another is still unclear. Therefore, in the present study, we simultaneously recorded local field potentials (LFPs) from rat ventral hippocampus (vHPC) and medial prefrontal cortex (mPFC) while the rats performed a Y-maze working memory task. We then applied an instantaneous-amplitude cross-correlation method to calculate the time lag between mPFC and vHPC to explore the functional dynamics of the HPC-PFC interaction. Our results showed that a strong lead from vHPC to mPFC preceded an animal's correct choice during the working memory task. These findings suggest the vHPC-leading interaction contributes to the successful execution of working memory. Copyright © 2017. Published by Elsevier B.V.

  1. Gut vagal sensory signaling regulates hippocampus function through multi-order pathways.

    PubMed

    Suarez, Andrea N; Hsu, Ted M; Liu, Clarissa M; Noble, Emily E; Cortella, Alyssa M; Nakamoto, Emily M; Hahn, Joel D; de Lartigue, Guillaume; Kanoski, Scott E

    2018-06-05

    The vagus nerve is the primary means of neural communication between the gastrointestinal (GI) tract and the brain. Vagally mediated GI signals activate the hippocampus (HPC), a brain region classically linked with memory function. However, the endogenous relevance of GI-derived vagal HPC communication is unknown. Here we utilize a saporin (SAP)-based lesioning procedure to reveal that selective GI vagal sensory/afferent ablation in rats impairs HPC-dependent episodic and spatial memory, effects associated with reduced HPC neurotrophic and neurogenesis markers. To determine the neural pathways connecting the gut to the HPC, we utilize monosynaptic and multisynaptic virus-based tracing methods to identify the medial septum as a relay connecting the medial nucleus tractus solitarius (where GI vagal afferents synapse) to dorsal HPC glutamatergic neurons. We conclude that endogenous GI-derived vagal sensory signaling promotes HPC-dependent memory function via a multi-order brainstem-septal pathway, thereby identifying a previously unknown role for the gut-brain axis in memory control.

  2. [Biological behavior of hypopharyngeal carcinoma].

    PubMed

    Zhou, L X

    1997-01-01

    Hypopharyngeal squamous cell carcinoma (HPC) has an extremely poor prognosis. Characteristics of cell lines of head and neck squamous cell carcinomas including HPC were studied by various methods, e.g., chemosensitivity testing and immunohistochemical staining, to determine whether this poor prognosis is due to the biological behavior of this cancer. An HPC cell line was found to be resistant to antitumor drugs, i.e., PEP, MTX and CPM, and moderately sensitive to CDDP, 5-FU and ADM. Thermoresistance to hyperthermic treatment and weak expression of ICAM-1 on the HPC cell line were observed. DNA synthesis by the HPC cell line was induced by stimulation with a low concentration of EGF and the amount of EGFR on these HPC cells was very high. In addition, cyclin D1 overexpression was found in the HPC cell line. Based on the above findings, further analysis of hypopharyngeal carcinoma cells and the development of a new treatment modality to control tumor growth and the metastatic factors influencing the poor outcome are necessary to improve the prognosis of this cancer.

  3. Shifter: Containers for HPC

    NASA Astrophysics Data System (ADS)

    Gerhardt, Lisa; Bhimji, Wahid; Canon, Shane; Fasel, Markus; Jacobsen, Doug; Mustafa, Mustafa; Porter, Jeff; Tsulaia, Vakho

    2017-10-01

    Bringing HEP computing to HPC can be difficult. Software stacks are often very complicated with numerous dependencies that are difficult to get installed on an HPC system. To address this issue, NERSC has created Shifter, a framework that delivers Docker-like functionality to HPC. It works by extracting images from native formats and converting them to a common format that is optimally tuned for the HPC environment. We have used Shifter to deliver the CVMFS software stack for ALICE, ATLAS, and STAR on the supercomputers at NERSC. As well as enabling the distribution of multi-TB-sized CVMFS stacks to HPC, this approach offers performance advantages. Software startup times are significantly reduced and load times scale with minimal variation to 1000s of nodes. We profile several successful examples of scientists using Shifter to make scientific analysis easily customizable and scalable. We will describe the Shifter framework and several efforts in HEP and NP to use Shifter to deliver their software on the Cori HPC system.

  4. Preparing a scientific manuscript in Linux: Today's possibilities and limitations.

    PubMed

    Tchantchaleishvili, Vakhtang; Schmitto, Jan D

    2011-10-22

    An increasing number of scientists are enthusiastic about using free, open source software for their research purposes. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow them to prepare a submission-ready scientific manuscript without the need to use proprietary software. Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes key steps for preparation of a publication-ready scientific manuscript in a Linux-based operating system, as well as discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux.

  5. ALDH1 is an immunohistochemical diagnostic marker for solitary fibrous tumours and haemangiopericytomas of the meninges emerging from gene profiling study

    PubMed Central

    2013-01-01

    Background Solitary Fibrous Tumours (SFT) and haemangiopericytomas (HPC) are rare meningeal tumours that have to be distinguished from meningiomas and more rarely from synovial sarcomas. We recently found that ALDH1A1 was overexpressed in SFT and HPC as compared to soft tissue sarcomas. Using whole-genome DNA microarrays, we defined the gene expression profiles of 16 SFT/HPC (9 HPC and 7 SFT). Expression profiles were compared to publicly available expression profiles of additional SFT or HPC, meningiomas and synovial sarcomas. We also performed an immunohistochemical (IHC) study with anti-ALDH1 and anti-CD34 antibodies on Tissue Micro-Arrays including 38 SFT (25 meningeal and 13 extrameningeal), 55 meningeal haemangiopericytomas (24 grade II, 31 grade III), 163 meningiomas (86 grade I, 62 grade II, 15 grade III) and 98 genetically confirmed synovial sarcomas. Results ALDH1A1 gene was overexpressed in SFT/HPC, as compared to meningiomas and synovial sarcomas. These findings were confirmed at the protein level. 84% of the SFT and 85.4% of the HPC were positive with anti-ALDH1 antibody, while only 7.1% of synovial sarcomas and 1.2% of meningiomas showed consistent expression. Positivity was usually more diffuse in SFT/HPC compared to other tumours with more than 50% of tumour cells immunostained in 32% of SFT and 50.8% of HPC. ALDH1 was a sensitive and specific marker for the diagnosis of SFT (SE = 84%, SP = 98.8%) and HPC (SE = 84.5%, SP = 98.7%) of the meninges. In association with CD34, ALDH1 expression had a specificity and positive predictive value of 100%. Conclusion We show that ALDH1, a stem cell marker, is an accurate diagnostic marker for SFT and HPC, which improves the diagnostic value of CD34. ALDH1 could also be a new therapeutic target for these tumours which are not sensitive to conventional chemotherapy. PMID:24252471

  6. TICK: Transparent Incremental Checkpointing at Kernel Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrini, Fabrizio; Gioiosa, Roberto

    2004-10-25

    TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved to a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module for Linux version 2.6.5.
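
    As a rough illustration of the kind of per-process state a checkpoint/restore tool must capture, the sketch below simply dumps the memory layout of the calling process from /proc. This is not TICK's implementation (TICK operates transparently inside the kernel); it is only a minimal user-space view, in C, of the address-space information that would have to be saved alongside register state and open file descriptors.

      /* Sketch only (not TICK itself): dump the virtual memory areas of the
       * current process from /proc.  A save/restore tool must record each of
       * these ranges, their permissions, and their backing files, in addition
       * to CPU registers and open descriptors. */
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
          char path[64];
          snprintf(path, sizeof(path), "/proc/%d/maps", (int)getpid());

          FILE *maps = fopen(path, "r");
          if (!maps) {
              perror("fopen");
              return 1;
          }

          char line[512];
          while (fgets(line, sizeof(line), maps))
              fputs(line, stdout);   /* one line per VMA: range, perms, offset, backing file */

          fclose(maps);
          return 0;
      }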

  7. 2016 LLNL Nuclear Forensics Summer Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavarin, Mavrik

    The Lawrence Livermore National Laboratory (LLNL) Nuclear Forensics Summer Program is designed to give graduate students an opportunity to come to LLNL for 8–10 weeks for a hands-on research experience. Students conduct research under the supervision of a staff scientist, attend a weekly lecture series, interact with other students, and present their work in poster format at the end of the program. Students also have the opportunity to meet staff scientists one-on-one, participate in LLNL facility tours (e.g., the National Ignition Facility and Center for Accelerator Mass Spectrometry), and gain a better understanding of the various science programs at LLNL.

  8. Potential performance bottleneck in Linux TCP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Wenji; Crawford, Matt; /Fermilab

    2006-12-01

    TCP is the most widely used transport protocol on the Internet today. Over the years, especially recently, due to requirements of high bandwidth transmission, various approaches have been proposed to improve TCP performance. The Linux 2.6 kernel is now preemptible. It can be interrupted mid-task, making the system more responsive and interactive. However, we have noticed that Linux kernel preemption can interact badly with the performance of the networking subsystem. In this paper we investigate the performance bottleneck in Linux TCP. We systematically describe the trip of a TCP packet from its ingress into a Linux network end system to its final delivery to the application; we study the performance bottleneck in Linux TCP through mathematical modeling and practical experiments; finally we propose and test one possible solution to resolve this performance bottleneck in Linux TCP.
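
    The bottleneck studied in the paper lies inside the kernel, where preemption interacts with the networking subsystem, so it cannot be reproduced from user space. As context only, the hedged sketch below shows one routine user-level step in high-bandwidth TCP tuning on Linux: requesting a larger receive buffer and reading back what the kernel actually granted (which is bounded by the net.core.rmem_max sysctl).

      /* Illustrative sketch only: socket buffer sizing is a common user-level
       * knob checked when chasing high-bandwidth TCP performance on Linux;
       * it is unrelated to the kernel-preemption bottleneck the paper studies. */
      #include <stdio.h>
      #include <sys/socket.h>
      #include <unistd.h>

      int main(void)
      {
          int sock = socket(AF_INET, SOCK_STREAM, 0);
          if (sock < 0) { perror("socket"); return 1; }

          int req = 4 * 1024 * 1024;            /* request a 4 MiB receive buffer */
          if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &req, sizeof(req)) < 0)
              perror("setsockopt(SO_RCVBUF)");

          int got = 0;
          socklen_t len = sizeof(got);
          getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &got, &len);
          /* The kernel doubles the requested value for bookkeeping and caps it
           * at net.core.rmem_max, so the effective size may differ from req. */
          printf("requested %d bytes, kernel granted %d bytes\n", req, got);

          close(sock);
          return 0;
      }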

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine

    The purpose of this report is to clarify the challenges associated with storage for secure enclaves. The major focus areas for the report are: - review of relevant parallel filesystem technologies to identify assets and gaps; - review of filesystem isolation/protection mechanisms, to include native filesystem capabilities and auxiliary/layered techniques; - definition of storage architectures that can be used for customizable compute enclaves (i.e., clarification of use-cases that must be supported for shared storage scenarios); - investigation of vendor products related to secure storage. This study provides technical details on the storage and filesystems used for HPC with particular attention on elements that contribute to creating secure storage. We outline the pieces for a shared storage architecture that balances protection and performance by leveraging the isolation capabilities available in filesystems and virtualization technologies to maintain the integrity of the data. Key Points: There are a few existing and in-progress protection features in Lustre related to secure storage, which are discussed in Chapter 3.1. These include authentication capabilities like GSSAPI/Kerberos and the in-progress work for GSSAPI/Host-keys. The GPFS filesystem provides native support for encryption, which is not directly available in Lustre. Additionally, GPFS includes authentication/authorization mechanisms for inter-cluster sharing of filesystems (Chapter 3.2). The limitations of key importance for secure storage/filesystems are: (i) restricting sub-tree mounts for parallel filesystems (which is not directly supported in Lustre or GPFS), and (ii) segregation of hosts on the storage network and practical complications with dynamic additions to the storage network, e.g., LNET. A challenge for VM-based use cases will be to provide efficient IO forwarding of the parallel filesystem from the host to the guest (VM). There are promising options like para-virtualized filesystems to help with this issue, which are particular instances of the more general challenge of efficient host/guest IO that is the focus of interfaces like virtio. A collection of bridging technologies has been identified in Chapter 4, which can be helpful to overcome the limitations and challenges of supporting efficient storage for secure enclaves. The synthesis of native filesystem security mechanisms and bridging technologies led to an isolation-centric storage architecture that is proposed in Chapter 5, which leverages isolation mechanisms from different layers to facilitate secure storage for an enclave. Recommendations: The following highlights recommendations from the investigations done thus far. - The Lustre filesystem offers excellent performance but does not support some security-related features, e.g., encryption, that are included in GPFS. If encryption is of paramount importance, then GPFS may be a more suitable choice. - There are several possible Lustre-related enhancements that may provide functionality of use for secure enclaves. However, since these features are not currently integrated, the use of Lustre as a secure storage system may require more direct involvement (support). (*The network that connects the storage subsystem and users, e.g., Lustre's LNET.) - The use of OpenStack with GPFS will be more streamlined than with Lustre, as there are available drivers for GPFS. - The Manila project offers Filesystem as a Service for OpenStack and is worth further investigation. 
Manila has some support for GPFS. - The proposed Lustre enhancement of Dynamic-LNET should be further investigated to provide more dynamic changes to the storage network, which could be used to isolate hosts and their tenants. - Linux namespaces offer a good solution for creating efficient restrictions to shared HPC filesystems (a minimal sketch follows this record). However, we still need to conduct a thorough round of storage/filesystem benchmarks. - Vendor products should be more closely reviewed, possibly to include evaluation of performance/protection of select products. (Note, we are investigating the option of evaluating equipment from Seagate/Xyratex.) Outline: The remainder of this report is structured as follows: - Section 1: Describes the growing importance of secure storage architectures and highlights some challenges for HPC. - Section 2: Provides background information on HPC storage architectures, relevant supporting technologies for secure storage and details on OpenStack components related to storage. Note that background material on HPC storage architectures in this chapter can be skipped if the reader is already familiar with Lustre and GPFS. - Section 3: A review of protection mechanisms in two HPC filesystems; details about available isolation, authentication/authorization and performance capabilities are discussed. - Section 4: Describes technologies that can be used to bridge gaps in HPC storage and filesystems to facilitate...
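
    As referenced above, the report points to Linux namespaces as an efficient way to restrict what a tenant sees of a shared HPC filesystem. The following is a minimal sketch of that idea, assuming a hypothetical shared tree /lustre/project1 and mount point /mnt/enclave; it requires root (CAP_SYS_ADMIN) and is illustrative only, not the architecture proposed in the report.

      /* Sketch under stated assumptions: give a tenant process a private mount
       * namespace in which only one sub-tree of the shared filesystem is
       * visible.  Paths are hypothetical; must run with CAP_SYS_ADMIN. */
      #define _GNU_SOURCE
      #include <sched.h>
      #include <stdio.h>
      #include <sys/mount.h>
      #include <unistd.h>

      int main(void)
      {
          /* 1. Private mount namespace for this process and its children. */
          if (unshare(CLONE_NEWNS) != 0) { perror("unshare"); return 1; }

          /* 2. Keep mount changes from propagating back to the parent namespace. */
          if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
              perror("mount MS_PRIVATE"); return 1;
          }

          /* 3. Bind only the tenant's sub-tree of the shared parallel filesystem. */
          if (mount("/lustre/project1", "/mnt/enclave", NULL, MS_BIND, NULL) != 0) {
              perror("mount MS_BIND"); return 1;
          }

          /* 4. Confine the tenant to that sub-tree before handing over control. */
          if (chroot("/mnt/enclave") != 0 || chdir("/") != 0) {
              perror("chroot"); return 1;
          }
          execl("/bin/sh", "sh", (char *)NULL);
          perror("execl");
          return 1;
      }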

  10. The impact of α-Lipoic acid on cell viability and expression of nephrin and ZNF580 in normal human podocytes.

    PubMed

    Leppert, Ulrike; Gillespie, Allan; Orphal, Miriam; Böhme, Karen; Plum, Claudia; Nagorsen, Kaj; Berkholz, Janine; Kreutz, Reinhold; Eisenreich, Andreas

    2017-09-05

    Human podocytes (hPC) are essential for maintaining normal kidney function, and dysfunction or loss of hPC plays a pivotal role in the manifestation and progression of chronic kidney diseases including diabetic nephropathy. Previously, α-Lipoic acid (α-LA), a licensed drug for treatment of diabetic neuropathy, was shown to exhibit protective effects on diabetic nephropathy in vivo. However, the effect of α-LA on hPC under non-diabetic conditions is unknown. Therefore, we analyzed the impact of α-LA on cell viability and expression of nephrin and zinc finger protein 580 (ZNF580) in normal hPC in vitro. Protein analyses were done via Western blot techniques. Cell viability was determined using a functional assay. hPC viability was dynamically modulated via α-LA stimulation in a concentration-dependent manner. This was associated with reduced nephrin and ZNF580 expression and increased nephrin phosphorylation in normal hPC. Moreover, α-LA reduced nephrin and ZNF580 protein expression via inhibition of nuclear factor 'kappa-light-chain-enhancer' of activated B-cells (NF-κB). These data demonstrate that low α-LA concentrations had no negative influence on hPC viability, whereas high α-LA concentrations induced cytotoxic effects on normal hPC and reduced nephrin and ZNF580 expression via NF-κB inhibition. These data provide novel information about potential cytotoxic effects of α-LA on hPC under non-diabetic conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Examination of Calcium Silicate Cements with Low-Viscosity Methyl Cellulose or Hydroxypropyl Cellulose Additive.

    PubMed

    Baba, Toshiaki; Tsujimoto, Yasuhisa

    2016-01-01

    The purpose of this study was to improve the operability of calcium silicate cements (CSCs) such as mineral trioxide aggregate (MTA) cement. The flow, working time, and setting time of CSCs with different compositions containing low-viscosity methyl cellulose (MC) or hydroxypropyl cellulose (HPC) additive were examined according to ISO 6876-2012; calcium ion release analysis was also conducted. MTA and low-heat Portland cement (LPC) including 20% fine particle zirconium oxide (ZO group), LPC including zirconium oxide and 2 wt% low-viscosity MC (MC group), and HPC (HPC group) were tested. MC and HPC groups exhibited significantly higher flow values and setting times than other groups ( p < 0.05). Additionally, flow values of these groups were higher than the ISO 6876-2012 reference values; furthermore, working times were over 10 min. Calcium ion release was retarded with ZO, MC, and HPC groups compared with MTA. The concentration of calcium ions was decreased by the addition of the MC or HPC group compared with the ZO group. When low-viscosity MC or HPC was added, the composition of CSCs changed, thus fulfilling the requirements for use as root canal sealer. Calcium ion release by CSCs was affected by changing the CSC composition via the addition of MC or HPC.

  12. Examination of Calcium Silicate Cements with Low-Viscosity Methyl Cellulose or Hydroxypropyl Cellulose Additive

    PubMed Central

    Tsujimoto, Yasuhisa

    2016-01-01

    The purpose of this study was to improve the operability of calcium silicate cements (CSCs) such as mineral trioxide aggregate (MTA) cement. The flow, working time, and setting time of CSCs with different compositions containing low-viscosity methyl cellulose (MC) or hydroxypropyl cellulose (HPC) additive were examined according to ISO 6876-2012; calcium ion release analysis was also conducted. MTA and low-heat Portland cement (LPC) including 20% fine particle zirconium oxide (ZO group), LPC including zirconium oxide and 2 wt% low-viscosity MC (MC group), and HPC (HPC group) were tested. MC and HPC groups exhibited significantly higher flow values and setting times than other groups (p < 0.05). Additionally, flow values of these groups were higher than the ISO 6876-2012 reference values; furthermore, working times were over 10 min. Calcium ion release was retarded with ZO, MC, and HPC groups compared with MTA. The concentration of calcium ions was decreased by the addition of the MC or HPC group compared with the ZO group. When low-viscosity MC or HPC was added, the composition of CSCs changed, thus fulfilling the requirements for use as root canal sealer. Calcium ion release by CSCs was affected by changing the CSC composition via the addition of MC or HPC. PMID:27981048

  13. Environmental Report 2008

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallegos, G; Bertoldo, N A; Campbell, C G

    The purposes of the Lawrence Livermore National Laboratory Environmental Report 2008 are to record Lawrence Livermore National Laboratory's (LLNL's) compliance with environmental standards and requirements, describe LLNL's environmental protection and remediation programs, and present the results of environmental monitoring at the two LLNL sites - the Livermore site and Site 300. The report is prepared for the U.S. Department of Energy (DOE) by LLNL's Environmental Protection Department. Submittal of the report satisfies requirements under DOE Order 231.1A, Environmental Safety and Health Reporting, and DOE Order 5400.5, Radiation Protection of the Public and Environment. The report is distributed electronically and is available at https://saer.lln.gov/, the website for the LLNL annual environmental report. Previous LLNL annual environmental reports beginning in 1994 are also on the website. Some references in the electronic report text are underlined, which indicates that they are clickable links. Clicking on one of these links will open the related document, data workbook, or website that it refers to. The report begins with an executive summary, which provides the purpose of the report and an overview of LLNL's compliance and monitoring results. The first three chapters provide background information: Chapter 1 is an overview of the location, meteorology, and hydrogeology of the two LLNL sites; Chapter 2 is a summary of LLNL's compliance with environmental regulations; and Chapter 3 is a description of LLNL's environmental programs with an emphasis on the Environmental Management System including pollution prevention. The majority of the report covers LLNL's environmental monitoring programs and monitoring data for 2008: effluent and ambient air (Chapter 4); waters, including wastewater, storm water runoff, surface water, rain, and groundwater (Chapter 5); and terrestrial, including soil, sediment, vegetation, foodstuff, ambient radiation, and special status wildlife and plants (Chapter 6). Complete monitoring data, which are summarized in the body of the report, are provided in Appendix A. The remaining three chapters discuss the radiological impact on the public from LLNL operations (Chapter 7), LLNL's groundwater remediation program (Chapter 8), and quality assurance for the environmental monitoring programs (Chapter 9). The report uses Système International units, consistent with the federal Metric Conversion Act of 1975 and Executive Order 12770, Metric Usage in Federal Government Programs (1991). For ease of comparison to environmental reports issued prior to 1991, dose values and many radiological measurements are given in both metric and U.S. customary units. A conversion table is provided in the glossary. The report is the responsibility of LLNL's Environmental Protection Department. Monitoring data were obtained through the combined efforts of the Environmental Protection Department; Environmental Restoration Department; Physical and Life Sciences Environmental Monitoring Radiation Laboratory; and the Hazards Control Department.

  14. Preparing a scientific manuscript in Linux: Today's possibilities and limitations

    PubMed Central

    2011-01-01

    Background An increasing number of scientists are enthusiastic about using free, open source software for their research purposes. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow them to prepare a submission-ready scientific manuscript without the need to use proprietary software. Findings Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes key steps for preparation of a publication-ready scientific manuscript in a Linux-based operating system, as well as discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux. PMID:22018246

  15. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers and high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging. What they all have in common is that they operate as stand-alone systems rather than as scalable, shared, user re-configurable resources. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).

  16. Environmental Report 2007

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathews, S; Gallegos, G; Berg, L L

    2008-09-24

    The purposes of the 'Lawrence Livermore National Laboratory Environmental Report 2007' are to record Lawrence Livermore National Laboratory's (LLNL's) compliance with environmental standards and requirements, describe LLNL's environmental protection and remediation programs, and present the results of environmental monitoring at the two LLNL sites--the Livermore site and Site 300. The report is prepared for the U.S. Department of Energy (DOE) by LLNL's Environmental Protection Department. Submittal of the report satisfies requirements under DOE Order 231.1A, Environmental Safety and Health Reporting, and DOE Order 5400.5, Radiation Protection of the Public and Environment. The report is distributed electronically and is available at https://saer.lln.gov/, the website for the LLNL annual environmental report. Previous LLNL annual environmental reports beginning in 1994 are also on the website. Some references in the electronic report text are underlined, which indicates that they are clickable links. Clicking on one of these links will open the related document, data workbook, or website that it refers to. The report begins with an executive summary, which provides the purpose of the report and an overview of LLNL's compliance and monitoring results. The first three chapters provide background information: Chapter 1 is an overview of the location, meteorology, and hydrogeology of the two LLNL sites; Chapter 2 is a summary of LLNL's compliance with environmental regulations; and Chapter 3 is a description of LLNL's environmental programs with an emphasis on the Environmental Management System including pollution prevention. The majority of the report covers LLNL's environmental monitoring programs and monitoring data for 2007: effluent and ambient air (Chapter 4); waters, including wastewater, storm water runoff, surface water, rain, and groundwater (Chapter 5); and terrestrial, including soil, sediment, vegetation, foodstuff, ambient radiation, and special status wildlife and plants (Chapter 6). Complete monitoring data, which are summarized in the body of the report, are provided in Appendix A. The remaining three chapters discuss the radiological impact on the public from LLNL operations (Chapter 7), LLNL's groundwater remediation program (Chapter 8), and quality assurance for the environmental monitoring programs (Chapter 9). The report uses Système International units, consistent with the federal Metric Conversion Act of 1975 and Executive Order 12770, Metric Usage in Federal Government Programs (1991). For ease of comparison to environmental reports issued prior to 1991, dose values and many radiological measurements are given in both metric and U.S. customary units. A conversion table is provided in the glossary.

  17. The Linux operating system: An introduction

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  18. WinHPC System | High-Performance Computing | NREL

    Science.gov Websites

    System WinHPC System NREL's WinHPC system is a computing cluster running the Microsoft Windows operating system. It allows users to run jobs requiring a Windows environment such as ANSYS and MATLAB

  19. Lessons Learned from Near Field Modeling and Data Collected at the SPE Chemical Explosions in Jointed Rock Masses

    NASA Astrophysics Data System (ADS)

    Vorobiev, O.; Ezzedine, S. M.; Hurley, R.; Antoun, T.; Glenn, L.

    2016-12-01

    This work describes the near-field modeling of wave propagation from underground chemical explosions conducted at the Nevada National Security Site (NNSS) in fractured granitic rock. Lab tests performed on granite samples excavated from various locations at the SPE site have shown little variability in mechanical properties. Granite at this scale can be considered as an isotropic medium. We have shown, however, that on the scale of the pressure waves generated during chemical explosions (tens of meters), the effective mechanical properties may vary significantly and exhibit both elastic and plastic anisotropies due to local variations in joint properties such as spacing, orientation, joint aperture, cohesion and saturation. Since including every joint in a discrete fashion in a computational model is not feasible, especially for large-scale calculations (~1.5 km domain), we have developed a computational technique to upscale mechanical properties for various scales (frequencies) using geophysical characterization conducted during recent SPE tests at the NNSS. Stochastic representation of these features based on the field characterizations has been implemented into LLNL's Geodyn-L hydrocode. Scale dependency in mechanical properties is important in order to understand how the ground motion scales with yield. We hope that such an approach will not only provide a better prediction of the ground motion observed in the SPE (where the yield varies from 100 kg to a few tons of TNT equivalent) but also will allow us to extrapolate results of the SPE to sources with bigger yields. We have validated our computational results by comparing the measured and computed ground motion at various ranges for experiments of various yields (SPE1-SPE5). Using the new model we performed several computational studies to identify the most important mechanical properties of the rock mass specific to the SPE site and to understand their roles in the observed ground motion in the near-field. We will present a series of lessons learned from the data gathered at the NNSS SPE site and the simulations conducted using state-of-the-art HPC codes. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-679820

  20. Creating a Parallel Version of VisIt for Microsoft Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlock, B J; Biagas, K S; Rawson, P L

    2011-12-07

    VisIt is a popular, free interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers from modest desktops up to massively parallel clusters. VisIt is comprised of a set of cooperating programs. All programs can be run locally or in client/server mode in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPU's has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.
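
    The compute engine's parallelism described above follows a standard data-parallel MPI pattern: each rank owns a subset of the data, processes it locally, and partial results are combined. The sketch below is not VisIt code, just a minimal C/MPI illustration of that decomposition; it builds against any MPI implementation, including the Microsoft MPI used by the Windows port.

      /* Minimal data-parallel sketch (not VisIt's code): each MPI rank handles
       * a contiguous slice of a global index range, computes a local result,
       * and the results are combined on rank 0 -- the same decomposition
       * pattern the VisIt compute engine applies to mesh domains. */
      #include <mpi.h>
      #include <stdio.h>

      #define GLOBAL_N 1000000L

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          /* Assign this rank a contiguous index range [begin, end). */
          long chunk = (GLOBAL_N + size - 1) / size;
          long begin = rank * chunk;
          long end   = (begin + chunk > GLOBAL_N) ? GLOBAL_N : begin + chunk;

          /* Stand-in for reading and filtering a subset of the data. */
          double local = 0.0;
          for (long i = begin; i < end; ++i)
              local += (double)i;

          double global = 0.0;
          MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0)
              printf("combined result from %d ranks: %.0f\n", size, global);

          MPI_Finalize();
          return 0;
      }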

  1. Are Hypomineralized Primary Molars and Canines Associated with Molar-Incisor Hypomineralization?

    PubMed

    da Silva Figueiredo Sé, Maria Jose; Ribeiro, Ana Paula Dias; Dos Santos-Pinto, Lourdes Aparecida Martins; de Cassia Loiola Cordeiro, Rita; Cabral, Renata Nunes; Leal, Soraya Coelho

    2017-11-01

    The purpose of this study was to evaluate the prevalence of and relationship between hypomineralized second primary molars (HSPM) and hypomineralized primary canines (HPC) with molar-incisor hypomineralization (MIH) in 1,963 schoolchildren. The European Academy of Paediatric Dentistry (EAPD) criterion was used for scoring HSPM/HPC and MIH. Only children with four permanent first molars and eight incisors were considered in calculating MIH prevalence (n equals 858); for HSPM/HPC prevalence, only children with four primary second molars (n equals 1,590) and four primary canines (n equals 1,442) were considered. To evaluate the relationship between MIH/HSPM, only children meeting both criteria cited were considered (n equals 534), as was true of MIH/HPC (n equals 408) and HSPM/HPC (n equals 360; chi-square test and logistic regression). The prevalence of MIH was 14.69 percent (126 of 858 children). For HSPM and HPC, the prevalence was 6.48 percent (103 of 1,592) and 2.22 percent (32 of 1,442), respectively. A significant relationship was observed between MIH and both HSPM/HPC (P<0.001). The odds ratio for MIH based on HSPM was 6.31 (95 percent confidence interval [CI] equals 2.59 to 15.13) and for HPC was 6.02 (95 percent CI equals 1.08 to 33.05). The results led to the conclusion that both hypomineralized second primary molars and hypomineralized primary canines are associated with molar-incisor hypomineralization, because children with HSPM/HPC are six times more likely to develop MIH.

  2. Abstract of talk for Silicon Valley Linux Users Group

    NASA Technical Reports Server (NTRS)

    Clanton, Sam

    2003-01-01

    The use of Linux for research at NASA Ames is discussed. Topics include: work with the Atmospheric Physics branch on software for a spectrometer to be used in the CRYSTAL-FACE mission this summer; and work in the Neuroengineering Lab with Code IC, including an introduction to the extension of the human senses project, advantages of using Linux for real-time biological data processing, algorithms utilized on a Linux system, goals of the project, slides of people wearing Neuroscan caps, and the progress that has been made and how Linux has helped.

  3. Evaluation of LLNL BSL-3 Maximum Credible Event Potential Consequence to the General Population and Surrounding Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, M.

    2010-08-16

    The purpose of this evaluation is to establish reproducibility of the analysis and consequence results to the general population and surrounding environment in the LLNL Biosafety Level 3 Facility Environmental Assessment (LLNL 2008).

  4. Natural Language Processing as a Discipline at LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Firpo, M A

    The field of Natural Language Processing (NLP) is described as it applies to the needs of LLNL in handling free-text. The state of the practice is outlined with the emphasis placed on two specific aspects of NLP: Information Extraction and Discourse Integration. A brief description is included of the NLP applications currently being used at LLNL. A gap analysis provides a look at where the technology needs work in order to meet the needs of LLNL. Finally, recommendations are made to meet these needs.

  5. Performance of high performance concrete (HPC) in low pH and sulfate environment : [technical summary].

    DOT National Transportation Integrated Search

    2013-01-01

    High-performance concrete (HPC) refers to any concrete formulation with enhanced characteristics, compared to normal concrete. One might think this refers to strength, but in Florida, the HPC standard emphasizes withstanding aggressive environments, ...

  6. Intracranial meningeal hemangiopericytoma: Recurrences at the initial and distant intracranial sites and extraneural metastases to multiple organs

    PubMed Central

    WEI, GUANGQUAN; KANG, XIAOWEI; LIU, XIANPING; TANG, XING; LI, QINLONG; HAN, JUNTAO; YIN, HONG

    2015-01-01

    Regardless of the controversial pathogenesis, intracranial meningeal hemangiopericytoma (M-HPC) is a rare, highly cellular and vascularized mesenchymal tumor that is characterized by a high tendency for recurrence and extraneural metastasis, despite radical excision and postoperative radiotherapy. M-HPC shares similar clinical manifestations and radiological findings with meningioma, which causes difficulty in differentiation of this entity from those prognostically favorable mimics prior to surgery. Treatment of M-HPC, particularly in metastatic settings, remains a challenge. A case is described of primary M-HPC with recurrence at the initial and distant intracranial sites and extraneural multiple-organ metastases in a 36-year-old female. The metastasis of M-HPC was extremely extensive, and to the best of our knowledge this is the first case of M-HPC with delayed metastasis to the bilateral kidneys. The data suggests that preoperative computed tomography and magnetic resonance imaging could provide certain diagnostic clues and useful information for more optimal treatment planning. The results may imply that novel drugs, such as temozolomide and bevacizumab, as a component of multimodality therapy of M-HPC may deserve further investigation. PMID:26171177

  7. High pressure homogenization to improve the stability of casein - hydroxypropyl cellulose aqueous systems.

    PubMed

    Ye, Ran; Harte, Federico

    2014-03-01

    The effect of high pressure homogenization on the improvement of the stability of hydroxypropyl cellulose (HPC) and micellar casein was investigated. HPC with two molecular weights (80 and 1150 kDa) and micellar casein were mixed in water to a concentration leading to phase separation (0.45% w/v HPC and 3% w/v casein) and immediately subjected to high pressure homogenization ranging from 0 to 300 MPa, in 100 MPa increments. The various dispersions were evaluated for stability, particle size, turbidity, protein content, and viscosity over a period of two weeks, and by Scanning Transmission Electron Microscopy (STEM) at the end of the storage period. The stability of casein-HPC complexes was enhanced with increasing homogenization pressure, especially for the complex containing high molecular weight HPC. The apparent particle size of the complexes was reduced from ~200 nm to ~130 nm when using 300 MPa, corresponding to the sharp decrease of absorbance when compared to the non-homogenized controls. High pressure homogenization reduced the viscosity of HPC-casein complexes regardless of the molecular weight of HPC, and STEM images revealed aggregates consistent with nano-scale protein-polysaccharide interactions.

  8. High pressure homogenization to improve the stability of casein - hydroxypropyl cellulose aqueous systems

    PubMed Central

    Ye, Ran; Harte, Federico

    2013-01-01

    The effect of high pressure homogenization on the improvement of the stability of hydroxypropyl cellulose (HPC) and micellar casein was investigated. HPC with two molecular weights (80 and 1150 kDa) and micellar casein were mixed in water to a concentration leading to phase separation (0.45% w/v HPC and 3% w/v casein) and immediately subjected to high pressure homogenization ranging from 0 to 300 MPa, in 100 MPa increments. The various dispersions were evaluated for stability, particle size, turbidity, protein content, and viscosity over a period of two weeks, and by Scanning Transmission Electron Microscopy (STEM) at the end of the storage period. The stability of casein-HPC complexes was enhanced with increasing homogenization pressure, especially for the complex containing high molecular weight HPC. The apparent particle size of the complexes was reduced from ~200 nm to ~130 nm when using 300 MPa, corresponding to the sharp decrease of absorbance when compared to the non-homogenized controls. High pressure homogenization reduced the viscosity of HPC-casein complexes regardless of the molecular weight of HPC, and STEM images revealed aggregates consistent with nano-scale protein-polysaccharide interactions. PMID:24159250

  9. An update on ABO incompatible hematopoietic progenitor cell transplantation.

    PubMed

    Staley, Elizabeth M; Schwartz, Joseph; Pham, Huy P

    2016-06-01

    Hematopoietic progenitor cell (HPC) transplantation has long been established as the optimal treatment for many hematologic malignancies. In the setting of allogeneic HLA-matched HPC transplantation, greater than 50% of unrelated donors and 30% of related donors demonstrate some degree of ABO incompatibility (ABOi), which is classified in one of three ways: major, minor, or bidirectional. Major ABOi refers to the presence of recipient isoagglutinins against the donor's A and/or B antigen. Minor ABOi occurs when the HPC product contains isoagglutinins targeting the recipient's A and/or B antigen. Bidirectional refers to the presence of both major and minor ABOi. Major adverse events associated with ABOi HPC transplantation include acute and delayed hemolysis, pure red cell aplasia, and delayed engraftment. ABOi HPC transplantation poses a unique challenge to the clinical transplantation unit, the HPC processing lab, and the transfusion medicine service. Therefore, it is essential that these services actively communicate with one another to ensure patient safety. This review will attempt to globally address the challenges related to ABOi HPC transplantation, with an increased focus on aspects related to the laboratory and transfusion medicine services. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Virtualizing access to scientific applications with the Application Hosting Environment

    NASA Astrophysics Data System (ADS)

    Zasada, S. J.; Coveney, P. V.

    2009-12-01

    The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion. Program summaryProgram title: Application Hosting Environment 2.0 Catalogue identifier: AEEJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public Licence, Version 2 No. of lines in distributed program, including test data, etc.: not applicable No. of bytes in distributed program, including test data, etc.: 1 685 603 766 Distribution format: tar.gz Programming language: Perl (server), Java (Client) Computer: x86 Operating system: Linux (Server), Linux/Windows/MacOS (Client) RAM: 134 217 728 (server), 67 108 864 (client) bytes Classification: 6.5 External routines: VirtualBox (server), Java (client) Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid. Solution method: To address the above problem, we have developed AHE, a lightweight interface, designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications. Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox ( http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide. Running time: Not applicable References:J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf. P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.

  11. A streamlined build system foundation for developing HPC software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Chris; Harrison, Cyrus; Hornung, Richard

    2017-02-09

    BLT bundles custom CMake macros, unit testing frameworks for C++ and Fortran, and a set of smoke tests for common HPC dependencies. The combination of these three provides a foundation for quickly bootstrapping a CMake-based build system for developing HPC software.

  12. DCL System Using Deep Learning Approaches for Land-based or Ship-based Real-Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2012-09-30

    platform (HPC) was developed, called the HPC-Acoustic Data Accelerator, or HPC-ADA for short. The HPC-ADA was designed based on fielded systems [1-4...software (Detection cLassification for MAchine learning - High Performance Computing). The software package was designed to utilize parallel and...Sedna [7] and is designed using a parallel architecture, allowing existing algorithms to distribute to the various processing nodes with minimal changes

  13. International Energy Agency's Heat Pump Centre (IEA-HPC) Annual National Team Working Group Meeting

    NASA Astrophysics Data System (ADS)

    Broders, M. A.

    1992-09-01

    The traveler, serving as Delegate from the United States Advanced Heat Pump National Team, participated in the activities of the fourth IEA-HPC National Team Working Group meeting. Highlights of this meeting included review and discussion of 1992 IEA-HPC activities and accomplishments, introduction of the Switzerland National Team, and development of the 1993 IEA-HPC work program. The traveler also gave a formal presentation about the Development and Activities of the IEA Advanced Heat Pump U.S. National Team.

  14. Biological and Chemical Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fitch, P J

    2002-12-19

    The LLNL Chemical & Biological National Security Program (CBNP) provides science, technology and integrated systems for chemical and biological security. Our approach is to develop and field advanced strategies that dramatically improve the nation's capabilities to prevent, prepare for, detect, and respond to terrorist use of chemical or biological weapons. Recent events show the importance of civilian defense against terrorism. The 1995 nerve gas attack in Tokyo's subway served to catalyze and focus the early LLNL program on civilian counter terrorism. In the same year, LLNL began CBNP using Laboratory-Directed R&D investments and a focus on biodetection. The Nunn-Lugar-Domenici Defense Against Weapons of Mass Destruction Act, passed in 1996, initiated a number of U.S. nonproliferation and counter-terrorism programs including the DOE (now NNSA) Chemical and Biological Nonproliferation Program (also known as CBNP). In 2002, the Department of Homeland Security was formed. The NNSA CBNP and many of the LLNL CBNP activities are being transferred as the new Department becomes operational. LLNL has a long history in national security including nonproliferation of weapons of mass destruction. In biology, LLNL had a key role in starting and implementing the Human Genome Project and, more recently, the Microbial Genome Program. LLNL has over 1,000 scientists and engineers with relevant expertise in biology, chemistry, decontamination, instrumentation, microtechnologies, atmospheric modeling, and field experimentation. Over 150 LLNL scientists and engineers work full time on chemical and biological national security projects.

  15. Different implications of the dorsal and ventral hippocampus on contextual memory retrieval after stress.

    PubMed

    Pierard, C; Dorey, R; Henkous, N; Mons, N; Béracochéa, D

    2017-09-01

    This study assessed the relative contributions of the dorsal (dHPC) and ventral (vHPC) hippocampus regions in mediating the rapid effects of an acute stress on contextual memory retrieval. Indeed, we previously showed that an acute stress (3 electric footshocks; 0.9 mA each) delivered 15 min before the 24 h-test inverted the memory retrieval pattern in a contextual discrimination task. Specifically, mice learned in a four-hole board two successive discriminations (D1 and D2) varying by the color and texture of the floor. Twenty-four hours later, nonstressed animals remembered D1 accurately but not D2, whereas stressed mice showed an opposite memory retrieval pattern, D2 being more accurately remembered than D1. We showed here that, at the time of memory testing in that task, stressed animals exhibited no significant changes in either pCREB activity or the time-course evolution of corticosterone in the vHPC; in contrast, a significant decrease in pCREB activity and a significant increase in corticosterone were observed in the dHPC as compared to nonstressed mice. Moreover, local infusion of the anesthetic lidocaine into the vHPC 15 min before the onset of the stressor did not modify the memory retrieval pattern in nonstress and stress conditions, whereas lidocaine infusion into the dHPC induced in nonstressed mice a memory retrieval pattern similar to that observed in stressed animals. The overall set of data shows that memory retrieval in the nonstress condition involved primarily the dHPC and that the inversion of the memory retrieval pattern after stress is linked to a dHPC but not vHPC dysfunction. © 2017 Wiley Periodicals, Inc.

  16. PGen: large-scale genomic variations analysis workflow and browser in SoyKB.

    PubMed

    Liu, Yang; Khan, Saad M; Wang, Juexin; Rynge, Mats; Zhang, Yuanxun; Zeng, Shuai; Chen, Shiyuan; Maldonado Dos Santos, Joao V; Valliyodan, Babu; Calyam, Prasad P; Merchant, Nirav; Nguyen, Henry T; Xu, Dong; Joshi, Trupti

    2016-10-06

    With the advances in next-generation sequencing (NGS) technology and significant reductions in sequencing costs, it is now possible to sequence large collections of germplasm in crops for detecting genome-scale genetic variations and to apply the knowledge towards improvements in traits. To efficiently facilitate large-scale NGS resequencing data analysis of genomic variations, we have developed "PGen", an integrated and optimized workflow using the Extreme Science and Engineering Discovery Environment (XSEDE) high-performance computing (HPC) virtual system, iPlant cloud data storage resources and the Pegasus workflow management system (Pegasus-WMS). The workflow allows users to identify single nucleotide polymorphisms (SNPs) and insertion-deletions (indels), perform SNP annotations and conduct copy number variation analyses on multiple resequencing datasets in a user-friendly and seamless way. We have developed both a Linux version in GitHub ( https://github.com/pegasus-isi/PGen-GenomicVariations-Workflow ) and a web-based implementation of the PGen workflow integrated within the Soybean Knowledge Base (SoyKB), ( http://soykb.org/Pegasus/index.php ). Using PGen, we identified 10,218,140 single-nucleotide polymorphisms (SNPs) and 1,398,982 indels from analysis of 106 soybean lines sequenced at 15X coverage. 297,245 non-synonymous SNPs and 3330 copy number variation (CNV) regions were identified from this analysis. SNPs identified using PGen from additional soybean resequencing projects, bringing the total to more than 500 soybean germplasm lines, have also been integrated. These SNPs are being utilized for trait improvement using genotype to phenotype prediction approaches developed in-house. In order to browse and access NGS data easily, we have also developed an NGS resequencing data browser ( http://soykb.org/NGS_Resequence/NGS_index.php ) within SoyKB to provide easy access to SNP and downstream analysis results for soybean researchers. The PGen workflow has been optimized for the most efficient analysis of soybean data through thorough testing and validation. This research serves as an example of best practices for development of genomics data analysis workflows by integrating remote HPC resources and efficient data management with ease of use for biological users. The PGen workflow can also be easily customized for analysis of data in other species.

  17. The Benefits and Complexities of Operating Geographic Information Systems (GIS) in a High Performance Computing (HPC) Environment

    NASA Astrophysics Data System (ADS)

    Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the NASA GLC Viewer discovery and analysis tool, the DigitalGlobe/NGA Data Discovery Tool, the NASA Disaster Response Group Mapping Platform (https://maps.disasters.nasa.gov), and support for NASA's Arctic - Boreal Vulnerability Experiment (ABoVE).
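
    Because the GIS platform reads NetCDF and OPeNDAP assets in place, a typical access pattern looks like the short Python sketch below. The endpoint URL and variable name are hypothetical placeholders rather than actual NCCS addresses; the point is that an OPeNDAP-aware reader subsets the remote dataset without duplicating it locally.

      import xarray as xr

      # Hypothetical OPeNDAP endpoint; real NCCS/MERRA-2 URLs differ.
      url = "https://opendap.example.nasa.gov/dods/MERRA2/tavg1_2d_slv"
      ds = xr.open_dataset(url)          # opens lazily; no local copy of the data is made

      # Subset a 2 m air temperature field over an ABoVE-like Arctic-boreal window.
      t2m = ds["T2M"].sel(lat=slice(55, 72), lon=slice(-170, -100)).isel(time=0)
      print(float(t2m.mean()))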

  18. [Diagnostic value of STAT6 immunohistochemistry in solitary fibrous tumor/meningeal hemangiopericytoma].

    PubMed

    Zhang, Xialing; Cheng, Haixia; Bao, Yun; Tang, Feng; Wang, Yin

    2016-02-01

    To investigate the diagnostic role of STAT6 immunohistochemistry in solitary fibrous tumors (SFT)/meningeal hemangiopericytomas (HPC), we evaluated the expression of STAT6, vimentin, CD34, EMA, PR, S-100, CD56, GFAP and Ki-67 in a cohort of 37 SFT/meningeal HPC, 30 meningiomas and 30 schwannomas by immunohistochemical staining. All SFT/meningeal HPC demonstrated nuclear positivity for STAT6, with the proportion of positive tumor cells ranging from 60% to 95% and no significant difference among cases. Vimentin was strongly positive in all cases. CD34, EMA and PR positivity was found in 32 cases, 1 case and 4 cases, respectively. S-100 protein, CD56 and GFAP were negative; the Ki-67 labeling index was 1%-8%. In contrast, the meningiomas and schwannomas were negative for STAT6. STAT6 is a relatively specific biomarker for SFT/meningeal HPC and may be used in the diagnosis and differential diagnosis of SFT/meningeal HPC, especially for atypical cases, allowing a precise pathologic diagnosis.

  19. Characteristic endobronchial ultrasound image of hemangiopericytoma/solitary fibrous tumor.

    PubMed

    Chen, Fengshi; Yoshizawa, Akihiko; Okubo, Kenichi; Date, Hiroshi

    2010-09-01

    A hemangiopericytomatous pattern is characteristic of hemangiopericytoma/solitary fibrous tumor (HPC/SFT), and certain histological features might indicate malignant potential, but the behavior of HPC/SFT is unpredictable. Endobronchial ultrasound (EBUS) is a useful diagnostic device in that the ultrasonographic image can be viewed and EBUS-guided transbronchial needle aspiration can obtain a biopsy sample. We herein report a patient undergoing multiple surgical resections of recurrent HPC/SFT. A 74-year-old man had undergone right upper lobectomy for HPC/SFT 15 years ago. He received a partial resection of the left lung and a resection of an anterior mediastinal mass for its recurrences 13 and six years ago, respectively. He had also undergone surgery for gastric carcinoma two years ago. He then presented with a tumor measuring 3 x 4 cm in the subcarinal area. Preoperative EBUS revealed a tumor with abundant thin-walled vessel-like structures, consistent with HPC/SFT. The tumor was completely resected and was finally diagnosed as low-grade malignant HPC/SFT.

  20. Biomass Waste Inspired Highly Porous Carbon for High Performance Lithium/Sulfur Batteries

    PubMed Central

    Zhao, Yan; Ren, Jun; Tan, Taizhe; Babaa, Moulay-Rachid; Bakenov, Zhumabay; Liu, Ning; Zhang, Yongguang

    2017-01-01

    The synthesis of highly porous carbon (HPC) materials from poplar catkin by KOH chemical activation and hydrothermal carbonization as a conductive additive to a lithium-sulfur cathode is reported. Elemental sulfur was composited with as-prepared HPC through a melt diffusion method to form a S/HPC nanocomposite. Structure and morphology characterization revealed a hierarchically sponge-like structure of HPC with high pore volume (0.62 cm3∙g−1) and large specific surface area (1261.7 m2∙g−1). When tested in Li/S batteries, the resulting composite demonstrated excellent cycling stability, delivering a specific capacity of 1154 mAh∙g−1 on the second cycle and retaining 74% of that value after 100 cycles at 0.1 C. The porous structure of HPC therefore plays an important role in enhancing the electrochemical properties: it provides conditions for effective charge transfer and effective trapping of soluble polysulfide intermediates, and remarkably improves the electrochemical performance of S/HPC composite cathodes. PMID:28878149

  1. Anterior hippocampal dysconnectivity in posttraumatic stress disorder: a dimensional and multimodal approach.

    PubMed

    Abdallah, C G; Wrocklage, K M; Averill, C L; Akiki, T; Schweinsburg, B; Roy, A; Martini, B; Southwick, S M; Krystal, J H; Scott, J C

    2017-02-28

    The anterior hippocampus (aHPC) has a central role in the regulation of anxiety-related behavior, stress response, emotional memory and fear. However, little is known about the presence and extent of aHPC abnormalities in posttraumatic stress disorder (PTSD). In this study, we used a multimodal approach, along with graph-based measures of global brain connectivity (GBC) termed functional GBC with global signal regression (f-GBCr) and diffusion GBC (d-GBC), in combat-exposed US Veterans with and without PTSD. Seed-based aHPC anatomical connectivity analyses were also performed. A whole-brain voxel-wise data-driven investigation revealed a significant association between elevated PTSD symptoms and reduced medial temporal f-GBCr, particularly in the aHPC. Similarly, aHPC d-GBC negatively correlated with PTSD severity. Both functional and anatomical aHPC dysconnectivity measures remained significant after controlling for hippocampal volume, age, gender, intelligence, education, combat severity, depression, anxiety, medication status, traumatic brain injury and alcohol/substance comorbidities. Depression-like PTSD dimensions were associated with reduced connectivity in the ventromedial and dorsolateral prefrontal cortex. In contrast, hyperarousal symptoms were positively correlated with ventromedial and dorsolateral prefrontal connectivity. We believe the findings provide the first evidence of functional and anatomical dysconnectivity in the aHPC of veterans with high PTSD symptomatology. The data support the putative utility of aHPC connectivity as a measure of overall PTSD severity. Moreover, prefrontal global connectivity may be of clinical value as a brain biomarker to potentially distinguish between PTSD subgroups.

  2. Hepatic hemangiopericytoma/solitary fibrous tumor: a review of our current understanding and case study.

    PubMed

    Bokshan, Steven L; Doyle, Majella; Becker, Nils; Nalbantoglu, Ilke; Chapman, William C

    2012-11-01

    In 2002, the World Health Organization reclassified the soft tissue tumors known as hemangiopericytoma (HPC) as a variant of solitary fibrous tumor (SFT). As this classification system is still debated and has not been universally applied, the following account will provide an updated review of our understanding of those tumors still classified as HPC in the literature with special emphasis on hepatic HPC/SFT. HPC is a soft tissue neoplasm of mesenchymal origin first described by Stout and Murray in 1942. HPC constitutes 1% of all vascular neoplasms and has been thought to coexist with trauma, prolonged steroid use, and hypertension. Although its presentation may be variable, intrahepatic HPC often presents with the patient's increasing awareness of a painless mass. Marked hypoglycemia may also accompany the neoplasm. Recent evidence suggests that uncontrolled growth may result from a loss of imprinting with overproduction of IGF-II in addition to alternative promoter usage. Diagnostic modalities including imaging, biopsy, and biochemical assays may be used to detect the presence of HPC. As most lesions are benign and slow growing, the prognosis is relatively favorable with 10-year survival between 54% and 70%. Current mainstays of treatment include hepatic resection when possible, especially with the use of adjuvant radiotherapy. Chemotherapeutic approaches have been poorly studied and are generally reserved for inoperable cases. Antiangiogenic compounds such as temozolomide and bevacizumab provide an exciting avenue of treatment. Finally, a case study will be reviewed highlighting the diagnosis, treatment, and spectrum nature of hepatic HPC.

  3. Evaluation of the Dosimetric Feasibility of Hippocampal Sparing Intensity-Modulated Radiotherapy in Patients with Locally Advanced Nasopharyngeal Carcinoma

    PubMed Central

    Gan, Hua; Denniston, Kyle A.; Li, Sicong; Tan, Wenyong; Wang, Zhaohua

    2014-01-01

    Purpose The objective of this study was to evaluate the dosimetric feasibility of using hippocampus (HPC) sparing intensity-modulated radiotherapy (IMRT) in patients with locally advanced nasopharyngeal carcinoma (NPC). Materials/Methods Eight cases of either T3 or T4 NPC were selected for this study. Standard IMRT treatment plans were constructed using the volume and dose constraints for the targets and organs at risk (OAR) per Radiation Therapy Oncology Group (RTOG) 0615 protocol. Experimental plans were constructed using the same criteria, with the addition of the HPC as an OAR. The two dose-volume histograms for each case were compared for the targets and OARs. Results All plans achieved the protocol dose criteria. The homogeneity index, conformity index, and coverage index for the planning target volumes (PTVs) were not significantly compromised by the avoidance of the HPC. The doses to all OARs, excluding the HPC, were similar. Both the dose (Dmax, D2%, D40%, Dmean, Dmedian, D98% and Dmin) and volume (V5, V10, V15, V20, V30, V40 and V50) parameters for the HPC were significantly lower in the HPC sparing plans (p < 0.05), except for Dmin (p = 0.06) and V5 (p = 0.12). Conclusions IMRT for patients with locally advanced NPC exposes the HPC to a significant radiation dose. HPC sparing IMRT planning significantly decreases this dose, with minimal impact on the therapeutic targets and other OARs. PMID:24587184
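
    The dose and volume parameters compared here (Dmean, D2%, V20 and so on) are all read off the structure's dose-volume histogram. A minimal numpy sketch of their definitions, assuming a flattened array of voxel doses sampled inside the contoured hippocampus, is shown below; treatment planning systems compute the same quantities from the full DVH.

      import numpy as np

      def dvh_metrics(dose_gy):
          """Dose-volume metrics for one structure from its voxel doses (Gy)."""
          d = np.asarray(dose_gy, dtype=float)
          metrics = {
              "Dmax": d.max(),
              "Dmean": d.mean(),
              "Dmedian": np.median(d),
              "Dmin": d.min(),
              "D2%": np.percentile(d, 98),   # dose to the hottest 2% of the volume
              "D98%": np.percentile(d, 2),   # dose covering 98% of the volume
          }
          for x in (5, 10, 15, 20, 30, 40, 50):
              metrics[f"V{x}"] = 100.0 * np.mean(d >= x)  # % of volume receiving >= x Gy
          return metrics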

  4. The influence of rapamycin on the early cardioprotective effect of hypoxic preconditioning on cardiomyocytes

    PubMed Central

    Wang, Jiang; Maimaitili, YiLiyaer; Yu, Jin; Guo, Hai; Ma, Hai-Ping; Chen, Chun-ling

    2016-01-01

    Introduction The purpose of this study was to examine the effects of rapamycin on the cardioprotective effect of hypoxic preconditioning (HPC) and on the mammalian target of rapamycin (mTOR)-mediated hypoxia-inducible factor 1 (HIF-1) signaling pathway. Material and methods Primary cardiomyocytes were isolated from rat pups and underwent rapamycin and/or HPC, followed by hypoxia/re-oxygenation (H/R) injury. Cell viability and cell injury were determined by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and lactate dehydrogenase (LDH) assays, and qRT-PCR was used to measure HIF-1α and mTOR mRNA expression. A Langendorff heart perfusion model was used to observe the effect of rapamycin. Results Rapamycin treatment nearly abolished the cardioprotective effect of HPC in cardiomyocytes, reduced cell viability (p = 0.007) and increased cell damage (p = 0.032). HIF-1α and mTOR mRNA expression increased in cardiomyocytes undergoing H/R injury within 2 h after HPC. After rapamycin treatment, mTOR mRNA expression and HPC-induced HIF-1α mRNA expression were both reduced (p < 0.001). A Langendorff heart perfusion model in rat hearts showed that rapamycin greatly attenuated the cardioprotective effect of HPC in terms of heart rate, LVDP, and dp/dtmax (all p < 0.029). Conclusions Rapamycin, through inhibition of mTOR, reduces the elevated HIF-1α expression at an early stage of HPC, and attenuates the early cardioprotective effect of HPC. PMID:28721162

  5. Members of the Cyr61/CTGF/NOV Protein Family: Emerging Players in Hepatic Progenitor Cell Activation and Intrahepatic Cholangiocarcinoma

    PubMed Central

    Jorgensen, Marda; Song, Joanna; Zhou, Junmei; Liu, Chen

    2016-01-01

    Hepatic stem/progenitor cells (HPC) reside quiescently in normal biliary trees and are activated in the form of ductular reactions during severe liver damage when the replicative ability of hepatocytes is inhibited. HPC niches are full of profibrotic stimuli favoring scarring and hepatocarcinogenesis. The Cyr61/CTGF/NOV (CCN) protein family consists of six members, CCN1/CYR61, CCN2/CTGF, CCN3/NOV, CCN4/WISP1, CCN5/WISP2, and CCN6/WISP3, which function as extracellular signaling modulators to mediate cell-matrix interaction during angiogenesis, wound healing, fibrosis, and tumorigenesis. This study investigated expression patterns of CCN proteins in HPC and cholangiocarcinoma (CCA). Mouse HPC were induced by the biliary toxin 3,5-diethoxycarbonyl-1,4-dihydrocollidine (DDC). Differential expression patterns of CCN proteins were found in HPC from DDC damaged mice and in human CCA tumors. In addition, we utilized reporter mice that carried Ccn2/Ctgf promoter driven GFP and detected strong Ccn2/Ctgf expression in epithelial cell adhesion molecule (EpCAM)+ HPC under normal conditions and in DDC-induced liver damage. Abundant CCN2/CTGF protein was also found in cytokeratin 19 (CK19)+ human HPC that were surrounded by α-smooth muscle actin (α-SMA)+ myofibroblast cells in intrahepatic CCA tumors. These results suggest that CCN proteins, particularly CCN2/CTGF, function in HPC activation and CCA development. PMID:27829832

  6. IGPP-LLNL 1998 annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryerson, F J; Cook, K H; Tweed, J

    1999-11-19

    The Institute of Geophysics and Planetary Physics (IGPP) is a Multicampus Research Unit of the University of California (UC). IGPP was founded in 1946 at UC Los Angeles with a charter to further research in the earth and planetary sciences and related fields. The Institute now has branches at UC campuses in Los Angeles, San Diego, and Riverside, and at Los Alamos and Lawrence Livermore national laboratories. The University-wide IGPP has played an important role in establishing interdisciplinary research in the earth and planetary sciences. For example, IGPP was instrumental in founding the fields of physical oceanography and space physics, which at the time fell between the cracks of established university departments. Because of its multicampus orientation, IGPP has sponsored important interinstitutional consortia in the earth and planetary sciences. Each of the five branches has a somewhat different intellectual emphasis as a result of the interplay between strengths of campus departments and Laboratory programs. The IGPP branch at Lawrence Livermore National Laboratory (LLNL) was approved by the Regents of the University of California in 1982. IGPP-LLNL emphasizes research in tectonics, geochemistry, and astrophysics. It provides a venue for studying the fundamental aspects of these fields, thereby complementing LLNL programs that pursue applications of these disciplines in national security and energy research. IGPP-LLNL is directed by Charles Alcock and was originally organized into three centers: Geosciences, stressing seismology; High-Pressure Physics, stressing experiments using the two-stage light-gas gun at LLNL; and Astrophysics, stressing theoretical and computational astrophysics. In 1994, the activities of the Center for High-Pressure Physics were merged with those of the Center for Geosciences. The Center for Geosciences, headed by Frederick Ryerson, focuses on research in geophysics and geochemistry. The Astrophysics Research Center, headed by Kem Cook, provides a home for theoretical and observational astrophysics and serves as an interface with the Physics Directorate's astrophysics efforts. The IGPP branch at LLNL (as well as the branch at Los Alamos) also facilitates scientific collaborations between researchers at the UC campuses and those at the national laboratories in areas related to earth science, planetary science, and astrophysics. It does this by sponsoring the University Collaborative Research Program (UCRP), which provides funds to UC campus scientists for joint research projects with LLNL. Additional information regarding IGPP-LLNL projects and people may be found at http://wwwigpp.llnl.gov/. The goals of the UCRP are to enrich research opportunities for UC campus scientists by making available to them some of LLNL's unique facilities and expertise, and to broaden the scientific program at LLNL through collaborative or interdisciplinary work with UC campus researchers. UCRP funds (provided jointly by the Regents of the University of California and by the Director of LLNL) are awarded annually on the basis of brief proposals, which are reviewed by a committee of scientists from UC campuses, LLNL programs, and external universities and research organizations. Typical annual funding for a collaborative research project ranges from $5,000 to $30,000. Funds are used for a variety of purposes, such as salary support for UC graduate students, postdoctoral fellows, and faculty; and costs for experimental facilities. A statistical overview of IGPP-LLNL's UCRP (colloquially known as the mini-grant program) is presented in Figures 1 and 2. Figure 1 shows the distribution of UCRP awards among the UC campuses, by total amount awarded and by number of proposals funded. Figure 2 shows the distribution of awards by center.

  7. Building CHAOS: An Operating System for Livermore Linux Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garlick, J E; Dunlap, C M

    2003-02-21

    The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.

  8. Development of an Autonomous Navigation Technology Test Vehicle

    DTIC Science & Technology

    2004-08-01

    as an independent thread on processors using the Linux operating system. The computer hardware selected for the nodes that host the MRS threads...communications system design. Linux was chosen as the operating system for all of the single board computers used on the Mule. Linux was specifically...used for system analysis and development. The simple realization of multi-thread processing and inter-process communications in Linux made it a

  9. HPC: Rent or Buy

    ERIC Educational Resources Information Center

    Fredette, Michelle

    2012-01-01

    "Rent or buy?" is a question people ask about everything from housing to textbooks. It is also a question universities must consider when it comes to high-performance computing (HPC). With the advent of Amazon's Elastic Compute Cloud (EC2), Microsoft Windows HPC Server, Rackspace's OpenStack, and other cloud-based services, researchers now have…

  10. High Performance Concrete in Washington State SR 18/SR 516 Overcrossing: Interim Report on Girder Monitoring

    DOT National Transportation Integrated Search

    2000-04-01

    In the mid 1990s the Federal Highway Administration (FHWA) established a High Performance Concrete (HPC) program aimed at demonstrating the positive effects of utilizing HPC in bridges. Research on the benefits of using HPC for bridges has shown a nu...

  11. High performance concrete in Washington state SR 18/SR 516 overcrossing : interim report on materials tests

    DOT National Transportation Integrated Search

    2000-04-01

    In the mid 1990s the Federal Highway Administration (FHWA) established a High Performance Concrete (HPC) program aimed at demonstrating the positive effects of utilizing HPC in bridges. Research on the benefits of using HPC for bridges has shown a nu...

  12. Supporting the volunteer career of male hospice-palliative care volunteers.

    PubMed

    Weeks, Lori E; MacQuarrie, Colleen

    2011-08-01

    We invited men to discuss their volunteer careers with hospice-palliative care (HPC) to better understand how to recruit and train, retain and support, and then successfully end their volunteer experience. Nine male current or former HPC volunteers participated in face-to-face interviews which were transcribed and analyzed. The men described a complex interplay of individual characteristics with the unique roles available to HPC volunteers. The men's recruitment experiences coalesced around both individually based and organizationally based themes. Results pertaining to retention revealed the interchange between their personalities, the perks and pitfalls of the unique experiences of an HPC volunteer, and the value of the organization's support for these volunteers. Our interpretation of these experiences can help HPC organizations enhance their recruitment, retention, and support of male volunteers.

  13. Perm State University HPC-hardware and software services: capabilities for aircraft engine aeroacoustics problems solving

    NASA Astrophysics Data System (ADS)

    Demenev, A. G.

    2018-02-01

    The present work analyzes the capabilities of the high-performance computing (HPC) infrastructure at Perm State University for solving aircraft engine aeroacoustics problems. We explore the ability to develop new computational aeroacoustics methods/solvers for computer-aided engineering (CAE) systems to handle complicated industrial problems of engine noise prediction. Leading aircraft engine engineering companies, including “UEC-Aviadvigatel” JSC (our industrial partner in Perm, Russia), require such methods/solvers to optimize aircraft engine geometry for fan noise reduction. We analyzed Perm State University's HPC hardware resources and software services with a view to using them efficiently. The results demonstrate that the Perm State University HPC infrastructure is mature enough to tackle industrial-scale problems in developing a CAE system with HPC methods and CFD solvers.

  14. ICP-MS Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carman, April J.; Eiden, Gregory C.

    2014-11-01

    This short document describes the materials that will be transmitted to LLNL and DNN HQ regarding the ICP-MS Workshop held at PNNL June 17-19. Its goal is to pass on to LLNL information about the planning and preparations for the workshop at PNNL, in preparation for the SIMS workshop at LLNL.

  15. Integrating a Trusted Computing Base Extension Server and Secure Session Server into the LINUX Operating System

    DTIC Science & Technology

    2001-09-01

    Readily Available Linux has been copyrighted under the terms of the GNU General Public License (GPL). This is a license written by the Free...GNOME and KDE. d. Portability Linux is highly compatible with many common operating systems. For...using suitable libraries, Linux is able to run programs written for other operating systems. [Ref. 8] The GNU Project is coordinated by the

  16. [Lessening effect of hypoxia-preconditioned rat cerebrospinal fluid on oxygen-glucose deprivation-induced injury of cultured hippocampal neurons in neonate rats and possible mechanism].

    PubMed

    Niu, Jing-Zhong; Zhang, Yan-Bo; Li, Mei-Yi; Liu, Li-Li

    2011-12-25

    The present study investigated the effect of cerebrospinal fluid (CSF) from rats with hypoxic preconditioning (HPC) on apoptosis of cultured hippocampal neurons from neonate rats under oxygen-glucose deprivation (OGD). Adult Wistar rats were exposed to 3 h of hypoxia for HPC, and their CSF was then collected. Cultured hippocampal neurons from the neonate rats were randomly divided into four groups (n = 6): normal control group, OGD group, normal CSF group and HPC CSF group. The OGD group received 1.5 h of incubation in glucose-free Earle's solution containing 1 mmol/L Na2S2O4, and the normal and HPC CSF groups were subjected to 1 d of the corresponding CSF treatments followed by 1.5 h of OGD. Apoptosis of neurons was analyzed by confocal laser scanning microscopy and by flow cytometry using Annexin V/PI double staining. Moreover, protein expression of Bcl-2 and Bax was detected by immunofluorescence. The results showed that few apoptotic cells were observed in the normal control group, whereas the number of apoptotic cells was greatly increased in the OGD group. Both normal and HPC CSF decreased the apoptosis of cultured hippocampal neurons injured by OGD (P < 0.01). Notably, the protective effect of HPC CSF was stronger than that of normal CSF (P < 0.01). Compared to the OGD group, the normal and HPC CSF groups both showed significantly higher levels of Bcl-2 (P < 0.01), and the Bcl-2 expression level in the HPC CSF group was even higher than that in the normal CSF group (P < 0.01). Conversely, the expression of Bax in the normal and HPC CSF groups was significantly lower than that in the OGD group (P < 0.01), and Bax expression in the HPC CSF group was even lower than that in the normal CSF group (P < 0.01). These results suggest that CSF from hypoxic-preconditioned rats reduces the apoptotic rate of OGD-injured hippocampal neurons by up-regulating expression of Bcl-2 and down-regulating expression of Bax.

  17. Review of LLNL Mixed Waste Streams for the Application of Potential Waste Reduction Controls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belue, A; Fischer, R P

    2007-01-08

    In July 2004, LLNL adopted the International Standard ISO 14001 as a Work Smart Standard in lieu of DOE Order 450.1. In support of this new requirement the Director issued a new environmental policy that was documented in Section 3.0 of Document 1.2, "ES&H Policies of LLNL", in the ES&H Manual. In recent years the Environmental Management System (EMS) process has become formalized as LLNL adopted ISO 14001 as part of the contract under which the laboratory is operated for the Department of Energy (DOE). On May 9, 2005, LLNL revised its Integrated Safety Management System Description to enhance existing environmental requirements to meet ISO 14001. Effective October 1, 2005, each new project or activity is required to be evaluated from an environmental aspect, particularly if a potential exists for significant environmental impacts. Authorizing organizations are required to consider the management of all environmental aspects, the applicable regulatory requirements, and reasonable actions that can be taken to reduce negative environmental impacts. During 2006, LLNL has worked to implement the corrective actions addressing the deficiencies identified in the DOE/LSO audit. LLNL has begun to update the present EMS to meet the requirements of ISO 14001:2004. The EMS commits LLNL--and each employee--to responsible stewardship of all the environmental resources in our care. The generation of mixed radioactive waste was identified as a significant environmental aspect. Mixed waste for the purposes of this report is defined as waste materials containing both hazardous chemical and radioactive constituents. Significant environmental aspects require that an Environmental Management Plan (EMP) be developed. The objective of the EMP developed for mixed waste (EMP-005) is to evaluate options for reducing the amount of mixed waste generated. This document presents the findings of the evaluation of mixed waste generated at LLNL and a proposed plan for reduction.

  18. Hemodynamic responses in amygdala and hippocampus distinguish between aversive and neutral cues during Pavlovian fear conditioning in behaving rats

    PubMed Central

    McHugh, Stephen B; Marques-Smith, Andre; Li, Jennifer; Rawlins, J N P; Lowry, John; Conway, Michael; Gilmour, Gary; Tricklebank, Mark; Bannerman, David M

    2013-01-01

    Lesion and electrophysiological studies in rodents have identified the amygdala and hippocampus (HPC) as key structures for Pavlovian fear conditioning, but human functional neuroimaging studies have not consistently found activation of these structures. This could be because hemodynamic responses cannot detect the sparse neuronal activity proposed to underlie conditioned fear. Alternatively, differences in experimental design or fear levels could account for the discrepant findings between rodents and humans. To help distinguish between these alternatives, we used tissue oxygen amperometry to record hemodynamic responses from the basolateral amygdala (BLA), dorsal HPC (dHPC) and ventral HPC (vHPC) in freely-moving rats during the acquisition and extinction of conditioned fear. To enable specific comparison with human studies we used a discriminative paradigm, with one auditory cue [conditioned stimulus (CS)+] that was always followed by footshock, and another auditory cue (CS−) that was never followed by footshock. BLA tissue oxygen signals were significantly higher during CS+ than CS− trials during training and early extinction. In contrast, they were lower during CS+ than CS− trials by the end of extinction. dHPC and vHPC tissue oxygen signals were significantly lower during CS+ than CS− trials throughout extinction. Thus, hemodynamic signals in the amygdala and HPC can detect the different patterns of neuronal activity evoked by threatening vs. neutral stimuli during fear conditioning. Discrepant neuroimaging findings may be due to differences in experimental design and/or fear levels evoked in participants. Our methodology offers a way to improve translation between rodent models and human neuroimaging. PMID:23173719

  19. Criticality Safety Evaluation of the LLNL Inherently Safe Subcritical Assembly (ISSA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percher, Catherine

    2012-06-19

    The LLNL Nuclear Criticality Safety Division has developed a training center to illustrate criticality safety and reactor physics concepts through hands-on experimental training. The experimental assembly, the Inherently Safe Subcritical Assembly (ISSA), uses surplus highly enriched research reactor fuel configured in a water tank. The training activities will be conducted by LLNL following the requirements of an Integration Work Sheet (IWS) and associated Safety Plan. Students will be allowed to handle the fissile material under the supervision of LLNL instructors. This report provides the technical criticality safety basis for instructional operations with the ISSA experimental assembly.

  20. Training | High-Performance Computing | NREL

    Science.gov Websites

    Find training resources for using NREL's high-performance computing (HPC) systems as well as related online tutorials. Upcoming training includes the HPC User Workshop on June 12th. A group also meets at the conference to discuss Best Practices in HPC Training; this group developed a list of resources.

  1. Using Performance Tools to Support Experiments in HPC Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, III, Thomas J; Boehm, Swen; Engelmann, Christian

    2014-01-01

    The high performance computing (HPC) community is working to address fault tolerance and resilience concerns for current and future large scale computing platforms. This is driving enhancements in the programming environments, specifically research on enhancing message passing libraries to support fault tolerant computing capabilities. The community has also recognized that tools for resilience experimentation are greatly lacking. However, we argue that there are several parallels between performance tools and resilience tools. As such, we believe the rich set of HPC performance-focused tools can be extended (repurposed) to benefit the resilience community. In this paper, we describe the initial motivation to leverage standard HPC performance analysis techniques to aid in developing diagnostic tools to assist fault tolerance experiments for HPC applications. These diagnosis procedures help to provide context for the system when the errors (failures) occurred. We describe our initial work in leveraging an MPI performance trace tool to assist in providing global context during fault injection experiments. Such tools will assist the HPC resilience community as they extend existing and new application codes to support fault tolerance.
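
    The flavor of that approach can be conveyed with a small, hypothetical mpi4py wrapper (this is not the trace tool described in the paper): each rank timestamps its point-to-point calls and writes a per-rank log, so that when a fault is injected the merged logs show what every rank was doing around the failure.

      from mpi4py import MPI
      import atexit, json, time

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      _events = []

      def traced_send(obj, dest, tag=0):
          t0 = time.time()
          comm.send(obj, dest=dest, tag=tag)
          _events.append({"op": "send", "peer": dest, "tag": tag,
                          "t": t0, "dt": time.time() - t0})

      def traced_recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG):
          t0 = time.time()
          obj = comm.recv(source=source, tag=tag)
          _events.append({"op": "recv", "peer": source, "tag": tag,
                          "t": t0, "dt": time.time() - t0})
          return obj

      # One trace file per rank; merged offline to reconstruct the global context
      # around an injected fault.
      atexit.register(lambda: json.dump(_events, open(f"trace.{rank}.json", "w")))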

  2. Site 300 Spill Prevention, Control, and Countermeasures (SPCC) Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, D.; Mertesdorf, E.

    This Spill Prevention, Control, and Countermeasure (SPCC) Plan describes the measures that are taken at Lawrence Livermore National Laboratory’s (LLNL) Experimental Test Site (Site 300) near Tracy, California, to prevent, control, and handle potential spills from aboveground containers that can contain 55 gallons or more of oil. This SPCC Plan complies with the Oil Pollution Prevention regulation in Title 40 of the Code of Federal Regulations, Part 112 (40 CFR 112) and with 40 CFR 761.65(b) and (c), which regulates the temporary storage of polychlorinated biphenyls (PCBs). This Plan has also been prepared in accordance with Division 20, Chapter 6.67 of the California Health and Safety Code (HSC 6.67) requirements for oil pollution prevention (referred to as the Aboveground Petroleum Storage Act [APSA]), and the United States Department of Energy (DOE) Order No. 436.1. This SPCC Plan establishes procedures, methods, equipment, and other requirements to prevent the discharge of oil into or upon the navigable waters of the United States or adjoining shorelines for aboveground oil storage and use at Site 300. This SPCC Plan has been prepared for the entire Site 300 facility and replaces the three previous plans prepared for Site 300: LLNL SPCC for Electrical Substations Near Buildings 846 and 865 (LLNL 2015), LLNL SPCC for Building 883 (LLNL 2015), and LLNL SPCC for Building 801 (LLNL 2014).

  3. Site 300 SPCC Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, D.

    This Spill Prevention, Control, and Countermeasure (SPCC) Plan describes the measures that are taken at Lawrence Livermore National Laboratory’s (LLNL) Experimental Test Site (Site 300) near Tracy, California, to prevent, control, and handle potential spills from aboveground containers that can contain 55 gallons or more of oil. This SPCC Plan complies with the Oil Pollution Prevention regulation in Title 40 of the Code of Federal Regulations, Part 112 (40 CFR 112) and with 40 CFR 761.65(b) and (c), which regulates the temporary storage of polychlorinated biphenyls (PCBs). This Plan has also been prepared in accordance with Division 20, Chapter 6.67 of the California Health and Safety Code (HSC 6.67) requirements for oil pollution prevention (referred to as the Aboveground Petroleum Storage Act [APSA]), and the United States Department of Energy (DOE) Order No. 436.1. This SPCC Plan establishes procedures, methods, equipment, and other requirements to prevent the discharge of oil into or upon the navigable waters of the United States or adjoining shorelines for aboveground oil storage and use at Site 300. This SPCC Plan has been prepared for the entire Site 300 facility and replaces the three previous plans prepared for Site 300: LLNL SPCC for Electrical Substations Near Buildings 846 and 865 (LLNL 2015), LLNL SPCC for Building 883 (LLNL 2015), and LLNL SPCC for Building 801 (LLNL 2014).

  4. Hemangiopericytoma in the lateral ventricle.

    PubMed

    Suzuki, Sakiko; Wanifuchi, Hiroshi; Shimizu, Takashi; Kubo, Osami

    2009-11-01

    A 31-year-old female presented with a particularly rare hemangiopericytoma (HPC) in the right lateral ventricle manifesting as a 6-month history of visual disturbance and headache. Left hemianopsia and choked disc were identified by an ophthalmologist who referred her to us. Magnetic resonance imaging demonstrated a 5-cm homogeneously enhanced mass in the trigone of the right lateral ventricle. The tumor was totally removed by two stage surgery. The histological findings were consistent with HPC. HPC is very important to differentiate from meningioma and solitary fibrous tumors because HPC is more aggressive. The histological and immunochemical findings are important for the differential diagnosis. The present case showed no local recurrence or metastasis without radiation therapy for 4 years, indicating that radiation therapy is not absolutely imperative for patients with intraventricular HPC showing low MIB-1 staining index after total removal.

  5. Bidirectional Control of Anxiety-Related Behaviors in Mice: Role of Inputs Arising from the Ventral Hippocampus to the Lateral Septum and Medial Prefrontal Cortex.

    PubMed

    Parfitt, Gustavo Morrone; Nguyen, Robin; Bang, Jee Yoon; Aqrabawi, Afif J; Tran, Matthew M; Seo, D Kanghoon; Richards, Blake A; Kim, Jun Chul

    2017-07-01

    Anxiety is an adaptive response to potentially threatening situations. Exaggerated and uncontrolled anxiety responses become maladaptive and lead to anxiety disorders. Anxiety is shaped by a network of forebrain structures, including the hippocampus, septum, and prefrontal cortex. In particular, neural inputs arising from the ventral hippocampus (vHPC) to the lateral septum (LS) and medial prefrontal cortex (mPFC) are thought to serve as principal components of the anxiety circuit. However, the role of vHPC-to-LS and vHPC-to-mPFC signals in anxiety is unclear, as no study has directly compared their behavioral contribution at circuit level. We targeted LS-projecting vHPC cells and mPFC-projecting vHPC cells by injecting the retrogradely propagating canine adenovirus encoding Cre recombinase into the LS or mPFC, and injecting a Cre-responsive AAV (AAV8-hSyn-FLEX-hM3D or hM4D) into the vHPC. Consequences of manipulating these neurons were examined in well-established tests of anxiety. Chemogenetic manipulation of LS-projecting vHPC cells led to bidirectional changes in anxiety: activation of LS-projecting vHPC cells decreased anxiety whereas inhibition of these cells produced opposite anxiety-promoting effects. The observed anxiety-reducing function of LS-projecting cells was in contrast with the function of mPFC-projecting cells, which promoted anxiety. In addition, double retrograde tracing demonstrated that LS- and mPFC-projecting cells represent two largely anatomically distinct cell groups. Altogether, our findings suggest that the vHPC houses discrete populations of cells that either promote or suppress anxiety through differences in their projection targets. Disruption of the intricate balance in the activity of these two neuron populations may drive inappropriate behavioral responses seen in anxiety disorders.

  6. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  7. Meningeal hemangiopericytoma and solitary fibrous tumors carry the NAB2-STAT6 fusion and can be diagnosed by nuclear expression of STAT6 protein.

    PubMed

    Schweizer, Leonille; Koelsche, Christian; Sahm, Felix; Piro, Rosario M; Capper, David; Reuss, David E; Pusch, Stefan; Habel, Antje; Meyer, Jochen; Göck, Tanja; Jones, David T W; Mawrin, Christian; Schittenhelm, Jens; Becker, Albert; Heim, Stephanie; Simon, Matthias; Herold-Mende, Christel; Mechtersheimer, Gunhild; Paulus, Werner; König, Rainer; Wiestler, Otmar D; Pfister, Stefan M; von Deimling, Andreas

    2013-05-01

    Non-central nervous system hemangiopericytoma (HPC) and solitary fibrous tumor (SFT) are considered by pathologists as two variants of a single tumor entity now subsumed under the entity SFT. Recent detection of frequent NAB2-STAT6 fusions in both, HPC and SFT, provided additional support for this view. On the other hand, current neuropathological practice still distinguishes between HPC and SFT. The present study set out to identify genes involved in the formation of meningeal HPC. We performed exome sequencing and detected the NAB2-STAT6 fusion in DNA of 8/10 meningeal HPC thereby providing evidence of close relationship of these tumors with peripheral SFT. Due to the considerable effort required for exome sequencing, we sought to explore surrogate markers for the NAB2-STAT6 fusion protein. We adopted the Duolink proximity ligation assay and demonstrated the presence of NAB2-STAT6 fusion protein in 17/17 HPC and the absence in 15/15 meningiomas. More practical, presence of the NAB2-STAT6 fusion protein resulted in a strong nuclear signal in STAT6 immunohistochemistry. The nuclear reallocation of STAT6 was detected in 35/37 meningeal HPC and 25/25 meningeal SFT but not in 87 meningiomas representing the most important differential diagnosis. Tissues not harboring the NAB2-STAT6 fusion protein presented with nuclear expression of NAB2 and cytoplasmic expression of STAT6 proteins. In conclusion, we provide strong evidence for meningeal HPC and SFT to constitute variants of a single entity which is defined by NAB2-STAT6 fusion. In addition, we demonstrate that this fusion can be rapidly detected by STAT6 immunohistochemistry which shows a consistent nuclear reallocation. This immunohistochemical assay may prove valuable for the differentiation of HPC and SFT from other mesenchymal neoplasms.

  8. ARC-2010-ACD10-0020-034

    NASA Image and Video Library

    2010-02-10

    Lawrence Livermore National Laboratory (LLNL), Navistar and the Department of Energy conduct tests in the NASA Ames National Full-Scale Aerodynamics Complex 80 x 120 foot wind tunnel. The LLNL project is aimed at aerodynamic truck and trailer devices that can reduce fuel consumption at highway speed by 10 percent. LLNL's test piece is being installed on the truck.

  9. Comparative performance of conventional OPC concrete and HPC designed by densified mixture design algorithm

    NASA Astrophysics Data System (ADS)

    Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti

    2017-12-01

    This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) designed by the conventional (ACI) method and the densified mixture design algorithm (DMDA) method, respectively. Engineering properties and durability performance of both the OPC and HPC samples were studied using tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC exhibited good fresh properties and showed better performance in terms of strength and durability compared with the OPC.

  10. ATLAS computing on CSCS HPC

    NASA Astrophysics Data System (ADS)

    Filipcic, A.; Haug, S.; Hostettler, M.; Walker, R.; Weber, M.

    2015-12-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was the highest ranked European system on TOP500 in 2014, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, a partial GPU acceleration of the Geant4 detector simulations has been implemented.

  11. UPLC-QTOFMS-based metabolomic analysis of the serum of hypoxic preconditioning mice

    PubMed Central

    Liu, Jie; Zhang, Gang; Chen, Dewei; Chen, Jian; Yuan, Zhi-Bin; Zhang, Er-Long; Gao, Yi-Xing; Xu, Gang; Sun, Bing-Da; Liao, Wenting; Gao, Yu-Qi

    2017-01-01

    Hypoxic preconditioning (HPC) is well-known to exert a protective effect against hypoxic injury; however, the underlying molecular mechanism remains unclear. The present study utilized a serum metabolomics approach to detect the alterations associated with HPC. In the present study, an animal model of HPC was established by exposing adult BALB/c mice to acute repetitive hypoxia four times. The serum samples were collected by orbital blood sampling. Metabolite profiling was performed using ultra-performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-QTOFMS), in conjunction with univariate and multivariate statistical analyses. The results of the present study confirmed that the HPC mouse model was established and refined, suggesting significant differences between the control and HPC groups at the molecular levels. HPC caused significant metabolic alterations, as represented by the significant upregulation of valine, methionine, tyrosine, isoleucine, phenylalanine, lysophosphatidylcholine (LysoPC; 16:1), LysoPC (22:6), linoelaidylcarnitine, palmitoylcarnitine, octadecenoylcarnitine, taurine, arachidonic acid, linoleic acid, oleic acid and palmitic acid, and the downregulation of acetylcarnitine, malate, citrate and succinate. Using MetaboAnalyst 3.0, a number of key metabolic pathways were observed to be acutely perturbed, including valine, leucine and isoleucine biosynthesis, in addition to taurine, hypotaurine, phenylalanine, linoleic acid and arachidonic acid metabolism. The results of the present study provided novel insights into the mechanisms involved in the acclimatization of organisms to hypoxia, and demonstrated the protective mechanism of HPC. PMID:28901489

  12. Hydroxypropyl cellulose as an option for supplementation of cryoprotectant solutions for embryo vitrification in human assisted reproductive technologies.

    PubMed

    Mori, Chiemi; Yabuuchi, Akiko; Ezoe, Kenji; Murata, Nana; Takayama, Yuko; Okimura, Tadashi; Uchiyama, Kazuo; Takakura, Kei; Abe, Hiroyuki; Wada, Keiko; Okuno, Takashi; Kobayashi, Tamotsu; Kato, Keiichi

    2015-06-01

    Hydroxypropyl cellulose (HPC) was investigated as a replacement for serum substitute supplement (SSS) for use in cryoprotectant solutions for embryo vitrification. Mouse blastocysts from inbred (n = 1056), hybrid (n = 128) strains, and 121 vitrified blastocysts donated by infertile patients (n = 102) were used. Mouse and human blastocysts, with or without zona pellucida, were vitrified and warmed in either 1% or 5% HPC or in 5% or 20% SSS-supplemented media using the Cryotop (Kitazato BioPharma Co. Ltd, Fuji, Japan) method, and the survival and oxygen consumption rates were assessed. Viscosity of each vitrification solution was compared. Survival rates of mouse hybrid blastocysts and human zona pellucida-intact blastocysts were comparable among the groups. Mouse and human zona pellucida-free blastocysts, which normally exhibit poor cryoresistance, showed significantly higher survival rates in 5% HPC than 5% SSS (P < 0.05). The 5% HPC-supplemented vitrification solution showed a significantly higher viscosity (P < 0.05). The blastocysts were easily detached from the Cryotop strip during warming when HPC-supplemented vitrification solution was used. The oxygen consumption rates were similar between non-vitrified and 5% HPC groups. The results suggest possible use of HPC for supplementation of cryoprotectant solutions and provide useful information to improve vitrification protocols. Copyright © 2015 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  13. Tuning Linux to meet real time requirements

    NASA Astrophysics Data System (ADS)

    Herbel, Richard S.; Le, Dang N.

    2007-04-01

    There is a desire to use Linux in military systems. Customers are requesting that contractors use open source software to the maximum possible extent in contracts. Linux is probably the best operating system choice to meet this need. It is widely used. It is free. It is royalty free, and, best of all, it is completely open source. However, there is a problem. Linux was not originally built to be a real-time operating system. There are many places where interrupts can and will be blocked for an indeterminate amount of time. There have been several attempts to bridge this gap. One of them is RTLinux, which builds a microkernel underneath Linux. The microkernel handles all interrupts and then passes them up to the Linux operating system. This does ensure good interrupt latency; however, it is not free [1]. Another is RTAI, which provides a similar type of interface; however, support for the PowerPC platform, which is widely used in the real-time embedded community, was stated as "recovering" [2]. Thus it is not suited for military usage. This paper provides a method for tuning a standard Linux kernel so it can meet the real-time requirements of an embedded system.
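
    Two of the standard knobs such tuning relies on, a real-time scheduling class and memory locking, can be exercised from user space as in the illustrative Python sketch below (Linux only; requires root or CAP_SYS_NICE/CAP_IPC_LOCK; the core number and priority are arbitrary examples, not values from the paper).

      import ctypes
      import os

      # Pin the process to one dedicated core and give it a SCHED_FIFO priority,
      # so it preempts ordinary (SCHED_OTHER) tasks on that core.
      os.sched_setaffinity(0, {3})
      os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))

      # Lock current and future pages in RAM to avoid page-fault latency spikes.
      MCL_CURRENT, MCL_FUTURE = 1, 2
      libc = ctypes.CDLL("libc.so.6", use_errno=True)
      if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
          raise OSError(ctypes.get_errno(), "mlockall failed")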

  14. New Ultra-Efficient HPC Data Center Debuts | News | NREL

    Science.gov Websites

    Image caption fragments from the web page (credit: Dennis Schroeder) describe scientists and researchers at the U.S. Department of Energy's National Renewable Energy Laboratory, the system that supports the HPC data center and ties its waste heat to the rest of the ESIF, work to be accomplished using NREL's HPC, and "Expanding NREL's View into the Unseen".

  15. 75 FR 78881 - Airworthiness Directives; Pratt & Whitney PW4000 Series Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-17

    ... slots on the 10th stage disk of the high-pressure compressor (HPC) drum rotor disk assembly. This AD... with a ring case configuration rear high-pressure compressor (HPC) installed, that includes a 9th stage... remove the low-pressure turbine shaft, or overhaul the HPC. Most operators will incur no additional costs...

  16. NREL, Sandia, and Johnson Controls See Significant Water Savings for HPC

    Science.gov Websites

    NREL, Sandia, and Johnson Controls See Significant Water Savings for HPC Cooling: NREL partnered with Sandia National Laboratories and Johnson Controls on HPC cooling, saving about one million gallons of water a year.

  17. Open Radio Communications Architecture Core Framework V1.1.0 Volume 1 Software Users Manual

    DTIC Science & Technology

    2005-02-01

    on a PC utilizing the KDE desktop that comes with Red Hat Linux. The default desktop for most Red Hat Linux installations is the GNOME desktop. The...SCA) v2.2. The software was designed for a desktop computer running the Linux operating system (OS). It was developed in C++, uses ACE/TAO for CORBA...middleware, Xerces for the XML parser, and Red Hat Linux for the operating system. The software is referred to as Open Radio Communication

  18. Transcriptome mining of immune-related genes in the muricid snail Concholepas concholepas.

    PubMed

    Détrée, Camille; López-Landavery, Edgar; Gallardo-Escárate, Cristian; Lafarga-De la Cruz, Fabiola

    2017-12-01

    The population of the Chilean endemic marine gastropod Concholepas concholepas, locally called "loco", has dramatically decreased in the past 50 years as a result of intense activity of local fisheries and high environmental variability observed along the Chilean coast, including episodes of hypoxia, changes in sea surface temperature, ocean acidification and diseases. In this study, we set out to explore the molecular basis of the capacity of C. concholepas to cope with biotic stressors such as exposure to the pathogenic bacterium Vibrio anguillarum. Here, 454 pyrosequencing was conducted and 61 transcripts related to the immune response in this muricid species were identified. Among these, the expression of six genes (CcNFκβ, CcIκβ, CcLITAF, CcTLR, CcCas8 and CcCath) involved in the regulation of inflammatory, apoptotic and immune processes upon stimuli was evaluated during the first 33 h post challenge (hpc). The results showed that CcTLR, CcCas8 and CcCath have an initial response at 4 hpc, evidencing an up-regulation from 4 to 24 hpc. Notably, the response of CcNFKB occurred 2 h later with a statistically significant up-regulation at 6 hpc and 10 hpc. Furthermore, the challenge with V. anguillarum induced a statistically significant down-regulation of CcIKB between 2 and 10 hpc as well as a down-regulation of CcLITAF between 2 and 4 hpc, followed in both cases by an up-regulation between 24 and 33 hpc. This work describes the first transcriptomic effort to characterize the immune response of C. concholepas and constitutes a valuable transcriptomic resource for future efforts to develop sustainable aquaculture and conservation tools for this endemic marine snail species. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Temperature dependencies of Henry's law constants and octanol/water partition coefficients for key plant volatile monoterpenoids.

    PubMed

    Copolovici, Lucian O; Niinemets, Ulo

    2005-12-01

    To model the emission dynamics and changes in fractional composition of monoterpenoids from plant leaves, temperature dependencies of equilibrium coefficients must be known. Henry's law constants (H(pc), Pa m3 mol(-1)) and octanol/water partition coefficients (K(OW), mol mol(-1)) were determined for 10 important plant monoterpenes at physiological temperature ranges (25-50 degrees C for H(pc) and 20-50 degrees C for K(OW)). A standard EPICS procedure was established to determine H(pc) and a shake flask method was used for the measurements of K(OW). The enthalpy of volatilization (deltaH(vol)) varied from 18.0 to 44.3 kJ mol(-1) among the monoterpenes, corresponding to a range of temperature-dependent increase in H(pc) between 1.3- and 1.8-fold per 10 degrees C rise in temperature. The enthalpy of water-octanol phase change varied from -11.0 to -23.8 kJ mol(-1), corresponding to a decrease of K(OW) between 1.15- and 1.32-fold per 10 degrees C increase in temperature. Correlations among physico-chemical characteristics of a wide range of monoterpenes were analyzed to seek ways of deriving H(pc) and K(OW) values from other monoterpene physico-chemical characteristics. H(pc) was strongly correlated with monoterpene saturated vapor pressure (P(v)), and for lipophilic monoterpenes, deltaH(vol) scaled positively with the enthalpy of vaporization that characterizes the temperature dependence of P(v). Thus, P(v) versus temperature relations may be employed to derive the temperature relations of H(pc) for these monoterpenes. These data collectively indicate that monoterpene differences in H(pc) and K(OW) temperature relations can importantly modify monoterpene emissions from and deposition on plant leaves.

  20. Emergency Response Capability Baseline Needs Assessment - Requirements Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharry, John A.

    This document was prepared by John A. Sharry, LLNL Fire Marshal and LLNL Division Leader for Fire Protection and reviewed by LLNL Emergency Management Department Head James Colson. The document follows and expands upon the format and contents of the DOE Model Fire Protection Baseline Capabilities Assessment document contained on the DOE Fire Protection Web Site, but only addresses emergency response.

  1. Lawrence Livermore National Laboratory Environmental Report 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosene, Crystal

    The purposes of the Environmental Report 2016 are to record LLNL’s compliance with environmental standards and requirements, describe LLNL’s environmental protection and remediation programs, and present the results of environmental monitoring. Specifically, the report discusses LLNL’s EMS; describes significant accomplishments in pollution prevention; presents the results of air, water, vegetation, and foodstuff monitoring; reports radiological doses from LLNL operations; summarizes LLNL’s activities involving special status wildlife, plants, and habitats; and describes the progress LLNL has made in remediating groundwater contamination. Environmental monitoring at LLNL, including analysis of samples and data, is conducted according to documented standard operating procedures. Duplicate samples are collected and analytical results are reviewed and compared to internal acceptance standards. This report is prepared for DOE by LLNL’s Environmental Functional Area (EFA). Submittal of the report satisfies requirements under DOE Order 231.1B, “Environment, Safety and Health Reporting,” and DOE Order 458.1, “Radiation Protection of the Public and Environment.” The report is distributed in electronic form and is available to the public at https://saer.llnl.gov/, the website for the LLNL annual environmental report. Previous LLNL annual environmental reports beginning with 1994 are also on the website.

  2. Preconditioning in neuroprotection: From hypoxia to ischemia

    PubMed Central

    Li, Sijie; Hafeez, Adam; Noorulla, Fatima; Geng, Xiaokun; Shao, Guo; Ren, Changhong; Lu, Guowei; Zhao, Heng; Ding, Yuchuan; Ji, Xunming

    2017-01-01

    Sublethal hypoxic or ischemic events can improve the tolerance of tissues, organs, and even organisms from subsequent lethal injury caused by hypoxia or ischemia. This phenomenon has been termed hypoxic or ischemic preconditioning (HPC or IPC) and is well established in the heart and the brain. This review aims to discuss HPC and IPC with respect to their historical development and advancements in our understanding of the neurochemical basis for their neuroprotective role. Through decades of collaborative research and studies of HPC and IPC in other organ systems, our understanding of HPC and IPC-induced neuroprotection has expanded to include: early- (phosphorylation targets, transporter regulation, interfering RNA) and late- (regulation of genes like EPO, VEGF, and iNOS) phase changes, regulators of programmed cell death, members of metabolic pathways, receptor modulators, and many other novel targets. The rapid acceleration in our understanding of HPC and IPC will help facilitate transition into the clinical setting. PMID:28110083

  3. Linux thin-client conversion in a large cardiology practice: initial experience.

    PubMed

    Echt, Martin P; Rosen, Jordan

    2004-01-01

    Capital Cardiology Associates (CCA) is a single-specialty cardiology practice with offices in New York and Massachusetts. In 2003, CCA converted its IT system from a Microsoft-based network to a Linux network employing Linux thin-client technology with overall positive outcomes.

  4. Report on the Threatened Valley Elderberry Longhorn Beetle and its Elderberry Food Plant at the Lawrence Livermore National Laboratory--Site 300

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Ph.D., R A; Woollett, J

    2004-11-16

    This report describes the results of an entomological survey in 2002 to determine the presence of the federally-listed, threatened Valley Elderberry Longhorn Beetle or ''VELB'' (Desmocerus californicus dimorphus: Coleoptera, Cerambycidae) and its elderberry food plant (Sambucus mexicana: Caprifoliaceae) on the Lawrence Livermore National Laboratory's (LLNL) Experimental Test Site, known as Site 300. In addition, an area located immediately southeast of Site 300, which is owned and managed by the California Department of Fish and Game (CDFG), but secured by LLNL, was also included in this survey. This report will refer to the survey areas as the LLNL-Site 300 and the CDFG site. The 2002 survey included mapping the locations of elderberry plants that were observed using a global positioning system (GPS) to obtain positional coordinates for every elderberry plant at Site 300. In addition, observations of VELB adults and signs of their infestation on elderberry plants were also mapped using GPS technology. LLNL requested information on the VELB and its elderberry food plants to update earlier information that had been collected in 1991 (Arnold 1991) as part of the 1992 EIS/EIR for continued operation of LLNL. No VELB adults were observed as part of this prior survey. The findings of the 2002 survey reported herein will be used by LLNL as it updates the expected 2004 Environmental Impact Statement for ongoing operations at LLNL, including Site 300.

  5. ISCR Annual Report: Fiscal Year 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw, J R

    2005-03-03

    Large-scale scientific computation and all of the disciplines that support and help to validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of computational simulation as a tool of scientific and engineering research is underscored in the November 2004 statement of the Secretary of Energy that ''high performance computing is the backbone of the nation's science and technology enterprise''. LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use efficiently. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to LLNL's core missions than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In Fiscal Year 2004, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for short- and long-term visits with the aim of encouraging long-term academic research agendas that address LLNL's research priorities. Through such collaborations, ideas and software flow in both directions, and LLNL cultivates its future workforce. The Institute strives to be LLNL's ''eyes and ears'' in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the ''feet and hands'' that carry those advances into the Laboratory and incorporates them into practice. ISCR research participants are integrated into LLNL's Computing and Applied Research (CAR) Department, especially into its Center for Applied Scientific Computing (CASC). In turn, these organizations address computational challenges arising throughout the rest of the Laboratory. Administratively, the ISCR flourishes under LLNL's University Relations Program (URP). Together with the other five institutes of the URP, it navigates a course that allows LLNL to benefit from academic exchanges while preserving national security. While it is difficult to operate an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and worth the continued effort.

  6. Hardware and Software Design of FPGA-based PCIe Gen3 interface for APEnet+ network interconnect system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Rossetti, D.; Simula, F.; Tosoratto, L.; Vicini, P.

    2015-12-01

    In the attempt to develop an interconnection architecture optimized for hybrid HPC systems dedicated to scientific computing, we designed APEnet+, a point-to-point, low-latency and high-performance network controller supporting 6 fully bidirectional off-board links over a 3D torus topology. The first release of APEnet+ (named V4) was a board based on a 40 nm Altera FPGA, integrating 6 channels at 34 Gbps of raw bandwidth per direction and a PCIe Gen2 x8 host interface. It has been the first-of-its-kind device to implement an RDMA protocol to directly read/write data from/to Fermi and Kepler NVIDIA GPUs using NVIDIA peer-to-peer and GPUDirect RDMA protocols, obtaining real zero-copy GPU-to-GPU transfers over the network. The latest generation of APEnet+ systems (now named V5) implements a PCIe Gen3 x8 host interface on a 28 nm Altera Stratix V FPGA, with multi-standard fast transceivers (up to 14.4 Gbps) and an increased amount of configurable internal resources and hardware IP cores to support main interconnection standard protocols. Herein we present the APEnet+ V5 architecture, the status of its hardware and its system software design. Both its Linux Device Driver and the low-level libraries have been redeveloped to support the PCIe Gen3 protocol, introducing optimizations and solutions based on hardware/software co-design.

  7. User Accounts | High-Performance Computing | NREL

    Science.gov Websites

    Learn how to request an NREL HPC user account. To request an HPC account, please complete the request form (provided using DocuSign). Also see information on user account policies and account passwords (logging in for the first time or forgotten passwords).

  8. WinHPC System User Basics | High-Performance Computing | NREL

    Science.gov Websites

    Guidance for starting to use the WinHPC high-performance computing (HPC) system at NREL; also see WinHPC policies. Log in with your NREL.gov username/password and remember to log out when you are finished: simply quitting Remote Desktop will keep your session active and using resources on the node.

  9. Multifocal Congenital Hemangiopericytoma.

    PubMed

    Robl, Renata; Carvalho, Vânia Oliveira; Abagge, Kerstin Taniguchi; Uber, Marjorie; Lichtvan, Leniza Costa Lima; Werner, Betina; Nadji, Mehrdad

    2017-01-01

    Congenital hemangiopericytoma (HPC) is a rare mesenchymal tumor with less aggressive behavior and a more favorable prognosis than similar tumors in adults. Multifocal presentation is even less common than isolated HPC and hence its clinical and histologic recognition may be challenging. A newborn infant with multifocal congenital HPC causing severe deformity but with a favorable outcome after chemotherapy and surgical removal is reported. © 2016 Wiley Periodicals, Inc.

  10. One-Time Password Tokens | High-Performance Computing | NREL

    Science.gov Websites

    For connecting to NREL's high-performance computing (HPC) systems, learn how to set up a one-time password (OTP) token for remote and privileged access; a one-time pass code is obtained from the HPC Operations team. At the sign-in screen, enter your HPC username.
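
    For readers unfamiliar with how such one-time pass codes are generated, the short Python sketch below computes a standard time-based OTP (RFC 6238 style) from a shared secret. The secret shown is a made-up example; this is only an illustration of the mechanism, not NREL's provisioning procedure.

        # Hedged sketch: compute a time-based one-time pass code (TOTP, RFC 6238 style).
        # The base32 secret below is a made-up example, not a real token seed.
        import base64, hashlib, hmac, struct, time

        def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
            key = base64.b32decode(secret_b32, casefold=True)
            counter = int(time.time()) // period                   # 30-second time step
            msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
            digest = hmac.new(key, msg, hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
            code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
            return str(code).zfill(digits)

        if __name__ == "__main__":
            print(totp("JBSWY3DPEHPK3PXP"))                        # prints a 6-digit example code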

  11. Pulmonary metastases of recurrent intracranial hemangiopericytoma diagnosed on fine needle aspiration cytology: a case report.

    PubMed

    Goel, Deepa; Babu, Sasidhara; Prayaga, Aruna K; Sundaram, Challa

    2008-01-01

    Meningeal hemangiopericytoma (HPC) is a rare neoplasm. It is closely related to hemangiopericytomas in systemic tissues, with a tendency to recur and metastasize outside the CNS. Only a few case reports describe the cytomorphologic appearance of these metastasizing lesions, most having the primary tumor in deep soft tissues. We report a case of recurrent meningeal HPC metastasizing to the lungs. A 48-year-old woman presented with a history of headache. She had undergone primary surgery 10 years previously for a left parietal tumor. The histopathologic diagnosis was HPC. Radiotherapy was given postoperatively. Brain magnetic resonance imaging (MRI) at admission suggested local recurrence. She also complained of dry cough and shortness of breath. On evaluation, a computed tomography (CT) scan of the lungs showed multiple, bilateral, small nodules. Fine needle aspiration cytology (FNAC) of a larger nodule revealed spindle-shaped cells arranged around blood vessels. Immunohistochemistry with CD34 on cell block confirmed metastatic HPC. FNAC is an easy, accurate, relatively noninvasive procedure for diagnosing metastases, especially in patients with a history of recurrent intracranial HPC. Immunohistochemistry on cell block material collected at the time of FNAC may aid in distinguishing HPC from other tumors that are close cytologic mimics.

  12. GraphMeta: Managing HPC Rich Metadata in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Chen, Yong; Carns, Philip

    High-performance computing (HPC) systems face increasingly critical metadata management challenges, especially in the approaching exascale era. These challenges arise not only from exploding metadata volumes, but also from increasingly diverse metadata, which contains data provenance and arbitrary user-defined attributes in addition to traditional POSIX metadata. This ‘rich’ metadata is becoming critical to supporting advanced data management functionality such as data auditing and validation. In our prior work, we identified a graph-based model as a promising solution to uniformly manage HPC rich metadata due to its flexibility and generality. However, at the same time, graph-based HPC rich metadata management also introduces significant challenges to the underlying infrastructure. In this study, we first identify the challenges on the underlying infrastructure to support scalable, high-performance rich metadata management. Based on that, we introduce GraphMeta, a graph-based engine designed for this use case. It achieves performance scalability by introducing a new graph partitioning algorithm and a write-optimal storage engine. We evaluate GraphMeta under both synthetic and real HPC metadata workloads, compare it with other approaches, and demonstrate its advantages in terms of efficiency and usability for rich metadata management in HPC systems.
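
    As a rough illustration of the graph model for rich metadata discussed above, the Python sketch below represents jobs and files as vertices with user-defined attributes, provenance relationships as labeled edges, and a lineage query as a graph traversal. The schema and names are assumptions for illustration and do not reflect the GraphMeta implementation.

        # Toy property-graph sketch for rich HPC metadata (illustrative, not GraphMeta).
        from collections import defaultdict

        class MetadataGraph:
            def __init__(self):
                self.attrs = {}                    # vertex id -> user-defined attributes
                self.edges = defaultdict(list)     # vertex id -> [(edge label, target vertex), ...]

            def add_vertex(self, vid: str, **attrs) -> None:
                self.attrs[vid] = attrs

            def add_edge(self, src: str, label: str, dst: str) -> None:
                self.edges[src].append((label, dst))

            def lineage(self, vid: str) -> list[str]:
                """Walk 'produced_by'/'read' edges to list everything a file depends on."""
                seen, stack, order = set(), [vid], []
                while stack:
                    v = stack.pop()
                    for label, dst in self.edges.get(v, []):
                        if label in ("produced_by", "read") and dst not in seen:
                            seen.add(dst)
                            order.append(dst)
                            stack.append(dst)
                return order

        if __name__ == "__main__":
            g = MetadataGraph()
            g.add_vertex("job:42", user="alice")
            g.add_vertex("file:input.h5", size=1024)
            g.add_vertex("file:out.h5", size=2048)
            g.add_edge("file:out.h5", "produced_by", "job:42")
            g.add_edge("job:42", "read", "file:input.h5")
            print(g.lineage("file:out.h5"))        # ['job:42', 'file:input.h5']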

  13. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties and processes of complex systems at molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems concern the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example GRID systems and HPC clusters. Because these computational tasks are time consuming, software is needed for automatic and unified monitoring of such computations. A complex computational task can be performed over different HPC systems, which requires output data synchronization between the storage chosen by a scientist and the HPC system used for computations. The design of the computational domain is also challenging, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on HPC systems.

  14. Rats with ventral hippocampal damage are impaired at various forms of learning including conditioned inhibition, spatial navigation, and discriminative fear conditioning to similar contexts.

    PubMed

    McDonald, Robert J; Balog, R J; Lee, Justin Q; Stuart, Emily E; Carrels, Brianna B; Hong, Nancy S

    2018-10-01

    The ventral hippocampus (vHPC) has been implicated in learning and memory functions that seem to differ from its dorsal counterpart. The goal of this series of experiments was to provide further insight into the functional contributions of the vHPC. Our previous work implicated the vHPC in spatial learning, inhibitory learning, and fear conditioning to context. However, the specific role of vHPC on these different forms of learning are not clear. Accordingly, we assessed the effects of neurotoxic lesions of the ventral hippocampus on retention of a conditioned inhibitory association, early versus late spatial navigation in the water task, and discriminative fear conditioning to context under high ambiguity conditions. The results showed that the vHPC was necessary for the expression of conditioned inhibition, early spatial learning, and discriminative fear conditioning to context when the paired and unpaired contexts have high cue overlap. We argue that this pattern of effects, combined with previous work, suggests a key role for vHPC in the utilization of broad contextual representations for inhibition and discriminative memory in high ambiguity conditions. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Repeated hematopoietic stem and progenitor cell mobilization without depletion of the bone marrow stem and progenitor cell pool in mice after repeated administration of recombinant murine G-CSF.

    PubMed

    de Kruijf, Evert-Jan F M; van Pel, Melissa; Hagoort, Henny; Kruysdijk, Donnée; Molineux, Graham; Willemze, Roel; Fibbe, Willem E

    2007-05-01

    Administration of recombinant-human G-CSF (rhG-CSF) is highly efficient in mobilizing hematopoietic stem and progenitor cells (HSC/HPC) from the bone marrow (BM) toward the peripheral blood. This study was designed to investigate whether repeated G-CSF-induced HSC/HPC mobilization in mice could lead to a depletion of the bone marrow HSC/HPC pool with subsequent loss of mobilizing capacity. To test this hypothesis, Balb/c mice were treated with a maximum of 12 repeated 5-day cycles of either 10 microg rhG-CSF/day or 0.25 microg rmG-CSF/day. Repeated administration of rhG-CSF led to strong inhibition of HSC/HPC mobilization toward the peripheral blood and spleen after >4 cycles because of the induction of anti-rhG-CSF antibodies. In contrast, after repeated administration of rmG-CSF, HSC/HPC mobilizing capacity remained intact for up to 12 cycles. The number of CFU-GM per femur did not significantly change for up to 12 cycles. We conclude that repeated administration of G-CSF does not lead to depletion of the bone marrow HSC/HPC pool.

  16. Computational Fluid Dynamics Ventilation Study for the Human Powered Centrifuge at the International Space Station

    NASA Technical Reports Server (NTRS)

    Son, Chang H.

    2012-01-01

    The Human Powered Centrifuge (HPC) is a facility that is planned to be installed on board the International Space Station (ISS) to enable crew exercises under artificial gravity conditions. The HPC equipment includes a "bicycle" for long-term exercises of a crewmember that provides power for rotation of the HPC at a speed of 30 rpm. A crewmember exercising vigorously on the centrifuge generates about twice as much carbon dioxide as a crewmember under ordinary conditions. The goal of the study is to analyze the airflow and carbon dioxide distribution within the Pressurized Multipurpose Module (PMM) cabin when the HPC is operating. A fully unsteady formulation is used for CFD-based modeling of airflow and CO2 transport with the so-called sliding mesh concept, in which the HPC equipment and the adjacent Bay 4 cabin volume are considered in a rotating reference frame while the rest of the cabin volume is considered in a stationary reference frame. The rotating part of the computational domain also includes a human body model. Localized effects of carbon dioxide dispersion are examined. The strong influence of the rotating HPC equipment on the CO2 distribution is detected and discussed.

  17. MRS study of meningeal hemangiopericytoma and edema: a comparison with meningothelial meningioma.

    PubMed

    Righi, Valeria; Tugnoli, Vitaliano; Mucci, Adele; Bacci, Antonella; Bonora, Sergio; Schenetti, Luisa

    2012-10-01

    Intracranial hemangiopericytomas (HPCs) are rare tumors and their radiological appearance resembles that of meningiomas, especially meningothelial meningiomas. To increase the knowledge on the biochemical composition of this type of tumor for better diagnosis and prognosis, we performed a molecular study using ex vivo high resolution magic angle spinning (HR-MAS) magnetic resonance spectroscopy (MRS) on HPC and peritumoral edematous tissues. Moreover, to help in the discrimination between HPC and meningothelial meningioma we compared the ex vivo HR-MAS spectra of samples from one patient with HPC and 5 patients affected by meningothelial meningioma. Magnetic resonance imaging (MRI) and in vivo localized single voxel 1H-MRS were also performed on the same patients prior to surgery, and the in vivo and ex vivo MRS spectra were compared. We observed the presence of OH-butyrate, together with glucose, in HPC and a low amount of N-acetylaspartate in the edema, which may reflect neuronal alteration responsible for associated epilepsy. Many differences between HPC and meningothelial meningioma were identified. The relative ratios of myo-inositol, glucose and glutathione with respect to glutamate are higher in HPC compared to meningioma, whereas the relative ratios of creatine, glutamine, alanine, glycine and choline-containing compounds with respect to glutamate are lower in HPC compared to meningioma. These data will be useful to improve the interpretation of in vivo MRS spectra, resulting in a more accurate diagnosis of these rare tumors.

  18. Design of cellulose ether-based macromolecular prodrugs of ciprofloxacin for extended release and enhanced bioavailability.

    PubMed

    Amin, Muhammad; Abbas, Nazia Shahana; Hussain, Muhammad Ajaz; Sher, Muhammad; Edgar, Kevin J

    2018-07-01

    The present study reveals the syntheses of hydroxypropylcellulose (HPC)- and hydroxyethylcellulose (HEC)-based macromolecular prodrugs (MPDs) of ciprofloxacin (CIP) using a homogeneous reaction methodology. The covalently loaded drug content (DC) of each prodrug was quantified using UV-Vis spectrophotometry to determine the degree of substitution (DS). HPC-ciprofloxacin (HPC-CIP) conjugates showed a DS of CIP in the range 0.87-1.15, whereas HEC-ciprofloxacin (HEC-CIP) conjugates showed a DS range of 0.51-0.75. Transmission electron microscopy revealed that HPC-CIP conjugate 2 and HEC-CIP conjugate 6 self-assembled into nanoparticles of 150-300 and 180-250 nm, respectively. Size exclusion chromatography revealed HPC-CIP conjugate 2 and HEC-CIP conjugate 6 to be monodisperse systems. In vitro drug release studies indicated 15 and 43% CIP release from HPC-CIP conjugate 2 after 6 h in simulated gastric and simulated intestinal fluids (SGF and SIF), respectively. HEC-CIP conjugate 6 showed 16% and 46% release after 6 h in SGF and SIF, respectively. HPC-CIP conjugate 2 and HEC-CIP conjugate 6 exhibited half-lives of 10.87 and 11.71 h, respectively, with area under the curve values of 164 and 175 h μg mL(-1), respectively, indicating enhanced bioavailability and improved pharmacokinetic profiles in an animal model. Antibacterial activities equal to those of unmodified CIP confirmed their competitive efficacies. Cytotoxicity studies supported their non-toxic nature and biocompatibility. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Development and Testing of a High-Speed Real-Time Kinematic Precise DGPS Positioning System Between Two Aircraft

    DTIC Science & Technology

    2006-09-01

    work-horse for this thesis. He spent hours writing some of the more tedious code, and as much time helping me learn C++ and Linux. He was always there...compared with C++, and the need to use Linux as the operating system, the filter was coded using C++ and KDevelop [28] in SUSE LINUX Professional 9.2 [42...The driving factor for using Linux was the operating system's ability to access the serial ports in a reliable fashion. Under the original MATLAB® and

  20. Open discovery: An integrated live Linux platform of Bioinformatics tools.

    PubMed

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for Bioinformatics have paved the way for portability of the Bioinformatics workbench in a platform independent manner. Moreover, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays the advanced customizable configuration of Fedora, with data persistency accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.

  1. ISCR FY2005 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, D E; McGraw, J R

    2006-02-02

    Large-scale scientific computation and all of the disciplines that support and help validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of simulation as a fundamental tool of scientific and engineering research is underscored in the President's Information Technology Advisory Committee (PITAC) June 2005 finding that ''computational science has become critical to scientific leadership, economic competitiveness, and national security''. LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed, most notably the molecular dynamics simulation that sustained more than 100 Teraflop/s and won the 2005 Gordon Bell Prize. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use in an efficient manner. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to the core missions of LLNL than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In FY 2005, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for both brief and extended visits with the aim of encouraging long-term academic research agendas that address LLNL research priorities. Through these collaborations, ideas and software flow in both directions, and LLNL cultivates its future workforce. The Institute strives to be LLNL's ''eyes and ears'' in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the ''hands and feet'' that carry those advances into the Laboratory and incorporate them into practice. ISCR research participants are integrated into LLNL's Computing Applications and Research (CAR) Department, especially into its Center for Applied Scientific Computing (CASC). In turn, these organizations address computational challenges arising throughout the rest of the Laboratory. Administratively, the ISCR flourishes under LLNL's University Relations Program (URP). Together with the other four institutes of the URP, the ISCR navigates a course that allows LLNL to benefit from academic exchanges while preserving national security. While it is difficult to operate an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and worth the continued effort. The pages of this annual report summarize the activities of the faculty members, postdoctoral researchers, students, and guests from industry and other laboratories who participated in LLNL's computational mission under the auspices of the ISCR during FY 2005.

  2. RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chokchai "Box" Leangsuksun

    2011-05-31

    Our project is a multi-institutional research effort that adopts the interplay of RELIABILITY, AVAILABILITY, and SERVICEABILITY (RAS) aspects for solving resilience issues in high-end scientific computing in the next generation of supercomputers. Results lie in the following tracks: failure prediction in a large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime & tools.

  3. Data Services in Support of High Performance Computing-Based Distributed Hydrologic Models

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Gichamo, T.; Yildirim, A. A.; Jones, N.

    2014-12-01

    We have developed web-based data services to support the application of hydrologic models on High Performance Computing (HPC) systems. The purposes of these services are to provide hydrologic researchers, modelers, water managers, and users access to HPC resources without requiring them to become HPC experts or to understand the intrinsic complexities of the data services, and to reduce the amount of time and effort spent finding and organizing the data required to execute hydrologic models and data preprocessing tools on HPC systems. These services address some of the data challenges faced by hydrologic models that strive to take advantage of HPC. Needed data is often not in the form required by such models, forcing researchers to spend time and effort on data preparation and preprocessing that inhibits or limits the application of these models. Another limitation is the difficult-to-use batch job control and queuing systems of HPC systems. We have developed a REST-based gateway application programming interface (API) for authenticated access to HPC systems that abstracts away many of the details that are barriers to HPC use and enhances accessibility from desktop programming and scripting languages such as Python and R. We have used this gateway API to establish software services that support the delineation of watersheds to define a modeling domain, then extract terrain and land use information to automatically configure the inputs required for hydrologic models. These services support the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation and generation of hydrology-based terrain information such as wetness index and stream networks. These services also support the derivation of inputs for the Utah Energy Balance snowmelt model used to address questions such as how climate, land cover and land use change may affect snowmelt inputs to runoff generation. To enhance access to the time-varying climate data used to drive hydrologic models, we have developed services to downscale and re-grid nationally available climate analysis data from systems such as NLDAS and MERRA. These cases serve as examples of how this approach can be extended to other models to enhance the use of HPC for hydrologic modeling.
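
    The gateway API described above is meant to be scriptable from languages such as Python and R. The following minimal Python sketch shows what job submission and status polling through such a REST gateway could look like; the base URL, endpoint paths, field names, and token handling are illustrative assumptions, not the project's actual interface.

        # Hedged sketch of driving an HPC job through a hypothetical REST gateway from Python.
        import requests

        GATEWAY = "https://hpc-gateway.example.org/api/v1"   # assumed base URL
        TOKEN = "example-bearer-token"                       # assumed credential

        def submit_watershed_job(dem_url: str, outlet: tuple[float, float]) -> str:
            """Submit a hypothetical TauDEM watershed-delineation job and return its job id."""
            payload = {
                "application": "taudem-delineate",           # assumed application name
                "inputs": {"dem": dem_url,
                           "outlet_lat": outlet[0],
                           "outlet_lon": outlet[1]},
            }
            resp = requests.post(f"{GATEWAY}/jobs", json=payload,
                                 headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
            resp.raise_for_status()
            return resp.json()["job_id"]

        def job_status(job_id: str) -> str:
            """Poll the gateway for the job state (e.g. QUEUED, RUNNING, DONE)."""
            resp = requests.get(f"{GATEWAY}/jobs/{job_id}",
                                headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
            resp.raise_for_status()
            return resp.json()["state"]

        if __name__ == "__main__":
            jid = submit_watershed_job("https://example.org/dem.tif", (41.74, -111.83))
            print(jid, job_status(jid))

    The point of such a wrapper is that the batch system and file staging stay hidden behind ordinary HTTP calls that desktop scripts can make.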

  4. Hyperspectral Sensors Final Report CRADA No. TC02173.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priest, R. E.; Sauvageau, J. E.

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and Science Applications International Corporation (SAIC), National Security Space Operations/SRBU, to develop longwave infrared (LWIR) hyperspectral imaging (HSI) sensors for airborne, and potentially ground and space, platforms. LLNL has designed and developed LWIR HSI sensors since 1995. The current generation of these sensors has applications to users within the U.S. Department of Defense and the Intelligence Community. User needs are for multiple copies provided by commercial industry. To gain the most benefit from the U.S. Government’s prior investments in LWIR HSI sensors developed at LLNL, transfer of technology and know-how from LLNL HSI experts to commercial industry was needed. The overarching purpose of the CRADA project was to facilitate the transfer of the necessary technology from LLNL to SAIC, thereby allowing the U.S. Government to procure LWIR HSI sensors from this company.

  5. Lawrence Livermore National Laboratory environmental report for 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sims, J.M.; Surano, K.A.; Lamson, K.C.

    1990-01-01

    This report documents the results of the Environmental Monitoring Program at the Lawrence Livermore National Laboratory (LLNL) and presents summary information about environmental compliance for 1990. To evaluate the effect of LLNL operations on the local environment, measurements of direct radiation and a variety of radionuclides and chemical compounds in ambient air, soil, sewage effluent, surface water, groundwater, vegetation, and foodstuff were made at both the Livermore site and at Site 300 nearby. LLNL's compliance with all applicable guides, standards, and limits for radiological and nonradiological emissions to the environment was evaluated. Aside from an August 13 observation of silver concentrations slightly above guidelines for discharges to the sanitary sewer, all the monitoring data demonstrated LLNL compliance with environmental laws and regulations governing emission and discharge of materials to the environment. In addition, the monitoring data demonstrated that the environmental impacts of LLNL are minimal and pose no threat to the public or to the environment. 114 refs., 46 figs., 79 tabs.

  6. Feasibility of Wide-Area Decontamination of Bacillus anthracis Spores Using a Germination-Lysis Approach

    DTIC Science & Technology

    2011-11-16

    (Presentation slides, 2011 CBD S&T Conference, November 16, 2011; Lawrence Livermore National Security, LLC; LLNL-PRES-508394. Background slides mention Gruinard Island (5% formaldehyde), the Sverdlovsk release (decontamination unknown, but washing, chloramines, and soil disposal believed to have been used), and disinfectants achieving >6 log reduction on materials (EPA, 2010a,b; Wood et al., 2011).)

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, Robert C.

    Following the January 1980 earthquake that was felt at Lawrence Livermore National Laboratory (LLNL), a network of strong-motion accelerographs was installed at LLNL. Prior to the 1980 earthquake, there were no accelerographs installed. The ground motion from the 1980 earthquake was estimated from USGS instruments around the Laboratory to be between 0.2 – 0.3 g horizontal peak ground acceleration. These instruments were located at the Veterans Hospital, 5 miles southwest of LLNL, and in San Ramon, about 12 miles west of LLNL. In 2011, the Department of Energy (DOE) requested to know the status of our seismic instruments. We conducted a survey of our instrumentation systems and responded to DOE in a letter. During this survey, it was found that the recorders in Buildings 111 and 332 were not operational. The instruments on Nova had been removed, and only three of the 10 NIF instruments installed in 2005 were operational (two were damaged and five had been removed from operation at the request of the program). After the survey, it was clear that the site seismic instrumentation had degraded substantially and would benefit from an overhaul and more attention to ongoing maintenance. LLNL management decided to update the LLNL seismic instrumentation system. The updated system is documented in this report.

  8. 2004 Environmental Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Althouse, P E; Bertoldo, N A; Brown, R A

    2005-09-28

    The Lawrence Livermore National Laboratory (LLNL) annual Environmental Report, prepared for the Department of Energy (DOE) and made available to the public, presents summary environmental data that characterizes site environmental management performance, summarizes environmental occurrences and responses reported during the calendar year, confirms compliance with environmental standards and requirements, and highlights significant programs and efforts. By explaining the results of effluent and environmental monitoring, mentioning environmental performance indicators and performance measure programs, and assessing the impact of Laboratory operations on the environment and the public, the report also demonstrates LLNL's continuing commitment to minimize any potentially adverse impact of its operations. The combination of environmental and effluent monitoring, source characterization, and dose assessment showed that radiological doses to the public caused by LLNL operations in 2004 were less than 0.26% of regulatory standards and more than 11,000 times smaller than the dose from natural background. Analytical results and evaluations generally showed continuing low levels of most contaminants; remediation efforts further reduced the concentrations of contaminants of concern in groundwater and soil vapor. In addition, LLNL's extensive environmental compliance activities related to water, air, endangered species, waste, wastewater, and waste reduction controlled or reduced LLNL's effects on the environment. LLNL's environmental program clearly demonstrates a commitment to protecting the environment from operational impacts.

  9. Vapor deposition polymerization of aniline on 3D hierarchical porous carbon with enhanced cycling stability as supercapacitor electrode

    NASA Astrophysics Data System (ADS)

    Zhao, Yufeng; Zhang, Zhi; Ren, Yuqin; Ran, Wei; Chen, Xinqi; Wu, Jinsong; Gao, Faming

    2015-07-01

    In this work, a polyaniline-coated hierarchical porous carbon (HPC) composite (PANI@HPC) is developed using a vapor deposition polymerization technique. The as-synthesized composite is applied as a supercapacitor electrode material, and presents a high specific capacitance of 531 F g-1 at a current density of 0.5 A g-1 and superior cycling stability of 96.1% (after 10,000 charge-discharge cycles at a current density of 10 A g-1). This can be attributed to the maximized synergistic effect of PANI and HPC. Furthermore, an aqueous symmetric supercapacitor device based on PANI@HPC is fabricated, demonstrating a high specific energy of 17.3 Wh kg-1.

  10. Mixing HTC and HPC Workloads with HTCondor and Slurm

    NASA Astrophysics Data System (ADS)

    Hollowell, C.; Barnett, J.; Caramarcu, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.

    2017-10-01

    Traditionally, the RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has only maintained High Throughput Computing (HTC) resources for our HEP/NP user community. We’ve been using HTCondor as our batch system for many years, as this software is particularly well suited for managing HTC processor farm resources. Recently, the RACF has also begun to design and administer some High Performance Computing (HPC) systems for a multidisciplinary user community at BNL. In this paper, we’ll discuss our experiences using HTCondor and Slurm in an HPC context, and our facility’s attempts to allow our HTC and HPC processing farms/clusters to make opportunistic use of each other’s computing resources.

  11. Development of a Dynamic Time Sharing Scheduled Environment Final Report CRADA No. TC-824-94E

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M.; Caliga, D.

    Massively parallel computers, such as the Cray T3D, have historically supported resource sharing solely with space sharing. In that method, multiple problems are solved by executing them on distinct processors. This project developed a dynamic time- and space-sharing scheduler to achieve greater interactivity and throughput than could be achieved with space-sharing alone. CRI and LLNL worked together on the design, testing, and review aspects of this project. There were separate software deliverables. CRI implemented a general purpose scheduling system as per the design specifications. LLNL ported the local gang scheduler software to the LLNL Cray T3D. In this approach, processors are allocated simultaneously to all components of a parallel program (in a “gang”). Program execution is preempted as needed to provide for interactivity. Programs are also relocated to different processors as needed to efficiently pack the computer’s torus of processors. In phase one, CRI developed an interface specification after discussions with LLNL for system-level software supporting a time- and space-sharing environment on the LLNL T3D. The two parties also discussed interface specifications for external control tools (such as scheduling policy tools, system administration tools) and applications programs. CRI assumed responsibility for the writing and implementation of all the necessary system software in this phase. In phase two, CRI implemented job-rolling on the Cray T3D, a mechanism for preempting a program, saving its state to disk, and later restoring its state to memory for continued execution. LLNL ported its gang scheduler to the LLNL T3D utilizing the CRI interface implemented in phases one and two. During phase three, the functionality and effectiveness of the LLNL gang scheduler was assessed to provide input to CRI time- and space-sharing efforts. CRI will utilize this information in the development of general schedulers suitable for other sites and future architectures.
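
    A toy sketch of the gang-scheduling idea described in this abstract follows: every process of a parallel program is given its processor in the same time slice, and whole programs are preempted and resumed together in round-robin fashion (one gang per slice, for simplicity). The job sizes and slice accounting are made-up examples, not the LLNL scheduler.

        # Toy round-robin gang scheduler sketch (illustrative only).
        from dataclasses import dataclass

        @dataclass
        class Job:
            name: str
            procs: int          # processors the gang needs simultaneously
            remaining: int      # time slices of work left

        def gang_schedule(jobs: list[Job], total_procs: int) -> None:
            """Run each gang on all of its processors at once, preempting between slices."""
            t = 0
            while any(j.remaining > 0 for j in jobs):
                for job in jobs:
                    if job.remaining == 0:
                        continue
                    if job.procs > total_procs:
                        raise ValueError(f"{job.name} needs more processors than the machine has")
                    # All of the job's processes run together for one slice, then are preempted.
                    job.remaining -= 1
                    print(f"slice {t}: {job.name} runs on {job.procs} processors")
                    t += 1

        if __name__ == "__main__":
            gang_schedule([Job("A", procs=64, remaining=2), Job("B", procs=128, remaining=1)],
                          total_procs=256)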

  12. 10 CFR 850 Implementation of Requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S

    2012-01-05

    10 CFR 850 defines a contractor as any entity, including affiliated entities, such as a parent corporation, under contract with DOE, including a subcontractor at any tier, with responsibility for performing work at a DOE site in furtherance of a DOE mission. The Chronic Beryllium Disease Prevention Program (CBDPP) applies to beryllium-related activities that are performed at the Lawrence Livermore National Laboratory (LLNL). The CBDPP or Beryllium Safety Program is integrated into the LLNL Worker Safety and Health Program and, thus, implementation documents and responsibilities are integrated in various documents and organizational structures. Program development and management of the CBDPP is delegated to the Environment, Safety and Health (ES&H) Directorate, Worker Safety and Health Functional Area. As per 10 CFR 850, Lawrence Livermore National Security, LLC (LLNS) periodically submits a CBDPP to the National Nuclear Security Administration/Livermore Site Office (NNSA/LSO). The requirements of this plan are communicated to LLNS workers through ES&H Manual Document 14.4, 'Working Safely with Beryllium.' 10 CFR 850 is implemented by the LLNL CBDPP, which integrates the safety and health standards required by the regulation, components of the LLNL Integrated Safety Management System (ISMS), and incorporates other components of the LLNL ES&H Program. As described in the regulation, and to fully comply with the regulation, specific portions of existing programs and additional requirements are identified in the CBDPP. The CBDPP is implemented by documents that interface with the workers, principally through ES&H Manual Document 14.4. This document contains information on how the management practices prescribed by the LLNL ISMS are implemented, how beryllium hazards that are associated with LLNL work activities are controlled, and who is responsible for implementing the controls. Adherence to the requirements and processes described in the ES&H Manual ensures that ES&H practices across LLNL are developed in a consistent manner. Other implementing documents, such as the ES&H Manual, are integral in effectively implementing 10 CFR 850.

  13. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Grondona, M

    2002-12-19

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
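
    As a concrete illustration of interacting with a SLURM-managed Linux cluster, the sketch below drives the standard sbatch and squeue commands from Python. The batch script name, resource requests, and user name are placeholder assumptions; only the command-line tools themselves are part of SLURM.

        # Minimal sketch: submit and inspect a SLURM job by shelling out to sbatch/squeue.
        import subprocess

        def submit(script: str = "job.sh") -> str:
            """Submit a batch script with sbatch and return the numeric job id."""
            out = subprocess.run(
                ["sbatch", "--job-name=demo", "--ntasks=4", "--time=00:10:00", script],
                check=True, capture_output=True, text=True,
            ).stdout
            # sbatch normally prints: "Submitted batch job <id>"
            return out.strip().split()[-1]

        def show_queue(user: str) -> str:
            """Return the user's pending and running jobs as reported by squeue."""
            return subprocess.run(["squeue", "-u", user],
                                  check=True, capture_output=True, text=True).stdout

        if __name__ == "__main__":
            job_id = submit()                 # "job.sh" is a placeholder batch script
            print("submitted job", job_id)
            print(show_queue("myuser"))       # "myuser" is a placeholder account name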

  14. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Grondona, M

    2003-04-22

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.

  15. 77 FR 5864 - BluePoint Linux Software Corp., China Bottles Inc., Long-e International, Inc., and Nano...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-06

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] BluePoint Linux Software Corp., China Bottles Inc., Long-e International, Inc., and Nano Superlattice Technology, Inc.; Order of Suspension of... current and accurate information concerning the securities of BluePoint Linux Software Corp. because it...

  16. High-capacity adsorption of Cr(VI) from aqueous solution using a hierarchical porous carbon obtained from pig bone.

    PubMed

    Wei, Shaochen; Li, Dongtian; Huang, Zhe; Huang, Yaqin; Wang, Feng

    2013-04-01

    A hierarchical porous carbon obtained from pig bone (HPC) was utilized as the adsorbent for removal of Cr(VI) from aqueous solution. The effects of solution pH value, concentration of Cr(VI), and adsorption temperature on the removal of Cr(VI) were investigated. The experimental data for the HPC fitted well with the Langmuir isotherm and its adsorption kinetics followed a pseudo-second-order model. Compared with a commercial activated carbon adsorbent (Norit CGP), the HPC showed a high adsorption capability for Cr(VI). The maximum Cr(VI) adsorption capacity of the HPC was 398.40 mg/g at pH 2. Desorption experiment data showed that part of the Cr(VI) was reduced to Cr(III) on the adsorbent surface. Regeneration experiments showed that the adsorption capacity of the HPC can still reach 92.70 mg/g even after the fifth adsorption cycle. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, Earl C.; Conway, Steve; Dekate, Chirag

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

  18. Quantifying Scheduling Challenges for Exascale System Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mondragon, Oscar; Bridges, Patrick G.; Jones, Terry R

    2015-01-01

    The move towards high-performance computing (HPC) applications comprised of coupled codes and the need to dramatically reduce data movement is leading to a reexamination of time-sharing vs. space-sharing in HPC systems. In this paper, we discuss and begin to quantify the performance impact of a move away from strict space-sharing of nodes for HPC applications. Specifically, we examine the potential performance cost of time-sharing nodes between application components, we determine whether a simple coordinated scheduling mechanism can address these problems, and we research how suitable simple constraint-based optimization techniques are for solving scheduling challenges in this regime. Our results demonstrate that current general-purpose HPC system software scheduling and resource allocation systems are subject to significant performance deficiencies, which we quantify for six representative applications. Based on these results, we discuss areas in which additional research is needed to meet the scheduling challenges of next-generation HPC systems.

  19. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro

    The increasingly large data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems is simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: Are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.
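
    To make the contrast with POSIX file systems concrete, the toy Python sketch below exposes only the flat put/get interface that blob (object) stores provide, with no directories or permissions. The class name and on-disk layout are illustrative assumptions, not any particular object-store API.

        # Toy flat blob store sketch: opaque keys, no hierarchy, no permissions.
        import hashlib
        from pathlib import Path

        class LocalBlobStore:
            """Store binary objects under opaque keys in a single flat directory."""

            def __init__(self, root: str = "blobstore"):
                self.root = Path(root)
                self.root.mkdir(exist_ok=True)

            def put(self, key: str, data: bytes) -> str:
                """Write a blob; the store keeps no directories, permissions, or extra metadata."""
                path = self.root / hashlib.sha256(key.encode()).hexdigest()
                path.write_bytes(data)
                return key

            def get(self, key: str) -> bytes:
                path = self.root / hashlib.sha256(key.encode()).hexdigest()
                return path.read_bytes()

        if __name__ == "__main__":
            store = LocalBlobStore()
            store.put("checkpoint/step-0001", b"\x00" * 16)   # the "/" is part of the key, not a directory
            print(len(store.get("checkpoint/step-0001")), "bytes retrieved")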

  20. Humic acids-based hierarchical porous carbons as high-rate performance electrodes for symmetric supercapacitors.

    PubMed

    Qiao, Zhi-jun; Chen, Ming-ming; Wang, Cheng-yang; Yuan, Yun-cai

    2014-07-01

    Two kinds of hierarchical porous carbons (HPCs) with specific surface areas of 2000 m(2)g(-1) were synthesized using leonardite humic acids (LHA) or biotechnology humic acids (BHA) precursors via a KOH activation process. Humic acids have a high content of oxygen-containing groups, which enabled them to dissolve in aqueous KOH and facilitated the homogeneous KOH activation. The LHA-based HPC is made up of abundant micro-, meso-, and macropores; in 6 M KOH it has a specific capacitance of 178 F g(-1) at 100 A g(-1) and its capacitance retention on going from 0.05 to 100 A g(-1) is 64%. In contrast, the BHA-based HPC exhibits a lower capacitance retention of 54% and a specific capacitance of 157 F g(-1) at 100 A g(-1), which is due to the excessive micropores in the BHA-HPC. Moreover, LHA-HPC is produced in a higher yield than BHA-HPC (51 vs. 17 wt%). Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Kernel-based Linux emulation for Plan 9.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minnich, Ronald G.

    2010-09-01

    CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss cnkemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  2. Open discovery: An integrated live Linux platform of Bioinformatics tools

    PubMed Central

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for Bioinformatics have paved the way for portability of the Bioinformatics workbench in a platform-independent manner. However, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery builds on an advanced customizable configuration of Fedora, with data persistency accessible via USB drive or DVD. Availability: Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in PMID:19238235

  3. High Performance Computing Innovation Service Portal Study (HPC-ISP)

    DTIC Science & Technology

    2009-04-01

    threatened by global competition. It is essential that these suppliers remain competitive and maintain their technological advantage. In this increasingly...place themselves, as well as customers who rely on them, in competitive jeopardy. Despite the potential competitive advantage associated with adopting...computing users into the HPC fold and to enable more entry-level users to exploit HPC more fully for competitive advantage. About half of the surveyed

  4. 76 FR 82103 - Airworthiness Directives; General Electric Company (GE) GE90-110B1 and GE90-115B Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-30

    ... of small pieces from the HPC stages 1-2 seal teeth and two shop findings of cracks in the seal teeth... stages 1-2 seal teeth of the HPC stages 2-5 spool for cracks. This AD only allows installation of either HPC stator stage 1 interstage seals that are pregrooved or previously worn seals with acceptable wear...

  5. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
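
    The shared-memory, message-passing and data-parallel styles surveyed above are easiest to see in code. The short sketch below uses the mpi4py binding (an assumed, present-day convenience; the overview itself predates it) to pass one message between two ranks, which is the essence of the message-passing model.

      # Minimal message-passing example; run with: mpirun -n 2 python demo.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      if rank == 0:
          comm.send({"step": 1, "value": 3.14}, dest=1, tag=0)   # rank 0 sends a small message
      elif rank == 1:
          msg = comm.recv(source=0, tag=0)                        # rank 1 receives it
          print("rank 1 received:", msg)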

  6. 77 FR 10952 - Airworthiness Directives; CFM International S.A. Model CFM56 Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-24

    .../N) and serial number (SN) high-pressure compressor (HPC) 4-9 spools installed. In Table 1 of the AD, the HPC 4-9 spool SN GWN05AMO in the 2nd column of the Table is incorrect. This document corrects that...), currently requires removing certain HPC 4-9 spools listed by P/N and SN in the AD. As published, in Table 1...

  7. CD271 regulates the proliferation and motility of hypopharyngeal cancer cells.

    PubMed

    Mochizuki, Mai; Tamai, Keiichi; Imai, Takayuki; Sugawara, Sayuri; Ogama, Naoko; Nakamura, Mao; Matsuura, Kazuto; Yamaguchi, Kazunori; Satoh, Kennichi; Sato, Ikuro; Motohashi, Hozumi; Sugamura, Kazuo; Tanaka, Nobuyuki

    2016-07-29

    CD271 (p75 neurotrophin receptor) plays both positive and negative roles in cancer development, depending on the cell type. We previously reported that CD271 is a marker for tumor initiation and is correlated with a poor prognosis in human hypopharyngeal cancer (HPC). To clarify the role of CD271 in HPC, we established HPC cell lines and knocked down the CD271 expression using siRNA. We found that CD271-knockdown completely suppressed the cells' tumor-forming capability both in vivo and in vitro. CD271-knockdown also induced cell-cycle arrest in G0 and suppressed ERK phosphorylation. While treatment with an ERK inhibitor only partially inhibited cell growth, CDKN1C, which is required for maintenance of quiescence, was strongly upregulated in CD271-depleted HPC cells, and the double knockdown of CD271 and CDKN1C partially rescued the cells from G0 arrest. In addition, either CD271 depletion or the inhibition of CD271-RhoA signaling by TAT-Pep5 diminished the in vitro migration capability of the HPC cells. Collectively, CD271 initiates tumor formation by increasing the cell proliferation capacity through CDKN1C suppression and ERK-signaling activation, and by accelerating the migration signaling pathway in HPC.

  8. CD271 regulates the proliferation and motility of hypopharyngeal cancer cells

    PubMed Central

    Mochizuki, Mai; Tamai, Keiichi; Imai, Takayuki; Sugawara, Sayuri; Ogama, Naoko; Nakamura, Mao; Matsuura, Kazuto; Yamaguchi, Kazunori; Satoh, Kennichi; Sato, Ikuro; Motohashi, Hozumi; Sugamura, Kazuo; Tanaka, Nobuyuki

    2016-01-01

    CD271 (p75 neurotrophin receptor) plays both positive and negative roles in cancer development, depending on the cell type. We previously reported that CD271 is a marker for tumor initiation and is correlated with a poor prognosis in human hypopharyngeal cancer (HPC). To clarify the role of CD271 in HPC, we established HPC cell lines and knocked down the CD271 expression using siRNA. We found that CD271-knockdown completely suppressed the cells’ tumor-forming capability both in vivo and in vitro. CD271-knockdown also induced cell-cycle arrest in G0 and suppressed ERK phosphorylation. While treatment with an ERK inhibitor only partially inhibited cell growth, CDKN1C, which is required for maintenance of quiescence, was strongly upregulated in CD271-depleted HPC cells, and the double knockdown of CD271 and CDKN1C partially rescued the cells from G0 arrest. In addition, either CD271 depletion or the inhibition of CD271-RhoA signaling by TAT-Pep5 diminished the in vitro migration capability of the HPC cells. Collectively, CD271 initiates tumor formation by increasing the cell proliferation capacity through CDKN1C suppression and ERK-signaling activation, and by accelerating the migration signaling pathway in HPC. PMID:27469492

  9. The Effect of Apatinib on the Metabolism of Carvedilol Both in vitro and in vivo.

    PubMed

    Lin, Dan; Wang, Zhe; Li, Junwei; Wang, Li; Wang, Shuanghu; Hu, Guo-Xin; Liu, Xinshe

    2016-01-01

    In light of the growing number of cancer survivors, the incidence of cardiovascular complications in these patients has also increased, yet the effect of apatinib on the pharmacokinetics of the cardioprotective drug carvedilol in rats or humans is still unknown. The present work studied the impact of apatinib on the metabolism of carvedilol both in vitro and in vivo. A specific and sensitive ultra-performance liquid-chromatography tandem mass spectrometry method was applied to determine the concentration of carvedilol and its metabolites (4'-hydroxyphenyl carvedilol [4'-HPC], 5'-hydroxyphenyl carvedilol [5'-HPC] and o-desmethyl carvedilol [o-DMC]). The inhibition ratios in human liver microsomes were 10.28, 10.89 and 5.94% for 4'-HPC, 5'-HPC and o-DMC, respectively, while in rat liver microsomes they were 3.22, 1.58 and 1.81%, respectively. The in vitro data from rat microsomes were consistent with the in vivo finding that the inhibition of 4'-HPC and 5'-HPC formation was higher than in the control group. Our study showed that apatinib could significantly inhibit the formation of carvedilol metabolites in both human and rat liver microsomes. It is recommended that the effect of apatinib on the metabolism of carvedilol should be noted and carvedilol plasma concentration should be monitored. © 2015 S. Karger AG, Basel.

  10. The ventral hippocampus, but not the dorsal hippocampus is critical for learned approach-avoidance decision making.

    PubMed

    Schumacher, Anett; Vlassov, Ekaterina; Ito, Rutsuko

    2016-04-01

    The resolution of an approach-avoidance conflict induced by ambivalent information involves the appraisal of the incentive value of the outcomes and associated stimuli to orchestrate an appropriate behavioral response. Much research has been directed at delineating the neural circuitry underlying approach motivation and avoidance motivation separately. Very little research, however, has examined the neural substrates engaged at the point of decision making when opposing incentive motivations are experienced simultaneously. We hereby examine the role of the dorsal and ventral hippocampus (HPC) in a novel approach-avoidance decision making paradigm, revisiting a once popular theory of HPC function, which posited the HPC to be the driving force of a behavioral inhibition system that is activated in situations of imminent threat. Rats received pre-training excitotoxic lesions of the dorsal or ventral HPC, and were trained to associate different non-spatial cues with appetitive, aversive and neutral outcomes in three separate arms of the radial maze. On the final day of testing, a state of approach-avoidance conflict was induced by simultaneously presenting two cues of opposite valences, and comparing the time the rats spent interacting with the superimposed 'conflict' cue, and the neutral cue. The ventral HPC-lesioned group showed significant preference for the conflict cue over the neutral cue, compared to the dorsal HPC-lesioned, and control groups. Thus, we provide evidence that the ventral, but not dorsal HPC, is a crucial component of the neural circuitry concerned with exerting inhibitory control over approach tendencies under circumstances in which motivational conflict is experienced. © 2015 Wiley Periodicals, Inc.

  11. The Research on Linux Memory Forensics

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Che, ShengBing

    2018-03-01

    Memory forensics is a branch of computer forensics. It does not depend on the operating system API, and instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, this work analyzes system process and thread information from physical memory data. Using ELF file debugging information, we propose a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain the system process information from physical memory data, and is compatible with multiple versions of the Linux kernel.
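
    The abstract gives no code, but the general idea of recovering a structure member's offset from debug information can be sketched with the pyelftools library against a kernel image built with DWARF debug info (for example a vmlinux file). The library choice, file name, and the structure and member names below are illustrative assumptions, not details from the paper.

      # Sketch: find the byte offset of a member inside a kernel structure from
      # DWARF debug info, so the same analysis can track changing kernel layouts.
      from elftools.elf.elffile import ELFFile

      def member_offset(path, struct_name, member_name):
          with open(path, "rb") as f:
              dwarf = ELFFile(f).get_dwarf_info()
              for cu in dwarf.iter_CUs():
                  for die in cu.iter_DIEs():
                      if (die.tag == "DW_TAG_structure_type"
                              and die.attributes.get("DW_AT_name")
                              and die.attributes["DW_AT_name"].value == struct_name.encode()):
                          for child in die.iter_children():
                              attrs = child.attributes
                              if (child.tag == "DW_TAG_member"
                                      and attrs.get("DW_AT_name")
                                      and attrs["DW_AT_name"].value == member_name.encode()):
                                  return attrs["DW_AT_data_member_location"].value
          return None

      print(member_offset("vmlinux", "task_struct", "pid"))  # offset differs between kernel builds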

  12. Downregulation of Calcium-Binding Protein S100A9 Inhibits Hypopharyngeal Cancer Cell Proliferation and Invasion Ability Through Inactivation of NF-κB Signaling.

    PubMed

    Wu, Ping; Quan, Huatao; Kang, Jing; He, Jian; Luo, Shi; Xie, Chubo; Xu, Jing; Tang, Yaoyun; Zhao, Suping

    2017-11-02

    Hypopharyngeal cancer (HPC) frequently presents at an advanced stage and displays early submucosal spread, resulting in a poor prognosis. It is among the worst of all cancers in the head and neck subsites. Therefore, detection of HPC at an earlier stage would be beneficial to patients. In this study, we used differential in-gel electrophoresis (DIGE) and two-dimensional polyacrylamide gel electrophoresis (2-DE) proteomics analysis to identify the potential biomarkers for HPC. Among the differential proteins identified, calcium-binding protein S100A9 was overexpressed in HPC tissues compared with normal adjacent tissues, and S100A9 expression in metastatic tissues and advanced tumor tissues was higher than in nonmetastatic tissues and early tumor tissues. S100A9 expression was further confirmed in a large additional cohort. Our data showed that a higher S100A9 level was associated with a poor prognosis for HPC patients, and this may be an independent factor for predicting their prognosis. In addition, S100A9 protein expression was upregulated in human HPC cell lines compared with normal oral cavity epithelia. Knockdown of S100A9 induced significant inhibition of cell growth and invasive ability. Mechanistically, we found that downregulation of S100A9 significantly reduced the expression of NF-κB, phosphorylation of NF-κB and Bcl-2, as well as the expression of MMP7 and MMP2. Restoration of NF-κB expression sufficiently reversed the inhibitory effects on cell proliferation and invasion induced by S100A9 downregulation in vitro and in vivo. In conclusion, for the first time, we have identified S100A9 as an independent prognostic factor for HPC. Inhibiting S100A9 expression would be a potential novel diagnostic biomarker and therapeutic target for HPC treatment.

  13. Implication of multiple mechanisms in apoptosis induced by the synthetic retinoid CD437 in human prostate carcinoma cells.

    PubMed

    Sun, S Y; Yue, P; Lotan, R

    2000-09-14

    The synthetic retinoid 6-[3-(1-adamantyl)-4-hydroxyphenyl]-2-naphthalene carboxylic acid (CD437) induces apoptosis in several types of cancer cell. CD437 inhibited the growth of both androgen-dependent and -independent human prostate carcinoma (HPC) cells in a concentration-dependent manner by rapid induction of apoptosis. CD437 was more effective in killing androgen-independent HPC cells such as DU145 and PC-3 than the androgen-dependent LNCaP cells. The caspase inhibitors Z-VAD-FMK and Z-DEVD-FMK blocked apoptosis induced by CD437 in DU145 and LNCaP cells, in which increased caspase-3 activity and PARP cleavage were observed, but not in PC-3 cells, in which CD437 did not induce caspase-3 activation and PARP cleavage. Thus, CD437 can induce either caspase-dependent or caspase-independent apoptosis in HPC cells. CD437 increased the expression of c-Myc, c-Jun, c-Fos, and death receptors DR4, DR5 and Fas. CD437's potency in apoptosis induction in the different cell lines was correlated with its effects on the expression of oncogenes and death receptors, thus implicating these genes in CD437-induced apoptosis in HPC cells. However, the importance and contribution of each of these genes in different HPC cell lines may vary. Because CD437 induced the expression of DR4, DR5 and Fas, we examined the effects of combining CD437 and tumor necrosis factor (TNF)-related apoptosis-inducing ligand (TRAIL) and Fas ligand, respectively, in HPC cells. We found synergistic induction of apoptosis, highlighting the importance of the modulation of these death receptors in CD437-induced apoptosis in HPC cells. This result also suggests a potential strategy of using CD437 with TRAIL for treatment of HPC. Oncogene (2000) 19, 4513 - 4522.

  14. Suppression of Neurotoxic Lesion-Induced Seizure Activity: Evidence for a Permanent Role for the Hippocampus in Contextual Memory

    PubMed Central

    Sparks, Fraser T.; Lehmann, Hugo; Hernandez, Khadaryna; Sutherland, Robert J.

    2011-01-01

    Damage to the hippocampus (HPC) using the excitotoxin N-methyl-D-aspartate (NMDA) can cause retrograde amnesia for contextual fear memory. This amnesia is typically attributed to loss of cells in the HPC. However, NMDA is also known to cause intense neuronal discharge (seizure activity) during the hours that follow its injection. These seizures may have detrimental effects on retrieval of memories. Here we evaluate the possibility that retrograde amnesia is due to NMDA-induced seizure activity or cell damage per se. To assess the effects of NMDA-induced activity on contextual memory, we developed a lesion technique that utilizes the neurotoxic effects of NMDA while at the same time suppressing possible associated seizure activity. NMDA and tetrodotoxin (TTX), a sodium channel blocker, are simultaneously infused into the rat HPC, resulting in extensive bilateral damage to the HPC. TTX, co-infused with NMDA, suppresses propagation of seizure activity. Rats received pairings of a novel context with foot shock, after which they received NMDA-induced, TTX+NMDA-induced, or no damage to the HPC at a recent (24 hours) or remote (5 weeks) time point. After recovery, the rats were placed into the shock context and freezing was scored as an index of fear memory. Rats with an intact HPC exhibited robust memory for the aversive context at both time points, whereas rats that received NMDA or NMDA+TTX lesions showed a significant reduction in learned fear of equal magnitude at both the recent and remote time points. Therefore, it is unlikely that the observed retrograde amnesia in contextual fear conditioning is due to disruption of non-HPC networks by propagated seizure activity. Moreover, the memory deficit observed at both time points offers additional evidence supporting the proposition that the HPC has a continuing role in maintaining contextual memories. PMID:22110648

  15. The Nucleus Reuniens Controls Long-Range Hippocampo-Prefrontal Gamma Synchronization during Slow Oscillations.

    PubMed

    Ferraris, Maëva; Ghestem, Antoine; Vicente, Ana F; Nallet-Khosrofian, Lauriane; Bernard, Christophe; Quilichini, Pascale P

    2018-03-21

    Gamma oscillations are involved in long-range coupling of distant regions that support various cognitive operations. Here we show in adult male rats that synchronized bursts of gamma oscillations bind the hippocampus (HPC) and prefrontal cortex (mPFC) during slow oscillations and slow-wave sleep, a brain state that is central for consolidation of memory traces. These gamma bursts entrained the firing of the local HPC and mPFC neuronal populations. Neurons of the nucleus reuniens (NR), which is a structural and functional hub between HPC and mPFC, demonstrated a specific increase in their firing before gamma burst onset, suggesting their involvement in HPC-mPFC binding. Chemical inactivation of NR disrupted the temporal pattern of gamma bursts and their synchronization, as well as mPFC neuronal firing. We propose that the NR drives long-range hippocampo-prefrontal coupling via gamma bursts providing temporal windows for information exchange between the HPC and mPFC during slow-wave sleep. SIGNIFICANCE STATEMENT Long-range coupling between hippocampus (HPC) and prefrontal cortex (mPFC) is believed to support numerous cognitive functions, including memory consolidation occurring during sleep. Gamma-band synchronization is a fundamental process in many neuronal operations and is instrumental in long-range coupling. Recent evidence highlights the role of nucleus reuniens (NR) in consolidation; however, how it influences hippocampo-prefrontal coupling is unknown. In this study, we show that HPC and mPFC are synchronized by gamma bursts during slow oscillations in anesthesia and natural sleep. By manipulating and recording the NR-HPC-mPFC network, we provide evidence that the NR actively promotes this long-range gamma coupling. This coupling provides the hippocampo-prefrontal circuit with a novel mechanism to exchange information during slow-wave sleep. Copyright © 2018 the authors 0270-6474/18/383026-13$15.00/0.

  16. Micromagnetic Code Development of Advanced Magnetic Structures Final Report CRADA No. TC-1561-98

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, Charles J.; Shi, Xizeng

    The specific goals of this project were to: Further develop the previously written micromagnetic code DADIMAG (DOE code release number 980017); Validate the code. The resulting code was expected to be more realistic and useful for simulations of magnetic structures of specific interest to Read-Rite programs. We also planned to further develop the code for use in internal LLNL programs. This project complemented LLNL CRADA TC-840-94 between LLNL and Read-Rite, which allowed for simulations of the advanced magnetic head development completed under the CRADA. TC-1561-98 was effective concurrently with LLNL non-exclusive copyright license (TL-1552-98) to Read-Rite for DADIMAG Version 2 executable code.

  17. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexities. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important constraints and opportunities for solutions deployed at various layers of the system stack. The framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also enables optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
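
    One of the commonly occurring solutions such a pattern catalog covers is checkpoint and restart. The Python sketch below shows the shape of that pattern for a simple iterative computation; the file name, checkpoint interval, and loop body are illustrative assumptions, not material from the report.

      import os, pickle

      CKPT = "state.ckpt"   # illustrative checkpoint file name

      def load_checkpoint():
          if os.path.exists(CKPT):
              with open(CKPT, "rb") as f:
                  return pickle.load(f)
          return {"step": 0, "total": 0.0}       # fresh start when no checkpoint exists

      def save_checkpoint(state):
          tmp = CKPT + ".tmp"
          with open(tmp, "wb") as f:
              pickle.dump(state, f)
          os.replace(tmp, CKPT)                   # atomic rename: a crash never leaves a torn file

      state = load_checkpoint()
      for step in range(state["step"], 1000):
          state["total"] += step * 0.001          # stand-in for real computation
          state["step"] = step + 1
          if state["step"] % 100 == 0:
              save_checkpoint(state)              # bounds the re-work needed after a failure
      print(state["total"])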

  18. The Role of GGAP2 in Prostate Cancer

    DTIC Science & Technology

    2009-03-01

    show that GGAP2 protein expression is increased in HPC in both HPC cell lines and clinical patient samples. Biochemical studies indicate that GGAP2...GTPase domain of GGAP2 and enhance its effects on cancer growth. PC3 cells stably expressing wild-type GGAP2 form larger volume tumors in nude mice...reducing HPC incidence and slow down cancer development.

  19. A Long History of Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  20. Modular HPC I/O characterization with Darshan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Shane; Carns, Philip; Harms, Kevin

    2016-11-13

    Contemporary high-performance computing (HPC) applications encompass a broad range of distinct I/O strategies and are often executed on a number of different compute platforms in their lifetime. These large-scale HPC platforms employ increasingly complex I/O subsystems to provide a suitable level of I/O performance to applications. Tuning I/O workloads for such a system is nontrivial, and the results generally are not portable to other HPC systems. I/O profiling tools can help to address this challenge, but most existing tools only instrument specific components within the I/O subsystem that provide a limited perspective on I/O performance. The increasing diversity of scientific applications and computing platforms calls for greater flexibility and scope in I/O characterization.

  1. RedThreads: An Interface for Application-Level Fault Detection/Correction Through Adaptive Redundant Multithreading

    DOE PAGES

    Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.; ...

    2017-02-11

    In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. Furthermore, the use of complete redundancy incurs significant overhead to the application performance.
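
    To make the redundant multithreading idea concrete, the Python sketch below runs the same computation in two threads and treats a disagreement between the replicas as a detected error. It is an illustration under our own assumptions, not the RedThreads interface itself.

      import threading

      def redundant_call(fn, *args):
          """Run fn twice in parallel threads and compare the results (duplex error detection)."""
          results = [None, None]

          def worker(slot):
              results[slot] = fn(*args)

          threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
          for t in threads:
              t.start()
          for t in threads:
              t.join()
          if results[0] != results[1]:
              raise RuntimeError("silent error detected: replicas disagree")
          return results[0]

      print(redundant_call(sum, range(1_000_000)))   # replicas agree, so the result is returned

    The doubled work is the overhead the abstract refers to, which adaptive schemes try to pay only where errors matter.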

  2. RedThreads: An Interface for Application-Level Fault Detection/Correction Through Adaptive Redundant Multithreading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.

    In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. Furthermore, the use of complete redundancy incurs significant overhead to the application performance.

  3. Real-time data collection in Linux: a case study.

    PubMed

    Finney, S A

    2001-05-01

    Multiuser UNIX-like operating systems such as Linux are often considered unsuitable for real-time data collection because of the potential for indeterminate timing latencies resulting from preemptive scheduling. In this paper, Linux is shown to be fully adequate for precisely controlled programming with millisecond resolution or better. The Linux system calls that subserve such timing control are described and tested and then utilized in a MIDI-based program for tapping and music performance experiments. The timing of this program, including data input and output, is shown to be accurate at the millisecond level. This demonstrates that Linux, with proper programming, is suitable for real-time experiment software. In addition, the detailed description and test of both the operating system facilities and the application program itself may serve as a model for publicly documenting programming methods and software performance on other operating systems.
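
    The specific system calls examined in the paper are not reproduced here, but the flavor of such timing control can be sketched on a present-day Linux system in Python (an assumption for illustration; the original work used C): request a real-time scheduling class, then measure the jitter of a nominal 1 ms sleep against a monotonic clock.

      import os, time

      # Request the SCHED_FIFO real-time class (needs privileges; fall back if denied).
      try:
          os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
      except PermissionError:
          print("no real-time privileges; continuing with the default scheduler")

      # Measure how far a nominal 1 ms sleep overshoots, using a monotonic clock.
      worst = 0.0
      for _ in range(1000):
          t0 = time.monotonic_ns()
          time.sleep(0.001)
          elapsed_ms = (time.monotonic_ns() - t0) / 1e6
          worst = max(worst, elapsed_ms - 1.0)
      print(f"worst overshoot over 1000 sleeps: {worst:.3f} ms")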

  4. International Safeguards Technology and Policy Education and Training Pilot Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreicer, M; Anzelon, G A; Essner, J T

    2009-06-16

    A major focus of the National Nuclear Security Administration-led Next Generation Safeguards Initiative (NGSI) is the development of human capital to meet present and future challenges to the safeguards regime. An effective university-level education in safeguards and related disciplines is an essential element in a layered strategy to rebuild the safeguards human resource capacity. NNSA launched two pilot programs in 2008 to develop university level courses and internships in association with the James Martin Center for Nonproliferation Studies (CNS) at the Monterey Institute of International Studies (MIIS) and Texas A&M University (TAMU). These pilot efforts involved 44 students in total and were closely linked to hands-on internships at Los Alamos National Laboratory (LANL) and Lawrence Livermore National Laboratory (LLNL). The Safeguards and Nuclear Material Management pilot program was a collaboration between TAMU, LANL, and LLNL. The LANL-based coursework was shared with the students undertaking internships at LLNL via video teleconferencing. A weeklong hands-on exercise was also conducted at LANL. A second pilot effort, the International Nuclear Safeguards Policy and Information Analysis pilot program, was implemented at MIIS in cooperation with LLNL. Speakers from MIIS, LLNL, and other U.S. national laboratories (LANL, BNL) delivered lectures for the audience of 16 students. The majority of students were senior classmen or new master's degree graduates from MIIS specializing in nonproliferation policy studies. The two pilot programs concluded with an NGSI Summer Student Symposium, held at LLNL, where 20 students participated in LLNL facility tours and poster sessions. The value of bringing together the students from the technical and policy pilots was notable and will factor into the planning for the continued refinement of the two programs in the coming years.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peck, T; Sparkman, D; Storch, N

    ''The LLNL Site-Specific Advanced Simulation and Computing (ASCI) Software Quality Engineering Recommended Practices VI.I'' document describes a set of recommended software quality engineering (SQE) practices for ASCI code projects at Lawrence Livermore National Laboratory (LLNL). In this context, SQE is defined as the process of building quality into software products by applying the appropriate guiding principles and management practices. Continual code improvement and ongoing process improvement are expected benefits. Certain practices are recommended, although projects may select the specific activities they wish to improve, and the appropriate time lines for such actions. Additionally, projects can rely on the guidance of this document when generating ASCI Verification and Validation (V&V) deliverables. ASCI program managers will gather information about their software engineering practices and improvement. This information can be shared to leverage the best SQE practices among development organizations. It will further be used to ensure the currency and vitality of the recommended practices. This Overview is intended to provide basic information to the LLNL ASCI software management and development staff from the ''LLNL Site-Specific ASCI Software Quality Engineering Recommended Practices VI.I'' document. Additionally, the Overview provides steps to using the ''LLNL Site-Specific ASCI Software Quality Engineering Recommended Practices VI.I'' document. For definitions of terminology and acronyms, refer to the Glossary and Acronyms sections in the ''LLNL Site-Specific ASCI Software Quality Engineering Recommended Practices VI.I''.

  6. IGPP 1999-2000 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryerson, F J; Cook, K; Hitchcock, B

    2003-01-27

    The Institute of Geophysics and Planetary Physics (IGPP) is a Multicampus Research Unit of the University of California (UC). IGPP was founded in 1946 at UC Los Angeles with a charter to further research in the earth and planetary sciences and related fields. The Institute now has branches at UC campuses in Irvine, Los Angeles, San Diego, Santa Cruz and Riverside, and at Los Alamos National Laboratory and Lawrence Livermore National Laboratory. The University-wide IGPP has played an important role in establishing interdisciplinary research in the earth and planetary sciences. For example, IGPP was instrumental in founding the fields of physical oceanography and space physics, which at the time fell between the cracks of established university departments. Because of its multicampus orientation, IGPP has sponsored important inter-institutional consortia in the earth and planetary sciences. Each of the seven branches has a somewhat different intellectual emphasis as a result of the interplay between strengths of campus departments and Laboratory programs. The IGPP branch at Lawrence Livermore National Laboratory (LLNL) was approved by the Regents of the University of California in 1982. IGPP-LLNL emphasizes research in tectonics, geochemistry, and astrophysics. It provides a venue for studying the fundamental aspects of these fields, thereby complementing LLNL programs that pursue applications of these disciplines in national security and energy research. IGPP-LLNL was directed by Charles Alcock during this period and was originally organized into three centers: Geosciences, stressing seismology; High-Pressure Physics, stressing experiments using the two-stage light-gas gun at LLNL; and Astrophysics, stressing theoretical and computational astrophysics. In 1994, the activities of the Center for High-Pressure Physics were merged with those of the Center for Geosciences. The Center for Geosciences, headed by Frederick Ryerson, focuses on research in geophysics and geochemistry. The Astrophysics Research Center, headed by Kem Cook, provides a home for theoretical and observational astrophysics and serves as an interface with the Physics Directorate's astrophysics efforts. At the end of the period covered by this report, Alcock left for the University of Pennsylvania. Cook became Acting Director of IGPP, and the Physics Directorate merged with portions of the old Lasers Directorate to become Physics and Advanced Technologies. Energy Programs and Earth and Environmental Sciences Directorate became Energy and Environment Sciences Directorate. The IGPP branch at LLNL (as well as the branch at Los Alamos) also facilitates scientific collaborations between researchers at the UC campuses and those at the national laboratories in areas related to earth science, planetary science, and astrophysics. It does this by sponsoring the University Collaborative Research Program (UCRP), which provides funds to UC campus scientists for joint research projects with LLNL. Additional information regarding IGPP-LLNL projects and people may be found at http://wwwigpp.llnl.gov/. The goals of the UCRP are to enrich research opportunities for UC campus scientists by making available to them some of LLNL's unique facilities and expertise, and to broaden the scientific program at LLNL through collaborative or interdisciplinary work with UC campus researchers.
UCRP funds (provided jointly by the Regents of the University of California and by the Director of LLNL) are awarded annually on the basis of brief proposals, which are reviewed by a committee of scientists from UC campuses, LLNL programs, and external universities and research organizations. Typical annual funding for a collaborative research project ranges from $5,000 to $30,000. Funds are used for a variety of purposes, such as salary support for UC graduate students and postdoctoral fellows, and costs for experimental facilities. A statistical overview of IGPP-LLNL's UCRP (colloquially known as the mini-grant program) is presented in Figures 1 and 2. Figure 1 shows the distribution of UCRP awards among the UC campuses, by total amount awarded and by number of proposals funded. Figure 2 shows the distribution of awards by center. Although the permanent LLNL staff assigned to IGPP is relatively small (presently about 8 full-time equivalents), IGPP's research centers have become vital research organizations. This growth has been possible because of IGPP support for a substantial group of resident postdoctoral fellows; because of the 20 or more UCRP projects funded each year; and because IGPP hosts a variety of visitors, guests, and faculty members (from both UC and other institutions). To focus attention on areas of topical interest in the geosciences and astrophysics, IGPP-LLNL hosts conferences and workshops and also organizes seminars in astrophysics and geosciences.

  7. Advanced Design Concepts for Dense Plasma Focus Devices at LLNL

    NASA Astrophysics Data System (ADS)

    Povilus, Alexander; Podpaly, Yuri; Cooper, Christopher; Shaw, Brian; Chapman, Steve; Mitrani, James; Anderson, Michael; Pearson, Aric; Anaya, Enrique; Koh, Ed; Falabella, Steve; Link, Tony; Schmidt, Andrea

    2017-10-01

    The dense plasma focus (DPF) is a z-pinch device where a plasma sheath is accelerated down a coaxial railgun and ends in a radial implosion, the pinch phase. During the pinch phase, the plasma generates intense, transient electric fields through physical mechanisms, similar to beam instabilities, that can accelerate ions in the plasma sheath to MeV-scale energies on millimeter length scales. Using kinetic modeling techniques developed at LLNL, we have gained insight into the formation of these accelerating fields and are using these observations to optimize the behavior of the generated ion beam for producing neutrons via beam-target interactions for kilojoule to megajoule-scale devices. Using a set of DPFs, both in operation and in development at LLNL, we have explored critical aspects of these devices, including plasma sheath formation behavior, power delivery to the plasma, and instability seeding during the implosion in order to improve the absolute yield and stability of the device. Prepared by LLNL under Contract DE-AC52-07NA27344. Computing support for this work came from the LLNL Institutional Computing Grand Challenge program.

  8. Seismic Analysis Code (SAC): Development, porting, and maintenance within a legacy code base

    NASA Astrophysics Data System (ADS)

    Savage, B.; Snoke, J. A.

    2017-12-01

    The Seismic Analysis Code (SAC) is the result of the toil of many developers over almost a 40-year history. Initially a Fortran-based code, it has undergone major transitions in underlying bit size from 16 to 32, in the 1980s, and 32 to 64 in 2009; as well as a change in language from Fortran to C in the late 1990s. Maintenance of SAC, the program and its associated libraries, has tracked changes in hardware and operating systems including the advent of Linux in the early 1990s, the emergence and demise of Sun/Solaris, variants of OSX processors (PowerPC and x86), and Windows (Cygwin). Traces of these systems are still visible in source code and associated comments. A major concern while improving and maintaining a routinely used, legacy code is a fear of introducing bugs or inadvertently removing favorite features of long-time users. Prior to 2004, SAC was maintained and distributed by LLNL (Lawrence Livermore National Lab). In that year, the license was transferred from LLNL to IRIS (Incorporated Research Institutions for Seismology), but the license is not open source. However, there have been thousands of downloads a year of the package, either source code or binaries for specific systems. Starting in 2004, the co-authors have maintained the SAC package for IRIS. In our updates, we fixed bugs, incorporated newly introduced seismic analysis procedures (such as EVALRESP), added new, accessible features (plotting and parsing), and improved the documentation (now in HTML and PDF formats). Moreover, we have added modern software engineering practices to the development of SAC including use of recent source control systems, high-level tests, and scripted, virtualized environments for rapid testing and building. Finally, a "sac-help" listserv (administered by IRIS) was set up for SAC-related issues and is the primary avenue for users seeking advice and reporting bugs. Attempts are always made to respond to issues and bugs in a timely fashion. For the past thirty-plus years, SAC files have contained a fixed-length header. Time and distance-related values are stored in single precision, which has become a problem with the increase in desired precision for data compared to thirty years ago. A future goal is to address this precision problem, but in a backward compatible manner. We would also like to transition SAC to a more open source license.
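
    As a rough illustration of the single-precision header issue (our own example, not a figure from the abstract), a time offset on the order of a day stored in a 32-bit float cannot preserve millisecond detail:

      import numpy as np

      t = 100000.001                 # seconds: a long time offset carrying 1 ms of detail
      t32 = np.float32(t)            # how a single-precision header field would store it
      print(float(t32))              # 100000.0 -- the millisecond is gone
      print(abs(float(t32) - t))     # the rounding error, about 1 ms here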

  9. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Dunlap, C; Garlick, J

    2002-07-08

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.
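
    As a small illustration of how such a resource manager is driven from user code, the Python sketch below launches a parallel command and lists the caller's jobs using present-day SLURM command-line tools (srun and squeue); the node and task counts are arbitrary and the example is not drawn from the 2002 paper.

      import os, subprocess

      # Launch 4 tasks across 2 nodes under SLURM and record which hosts ran them.
      launch = subprocess.run(
          ["srun", "-N", "2", "-n", "4", "hostname"],
          capture_output=True, text=True, check=True,
      )
      print(launch.stdout)

      # List this user's pending and running jobs.
      queue = subprocess.run(
          ["squeue", "-u", os.environ.get("USER", "")],
          capture_output=True, text=True,
      )
      print(queue.stdout)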

  10. Bi-Directional Theta Modulation between the Septo-Hippocampal System and the Mammillary Area in Free-Moving Rats

    PubMed Central

    Ruan, Ming; Young, Calvin K.; McNaughton, Neil

    2017-01-01

    Hippocampal (HPC) theta oscillations have long been linked to various functions of the brain. Many cortical and subcortical areas that also exhibit theta oscillations have been linked to functional circuits with the hippocampus on the basis of coupled activities at theta frequencies. We examine, in freely moving rats, the characteristics of diencephalic theta local field potentials (LFPs) recorded in the supramammillary/mammillary (SuM/MM) areas that are bi-directionally connected to the HPC through the septal complex. Using partial directed coherence (PDC), we find support for previous suggestions that SuM modulates HPC theta at higher frequencies. We find weak separation of SuM and MM by dominant theta frequency recorded locally. Contrary to oscillatory cell activities under anesthesia where SuM is insensitive, but MM is sensitive to medial septal (MS) inactivation, theta LFPs persisted and became indistinguishable after MS-inactivation. However, MS-inactivation attenuated SuM/MM theta power, while increasing the frequency of SuM/MM theta. MS-inactivation also reduced root mean squared power in both HPC and SuM/MM equally, but reduced theta power differentially in the time domain. We provide converging evidence that SuM is preferentially involved in coding HPC theta at higher frequencies, and that the MS-HPC circuit normally imposes a frequency-limiting modulation over the SuM/MM area as suggested by cell-based recordings in anesthetized animals. In addition, we provide evidence that the postulated SuM-MS-HPC-MM circuit is under complex bi-directional control, rather than SuM and MM having roles as unidirectional relays in the network. PMID:28955209

  11. Beta-blockade prevents hematopoietic progenitor cell suppression after hemorrhagic shock.

    PubMed

    Elhassan, Ihab O; Hannoush, Edward J; Sifri, Ziad C; Jones, Eyone; Alzate, Walter D; Rameshwar, Pranela; Livingston, David H; Mohr, Alicia M

    2011-08-01

    Severe injury is accompanied by sympathetic stimulation that induces bone marrow (BM) dysfunction by both suppression of hematopoietic progenitor cell (HPC) growth and loss of cells via HPC mobilization to the peripheral circulation and sites of injury. Previous work demonstrated that beta-blockade (BB) given prior to tissue injury both reduces HPC mobilization and restores HPC colony growth within the BM. This study examined the effect and timing of BB on BM function in a hemorrhagic shock (HS) model. Male Sprague-Dawley rats underwent HS via blood withdrawal, maintaining the mean arterial blood pressure at 30-40 mm Hg for 45 min, after which the extracted blood was reinfused. Propranolol (10 mg/kg) was given either prior to or immediately after HS. Blood pressure, heart rate, BM cellularity, and death were recorded. Bone marrow HPC growth was assessed by counting colony-forming unit-granulocyte-, erythrocyte-, monocyte-, megakaryocyte (CFU-GEMM), burst-forming unit-erythroid (BFU-E), and colony-forming unit-erythroid (CFU-E) cells. Administration of BB prior to injury restored HPC growth to that of naïve animals (CFU-GEMM 59 ± 11 vs. 61 ± 4, BFU-E 68 ± 9 vs. 73 ± 3, and CFU-E 81 ± 35 vs. 78 ± 14 colonies/plate). Beta-blockade given after HS increased the growth of CFU-GEMM, BFU-E, and CFU-E significantly and improved BM cellularity compared with HS alone. The mortality rate was not increased in the groups receiving BB. Administration of propranolol either prior to injury or immediately after resuscitation significantly reduced post-shock BM suppression. After HS, BB may improve BM cellularity by decreasing HPC mobilization. Therefore, the early use of BB post-injury may play an important role in attenuating the BM dysfunction accompanying HS.

  12. Identification and characterization of haemagglutinin epitopes of Avibacterium paragallinarum serovar C.

    PubMed

    Noro, Taichi; Oishi, Eiji; Kaneshige, Takahiro; Yaguchi, Kazuhiko; Amimoto, Katsuhiko; Shimizu, Mitsugu

    2008-10-15

    The objectives of this study were to identify haemagglutinin (HA) epitopes of Avibacterium paragallinarum serovar C that are capable of eliciting haemagglutination inhibition (HI) antibody, and to investigate their immunogenic role. Three conformational epitopes were detected on HA by blocking ELISA and immuno-dot blot analysis using a panel of five monoclonal antibodies (MAbs) with HI activity, designated 8C1C, 4G8B, 24E4D, 11E11B, and 10D1A. The minimum DNA regions coding these three epitopes were 3195, 2862, and 807 bp in size, and mapped within a gene of 6117 bp. Nine DNA fragments of various lengths were prepared, and their recombinant proteins were generated in E. coli. One recombinant protein, designated HPC5.5, was recognized by MAb 8C1C, and had strong ability to adsorb HI antibody to Av. paragallinarum serovar C. Other recombinant proteins designated HPC5.1, HPC4.8, and HPC2.5 did not react with MAb 8C1C and only slightly adsorbed HI antibody. All chickens immunized once with HPC5.5 did not show any typical clinical signs such as nasal discharge or facial edema against challenge inoculation with Av. paragallinarum serovar C. However, HPC5.1, which was recognized by four MAbs (not including MAb 8C1C), showed only partial protective immunity in five of eight immunized chickens. The results suggest that the HA epitope recognized by MAb 8C1C is the major epitope responsible for eliciting HI antibody, and HPC5.5 is a practical candidate protein to develop a new vaccine against avian infectious coryza caused by Av. paragallinarum serovar C.

  13. Genetic heterogeneity in Finnish hereditary prostate cancer using ordered subset analysis

    PubMed Central

    Simpson, Claire L; Cropp, Cheryl D; Wahlfors, Tiina; George, Asha; Jones, MaryPat S; Harper, Ursula; Ponciano-Jackson, Damaris; Tammela, Teuvo; Schleutker, Johanna; Bailey-Wilson, Joan E

    2013-01-01

    Prostate cancer (PrCa) is the most common male cancer in developed countries and the second most common cause of cancer death after lung cancer. We recently reported a genome-wide linkage scan in 69 Finnish hereditary PrCa (HPC) families, which replicated the HPC9 locus on 17q21-q22 and identified a locus on 2q37. The aim of this study was to identify additional loci linked to HPC. Here we used ordered subset analysis (OSA), conditioned on nonparametric linkage to these loci, to detect loci linked to HPC in subsets of families but not in the overall sample. We analyzed the families based on their evidence for linkage to chromosome 2, chromosome 17 and a maximum score using the strongest evidence of linkage from either of the two loci. Significant linkage to a 5-cM linkage interval with a peak OSA nonparametric allele-sharing LOD score of 4.876 on Xq26.3-q27 (ΔLOD=3.193, empirical P=0.009) was observed in a subset of 41 families weakly linked to 2q37, overlapping the HPCX1 locus. Two peaks that were novel to the analysis combining linkage evidence from both primary loci were identified: 18q12.1-q12.2 (OSA LOD=2.541, ΔLOD=1.651, P=0.03) and 22q11.1-q11.21 (OSA LOD=2.395, ΔLOD=2.36, P=0.006), which is close to HPC6. Using OSA allows us to find additional loci linked to HPC in subsets of families, and underlines the complex genetic heterogeneity of HPC even in highly aggregated families. PMID:22948022

  14. An Innovative Approach to Bridge a Skill Gap and Grow a Workforce Pipeline: The Computer System, Cluster, and Networking Summer Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie

    Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long-standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.

  15. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.

    The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
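
    A minimal version of the graph-synthesis step described above might look like the following Python sketch, which links jobs that shared at least one compute node and weights the edge by the number of shared nodes. The record fields, the choice of relationship, and the use of the networkx library are illustrative assumptions, not details of the report's implementation.

      import itertools
      import networkx as nx

      # Toy job records: which compute nodes each job ran on, and who submitted it.
      jobs = {
          "job_a": {"nodes": {"n01", "n02"}, "user": "alice"},
          "job_b": {"nodes": {"n02", "n03"}, "user": "bob"},
          "job_c": {"nodes": {"n07"},        "user": "alice"},
      }

      g = nx.Graph()
      for name, rec in jobs.items():
          g.add_node(name, user=rec["user"])

      # Connect jobs that shared a compute node; weight = number of shared nodes.
      for (a, ra), (b, rb) in itertools.combinations(jobs.items(), 2):
          shared = ra["nodes"] & rb["nodes"]
          if shared:
              g.add_edge(a, b, weight=len(shared))

      print(list(g.edges(data=True)))   # [('job_a', 'job_b', {'weight': 1})]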

  16. Halobacterium piscisalsi sp. nov., from fermented fish (pla-ra) in Thailand.

    PubMed

    Yachai, Mongkol; Tanasupawat, Somboon; Itoh, Takashi; Benjakul, Soottawat; Visessanguan, Wonnop; Valyasevi, Ruud

    2008-09-01

    A Gram-negative, motile, rod-shaped, extremely halophilic archaeon, designated strain HPC1-2(T), was isolated from pla-ra, a salt-fermented fish product of Thailand. Strain HPC1-2(T) was able to grow at 20-60 degrees C (optimum at 37-40 degrees C), at 2.6-5.1 M NaCl (optimum at 3.4-4.3 M NaCl) and at pH 5.0-8.0 (optimum at pH 7.0-7.5). Hypotonic treatment with less than 1.7 M NaCl caused cell lysis. The major polar lipids of the isolate were C(20)C(20) derivatives of phosphatidylglycerol, methylated phosphatidylglycerol phosphate, phosphatidylglycerol sulfate, triglycosyl diether, sulfated triglycosyl diether and sulfated tetraglycosyl diether. The G+C content of the DNA was 65.5 mol%. 16S rRNA gene sequence analysis indicated that the isolate represented a member of the genus Halobacterium in the family Halobacteriaceae. Based on 16S rRNA gene sequence similarity, strain HPC1-2(T) was related most closely to Halobacterium salinarum DSM 3754(T) (99.2%) and Halobacterium jilantaiense JCM 13558(T) (97.8%). However, low levels of DNA-DNA relatedness suggested that strain HPC1-2(T) was genotypically different from these closely related type strains. Strain HPC1-2(T) could also be differentiated based on physiological and biochemical characteristics. Therefore, strain HPC1-2(T) is considered to represent a novel species of the genus Halobacterium, for which the name Halobacterium piscisalsi sp. nov. is proposed. The type strain is HPC1-2(T) (=BCC 24372(T)=JCM 14661(T)=PCU 302(T)).

  17. A novel hematopoietic progenitor cell mobilization regimen, utilizing bortezomib and filgrastim, for patients undergoing autologous transplant.

    PubMed

    Abhyankar, Sunil; Lubanski, Philip; DeJarnette, Shaun; Merkel, Dean; Bunch, Jennifer; Daniels, Kelly; Aljitawi, Omar; Lin, Tara; Ganguly, Sid; McGuirk, Joseph

    2016-12-01

    Adequate hematopoietic progenitor cell (HPC) collection is critical for patients undergoing autologous HPC transplant (AHPCT). Historically, 15-30% of patients failed HPC mobilization with granulocyte-colony stimulating factor (G-CSF) alone. Bortezomib, a proteasome inhibitor, has been shown to downregulate very late antigen-4 (VLA-4), an adhesion molecule expressed on HPCs. In this pilot study, bortezomib was administered on days -11 and -8 at a dose of 1.3 mg/m(2) intravenously (IV) or subcutaneously (SQ), followed by G-CSF 10 mcg/kg SQ, on days -4 to -1 prior to HPC collection (Day 1). Nineteen patients, with multiple myeloma (n = 12) or non-Hodgkin lymphoma (n = 7) undergoing AHPCT for the first time, were enrolled. Patients were excluded if they had worse than grade II neuropathy or a platelet count less than 100 x 10(9)/L. Bortezomib was well tolerated and all patients had adequate HPC collections with no mobilization failures. One patient (6%) had a CD34+ cell count of 3.9 cells/µL on Day 1 and received plerixafor per institutional algorithm. Eleven patients completed HPC collection in 1 day and eight in 2 days. All patients underwent AHPCT and had timely neutrophil and platelet engraftment. Comparison with a historical control group of 70 MM and lymphoma patients, who were mobilized with G-CSF, showed significantly higher CD34+ cells/kg collected in the bortezomib mobilization study group. Bortezomib plus G-CSF is an effective HPC mobilizing regimen worth investigating further in subsequent studies. J. Clin. Apheresis 31:559-563, 2016. © 2015 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.

  18. Characterization of CD34+ hematopoietic cells in systemic mastocytosis: Potential role in disease dissemination.

    PubMed

    Mayado, A; Teodosio, C; Dasilva-Freire, N; Jara-Acevedo, M; Garcia-Montero, A C; Álvarez-Twose, I; Sánchez-Muñoz, L; Matito, A; Caldas, C; Muñoz-González, J I; Henriques, A; Sánchez-Gallego, J I; Escribano, L; Orfao, A

    2018-01-13

    Recent studies show that most systemic mastocytosis (SM) patients, including indolent SM (ISM) with (ISMs+) and without skin lesions (ISMs-), carry the KIT D816V mutation in PB leukocytes. We investigated the potential association between the degree of involvement of BM hematopoiesis by the KIT D816V mutation and the distribution of different maturation-associated compartments of bone marrow (BM) and peripheral blood (PB) CD34+ hematopoietic precursors (HPC) in ISM and identified the specific PB cell compartments that carry this mutation. The distribution of different maturation-associated subsets of BM and PB CD34+ HPC from 64 newly diagnosed (KIT-mutated) ISM patients and 14 healthy controls was analyzed by flow cytometry. In 18 patients, distinct FACS-purified PB cell compartments were also investigated for the KIT mutation. ISM patients showed higher percentages of both BM and PB MC-committed CD34+ HPC vs controls, particularly among ISM cases with MC-restricted KIT mutation (ISM(MC)); this was associated with progressive blockade of maturation of CD34+ HPC to the neutrophil lineage from ISM(MC) to multilineage KIT-mutated cases (ISM(ML)). Regarding the frequency of KIT-mutated cases and cell populations in PB, variable patterns were observed, the percentage of KIT-mutated PB CD34+ HPC, eosinophils, neutrophils, monocytes and T cells increasing from ISMs-(MC) and ISMs+(MC) to ISM(ML) patients. The presence of the KIT D816V mutation in PB of ISM patients is associated with (early) involvement of circulating CD34+ HPC and multiple myeloid cell subpopulations, KIT-mutated PB CD34+ HPC potentially contributing to early dissemination of the disease. © 2018 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.

  19. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    PubMed Central

    Pinthong, Watthanai; Muangruen, Panya

    2016-01-01

    Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high performance computer (HPC) is required for the analysis and interpretation. However, an HPC is expensive and difficult to access. Other means, such as cloud computing services and grid computing systems, were developed to allow researchers to acquire the power of an HPC without the need to purchase and maintain one. In this study, we implemented grid computing in a computer training center environment using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize an HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system. The results and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, the HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software. PMID:27547555
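
    The timing comparison above treats the pipeline as a black box; as a purely illustrative sketch (not the authors' code, and with hypothetical file names), the Python below shows the kind of preprocessing such a grid relies on: a large FASTA query file is cut into fixed-size work units so that each unit can be aligned independently (e.g., by BLAST) on a different desktop node and the per-unit outputs merged afterwards.

      # Illustrative sketch: split a FASTA query file into fixed-size work units
      # that can be aligned independently on separate grid nodes. Names are hypothetical.
      from pathlib import Path

      def split_fasta(path, reads_per_unit=100_000, out_dir="work_units"):
          """Write successive chunks of reads_per_unit FASTA records to separate files."""
          Path(out_dir).mkdir(exist_ok=True)
          unit = count = 0
          out = None
          with open(path) as fh:
              for line in fh:
                  if line.startswith(">"):                 # header line starts a new record
                      if count % reads_per_unit == 0:      # time to start a new work unit
                          if out is not None:
                              out.close()
                          out = open(Path(out_dir) / f"unit_{unit:05d}.fa", "w")
                          unit += 1
                      count += 1
                  out.write(line)
          if out is not None:
              out.close()
          return unit

      if __name__ == "__main__":
          print(f"created {split_fasta('queries.fa')} work units")   # hypothetical input file

    Each work unit would then be handed to the job distributor (BOINC in the study) and dispatched to an idle desktop during off-hours; because the BLAST output for each unit is an ordinary text file, the partial results can simply be concatenated once all units return.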

  20. An Innovative Approach to Bridge a Skill Gap and Grow a Workforce Pipeline: The Computer System, Cluster, and Networking Summer Institute

    DOE PAGES

    Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie; ...

    2016-11-01

    Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long-standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.

  1. Effect of hydroxypropylcellulose and Tween 80 on physicochemical properties and bioavailability of ezetimibe-loaded solid dispersion.

    PubMed

    Rashid, Rehmana; Kim, Dong Wuk; Din, Fakhar Ud; Mustapha, Omer; Yousaf, Abid Mehmood; Park, Jong Hyuck; Kim, Jong Oh; Yong, Chul Soon; Choi, Han-Gon

    2015-10-05

    The purpose of this research was to evaluate the effect of HPC (hydroxypropylcellulose) and Tween 80 on the physicochemical properties and oral bioavailability of ezetimibe-loaded solid dispersions. The binary solid dispersions were prepared with drug and various amounts of HPC. Likewise, ternary solid dispersions were prepared with different ratios of drug, HPC and Tween 80. Both types of solid dispersions were prepared using the solvent evaporation method. Their aqueous solubility, physicochemical properties, dissolution and oral bioavailability were investigated in comparison with the drug powder. All the solid dispersions significantly improved the drug solubility and dissolution. As the amount of HPC in the binary solid dispersions increased to 10-fold, the drug solubility and dissolution increased accordingly. However, further increases in HPC did not result in significant differences among them. Similarly, up to 0.1-fold, Tween 80 increased the drug solubility in the ternary solid dispersions, followed by no significant change. However, Tween 80 hardly affected the drug dissolution. The physicochemical analysis proved that the drug in the binary and ternary solid dispersions existed in the amorphous form. The particle-size measurements of these formulations were also not significantly different from each other, which showed that Tween 80 had no impact on physicochemical properties. The ezetimibe-loaded binary and ternary solid dispersions gave 1.6- and 1.8-fold increased oral bioavailability in rats, respectively, as compared to the drug powder; however, these values were not significantly different from each other. Thus, HPC greatly affected the solubility, dissolution and oral bioavailability of the drug, but Tween 80 hardly did. Furthermore, this ezetimibe-loaded binary solid dispersion prepared only with HPC is suggested as a potential formulation for oral administration of ezetimibe. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Quality investigation of hydroxyprogesterone caproate active pharmaceutical ingredient and injection

    PubMed Central

    Chollet, John L.; Jozwiakowski, Michael J.

    2012-01-01

    The purpose of this study was to investigate the quality of hydroxyprogesterone caproate (HPC) active pharmaceutical ingredient (API) sources that may be used by compounding pharmacies, compared to the FDA-approved source of the API; and to investigate the quality of HPC injection samples obtained from compounding pharmacies in the US, compared to the FDA-approved product (Makena®). Samples of API were obtained from every source confirmed to be an original manufacturer of the drug for human use, which were all companies in China that were not registered with FDA. Eight of the ten API samples (80%) did not meet the impurity specifications required by FDA for the API used in the approved product. One API sample was found to not be HPC at all; additional laboratory testing showed that it was glucose. Thirty samples of HPC injection obtained from compounding pharmacies throughout the US were also tested, and eight of these samples (27%) failed to meet the potency requirement listed in the USP monograph for HPC injection and/or the HPLC assay. Sixteen of the thirty injection samples (53%) exceeded the impurity limit set for the FDA-approved drug product. These results confirm the inconsistency of compounded HPC injections and suggest that the risk-benefit ratio of using an unapproved compounded preparation, when an FDA-approved drug product is available, is not favorable. PMID:22329865

  3. Influence of different types of low substituted hydroxypropyl cellulose on tableting, disintegration, and floating behaviour of floating drug delivery systems

    PubMed Central

    Diós, Péter; Pernecker, Tivadar; Nagy, Sándor; Pál, Szilárd; Dévay, Attila

    2014-01-01

    The objective of the present study was to evaluate the effect of applying low-substituted hydroxypropyl cellulose (L-HPC) 11 and B1 as excipients promoting floating in gastroretentive tablets. Directly compressed tablets were formed based on an experimental design. A face-centred central composite design was applied with two factors and three levels, where the amounts of sodium alginate (X1) and L-HPC (X2) were the numerical factors. The applied types of L-HPC and their 1:1 mixture were included in a categorical factor (X3). Studied parameters were floating lag time, floating time, floating force, swelling behaviour of the tablets and dissolution of paracetamol, which was used as a model active substance. Due to their physical character, the L-HPCs had different water uptake and flowability. Lower flowability and lower water uptake were observed after 60 min for L-HPC 11 compared to L-HPC B1. Shorter floating times were detected for L-HPC 11 and the L-HPC mixtures with 0.5% content of sodium alginate, whereas alginate was the only significant factor. Evaluating the results of the drug release and swelling studies on floating tablets revealed a correlation, which can help in understanding the mechanism of action of L-HPCs in the development of gastroretentive dosage forms. PMID:26702261

  4. FY 2008 Next Generation Safeguards Initiative International Safeguards Education and Training Pilot Programs Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreicer, M; Anzelon, G; Essner, J

    2008-10-17

    A key component of the Next Generation Safeguards Initiative (NGSI) launched by the National Nuclear Security Administration is the development of human capital to meet present and future challenges to the safeguards regime. An effective university-level education in safeguards and related disciplines is an essential element in a layered strategy to rebuild the safeguards human resource capacity. Two pilot programs at the university level, involving 44 students, were initiated and implemented in spring-summer 2008 and linked to hands-on internships at LANL or LLNL. During the internships, students worked on specific safeguards-related projects with a designated Laboratory Mentor to provide broader exposure to nuclear materials management and information analytical techniques. The Safeguards and Nuclear Material Management pilot program was a collaboration between Texas A&M University (TAMU), Los Alamos National Laboratory (LANL) and Lawrence Livermore National Laboratory (LLNL). It included a 16-lecture course held during a summer internship program. The instructors for the course were from LANL together with TAMU faculty and LLNL experts. The LANL-based course was shared with the students spending their internship at LLNL via video conference. A week-long table-top (or hands-on) exercise was also conducted at LANL. The student population was a mix of 28 students from 12 universities participating in a variety of summer internship programs held at LANL and LLNL. A large portion of the students were TAMU students participating in the NGSI pilot. The International Nuclear Safeguards Policy and Information Analysis pilot program was implemented at the Monterey Institute for International Studies (MIIS) in cooperation with LLNL. It included a two-week intensive course consisting of 20 lectures and two exercises. MIIS, LLNL, and speakers from other U.S. national laboratories (LANL, BNL) delivered lectures for the audience of 16 students. The majority of students were seniors or new master's degree graduates from MIIS specializing in nonproliferation policy studies. Other universities and organizations represented were the University of California, Los Angeles; Stanford University; and the IAEA. Four of the students that completed this intensive course participated in a 2-month internship at LLNL. The two pilot courses and internships concluded with an NGSI Summer Student Symposium, held at LLNL, where 20 students participated in LLNL facility tours and poster sessions. The poster sessions were designed to provide a forum for sharing the results of their summer projects and providing experience in presenting their work to a varied audience of students, faculty and laboratory staff. The success of bringing together the students from the technical and policy pilots was notable and will factor into the planning for the continued refinement of the two pilot efforts in the coming years.

  5. A case of metastatic haemangiopericytoma to the thyroid gland: Case report and literature review

    PubMed Central

    PROIETTI, AGNESE; SARTORI, CHIARA; TORREGROSSA, LIBORIO; VITTI, PAOLO; AGHABABYAN, ALEKSANDR; FREGOLI, LORENZO; MICCOLI, PAOLO; BASOLO, FULVIO

    2012-01-01

    Haemangiopericytoma (HPC) is a mesenchymal neoplasm accounting for a minority of all vascular tumours. HPC mostly arises in the lower extremities and the retroperitoneum, while the head and neck area is the third most common site. The majority of HPCs are histologically benign. However, a small percentage possess atypical features, such as a high mitotic rate, high cellularity and foci of necrosis. We report a case of classical abdominal HPC that presented 7 years after the first surgical resection with thyroid metastases of malignant HPC. Microscopic examination revealed multiple hypercellular nodules with an infiltrative growth pattern. These nodules consisted of tightly packed fusiform or spindle-shaped cells with nuclear polymorphism and an increased mitotic rate. The tumour cells exhibited a marked expression of CD34. Cells were arranged around a prominent vascular network, occasionally with a ‘staghorn’ configuration. The results of this study support and confirm the theory that HPC is a rare neoplasm with unpredictable behaviour, as largely debated in the international literature. Therefore, this study emphasized the importance of applying strict diagnostic criteria in making the most appropriate diagnosis. PMID:22783428

  6. Effects of electrical stimulation of the rat vestibular labyrinth on c-Fos expression in the hippocampus.

    PubMed

    Hitier, Martin; Sato, Go; Zhang, Yan-Feng; Besnard, Stephane; Smith, Paul F

    2018-06-11

    Several studies have demonstrated that electrical activation of the peripheral vestibular system can evoke field potential, multi-unit neuronal activity and acetylcholine release in the hippocampus (HPC). However, no study to date has employed the immediate early gene protein, c-Fos, to investigate the distribution of activation of cells in the HPC following electrical stimulation of the vestibular system. We found that vestibular stimulation increased the number of animals expressing c-Fos in the dorsal HPC compared to sham control rats (P ≤ 0.02), but not in the ventral HPC. c-Fos was also expressed in an increased number of animals in the dorsal dentate gyrus (DG) compared to sham control rats (P ≤ 0.0001), and to a lesser extent in the ventral DG (P ≤ 0.006). The results of this study show that activation of the vestibular system results in a differential increase in the expression of c-Fos across different regions of the HPC. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Corporate Functional Management Evaluation of the LLNL Radiation Safety Organization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sygitowicz, L S

    2008-03-20

    A Corporate Assess, Improve, and Modernize review was conducted at Lawrence Livermore National Laboratory (LLNL) to evaluate the LLNL Radiation Safety Program and recommend actions to address the conditions identified in the Internal Assessment conducted July 23-25, 2007. This review confirms the findings of the Internal Assessment of the Institutional Radiation Safety Program (RSP), including the noted deficiencies and vulnerabilities, to be valid. The actions recommended are a result of interviews with about 35 individuals representing senior management through the technician level. The deficiencies identified in the LLNL Internal Assessment of the Institutional Radiation Safety Program were discussed with Radiation Safety personnel team leads, customers of the Radiation Safety Program, the DOE Livermore site office, and senior ES&H management. There are significant issues with the RSP. The LLNL RSP is not an integrated, cohesive, consistently implemented program with a single authority that has the clear role, responsibility, and authority to assure radiological operations at LLNL are conducted in a safe and compliant manner. There is no institutional commitment to address the deficiencies that are identified in the internal assessment. Some of these deficiencies have been previously identified and corrective actions have not been taken or are ineffective in addressing the issues. Serious funding and staffing issues have prevented addressing previously identified issues in the Radiation Calibration Laboratory, Internal Dosimetry, Bioassay Laboratory, and the Whole Body Counter. There is a lack of technical basis documentation for the Radiation Calibration Laboratory and an inadequate QA plan that does not specify standards of work. The Radiation Safety Program lacks rigor and consistency across all supported programs. The implementation of DOE Standard 1098-99 Radiological Control can be used as a tool to establish this consistency across LLNL. The establishment of a site-wide ALARA Committee and administrative control levels would focus attention on improved processes. Currently LLNL issues dosimeters to a large number of employees and visitors who do not enter areas requiring dosimetry. This includes 25,000 visitor TLDs per year. Dosimeters should be issued to only those personnel who enter areas where dosimetry is required.

  8. FingerScanner: Embedding a Fingerprint Scanner in a Raspberry Pi.

    PubMed

    Sapes, Jordi; Solsona, Francesc

    2016-02-06

    Nowadays, researchers are paying increasing attention to embedded systems. Cost reduction has led to an increase in the number of platforms supporting the Linux operating system, together with the Raspberry Pi motherboard. Thus, embedding devices in Raspberry-Linux systems has become a goal for making competitive commercial products. This paper presents a low-cost fingerprint recognition system embedded into a Raspberry Pi with Linux.

  9. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexities. As a result the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that addresses concrete problems in the design of resilient systems. The complete catalog of resilience design patterns provides designers with reusable design elements. We also define a framework that enhances a designer's understanding of the important constraints and opportunities for the design patterns to be implemented and deployed at various layers of the system stack. This design framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also supports optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
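
    As a toy illustration of what a "reusable design element" can look like, the sketch below implements one classic resilience pattern, checkpoint-and-rollback recovery; it is not taken from the report, and the state layout and fault injection are purely hypothetical.

      # Toy checkpoint/rollback pattern: periodically snapshot application state so a
      # detected fault only costs a rollback to the last checkpoint, not a full restart.
      import copy
      import random

      def run_with_checkpoints(state, step, n_steps, interval=10, retry_budget=5):
          checkpoint = copy.deepcopy(state)                # last known-good state
          i = 0
          while i < n_steps:
              try:
                  step(state, i)                           # one unit of forward progress
              except RuntimeError:                         # stand-in for a detected fault
                  state.clear()
                  state.update(copy.deepcopy(checkpoint))  # roll back the state ...
                  i = (i // interval) * interval           # ... and the loop index
                  retry_budget -= 1
                  if retry_budget < 0:
                      raise                                # escalate after too many faults
                  continue
              i += 1
              if i % interval == 0:
                  checkpoint = copy.deepcopy(state)        # commit a new checkpoint
          return state

      def noisy_step(state, i):
          if random.random() < 0.01:                       # injected transient fault
              raise RuntimeError("transient fault")
          state["sum"] = state.get("sum", 0) + i

      random.seed(0)
      print(run_with_checkpoints({}, noisy_step, 100))     # -> {'sum': 4950} despite faults

    Production checkpointing writes to parallel file systems or burst buffers and must be coordinated across processes and system layers, which is exactly the kind of cross-cutting concern the pattern catalog and design framework described above are meant to organize.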

  10. WinHPC System Software | High-Performance Computing | NREL

    Science.gov Websites

    Overview of the software applications, tools, and toolchains available on the WinHPC system for industrial applications, including the Intel compilers development tool and toolchain suite.

  11. A Long History of Supercomputing

    ScienceCinema

    Grider, Gary

    2018-06-13

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  12. Liver metastasis of meningeal hemangiopericytoma: a study of 5 cases

    PubMed Central

    Lo, Regina C.; Suriawinata, Arief A.; Rubin, Brian P.

    2016-01-01

    Mesenchymal tumors in the liver, whether primary or metastatic, are rare. Meningeal hemangiopericytoma (HPC) is characteristically associated with delayed metastasis and the liver is one of the most common sites. Despite its consistent histological features, a pathological diagnosis of HPC in the liver is sometimes not straightforward due to its rarity and usually remote medical history of the primary meningeal tumor. In this report, the clinicopathological features of 5 cases of metastatic HPC to the liver were reviewed and described. PMID:27044772

  13. Roy Fraley | NREL

    Science.gov Websites

    Roy Fraley, Professional II-Engineer (Roy.Fraley@nrel.gov | 303-384-6468), is the high-performance computing (HPC) data center engineer with the Computational Science Center.

  14. Do all β-blockers attenuate the excess hematopoietic progenitor cell mobilization from the bone marrow following trauma/hemorrhagic shock?

    PubMed

    Pasupuleti, Latha V; Cook, Kristin M; Sifri, Ziad C; Alzate, Walter D; Livingston, David H; Mohr, Alicia M

    2014-04-01

    Severe injury results in increased mobilization of hematopoietic progenitor cells (HPC) from the bone marrow (BM) to sites of injury, which may contribute to persistent BM dysfunction after trauma. Norepinephrine is a known inducer of HPC mobilization, and nonselective β-blockade with propranolol has been shown to decrease mobilization after trauma and hemorrhagic shock (HS). This study will determine the role of selective β-adrenergic receptor blockade in HPC mobilization in a combined model of lung contusion (LC) and HS. Male Sprague-Dawley rats were subjected to LC, followed by 45 minutes of HS. Animals were then randomized to receive atenolol (LCHS + β1B), butoxamine (LCHS + β2B), or SR59230A (LCHS + β3B) immediately after resuscitation and daily for 6 days. Control groups were composed of naive animals. BM cellularity, %HPCs in peripheral blood, and plasma granulocyte-colony stimulating factor levels were assessed at 3 hours and 7 days. Systemic plasma-mediated effects were evaluated in vitro by assessment of BM HPC growth. Injured lung tissue was graded histologically by a blinded reader. The use of β2B or β3B following LCHS restored BM cellularity and significantly decreased HPC mobilization. In contrast, β1B had no effect on HPC mobilization. Only β3B significantly reduced plasma G-CSF levels. When evaluating the plasma systemic effects, both β2B and β3B significantly improved BM HPC growth as compared with LCHS alone. The use of β2 and β3 blockade did not affect lung injury scores. Both β2 and β3 blockade can prevent excess HPC mobilization and BM dysfunction when given after trauma and HS, and the effects seem to be mediated systemically, without adverse effects on subsequent healing. Only treatment with β3 blockade reduced plasma G-CSF levels, suggesting different mechanisms for adrenergic-induced G-CSF release and mobilization of HPCs. This study adds to the evidence that therapeutic strategies that reduce the exaggerated sympathetic stimulation after severe injury are beneficial and reduce BM dysfunction.

  15. Summary of Environmental Data Analysis and Work Performed by Lawrence Livermore National Laboratory (LLNL) in Support of the Navajo Nation Abandoned Mine Lands Project at Tse Tah, Arizona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taffet, Michael J.; Esser, Bradley K.; Madrid, Victor M.

    This report summarizes work performed by Lawrence Livermore National Laboratory (LLNL) under Navajo Nation Services Contract CO9729 in support of the Navajo Abandoned Mine Lands Reclamation Program (NAMLRP). Due to restrictions on access to uranium mine waste sites at Tse Tah, Arizona that developed during the term of the contract, not all of the work scope could be performed. LLNL was able to interpret environmental monitoring data provided by NAMLRP. Summaries of these data evaluation activities are provided in this report. Additionally, during the contract period, LLNL provided technical guidance, instructional meetings, and review of relevant work performed by NAMLRP and its contractors that was not contained in the contract work scope.

  16. Special Analysis for the Disposal of the Lawrence Livermore National Laboratory EnergyX Macroencapsulated Waste Stream at the Area 5 Radioactive Waste Management Site, Nevada National Security Site, Nye County, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shott, Gregory J.

    This special analysis (SA) evaluates whether the Lawrence Livermore National Laboratory (LLNL) EnergyX Macroencapsulated waste stream (B LAMACRONCAP, Revision 1) is suitable for disposal by shallow land burial (SLB) at the Area 5 Radioactive Waste Management Site (RWMS) at the Nevada National Security Site (NNSS). The LLNL EnergyX Macroencapsulated waste stream is macroencapsulated mixed waste generated during research laboratory operations and maintenance (LLNL 2015). The LLNL EnergyX Macroencapsulated waste stream required a special analysis due to tritium (3H), cobalt-60 (60Co), cesium-137 (137Cs), and radium-226 (226Ra) exceeding the NNSS Waste Acceptance Criteria (WAC) Action Levels (U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office [NNSA/NFO] 2015). The results indicate that all performance objectives can be met with disposal of the waste stream in a SLB trench. Addition of the LLNL EnergyX Macroencapsulated inventory slightly increases multiple performance assessment results, with the largest relative increase occurring for the all-pathways annual total effective dose (TED). The maximum mean and 95th percentile 222Rn flux density remain less than the performance objective throughout the compliance period. The LLNL EnergyX Macroencapsulated waste stream is suitable for disposal by SLB at the Area 5 RWMS. The waste stream is recommended for approval without conditions.

  17. Source Code Analysis Laboratory (SCALe)

    DTIC Science & Technology

    2012-04-01

    True positives (TP) versus flagged nonconformities (FNC), by software system (TP/FNC, ratio): Mozilla Firefox version 2.0, 6/12, 50%; Linux kernel version 2.6.15, 10/126, 8% ... is inappropriately tuned for analysis of the Linux kernel, which has anomalous results. Customizing SCALe to work with software for a particular ... servers support a collection of virtual machines (VMs) that can be configured to support analysis in various environments, such as Windows XP and Linux.

  18. FingerScanner: Embedding a Fingerprint Scanner in a Raspberry Pi

    PubMed Central

    Sapes, Jordi; Solsona, Francesc

    2016-01-01

    Nowadays, researchers are paying increasing attention to embedded systems. Cost reduction has led to an increase in the number of platforms supporting the Linux operating system, together with the Raspberry Pi motherboard. Thus, embedding devices in Raspberry-Linux systems has become a goal for making competitive commercial products. This paper presents a low-cost fingerprint recognition system embedded into a Raspberry Pi with Linux. PMID:26861340

  19. 2020 Foresight Forging the Future of Lawrence Livermore National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chrzanowski, P.

    2000-01-01

    The Lawrence Livermore National Laboratory (LLNL) of 2020 will look much different from the LLNL of today and vastly different from how it looked twenty years ago. We, the members of the Long-Range Strategy Project, envision a Laboratory not defined by one program--nuclear weapons research--but by several core programs related to or synergistic with LLNL's national security mission. We expect the Laboratory to be fully engaged with sponsors and the local community and closely partnering with other research and development (R&D) organizations and academia. Unclassified work will be a vital part of the Laboratory of 2020 and will visibly demonstrate LLNL's international science and technology strengths. We firmly believe that there will be a critical and continuing role for the Laboratory. As a dynamic and versatile multipurpose laboratory with a national security focus, LLNL will be applying its capabilities in science and technology to meet the needs of the nation in the 21st century. With strategic investments in science, outstanding technical capabilities, and effective relationships, the Laboratory will, we believe, continue to play a key role in securing the nation's future.

  20. Self curing admixture performance report.

    DOT National Transportation Integrated Search

    2012-02-01

    The Oregon Department of Transportation (ODOT) has experienced early age cracking of newly placed high performance concrete (HPC) bridge decks. The silica fume contained in the HPC requires immediate and proper curing application after placement ...

  1. Increased leptin by hypoxic-preconditioning promotes autophagy of mesenchymal stem cells and protects them from apoptosis.

    PubMed

    Wang, LiHan; Hu, XinYang; Zhu, Wei; Jiang, Zhi; Zhou, Yu; Chen, PanPan; Wang, JianAn

    2014-02-01

    Autophagy is the basic catabolic process involved in cell degradation of unnecessary or dysfunctional cellular components. It has been proven that autophagy can be utilized for cell survival under stress. Hypoxic-preconditioning (HPC) could reduce apoptosis induced by ischemia and hypoxia/serum deprivation (H/SD) in bone marrow-derived mesenchymal stem cells (BMSCs). Previous studies have shown that both leptin signaling and autophagy activation are involved in the protection against apoptosis induced by various stresses, including ischemia-reperfusion. However, it has never been fully understood how leptin is involved in the protective effects conferred by autophagy. In the present study, we demonstrated that HPC can induce autophagy in BMSCs, as shown by an increased LC3-II/LC3-I ratio and autophagosome formation. Interestingly, similar effects were also observed when BMSCs were pretreated with rapamycin. The beneficial effects offered by HPC were absent when BMSCs were incubated with the autophagy inhibitor 3-methyladenine (3-MA). In addition, down-regulated leptin expression by leptin-shRNA also attenuated HPC-induced autophagy in BMSCs, which in turn was associated with increased apoptosis after exposure to sustained H/SD. Furthermore, the increased AMP-activated protein kinase phosphorylation and decreased mammalian target of rapamycin phosphorylation observed in HPC-treated BMSCs could also be attenuated by down-regulation of leptin expression. Our data suggest that leptin has an impact on HPC-induced autophagy in BMSCs, which confers protection against apoptosis under H/SD, possibly through modulating both the AMPK and mTOR pathways.

  2. Hierarchically porous carbon with manganese oxides as highly efficient electrode for asymmetric supercapacitors.

    PubMed

    Chou, Tsu-Chin; Doong, Ruey-An; Hu, Chi-Chang; Zhang, Bingsen; Su, Dang Sheng

    2014-03-01

    A promising energy storage material, MnO2/hierarchically porous carbon (HPC) nanocomposites, with exceptional electrochemical performance and ultrahigh energy density was developed for asymmetric supercapacitor applications. The microstructures of the MnO2/HPC nanocomposites were characterized by transmission electron microscopy, scanning transmission electron microscopy, and electron dispersive X-ray elemental mapping analysis. The 3-5 nm MnO2 nanocrystals at mass loadings of 7.3-10.8 wt % are homogeneously distributed onto the HPCs, and the utilization efficiency of MnO2 on specific capacitance can be enhanced to 94-96 %. By combining the ultrahigh utilization efficiency of MnO2 and the conductive and ion-transport advantages of HPCs, MnO2/HPC electrodes can achieve higher specific capacitance values (196 F g(-1)) than those of pure carbon electrodes (60.8 F g(-1)), and maintain their superior rate capability in neutral electrolyte solutions. The asymmetric supercapacitor consisting of a MnO2/HPC cathode and an HPC anode shows an excellent performance with energy and power densities of 15.3 Wh kg(-1) and 19.8 kW kg(-1), respectively, at a cell voltage of 2 V. Results obtained herein demonstrate the excellence of MnO2/HPC nanocomposites as an energy storage material and open an avenue to fabricate next-generation supercapacitors with both high power and energy densities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
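
    As a quick consistency check on those figures (our own back-of-the-envelope arithmetic, assuming the standard relation between stored energy, total-cell capacitance and cell voltage, which the abstract does not state explicitly):

      \[
      E = \tfrac{1}{2} C_{\mathrm{cell}} V^{2}
      \;\Rightarrow\;
      C_{\mathrm{cell}} = \frac{2E}{V^{2}}
      = \frac{2 \times 15.3\ \mathrm{Wh\,kg^{-1}} \times 3600\ \mathrm{J\,Wh^{-1}}}{(2\ \mathrm{V})^{2}}
      \approx 2.8 \times 10^{4}\ \mathrm{F\,kg^{-1}} \approx 27.5\ \mathrm{F\,g^{-1}},
      \]

    i.e., the quoted 15.3 Wh kg(-1) at 2 V corresponds to a device-level capacitance (normalized to total electrode mass) of roughly 27.5 F g(-1), well below the 196 F g(-1) single-electrode value, as expected for a two-electrode asymmetric cell.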

  3. Control System for the LLNL Kicker Pulse Generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, J A; Anaya, R M; Cook, E G

    2002-06-18

    A solid-state high voltage pulse generator with multi-pulse burst capability, very fast rise and fall times, pulse width agility, and amplitude modulation capability for use with high speed electron beam kickers has been designed and tested at LLNL. A control system calculates a desired waveform to be applied to the kicker based on measured electron beam displacement, then adjusts the pulse generators to provide the desired waveform. This paper presents the design of the control system and measured performance data from operation on the ETA-II accelerator at LLNL.

  4. The National Ignition Facility (NIF) and High Energy Density Science Research at LLNL (Briefing Charts)

    DTIC Science & Technology

    2013-06-21

    Briefing-chart fragments: neutron activation detector (FNADS) locations and an rR map of the implosion from the 90Zr(n,2n)89Zr reaction (average rR ~ 1 g/cm2, ~50% variations), motivating new 2D backlit imaging and Compton radiography of the stagnated fuel shape; other high energy density science topics listed include high-energy cosmic rays (Oxford Univ./LLNL), novel phases of compressed diamond, and synthesis of elements heavier than iron.

  5. Demonstration of Laser Plasma X-Ray Source with X-Ray Collimator Final Report CRADA No. TC-1564-99

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lane, S. M.; Forber, R. A.

    2017-09-28

    This collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and JMAR Research, Inc. (JRI), was to demonstrate that LLNL x-ray collimators can effectively increase the wafer throughput of JRI's laser based x-ray lithography systems. The technical objectives were expected to be achieved by completion of the following tasks, which are separated into two task lists by funding source. The organization (LLNL or JMAR) having primary responsibility is given parenthetically for each task.

  6. Level-2 Milestone 6007: Sierra Early Delivery System Deployed to Secret Restricted Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertsch, A. D.

    This report documents the delivery and installation of Shark, a CORAL Sierra early delivery system deployed on the LLNL SRD network. Early ASC program users have run codes on the machine in support of application porting for the final Sierra system which will be deployed at LLNL in CY2018. In addition to the SRD resource, Shark, unclassified resources, Rzmanta and Ray, have been deployed on the LLNL Restricted Zone and Collaboration Zone networks in support of application readiness for the Sierra platform.

  7. Institute of Geophysics and Planetary Physics (IGPP), Lawrence Livermore National Laboratory (LLNL): Quinquennial report, November 14-15, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tweed, J.

    1996-10-01

    This Quinquennial Review Report of the Lawrence Livermore National Laboratory (LLNL) branch of the Institute for Geophysics and Planetary Physics (IGPP) provides an overview of IGPP-LLNL, its mission, and research highlights of current scientific activities. This report also presents an overview of the University Collaborative Research Program (UCRP), a summary of the UCRP Fiscal Year 1997 proposal process and the project selection list, a funding summary for 1993-1996, seminars presented, and scientific publications. 2 figs., 3 tabs.

  8. Experience-Dependent Induction of Hippocampal ΔFosB Controls Learning.

    PubMed

    Eagle, Andrew L; Gajewski, Paula A; Yang, Miyoung; Kechner, Megan E; Al Masraf, Basma S; Kennedy, Pamela J; Wang, Hongbing; Mazei-Robison, Michelle S; Robison, Alfred J

    2015-10-07

    The hippocampus (HPC) is known to play an important role in learning, a process dependent on synaptic plasticity; however, the molecular mechanisms underlying this are poorly understood. ΔFosB is a transcription factor that is induced throughout the brain by chronic exposure to drugs, stress, and a variety of other stimuli and regulates synaptic plasticity and behavior in other brain regions, including the nucleus accumbens. We show here that ΔFosB is also induced in HPC CA1 and DG subfields by spatial learning and novel environmental exposure. The goal of the current study was to examine the role of ΔFosB in hippocampal-dependent learning and memory and the structural plasticity of HPC synapses. Using viral-mediated gene transfer to silence ΔFosB transcriptional activity by expressing ΔJunD (a negative modulator of ΔFosB transcriptional function) or to overexpress ΔFosB, we demonstrate that HPC ΔFosB regulates learning and memory. Specifically, ΔJunD expression in HPC impaired learning and memory on a battery of hippocampal-dependent tasks in mice. Similarly, general ΔFosB overexpression also impaired learning. ΔJunD expression in HPC did not affect anxiety or natural reward, but ΔFosB overexpression induced anxiogenic behaviors, suggesting that ΔFosB may mediate attentional gating in addition to learning. Finally, we found that overexpression of ΔFosB increases immature dendritic spines on CA1 pyramidal cells, whereas ΔJunD reduced the number of immature and mature spine types, indicating that ΔFosB may exert its behavioral effects through modulation of HPC synaptic function. Together, these results suggest that ΔFosB plays a significant role in HPC cellular morphology and HPC-dependent learning and memory. Consolidation of our explicit memories occurs within the hippocampus, and it is in this brain region that the molecular and cellular processes of learning have been most closely studied. We know that connections between hippocampal neurons are formed, eliminated, enhanced, and weakened during learning, and we know that some stages of this process involve alterations in the transcription of specific genes. However, the specific transcription factors involved in this process are not fully understood. Here, we demonstrate that the transcription factor ΔFosB is induced in the hippocampus by learning, regulates the shape of hippocampal synapses, and is required for memory formation, opening up a host of new possibilities for hippocampal transcriptional regulation. Copyright © 2015 the authors 0270-6474/15/3513773-11$15.00/0.

  9. A review of transfusion practice before, during, and after hematopoietic progenitor cell transplantation

    PubMed Central

    Johnson, Viviana V.; Sandler, S. Gerald; Sayegh, Antoine; Klumpp, Thomas R.

    2008-01-01

    The increased use of hematopoietic progenitor cell (HPC) transplantation has implications and consequences for transfusion services: not only in hospitals where HPC transplantations are performed, but also in hospitals that do not perform HPC transplantations but manage patients before or after transplantation. Candidates for HPC transplantation have specific and specialized transfusion requirements before, during, and after transplantation that are necessary to avert the adverse consequences of alloimmunization to human leukocyte antigens, immunohematologic consequences of ABO-mismatched transplantations, or immunosuppression. Decisions concerning blood transfusions during any of these times may compromise the outcome of an otherwise successful transplantation. Years after an HPC transplantation, and even during clinical remission, recipients may continue to be immunosuppressed and may have critically important, special transfusion requirements. Without a thorough understanding of these special requirements, provision of compatible blood components may be delayed and often urgent transfusion needs prohibit appropriate consultation with the patient's transplantation specialist. To optimize the relevance of issues and communication between clinical hematologists, transplantation physicians, and transfusion medicine physicians, the data and opinions presented in this review are organized by sequence of patient presentation, namely, before, during, and after transplantation. PMID:18583566

  10. Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R

    2012-01-01

    This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
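
    For readers unfamiliar with the bulk-synchronous pattern this scheduler targets, here is a minimal sketch (written with mpi4py as an assumption; it is not code from the paper): each superstep does local work and then a synchronizing collective, so a single rank delayed by uncoordinated OS activity delays every rank at the collective, which is the effect kernel co-scheduling is designed to reduce.

      # Minimal bulk-synchronous-parallel (BSP) superstep loop: local compute, then a
      # global collective. Any per-rank delay (e.g., OS noise) becomes a delay for all.
      import time
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      local = float(rank)
      for superstep in range(5):
          t0 = MPI.Wtime()
          time.sleep(0.01)                                   # stand-in for local computation
          total = comm.allreduce(local, op=MPI.SUM)          # synchronizing collective
          if rank == 0:
              print(f"superstep {superstep}: {MPI.Wtime() - t0:.4f} s, sum = {total}")

    Launched with, for example, mpirun -n 4 python bsp_sketch.py, each superstep takes roughly as long as the slowest rank; at thousands of ranks the chance that some rank is interrupted during a superstep approaches one, which is why uncoordinated per-node scheduling erodes scaling and coordinated scheduling of the collective participants recovers it.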

  11. Modeling Multi-Bunch X-band Photoinjector Challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsh, R A; Anderson, S G; Gibson, D J

    An X-band test station is being developed at LLNL to investigate accelerator optimization for future upgrades to mono-energetic gamma-ray technology at LLNL. The test station will consist of a 5.5 cell X-band rf photoinjector, a single accelerator section, and beam diagnostics. Of critical import to the functioning of the LLNL X-band system with multiple electron bunches is the performance of the photoinjector. In-depth modeling of the Mark 1 LLNL/SLAC X-band rf photoinjector performance will be presented, addressing important challenges that must be overcome in order to fabricate a multi-bunch Mark 2 photoinjector. Emittance performance is evaluated under different nominal electron bunch parameters using electrostatic codes such as PARMELA. Wake potential is analyzed using electromagnetic time domain simulations using the ACE3P code T3P. Plans for multi-bunch experiments and implementation of photoinjector advances for the Mark 2 design will also be discussed.

  12. Factors predicting haematopoietic recovery in patients undergoing autologous transplantation: 11-year experience from a single centre.

    PubMed

    Bai, Lijun; Xia, Wei; Wong, Kelly; Reid, Cassandra; Ward, Christopher; Greenwood, Matthew

    2014-10-01

    Engraftment outcomes following autologous transplantation correlate poorly with infused stem cell number. We evaluated 446 consecutive patients who underwent autologous transplantation at our centre between 2001 and 2012. The impact of pre-transplant and collection factors, together with CD34(+) dosing ranges, on engraftment, hospital length of stay (LOS) and survival endpoints was assessed in order to identify factors which might be optimized to improve outcomes for patients undergoing autologous transplantation using haemopoietic progenitor cells-apheresis (HPC-A). Infused CD34(+) cell dose correlated with platelet but not neutrophil recovery. Time to platelet engraftment was significantly delayed in those receiving low versus medium or high CD34(+) doses. Non-remission status was associated with slower neutrophil and platelet recovery. Increasing neutrophil contamination of HPC-A was strongly associated with slower neutrophil recovery, with an infused neutrophil dose/kg recipient body weight ≥3 × 10(8)/kg having a significant impact on time to neutrophil engraftment (p = 0.001). Higher neutrophil doses/kg in HPC-A were associated with days of granulocyte colony stimulating factor (G-CSF) use, HPC-A volumes >500 ml and higher NCC in HPC-A. High infused neutrophil dose/kg and age >65 years were associated with longer hospital LOS (p = 0.002 and 0.011 respectively). Only age, disease and disease status predicted disease-free survival (DFS) and overall survival (OS) in our cohort (p < 0.005). Non-relapse mortality was not affected by a low dose of CD34(+) (<2 × 10(6)/kg). In conclusion, our study shows that CD34(+) remains a useful and convenient marker for assessing haematopoietic stem cell content and overall engraftment capacity post-transplant. Neutrophil contamination of HPC-A appears to be a key factor delaying neutrophil recovery. Steps to minimize the degree of neutrophil contamination in the HPC-A product may be associated with more rapid neutrophil engraftment and reduced hospital LOS.

  13. Chip-scale integrated optical interconnects: a key enabler for future high-performance computing

    NASA Astrophysics Data System (ADS)

    Haney, Michael; Nair, Rohit; Gu, Tian

    2012-01-01

    High Performance Computing (HPC) systems are putting ever-increasing demands on the throughput efficiency of their interconnection fabrics. In this paper, the limits of conventional metal trace-based inter-chip interconnect fabrics are examined in the context of state-of-the-art HPC systems, which currently operate near the 1 GFLOPS/W level. The analysis suggests that conventional metal trace interconnects will limit performance to approximately 6 GFLOPS/W in larger HPC systems that require many computer chips to be interconnected in parallel processing architectures. As the HPC communications bottlenecks push closer to the processing chips, integrated Optical Interconnect (OI) technology may provide the ultra-high bandwidths needed at the inter- and intra-chip levels. With inter-chip photonic link energies projected to be less than 1 pJ/bit, integrated OI is projected to enable HPC architecture scaling to the 50 GFLOPS/W level and beyond - providing a path to Peta-FLOPS-level HPC within a single rack, and potentially even Exa-FLOPS-level HPC for large systems. A new hybrid integrated chip-scale OI approach is described and evaluated. The concept integrates a high-density polymer waveguide fabric directly on top of a multiple quantum well (MQW) modulator array that is area-bonded to the Silicon computing chip. Grayscale lithography is used to fabricate 5 μm x 5 μm polymer waveguides and associated novel small-footprint total internal reflection-based vertical input/output couplers directly onto a layer containing an array of GaAs MQW devices configured to be either absorption modulators or photodetectors. An external continuous wave optical "power supply" is coupled into the waveguide links. Contrast ratios were measured using a test rider chip in place of a Silicon processing chip. The results suggest that sub-pJ/b chip-scale communication is achievable with this concept. When integrated into high-density integrated optical interconnect fabrics, it could provide a seamless interconnect fabric spanning the intra-
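
    To connect the per-bit figure to the GFLOPS/W targets, a back-of-the-envelope check (our own, under the hypothetical assumption of one byte of off-chip traffic per floating-point operation):

      \[
      8\ \mathrm{bit/FLOP} \times 1\ \mathrm{pJ/bit} = 8\ \mathrm{pJ/FLOP}
      \;\Rightarrow\;
      \frac{1\ \mathrm{FLOP}}{8\ \mathrm{pJ}} = 1.25 \times 10^{11}\ \mathrm{FLOPS/W} = 125\ \mathrm{GFLOPS/W},
      \]

    so at 1 pJ/bit the interconnect alone would cap efficiency near 125 GFLOPS/W, leaving headroom for the 50 GFLOPS/W system-level goal once compute and memory energy are included; the cap scales inversely with both the per-bit energy and the assumed bytes moved per FLOP.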

  14. Combining high-resolution gross domestic product data with home and personal care product market research data to generate a subnational emission inventory for Asia.

    PubMed

    Hodges, Juliet Elizabeth Natasha; Vamshi, Raghu; Holmes, Christopher; Rowson, Matthew; Miah, Taqmina; Price, Oliver Richard

    2014-04-01

    Environmental risk assessment of chemicals is reliant on good estimates of product usage information and robust exposure models. Over the past 20 to 30 years, much progress has been made with the development of exposure models that simulate the transport and distribution of chemicals in the environment. However, little progress has been made in our ability to estimate chemical emissions of home and personal care (HPC) products. In this project, we have developed an approach to estimate a subnational emission inventory of chemical ingredients used in HPC products for 12 Asian countries including Bangladesh, Cambodia, China, India, Indonesia, Laos, Malaysia, Pakistan, Philippines, Sri Lanka, Thailand, and Vietnam (Asia-12). To develop this inventory, we have coupled a 1 km grid of per capita gross domestic product (GDP) estimates with market research data of HPC product sales. We explore the necessity of accounting for a population's ability to purchase HPC products in determining their subnational distribution in regions where wealth is not uniform. The implications of using high resolution data on inter- and intracountry subnational emission estimates for a range of hypothetical and actual HPC product types were explored. It was demonstrated that for low value products (<500 US$ per capita/annum required to purchase product) the maximum deviation from baseline (emission distributed via population) is less than a factor of 3 and it would not result in significant differences in chemical risk assessments. However, for other product types (>500 US$ per capita/annum required to purchase product) the implications on emissions being assigned to subnational regions can vary by several orders of magnitude. The implications of this on conducting national or regional level risk assessments may be significant. Further work is needed to explore the implications of this variability in HPC emissions to enable the HPC industry and/or governments to advance risk-based chemical management policies in emerging markets. © 2013 SETAC.
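
    The purchasing-ability idea can be made concrete with a small sketch (hypothetical numbers, not the study's 1 km dataset): a national product-use total is allocated to grid cells in proportion to population, but only among cells whose per-capita GDP exceeds the product's price threshold.

      # Illustrative allocation sketch with made-up numbers: distribute a national
      # emission total over grid cells in proportion to the population that can
      # afford the product (per-capita GDP at or above the price threshold).
      def allocate_emissions(cells, national_total, price_threshold):
          """cells: list of dicts with 'pop' (people) and 'gdp_pc' (US$/capita/yr)."""
          eligible_pop = sum(c["pop"] for c in cells if c["gdp_pc"] >= price_threshold)
          if eligible_pop == 0:
              raise ValueError("no cell can afford the product at this threshold")
          for c in cells:
              share = c["pop"] / eligible_pop if c["gdp_pc"] >= price_threshold else 0.0
              c["emission"] = national_total * share
          return cells

      grid = [
          {"pop": 2_000_000, "gdp_pc": 3_200},   # urban cell, above threshold
          {"pop": 5_000_000, "gdp_pc": 350},     # rural cell, below threshold
          {"pop": 1_000_000, "gdp_pc": 900},
      ]
      for cell in allocate_emissions(grid, national_total=1000.0, price_threshold=500):
          print(cell)   # 2/3 and 1/3 of the total go to the affordable cells; the rural cell gets 0

    For a cheap product whose threshold sits below every cell's per-capita GDP this collapses to plain population weighting, consistent with the small deviations the abstract reports below roughly 500 US$ per capita per annum; for expensive products entire low-income regions receive no emissions at all, which is where the order-of-magnitude subnational differences come from.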

  15. Activity of the anterior cingulate cortex and ventral hippocampus underlie increases in contextual fear generalization.

    PubMed

    Cullen, Patrick K; Gilman, T Lee; Winiecki, Patrick; Riccio, David C; Jasnow, Aaron M

    2015-10-01

    Memories for context become less specific with time resulting in animals generalizing fear from training contexts to novel contexts. Though much attention has been given to the neural structures that underlie the long-term consolidation of a context fear memory, very little is known about the mechanisms responsible for the increase in fear generalization that occurs as the memory ages. Here, we examine the neural pattern of activation underlying the expression of a generalized context fear memory in male C57BL/6J mice. Animals were context fear conditioned and tested for fear in either the training context or a novel context at recent and remote time points. Animals were sacrificed and fluorescent in situ hybridization was performed to assay neural activation. Our results demonstrate activity of the prelimbic, infralimbic, and anterior cingulate (ACC) cortices as well as the ventral hippocampus (vHPC) underlie expression of a generalized fear memory. To verify the involvement of the ACC and vHPC in the expression of a generalized fear memory, animals were context fear conditioned and infused with 4% lidocaine into the ACC, dHPC, or vHPC prior to retrieval to temporarily inactivate these structures. The results demonstrate that activity of the ACC and vHPC is required for the expression of a generalized fear memory, as inactivation of these regions returned the memory to a contextually precise form. Current theories of time-dependent generalization of contextual memories do not predict involvement of the vHPC. Our data suggest a novel role of this region in generalized memory, which should be incorporated into current theories of time-dependent memory generalization. We also show that the dorsal hippocampus plays a prolonged role in contextually precise memories. Our findings suggest a possible interaction between the ACC and vHPC controls the expression of fear generalization. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Not "just" pump and treat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angleberger, K; Bainer, R W

    2000-12-12

    The Lawrence Livermore National Laboratory (LLNL) has been consistently improving the site cleanup methods by adopting new philosophies, strategies and technologies to address constrained or declining budgets, lack of useable space due to a highly industrialized site, and significant technical challenges. As identified in the ROD, the preferred remedy at the LLNL Livermore Site is pump and treat, although LLNL has improved this strategy to bring the remediation of the ground water to closure as soon as possible. LLNL took the logical progression from a pump and treat system to the philosophy of "Smart Pump and Treat" coupled with the concepts of "Hydrostratigraphic Unit Analysis," "Engineered Plume Collapse," and "Phased Source Remediation," which led to the development of new, more cost-effective technologies which have accelerated the attainment of cleanup goals significantly. Modeling is also incorporated to constantly develop new, cost-effective methodologies to accelerate cleanup and communicate the progress of cleanup to stakeholders. In addition, LLNL improved on the efficiency and flexibility of ground water treatment facilities. Ground water cleanup has traditionally relied on costly and obtrusive fixed treatment facilities. LLNL has designed and implemented various portable ground water treatment units to replace the fixed facilities; the application of each type of facility is determined by the amount of ground water flow and contaminant concentrations. These treatment units have allowed for aggressive ground water cleanup, increased cleanup flexibility, and reduced capital and electrical costs. After a treatment unit has completed ground water cleanup at one location, it can easily be moved to another location for additional ground water cleanup.

  17. Performance testing of HPC on Sunshine Bridge.

    DOT National Transportation Integrated Search

    2009-09-01

    The deck of the Sunshine Bridge overpass, located westbound on Interstate 40 (I-40) near Winslow, Arizona, was : replaced on August 24, 2005. The original deteriorated concrete deck was replaced using high performance : concrete (HPC), reinforced wit...

  18. Recurrent extradural hemangiopericytoma of thoracic spine: a case report.

    PubMed

    Jayashankar, Erukkambattu; Prabhala, Shailaja; Raju, Subodh; Tanikella, Ramamurti

    2014-01-01

    Hemangiopericytoma (HPC) is a rare tumor that arises from pericapillary cells or pericytes of Zimmerman. In the central nervous system, it accounts for less than 1% of tumors, and spinal involvement is very rare. Meningeal hemangiopericytomas show morphological similarities with meningiomas, particularly angiomatous meningioma, where immunohistochemistry (IHC) is needed to delineate HPC from meningioma. Here, we report a case of recurrent extradural HPC in a 16-year-old girl who, 5 years earlier, had received a pathological diagnosis of angiomatous meningioma for a D5-D6 lesion. On evaluation, magnetic resonance imaging (MRI) showed a large extradural tumor with significant cord compression involving the D5-D6 body, pedicle and ribs. Excision of the lesion and spinal stabilization was performed. The histopathological examination and immunohistochemistry performed on tumor sections revealed features favoring HPC. To conclude, detailed IHC is helpful in avoiding misdiagnosis and in further management of the patient.

  19. Sinonasal haemangiopericytoma: a case report.

    PubMed

    Stomeo, Francesco; Fois, Valeria; Cossu, Antonio; Meloni, Francesco; Pastore, Antonio; Bozzo, Corrado

    2004-11-01

    Haemangiopericytoma (HPC) is a rare vascular tumour that is thought to originate from the vascular pericytes of Zimmerman. HPC may arise in any part of the body, and from 15 to 30% of these tumours are found in the head and neck, with rare involvement of the sinonasal region. The main symptoms of nasal HPC, epistaxis and nasal obstruction, are not typical. The final diagnosis is based on the histopathology and immunochemistry, and whether the tumour is benign or malignant is defined on the basis of the clinical history. HPC located in the sinonasal area is generally benign. We report the case of a young woman with a sinonasal mass histologically proven to be haemangiopericytoma. The patient underwent surgical treatment by means of mid-facial degloving after embolisation of the maxillary artery. After a careful 3-year follow-up, the patient is disease free and healthy.

  20. RAPPORT: running scientific high-performance computing applications on the cloud.

    PubMed

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  1. Hepatic progenitor cells in canine and feline medicine: potential for regenerative strategies

    PubMed Central

    2014-01-01

    New curative therapies for severe liver disease are urgently needed in both the human and veterinary clinic. It is important to find new treatment modalities which aim to compensate for the loss of parenchymal tissue and to repopulate the liver with healthy hepatocytes. A prime focus in regenerative medicine of the liver is the use of adult liver stem cells, or hepatic progenitor cells (HPCs), for functional recovery of liver disease. This review describes recent developments in HPC research in dog and cat and compares these findings to experimental rodent studies and human pathology. Specifically, the role of HPCs in liver regeneration, key components of the HPC niche, and HPC activation in specific types of canine and feline liver disease will be reviewed. Finally, the potential applications of HPCs in regenerative medicine of the liver are discussed and a potential role is suggested for dogs as first target species for HPC-based trials. PMID:24946932

  2. Faster than Real-Time Dynamic Simulation for Large-Size Power System with Detailed Dynamic Models using High-Performance Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Jin, Shuangshuang; Chen, Yousu

    This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACK™ high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.
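    The abstract does not include implementation details, so the sketch below is only a rough illustration of the kind of computation a power system dynamic simulator performs: explicit integration of the classical single-machine swing equation through a temporary fault. It is not GridPACK code, and every parameter value and name is an illustrative assumption.

    ```python
    # Illustrative sketch only: classical swing-equation integration, the core computation of a
    # transient (dynamic) power system simulation. Not GridPACK code; all values are assumed.
    import numpy as np

    H, D = 3.5, 1.0                        # inertia constant (s) and damping (assumed)
    Pm, E, V, X = 0.8, 1.05, 1.0, 0.65     # mechanical power, EMF, bus voltage, reactance (p.u., assumed)
    omega_s = 2 * np.pi * 60               # synchronous speed (rad/s)

    def electrical_power(delta, x):
        """Electrical power output of the classical generator model (p.u.)."""
        return E * V / x * np.sin(delta)

    delta, domega = np.arcsin(Pm * X / (E * V)), 0.0   # start at steady state
    dt, t_fault, t_clear = 1e-3, 0.1, 0.2              # fault applied then cleared (s, assumed)

    history = []
    for step in range(int(1.0 / dt)):
        t = step * dt
        x_eff = 10.0 if t_fault <= t < t_clear else X  # crude fault model: weakened transfer path
        Pe = electrical_power(delta, x_eff)
        # Explicit Euler update of the swing equation: 2H/omega_s * d(domega)/dt = Pm - Pe - D*domega
        domega += dt * omega_s / (2 * H) * (Pm - Pe - D * domega)
        delta += dt * domega
        history.append((t, delta))

    print("final rotor angle (rad): %.3f" % history[-1][1])
    ```

    A production simulator replaces this single machine with thousands of coupled generator, controller, and relay models and a network solution at every step, which is where the parallel solvers and HPC framework mentioned in the abstract come in.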

  3. Effect of cooking methods on selected physicochemical and nutritional properties of barlotto bean, chickpea, faba bean, and white kidney bean.

    PubMed

    Güzel, Demet; Sayar, Sedat

    2012-02-01

    The effects of atmospheric pressure cooking (APC) and high-pressure cooking (HPC) on the physicochemical and nutritional properties of barlotto bean, chickpea, faba bean, and white kidney bean were investigated. The hardness of the legumes cooked by APC or HPC was not statistically different (P > 0.05). APC resulted in a higher percentage of seed coat splits than HPC. Both cooking methods decreased the Hunter "L" value significantly (P < 0.05). The "a" and "b" values of dark-colored seeds decreased after cooking, while these values tended to increase for the light-colored seeds. The total amounts of solids lost from legume seeds were higher after HPC compared with APC. Rapidly digestible starch (RDS) percentages increased considerably after both cooking methods. High-pressure cooked legumes showed higher levels of resistant starch (RS) but lower levels of slowly digestible starch (SDS) than the atmospheric pressure cooked legumes.

  4. Addressing Transportation Energy and Environmental Impacts: Technical and Policy Research Directions

    DOT National Transportation Integrated Search

    1995-08-10

    The Lawrence Livermore National Laboratory (LLNL) is establishing a local chapter of the University of California Energy Institute (UCEI). In order to most effectively contribute to the Institute, LLNL sponsored a workshop on energy and environmental...

  5. Environmental Report 1993-1996

    DOT National Transportation Integrated Search

    2002-08-16

    These reports are prepared for the U.S. Department of Energy (DOE), as required by DOE Order 5400.1 and DOE Order 231.1, by the Environmental Protection Department (EPD) at the Lawrence Livermore National Laboratory (LLNL). The results of LLNL's envi...

  6. Collaboration rules.

    PubMed

    Evans, Philip; Wolf, Bob

    2005-01-01

    Corporate leaders seeking to boost growth, learning, and innovation may find the answer in a surprising place: the Linux open-source software community. Linux is developed by an essentially volunteer, self-organizing community of thousands of programmers. Most leaders would sell their grandmothers for workforces that collaborate as efficiently, frictionlessly, and creatively as the self-styled Linux hackers. But Linux is software, and software is hardly a model for mainstream business. The authors have, nonetheless, found surprising parallels between the anarchistic, caffeinated, hirsute world of Linux hackers and the disciplined, tea-sipping, clean-cut world of Toyota engineering. Specifically, Toyota and Linux operate by rules that blend the self-organizing advantages of markets with the low transaction costs of hierarchies. In place of markets' cash and contracts and hierarchies' authority are rules about how individuals and groups work together (with rigorous discipline); how they communicate (widely and with granularity); and how leaders guide them toward a common goal (through example). Those rules, augmented by simple communication technologies and a lack of legal barriers to sharing information, create rich common knowledge, the ability to organize teams modularly, extraordinary motivation, and high levels of trust, which radically lowers transaction costs. Low transaction costs, in turn, make it profitable for organizations to perform more and smaller transactions--and so increase the pace and flexibility typical of high-performance organizations. Once the system achieves critical mass, it feeds on itself. The larger the system, the more broadly shared the knowledge, language, and work style. The greater individuals' reputational capital, the louder the applause and the stronger the motivation. The success of Linux is evidence of the power of that virtuous circle. Toyota's success is evidence that it is also powerful in conventional companies.

  7. Elan4/SPARC V9 Cross Loader and Dynamic Linker

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    anf Fabien Lebaillif-Delamare, Fabrizio Petrini

    2004-10-25

    The Elan4/SPARC V9 Cross Loader and Linker is part of the Linux system software that allows the dynamic loading and linking of user code in the Quadrics QsNETII network interface, also known as the Quadrics Elan4. The Elan4 uses a thread processor that is based on the SPARC V9 assembly instruction set. All of this software is integrated as a Linux kernel module in the Linux 2.6.5 release.

  8. Memory Analysis of the KBeast Linux Rootkit: Investigating Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    DTIC Science & Technology

    2015-06-01

    This report examines how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can successfully analyse a Linux memory image infected with malware. Building on an earlier series of reports on analysing memory images and malware, this new series is directed at those who must analyse Linux malware-infected memory images; here the publicly available KBeast rootkit is investigated using the Volatility memory analysis framework.
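    As a rough illustration of how such an analysis might be scripted, the sketch below shells out to a few Volatility 2.x Linux plugins (linux_pslist, linux_lsmod, linux_check_syscall are plugin names from Volatility 2.x as I recall them). The memory image path and profile name are hypothetical; consult the Volatility documentation for the profiles and plugins available in your installation.

    ```python
    # Illustrative sketch: driving Volatility 2.x Linux plugins from Python to triage a
    # possibly rootkit-infected memory image. Paths, profile, and plugin list are assumed.
    import subprocess

    MEMORY_IMAGE = "ubuntu-infected.lime"      # hypothetical memory capture
    PROFILE = "LinuxUbuntu1204x64"             # hypothetical Volatility Linux profile

    # Plugins commonly used when hunting for kernel-level hooks.
    PLUGINS = ["linux_pslist", "linux_lsmod", "linux_check_syscall"]

    for plugin in PLUGINS:
        cmd = ["python", "vol.py", "-f", MEMORY_IMAGE, "--profile=" + PROFILE, plugin]
        print("### running:", " ".join(cmd))
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout)
    ```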

  9. Parvalbumin and GAD65 Interneuron Inhibition in the Ventral Hippocampus Induces Distinct Behavioral Deficits Relevant to Schizophrenia

    PubMed Central

    Nguyen, Robin; Morrissey, Mark D.; Mahadevan, Vivek; Cajanding, Janine D.; Woodin, Melanie A.; Yeomans, John S.; Takehara-Nishiuchi, Kaori

    2014-01-01

    Hyperactivity within the ventral hippocampus (vHPC) has been linked to both psychosis in humans and behavioral deficits in animal models of schizophrenia. A local decrease in GABA-mediated inhibition, particularly involving parvalbumin (PV)-expressing GABA neurons, has been proposed as a key mechanism underlying this hyperactive state. However, direct evidence is lacking for a causal role of vHPC GABA neurons in behaviors associated with schizophrenia. Here, we probed the behavioral function of two different but overlapping populations of vHPC GABA neurons that express either PV or GAD65 by selectively inhibiting these neurons with the pharmacogenetic neuromodulator hM4D. We show that acute inhibition of vHPC GABA neurons in adult mice results in behavioral changes relevant to schizophrenia. Inhibiting either PV or GAD65 neurons produced distinct behavioral deficits. Inhibition of PV neurons, affecting ∼80% of the PV neuron population, robustly impaired prepulse inhibition of the acoustic startle reflex (PPI), startle reactivity, and spontaneous alternation, but did not affect locomotor activity. In contrast, inhibiting a heterogeneous population of GAD65 neurons, affecting ∼40% of PV neurons and 65% of cholecystokinin neurons, increased spontaneous and amphetamine-induced locomotor activity and reduced spontaneous alternation, but did not alter PPI. Inhibition of PV or GAD65 neurons also produced distinct changes in network oscillatory activity in the vHPC in vivo. Together, these findings establish a causal role for vHPC GABA neurons in controlling behaviors relevant to schizophrenia and suggest a functional dissociation between the GABAergic mechanisms involved in hippocampal modulation of sensorimotor processes. PMID:25378161

  10. Circadian time-place (or time-route) learning in rats with hippocampal lesions.

    PubMed

    Cole, Emily; Mistlberger, Ralph E; Merza, Devon; Trigiani, Lianne J; Madularu, Dan; Simundic, Amanda; Mumby, Dave G

    2016-12-01

    Circadian time-place learning (TPL) is the ability to remember both the place and biological time of day that a significant event occurred (e.g., food availability). This ability requires that a circadian clock provide phase information (a time tag) to cognitive systems involved in linking representations of an event with spatial reference memory. To date, it is unclear which neuronal substrates are critical in this process, but one candidate structure is the hippocampus (HPC). The HPC is essential for normal performance on tasks that require allocentric spatial memory and exhibits circadian rhythms of gene expression that are sensitive to meal timing. Using a novel TPL training procedure and enriched, multidimensional environment, we trained rats to locate a food reward that varied between two locations relative to time of day. After rats acquired the task, they received either HPC or SHAM lesions and were re-tested. Rats with HPC lesions were initially impaired on the task relative to SHAM rats, but re-attained high scores with continued testing. Probe tests revealed that the rats were not using an alternation strategy or relying on light-dark transitions to locate the food reward. We hypothesize that transient disruption and recovery reflect a switch from HPC-dependent allocentric navigation (learning places) to dorsal striatum-dependent egocentric spatial navigation (learning routes to a location). Whatever the navigation strategy, these results demonstrate that the HPC is not required for rats to find food in different locations using circadian phase as a discriminative cue. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Hippocampal-medial prefrontal circuit supports memory updating during learning and post-encoding rest

    PubMed Central

    Schlichting, Margaret L.; Preston, Alison R.

    2015-01-01

    Learning occurs in the context of existing memories. Encountering new information that relates to prior knowledge may trigger integration, whereby established memories are updated to incorporate new content. Here, we provide a critical test of recent theories suggesting hippocampal (HPC) and medial prefrontal (MPFC) involvement in integration, both during and immediately following encoding. Human participants with established memories for a set of initial (AB) associations underwent fMRI scanning during passive rest and encoding of new related (BC) and unrelated (XY) pairs. We show that HPC-MPFC functional coupling during learning was more predictive of trial-by-trial memory for associations related to prior knowledge relative to unrelated associations. Moreover, the degree to which HPC-MPFC functional coupling was enhanced following overlapping encoding was related to memory integration behavior across participants. We observed a dissociation between anterior and posterior MPFC, with integration signatures during post-encoding rest specifically in the posterior subregion. These results highlight the persistence of integration signatures into post-encoding periods, indicating continued processing of interrelated memories during rest. We also interrogated the coherence of white matter tracts to assess the hypothesis that integration behavior would be related to the integrity of the underlying anatomical pathways. Consistent with our predictions, more coherent HPC-MPFC white matter structure was associated with better performance across participants. This HPC-MPFC circuit also interacted with content-sensitive visual cortex during learning and rest, consistent with reinstatement of prior knowledge to enable updating. These results show that the HPC-MPFC circuit supports on- and offline integration of new content into memory. PMID:26608407

  12. Context memory formation requires activity-dependent protein degradation in the hippocampus.

    PubMed

    Cullen, Patrick K; Ferrara, Nicole C; Pullins, Shane E; Helmstetter, Fred J

    2017-11-01

    Numerous studies have indicated that the consolidation of contextual fear memories supported by an aversive outcome like footshock requires de novo protein synthesis as well as protein degradation mediated by the ubiquitin-proteasome system (UPS). Context memory formed in the absence of an aversive stimulus by simple exposure to a novel environment requires de novo protein synthesis in both the dorsal (dHPC) and ventral (vHPC) hippocampus. However, the role of UPS-mediated protein degradation in the consolidation of context memory in the absence of a strong aversive stimulus has not been investigated. In the present study, we used the context preexposure facilitation effect (CPFE) procedure, which allows for the dissociation of context learning from context-shock learning, to investigate the role of activity-dependent protein degradation in the dHPC and vHPC during the formation of a context memory. We report that blocking protein degradation with the proteasome inhibitor clasto-lactacystin β-lactone (βLac) or blocking protein synthesis with anisomycin (ANI) immediately after context preexposure significantly impaired context memory formation. Additionally, we examined 20S proteasome activity at different time points following context exposure and saw that the activity of proteasomes in the dHPC increases immediately after stimulus exposure while the vHPC exhibits a biphasic pattern of proteolytic activity. Taken together, these data suggest that the requirement of increased proteolysis during memory consolidation is not driven by processes triggered by the strong aversive outcome (i.e., shock) normally used to support fear conditioning. © 2017 Cullen et al.; Published by Cold Spring Harbor Laboratory Press.

  13. Experimental investigation on high performance RC column with manufactured sand and silica fume

    NASA Astrophysics Data System (ADS)

    Shanmuga Priya, T.

    2017-11-01

    In recent years, the use of High Performance Concrete (HPC) has increased in the construction industry. The ingredients of HPC depend on the availability and characteristics of suitable alternative materials. Those alternative materials are silica fume and manufactured sand, by-products of the ferrosilicon and quarry industries, respectively. HPC made with silica fume as a partial replacement for cement and manufactured sand as a replacement for natural sand is considered a sustainable high performance concrete. In this study the concrete was designed for a target strength of 60 MPa as per the guidelines given by ACI 211-4R (2008). A laboratory study was carried out experimentally to analyse the axial behavior of reinforced cement HPC columns of size 100×100×1000 mm, square in cross section. 10% silica fume was used in place of ordinary portland cement. The natural sand was replaced by 0, 20, 40, 60, 80 and 100% Manufactured Sand (M-Sand). In this investigation, a total of 6 column specimens were cast for mixes M1 to M6 and were tested in a 1000 kN loading frame at 28 days. From this, load-mid-height deflection curves were drawn and compared. The maximum ultimate load carrying capacity and the least deflection were obtained for the mix prepared by partial replacement of cement with 10% silica fume and natural sand with 100% M-Sand. The fine, amorphous and pozzolanic nature of silica fume and the fine mineral particles in M-Sand increased the stiffness of the HPC column. The test results revealed that HPC can be produced by using M-Sand with silica fume.

  14. Construction of crack-free bridge decks : technical summary.

    DOT National Transportation Integrated Search

    2017-04-01

    The report documents the performance of the decks based on crack surveys performed on the LC-HPC decks and : matching control bridge decks. The specifications for LC-HPC bridge decks, which cover aggregates, concrete, : and construction procedures, a...

  15. 77 FR 30371 - Airworthiness Directives; International Aero Engines AG Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-23

    ... (USIs) of certain high-pressure compressor (HPC) stage 3 to 8 drums, and replacement of drum attachment... Condition This AD results from reports of 50 additional high-pressure compressor (HPC) stage 3 to 8 drums...

  16. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    DOE PAGES

    Engelmann, Christian; Hukerikar, Saurabh

    2017-09-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage, and their performance and power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The resilience patterns and the design framework also enable exploration and evaluation of design alternatives and support optimization of the cost-benefit trade-offs among performance, protection coverage, and power consumption of resilience solutions. Here, the overall goal of this work is to establish a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner despite frequent faults, errors, and failures of various types.
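    One of the most familiar techniques the paper treats as a reusable pattern is application-level checkpoint/restart. The sketch below is a minimal, language-agnostic illustration of that pattern (not drawn from the paper's catalog): solver state is serialised periodically and execution resumes from the last checkpoint after a failure. The file name and checkpoint interval are assumptions.

    ```python
    # Minimal illustration of the checkpoint/restart resilience pattern (not from the paper).
    import os
    import pickle

    CHECKPOINT = "state.ckpt"   # hypothetical checkpoint file

    def load_state():
        """Resume from the last checkpoint if one exists, else start fresh."""
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                return pickle.load(f)
        return {"step": 0, "value": 0.0}

    def save_state(state):
        """Write the checkpoint atomically so a crash mid-write cannot corrupt it."""
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, CHECKPOINT)

    state = load_state()
    for step in range(state["step"], 1000):
        state["value"] += 0.001 * step      # stand-in for one iteration of real work
        state["step"] = step + 1
        if step % 100 == 0:                 # checkpoint interval is a tunable cost/coverage trade-off
            save_state(state)
    save_state(state)
    print("finished at step", state["step"], "value", round(state["value"], 3))
    ```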

  17. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Hukerikar, Saurabh

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage, and their performance and power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The resilience patterns and the design framework also enable exploration and evaluation of design alternatives and support optimization of the cost-benefit trade-offs among performance, protection coverage, and power consumption of resilience solutions. Here, the overall goal of this work is to establish a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner despite frequent faults, errors, and failures of various types.

  18. An integrated genetic data environment (GDE)-based LINUX interface for analysis of HIV-1 and other microbial sequences.

    PubMed

    De Oliveira, T; Miller, R; Tarin, M; Cassol, S

    2003-01-01

    Sequence databases encode a wealth of information needed to develop improved vaccination and treatment strategies for the control of HIV and other important pathogens. To facilitate effective utilization of these datasets, we developed a user-friendly GDE-based LINUX interface that reduces input/output file formatting. GDE was adapted to the Linux operating system, bioinformatics tools were integrated with microbe-specific databases, and up-to-date GDE menus were developed for several clinically important viral, bacterial and parasitic genomes. Each microbial interface was designed for local access and contains Genbank, BLAST-formatted and phylogenetic databases. GDE-Linux is available for research purposes by direct application to the corresponding author. Application-specific menus and support files can be downloaded from (http://www.bioafrica.net).

  19. LLNL Center of Excellence Work Items for Q9-Q10 period

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neely, J. R.

    This work plan encompasses a slice of effort going on within the ASC program, and for projects utilizing COE vendor resources, describes work that will be performed by both LLNL staff and COE vendor staff collaboratively.

  20. ALPHA SMP SYSTEM(S) Final Report CRADA No. TC-1404-97

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seager, M.; Beaudet, T.

    Within the scope of this subcontract, Digital Equipment Corporation (DIGITAL) and the University, through the Lawrence Livermore National Laboratory (LLNL), engaged in joint research and development activities of mutual interest and benefit. The primary objectives of these activities were, for LLNL to improve its capability to perform its mission, and for DIGITAL to develop technical capability complimentary to this mission. The collaborative activities had direct manpower investments by DIGITAL and LLNL. The project was divided into four areas of concern, which were handled concurrently. These areas included Gang Scheduling, Numerical Methods, Applications Development and Code Development Tools.

  1. Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, S R; Bihari, B L; Salari, K

    As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.

  2. An Archive of Downscaled WCRP CMIP3 Climate Projections for Planning Applications in the Contiguous United States

    NASA Astrophysics Data System (ADS)

    Brekke, L. D.; Pruitt, T.; Maurer, E. P.; Duffy, P. B.

    2007-12-01

    Incorporating climate change information into long-term evaluations of water and energy resources requires analysts to have access to climate projection data that have been spatially downscaled to "basin-relevant" resolution. This is necessary in order to develop system-specific hydrology and demand scenarios consistent with projected climate scenarios. Analysts currently have access to "climate model" resolution data (e.g., at LLNL PCMDI), but not spatially downscaled translations of these datasets. Motivated by a common interest in supporting regional and local assessments, the U.S. Bureau of Reclamation and LLNL (through support from the DOE National Energy Technology Laboratory) have teamed to develop an archive of downscaled climate projections (temperature and precipitation) with geographic coverage consistent with the North American Land Data Assimilation System domain, encompassing the contiguous United States. A web-based information service, hosted at LLNL Green Data Oasis, has been developed to provide Reclamation, LLNL, and other interested analysts free access to archive content. A contemporary statistical method was used to bias-correct and spatially disaggregate projection datasets, and was applied to 112 projections included in the WCRP CMIP3 multi-model dataset hosted by LLNL PCMDI (i.e. 16 GCMs and their multiple simulations of SRES A2, A1b, and B1 emissions pathways).
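    The archive was produced with a statistical bias-correction and spatial-disaggregation (BCSD-style) procedure. The sketch below illustrates only the bias-correction idea, empirical quantile mapping, on synthetic data; it is a generic illustration of the technique, not the code used to build the archive, and the array names and distributions are assumptions.

    ```python
    # Illustration of quantile-mapping bias correction, the "BC" half of a BCSD-style
    # downscaling workflow. Synthetic data; not the code used to build the archive.
    import numpy as np

    rng = np.random.default_rng(0)
    obs      = rng.gamma(shape=2.0, scale=3.0, size=5000)   # observed monthly precipitation (assumed)
    gcm_hist = rng.gamma(shape=2.0, scale=4.0, size=5000)   # GCM over the same historical period (biased)
    gcm_fut  = rng.gamma(shape=2.0, scale=4.5, size=5000)   # GCM projection to be corrected

    def quantile_map(x, model_hist, observed):
        """Map each model value to the observed value at the same empirical quantile."""
        # Rank each value within the historical model distribution...
        q = np.searchsorted(np.sort(model_hist), x) / len(model_hist)
        q = np.clip(q, 0.0, 1.0)
        # ...then read off the observed distribution at that quantile.
        return np.quantile(observed, q)

    corrected = quantile_map(gcm_fut, gcm_hist, obs)
    print("raw GCM mean %.2f -> bias-corrected mean %.2f (obs mean %.2f)"
          % (gcm_fut.mean(), corrected.mean(), obs.mean()))
    ```

    In a full BCSD workflow this correction is applied per month and per coarse grid cell, after which the corrected anomalies are spatially disaggregated to the fine grid.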

  3. LLNL Results from CALIBAN-PROSPERO Nuclear Accident Dosimetry Experiments in September 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobaugh, M. L.; Hickman, D. P.; Wong, C. W.

    2015-05-21

    Lawrence Livermore National Laboratory (LLNL) uses thin neutron activation foils, sulfur, and threshold energy shielding to determine neutron component doses and the total dose from neutrons in the event of a nuclear criticality accident. The dosimeter also uses a DOELAP accredited Panasonic UD-810 (Panasonic Industrial Devices Sales Company of America, 2 Riverfront Plaza, Newark, NJ 07102, U.S.A.) thermoluminescent dosimetry system (TLD) for determining the gamma component of the total dose. LLNL has participated in three international intercomparisons of nuclear accident dosimeters. In October 2009, LLNL participated in an exercise at the French Commissariat à l'énergie atomique et aux énergies alternatives (Alternative Energies and Atomic Energy Commission, CEA) Research Center at Valduc utilizing the SILENE reactor (Hickman et al. 2010). In September 2010, LLNL participated in a second intercomparison at CEA Valduc, this time with exposures at the CALIBAN reactor (Hickman et al. 2011). This paper discusses LLNL's results of a third intercomparison hosted by the French Institut de Radioprotection et de Sûreté Nucléaire (Institute for Radiation Protection and Nuclear Safety, IRSN) with exposures at two CEA Valduc reactors (CALIBAN and PROSPERO) in September 2014. Comparison results between the three participating facilities are presented elsewhere (Chevallier 2015; Duluc 2015).

  4. FY16 LLNL Omega Experimental Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heeter, R. F.; Ali, S. J.; Benstead, J.

    In FY16, LLNL's High-Energy-Density Physics (HED) and Indirect Drive Inertial Confinement Fusion (ICF-ID) programs conducted several campaigns on the OMEGA laser system and on the EP laser system, as well as campaigns that used the OMEGA and EP beams jointly. Overall, these LLNL programs led 430 target shots in FY16, with 304 shots using just the OMEGA laser system, and 126 shots using just the EP laser system. Approximately 21% of the total number of shots (77 OMEGA shots and 14 EP shots) supported the Indirect Drive Inertial Confinement Fusion Campaign (ICF-ID). The remaining 79% (227 OMEGA shots and 112 EP shots) were dedicated to experiments for High-Energy-Density Physics (HED). Highlights of the various HED and ICF campaigns are summarized in the following reports. In addition to these experiments, LLNL Principal Investigators led a variety of Laboratory Basic Science campaigns using OMEGA and EP, including 81 target shots using just OMEGA and 42 shots using just EP. The highlights of these are also summarized, following the ICF and HED campaigns. Overall, LLNL PIs led a total of 553 shots at LLE in FY 2016. In addition, LLNL PIs also supported 57 NLUF shots on Omega and 31 NLUF shots on EP, in collaboration with the academic community.

  5. Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R

    2011-01-01

    This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more, where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
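    The abstract argues that coordinating time slices across nodes improves the scaling of synchronizing collectives. The toy simulation below (an illustration of the general argument, not the paper's kernel algorithm) shows why: with uncoordinated scheduling, every barrier waits for the slowest node, so independent per-node OS interruptions compound with node count. All constants are assumptions.

    ```python
    # Toy model of why co-scheduling helps bulk-synchronous applications (illustrative only).
    import random

    random.seed(42)
    NODES, ITERATIONS = 1024, 200
    WORK, NOISE, NOISE_PROB = 1.0, 0.5, 0.02   # per-iteration compute time, OS-noise cost/probability (assumed)

    def run(coscheduled):
        total = 0.0
        for _ in range(ITERATIONS):
            if coscheduled:
                # System work aligned across nodes: every node pays the noise at the same time.
                delays = [NOISE if random.random() < NOISE_PROB else 0.0] * NODES
            else:
                # Uncoordinated: each node is interrupted independently.
                delays = [NOISE if random.random() < NOISE_PROB else 0.0 for _ in range(NODES)]
            # A synchronizing collective finishes only when the slowest node arrives.
            total += WORK + max(delays)
        return total

    print("uncoordinated: %.1f" % run(False))
    print("co-scheduled : %.1f" % run(True))
    ```

    At 1024 nodes the probability that at least one node is interrupted per iteration is close to one, so the uncoordinated run pays the noise penalty on nearly every barrier while the co-scheduled run pays it only about 2% of the time.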

  6. Develop, Build, and Test a Virtual Lab to Support a Vulnerability Training System

    DTIC Science & Technology

    2004-09-01

    Dell PowerEdge 1650 dual-processor blade servers were configured as host machines with VMware and VNC running on a Linux RedHat 9 kernel, and an Apache-Tomcat web server was configured as the external interface to the virtual lab. Cited references include Dell's PowerEdge 1650 documentation (docs.us.dell.com/support/edocs/systems/pe1650/en/it/index.htm, 20 August 2004) and a HOWTO on installing web services with Linux/Tomcat/Apache/Struts.

  7. Connecting to HPC Systems | High-Performance Computing | NREL

    Science.gov Websites

    NREL HPC systems are accessed using methods that require multi-factor authentication; you will first need to set up an account and a one-time password token. If you just need access to a command line on an HPC system, use one of the documented login methods.

  8. Direct SSH Gateway Access to Peregrine | High Performance Computing |

    Science.gov Websites

    Before you can access peregrine-ssh.nrel.gov, you must have an active NREL HPC user account (see User Accounts) and an OTP token (see One-Time Password Tokens). You can then log into peregrine-ssh.nrel.gov with your HPC account.
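    As a rough sketch of scripted access through such an SSH gateway, the snippet below uses the Paramiko library (not an NREL-provided tool) to open a connection to the gateway host named in the record. The username prompt and the handling of the one-time password are assumptions; sites with OTP-protected gateways may require their own client configuration or an interactive login instead.

    ```python
    # Illustrative sketch of connecting to an SSH gateway with Paramiko; username and
    # prompt handling are assumptions, and OTP-protected sites may need site-specific steps.
    import getpass
    import paramiko

    GATEWAY = "peregrine-ssh.nrel.gov"   # gateway host named in the record
    username = input("HPC username: ")
    password = getpass.getpass("Password + OTP (if your site concatenates them): ")

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # for illustration; verify host keys in practice
    client.connect(GATEWAY, username=username, password=password)

    stdin, stdout, stderr = client.exec_command("hostname && uptime")
    print(stdout.read().decode())
    client.close()
    ```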

  9. CT imaging of malignant metastatic hemangiopericytoma of the parotid gland with histopathological correlation

    PubMed Central

    Khoo, James B.; Sittampalam, Kesavan; Chee, Soo K.

    2008-01-01

    We report an extremely rare case of malignant hemangiopericytoma (HPC) of the parotid gland and its metastatic spread to lung, liver, and skeletal muscle. Computed tomography (CT) imaging, histopathological and immunohistochemical methods were employed to study the features of malignant HPC and its metastases. CT imaging was helpful in determining the exact location, involvement of adjacent structures and vascularity, as well as evaluating pulmonary, hepatic, peritoneal, and muscular metastases. Immunohistochemical and histopathological features of the primary tumor as well as the metastases were consistent with the diagnosis of malignant HPC. PMID:18940737

  10. Prediction and characterization of application power use in a high-performance computing environment

    DOE PAGES

    Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...

    2017-02-27

    Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
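    The abstract describes predicting application power from a priori and in situ job characteristics. As a generic illustration of that kind of model (not the study's method or data), the sketch below fits an ordinary least-squares predictor from a few hypothetical job features to average per-node power on synthetic data; feature choice, coefficients, and units are all assumptions.

    ```python
    # Illustration of a simple a-priori power predictor: least squares on synthetic job features.
    # Feature choice and data are assumptions, not the study's actual model or dataset.
    import numpy as np

    rng = np.random.default_rng(1)
    n_jobs = 500
    nodes      = rng.integers(1, 65, n_jobs)          # requested node count
    cpu_bound  = rng.random(n_jobs)                   # fraction of time in compute vs. I/O (hypothetical)
    walltime_h = rng.uniform(0.5, 24.0, n_jobs)       # requested walltime (hours)

    # Synthetic "true" relationship plus noise, standing in for measured per-node power (W).
    power = 180 + 90 * cpu_bound + 0.3 * nodes + rng.normal(0, 10, n_jobs)

    X = np.column_stack([np.ones(n_jobs), cpu_bound, nodes, walltime_h])
    coef, *_ = np.linalg.lstsq(X, power, rcond=None)

    new_job = np.array([1.0, 0.8, 32, 6.0])           # hypothetical incoming job
    print("predicted per-node power: %.1f W" % (new_job @ coef))
    ```

    A power-aware scheduler of the kind mentioned in the abstract could use such predictions to keep the sum of expected job powers under a facility cap when deciding what to start next.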

  11. Final Report on the Proposal to Provide Asian Science and Technology Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahaner, David K.

    2003-07-23

    The Asian Technology Information Program (ATIP) conducted a seven-month Asian science and technology information program for the Office of Energy Research (ER), U.S. Department of Energy (DOE). The seven-month program consisted of 1) monitoring, analyzing, and disseminating science and technology trends and developments associated with Asian high performance computing and communications (HPC), networking, and associated topics, 2) access to ATIP's annual series of Asian S&T reports for ER and HPC related personnel, and 3) supporting DOE and ER designated visits to Asia to study and assess Asian HPC.

  12. On the Impact of Execution Models: A Case Study in Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram

    2015-05-25

    Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
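    The reported 50 percent gain came from replacing static scheduling with work stealing. The sketch below is a minimal, single-process illustration of the work-stealing idea (idle workers take tasks from the back of a busy worker's queue); it is not the paper's runtime, and the worker count, task count, and deliberately imbalanced initial partition are assumptions.

    ```python
    # Minimal work-stealing illustration (not the paper's runtime): each worker owns a deque,
    # pops tasks from its own front, and steals from the back of a random victim when idle.
    import random
    import threading
    from collections import deque

    N_WORKERS, N_TASKS = 4, 400
    queues = [deque() for _ in range(N_WORKERS)]
    locks = [threading.Lock() for _ in range(N_WORKERS)]
    results = [0] * N_WORKERS

    # Deliberately imbalanced initial partition, the situation work stealing is meant to fix.
    for i in range(N_TASKS):
        queues[0].append(i)

    def worker(wid):
        while True:
            task = None
            with locks[wid]:
                if queues[wid]:
                    task = queues[wid].popleft()        # own work: take from the front
            if task is None:
                victim = random.randrange(N_WORKERS)
                with locks[victim]:
                    if queues[victim]:
                        task = queues[victim].pop()     # steal from the back of a victim
            if task is None:
                if not any(queues):                     # no queued work anywhere: done
                    return
                continue
            results[wid] += task * task                 # stand-in for a real computation

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(N_WORKERS)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("per-worker partial sums:", results, "total:", sum(results))
    ```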

  13. Role of Tat-interacting protein of 110 kDa and microRNAs in the regulation of hematopoiesis.

    PubMed

    Liu, Ying; He, Johnny J

    2016-07-01

    Hematopoiesis is regulated by cellular factors including transcription factors, microRNAs, and epigenetic modifiers. Understanding how these factors regulate hematopoiesis is pivotal for manipulating them to achieve their desired potential. In this review, we will focus on HIV-1 Tat-interacting protein of 110 kDa (Tip110) and its regulation of hematopoiesis. There are several pathways in hematopoiesis that involve Tip110 regulation. Tip110 is expressed in human cord blood CD34 cells; its expression decreases when CD34 cells begin to differentiate. Tip110 is also expressed in mouse marrow hematopoietic stem cells (HSC) and hematopoietic progenitor cells (HPC). Tip110 expression increases the number, survival, and cell cycling of HPC. Tip110-mediated regulation of hematopoiesis has been linked to its reciprocal control of proto-oncogene expression. Small noncoding microRNAs (miRs) have been shown to play important roles in regulation of hematopoiesis. miR-124 specifically targets 3'-untranslated region of Tip110 and subsequently regulates Tip110 expression in HSC. Our recent findings for manipulating expression levels of Tip110 in HSC and HPC could be useful for expanding HSC and HPC and for improving engraftment of cord blood HSC/HPC.

  14. Synergistic effect of Nitrogen-doped hierarchical porous carbon/graphene with enhanced catalytic performance for oxygen reduction reaction

    NASA Astrophysics Data System (ADS)

    Kong, Dewang; Yuan, Wenjing; Li, Cun; Song, Jiming; Xie, Anjian; Shen, Yuhua

    2017-01-01

    Developing efficient and economical catalysts for the oxygen reduction reaction (ORR) is important to promote the commercialization of fuel cells. Here, we report a simple and environmentally friendly method to prepare nitrogen (N)-doped hierarchical porous carbon (HPC)/reduced graphene oxide (RGO) composites by reusing waste biomass (pomelo peel) coupled with graphene oxide (GO). This method is green, low-cost and does not use any acid or alkali activator. The typical sample (N-HPC/RGO-1) contains 5.96 at.% nitrogen and has a large BET surface area (1194 m2/g). Electrochemical measurements show that N-HPC/RGO-1 exhibits not only a relatively positive onset potential and high current density, but also considerable methanol tolerance and long-term durability in alkaline as well as acidic media. The electron transfer number is close to 4, which means that the ORR proceeds mostly via a four-electron pathway. The excellent catalytic performance of N-HPC/RGO-1 is due to the synergistic effect of the inherent interwoven network structure of HPC, the good electrical conductivity of RGO, and the heteroatom doping of the composite. More importantly, this work demonstrates a good example of turning discarded rubbish into valuable functional products and simultaneously addresses the disposal issue of waste biomass.

  15. Analysis of hematopoietic recovery after autologous transplantation as method of quality control for long-term progenitor cell cryopreservation.

    PubMed

    Pavlů, J; Auner, H W; Szydlo, R M; Sevillano, B; Palani, R; O'Boyle, F; Chaidos, A; Jakob, C; Kanfer, E; MacDonald, D; Milojkovic, D; Rahemtulla, A; Bradshaw, A; Olavarria, E; Apperley, J F; Pello, O M

    2017-12-01

    Hematopoietic precursor cells (HPC) are able to restore hematopoiesis after high-dose chemotherapy and their cryopreservation is routinely employed prior to autologous hematopoietic cell transplantation (AHCT). Although previous studies showed the feasibility of long-term HPC storage, concerns remain about possible negative effects on their potency. To study the effects of long-term cryopreservation, we compared time to neutrophil and platelet recovery in 50 patients receiving two AHCT for multiple myeloma at least 2 years apart between 2006 and 2016, using HPC obtained from one mobilization and collection attempt before the first transplant. This product was divided into equivalent fractions allowing a minimum of 2 × 10⁶ CD34+ cells/kg recipient's weight. One fraction was used for the first transplant after a median storage of 60 days (range, 17-165) and another fraction was used after a median storage of 1448 days (range, 849-3510) at the second AHCT. Neutrophil recovery occurred at 14 days (median; range, 11-21) after the first and 13 days (10-20) after the second AHCT. Platelets recovered at a median of 16 days after both procedures. Considering other factors, such as disease status, conditioning and HPC dose, this single-institution data demonstrated no reduction in the potency of HPC after long-term storage.

  16. Freestanding hierarchically porous carbon framework decorated by polyaniline as binder-free electrodes for high performance supercapacitors

    NASA Astrophysics Data System (ADS)

    Miao, Fujun; Shao, Changlu; Li, Xinghua; Wang, Kexin; Lu, Na; Liu, Yichun

    2016-10-01

    Freestanding hierarchically porous carbon electrode materials with the favorable features of large surface areas, hierarchical porosity and continuous conducting pathways are very attractive for practical applications in electrochemical devices. Herein, three-dimensional freestanding hierarchically porous carbon (HPC) materials have been fabricated successfully, mainly by a facile phase separation method. In order to further improve the energy storage ability, polyaniline (PANI) with high pseudocapacitance has been decorated on HPC through in situ chemical polymerization of aniline monomers. Benefiting from the synergistic effects between HPC and PANI, the resulting HPC/PANI composites as electrode materials present dramatic electrochemical performance, with a high specific capacitance of up to 290 F g⁻¹ at 0.5 A g⁻¹ and good rate capability, retaining ∼86% (248 F g⁻¹) of the initial capacitance at 64 A g⁻¹ in a three-electrode configuration. Moreover, the as-assembled symmetric supercapacitor based on HPC/PANI composites also demonstrates good capacitive properties, with a high energy density of 9.6 Wh kg⁻¹ at 223 W kg⁻¹ and long-term cycling stability with 78% capacitance retention after 10 000 cycles. Therefore, this work provides a new approach for designing high-performance electrodes with exceptional electrochemical performance, which are very promising for practical application in the energy storage field.

  17. Hydroxypropyl cellulose methacrylate as a photo-patternable and biodegradable hybrid paper substrate for cell culture and other bioapplications.

    PubMed

    Qi, Aisha; Hoo, Siew Pei; Friend, James; Yeo, Leslie; Yue, Zhilian; Chan, Peggy P Y

    2014-04-01

    In addition to the choice of appropriate material properties of the tissue construct to be used, such as its biocompatibility, biodegradability, cytocompatibility, and mechanical rigidity, the ability to incorporate microarchitectural patterns in the construct to mimic those found in the cellular microenvironment is an important consideration in tissue engineering and regenerative medicine. Both these issues are addressed by demonstrating a method for preparing biodegradable and photo-patternable constructs, where modified cellulose is cross-linked to form an insoluble structure in an aqueous environment. Specifically, hydroxypropyl cellulose (HPC) is rendered photocrosslinkable by grafting with methacrylic anhydride, whose linkages also render the cross-linked construct hydrolytically degradable. The HPC is then cross-linked via a photolithography-based fabrication process. The feasibility of functionalizing these HPC structures with biochemical cues is verified post-fabrication, and shown to facilitate the adhesion of mesenchymal progenitor cells. The HPC constructs are shown to be biocompatible and hydrolytically degradable, thus enabling cell proliferation and cell migration, and therefore constituting an ideal candidate for long-term cell culture and implantable tissue scaffold applications. In addition, the potential of the HPC structure is demonstrated as an alternative substrate to paper microfluidic diagnostic devices for protein and cell assays. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Metabolism of 17α-hydroxyprogesterone caproate by hepatic and placental microsomes of human and baboons

    PubMed Central

    Yan, Ru; Nanovskaya, Tatiana N.; Zharikova, Olga L.; Mattison, Donald R.; Hankins, Gary D.V.; Ahmed, Mahmoud S.

    2008-01-01

    Recent data from our laboratory revealed the formation of an unknown metabolite of 17α-hydroxyprogesterone caproate (17-HPC), used for treatment of preterm deliveries, during its perfusion across the dually perfused human placental lobule. Previously, we demonstrated that the drug is not hydrolyzed, either in vivo or in vitro, to progesterone and caproate. Therefore, the hypothesis for this investigation is that 17-HPC is actively metabolized by human and baboon (Papio cynocephalus) hepatic and placental microsomes. Baboon hepatic and placental microsomes were investigated to validate the nonhuman primate as an animal model for drug use during pregnancy. Data presented here indicate that human and baboon hepatic microsomes formed several mono-, di-, and tri-hydroxylated derivatives of 17-HPC. However, microsomes of human and baboon placentas metabolized 17-HPC to its mono-hydroxylated derivatives only, in quantities that were a fraction of those formed by their respective livers, except for two metabolites (M16’ and M17’) that are unique to placenta and contributed 25% and 75% of the total metabolites formed by human and baboon, respectively. The amounts of metabolites formed, relative to each other, by human and baboon microsomes were different, suggesting that the affinity of 17-HPC for CYP enzymes and their activity could be species-dependent. PMID:18329004

  19. Towards New Metrics for High-Performance Computing Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian

    Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.
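    The abstract does not give the metrics' formulas. Purely as an illustration of the general idea of folding outcome reliability and performance overhead into a single figure of merit, the sketch below computes a hypothetical "resilience efficiency" from a fault-free runtime, a protected runtime, and completion counts; the formula is an assumption of this illustration, not the paper's proposed metric.

    ```python
    # Hypothetical combined resilience metric (illustration only; not the paper's definition).
    def resilience_efficiency(t_fault_free, t_protected, runs_attempted, runs_correct):
        """Blend performance overhead with outcome reliability into one score in [0, 1]."""
        performance_efficiency = t_fault_free / t_protected     # <= 1 when protection adds overhead
        reliability = runs_correct / runs_attempted             # fraction of runs with correct output
        return performance_efficiency * reliability

    # Example: checkpointing adds 15% runtime overhead but raises correct completions to 97/100.
    print(round(resilience_efficiency(100.0, 115.0, 100, 97), 3))   # ~0.843
    ```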

  20. The role of the hippocampus in approach-avoidance conflict decision-making: Evidence from rodent and human studies.

    PubMed

    Ito, Rutsuko; Lee, Andy C H

    2016-10-15

    The hippocampus (HPC) has been traditionally considered to subserve mnemonic processing and spatial cognition. Over the past decade, however, there has been increasing interest in its contributions to processes beyond these two domains. One question is whether the HPC plays an important role in decision-making under conditions of high approach-avoidance conflict, a scenario that arises when a goal stimulus is simultaneously associated with reward and punishment. This idea has its origins in rodent work conducted in the 1950s and 1960s, and has recently experienced a resurgence of interest in the literature. In this review, we will first provide an overview of classic rodent lesion data that first suggested a role for the HPC in approach-avoidance conflict processing and then proceed to describe a wide range of more recent evidence from studies conducted in rodents and humans. We will demonstrate that there is substantial, converging cross-species evidence to support the idea that the HPC, in particular the ventral (in rodents)/anterior (in humans) portion, contributes to approach-avoidance conflict decision making. Furthermore, we suggest that the seemingly disparate functions of the HPC (e.g. memory, spatial cognition, conflict processing) need not be mutually exclusive. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  1. Infrared Imaging Camera Final Report CRADA No. TC02061.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, E. V.; Nebeker, S.

    This was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and Cordin Company (Cordin) to enhance the U.S. ability to develop a commercial infrared camera capable of capturing high-resolution images in a 100 nanosecond (ns) time frame. The Department of Energy (DOE), under an Initiative for Proliferation Prevention (IPP) project, funded the Russian Federation Nuclear Center All-Russian Scientific Institute of Experimental Physics (RFNC-VNIIEF) in Sarov. VNIIEF was funded to develop a prototype commercial infrared (IR) framing camera and to deliver a prototype IR camera to LLNL. LLNL and Cordin were partners with VNIIEF on this project. A prototype IR camera was delivered by VNIIEF to LLNL in December 2006. In June of 2007, LLNL and Cordin evaluated the camera and the test results revealed that the camera exceeded presently available commercial IR cameras. Cordin believes that the camera can be sold on the international market. The camera is currently being used as a scientific tool within Russian nuclear centers. This project was originally designated as a two year project. The project was not started on time due to changes in the IPP project funding conditions; the project funding was re-directed through the International Science and Technology Center (ISTC), which delayed the project start by over one year. The project was not completed on schedule due to changes within the Russian government export regulations. These changes were directed by Export Control regulations on the export of high technology items that can be used to develop military weapons, and the IR camera was on the list of items subject to those controls. The ISTC and Russian government, after negotiations, allowed the delivery of the camera to LLNL. There were no significant technical or business changes to the original project.

  2. End-of-life care for immigrants in Germany. An epidemiological appraisal of Berlin.

    PubMed

    Henke, Antje; Thuss-Patience, Peter; Behzadi, Asita; Henke, Oliver

    2017-01-01

    Since the late 1950s, a steadily growing immigrant population in Germany has given rise to a subpopulation of aging immigrants. The German health care system needs to adjust its services (linguistically, culturally, and medically) for this subpopulation of patients. Immigrants make up over 20% of the population in Germany, yet the majority receive inadequate medical care. Although many of the labor immigrants of the 1960s and 1970s are now in need of hospice and palliative care (HPC), little is known about this specialized care for immigrants. This epidemiological study presents utilization of HPC facilities in Berlin with a focus on different immigrant groups. A validated questionnaire was used to collect data from patients at 34 HPC institutions in Berlin over 20 months. All newly admitted patients were recruited. Anonymized data were coded, analyzed using SPSS, and compared with the population statistics of Berlin. 4118 questionnaires were completed and included in the analysis. At 11.4%, the proportion of immigrants accessing HPC was significantly (p<0.001) below their proportion in the general Berlin population. This difference was especially pronounced in the age groups of 51-60 years (21.46% immigrants in the Berlin population vs. 17.7% immigrants in the HPC population) and 61-70 years (16.9% vs. 13.1%). The largest ethnic groups are Turks, Russians, and Poles, with a different weighting than in the general population: Turkish immigrants were 24% of all Berlin immigrants, but only 13.6% of the study immigrant population (OR: 0.23, 95%CI: 0.18-0.29, p<0.001). Russian and Polish immigrants account for 5.6% and 9.2% of the population, but 11.5% and 24.8% of the study population, respectively (Russian: OR 0.88, 95%CI: 0.66-1.16; Polish: OR 1.17, 95%CI: 0.97-1.42). Palliative care wards (PC) were used most often (16.7% immigrants of all PC patients); outpatient hospice services were used least often by immigrants (11.4%). Median age at first admission to HPC was younger in immigrants than in non-immigrants: 61-70 vs. 71-80, p = 0.03. Immigrants are underrepresented in Berlin's HPC, and on average they make use of care at a younger age than non-immigrants. Turkish immigrants in particular have the poorest utilization of HPC; since they represent the largest immigrant group, these results should prompt research on access barriers for Turkish immigrants. This may be due to a lack of cultural sensitivity among care providers and a lack of knowledge about HPC among immigrants. Comparing the kinds of institutions, immigrants are less likely to access outpatient hospice services than PC wards; PC wards appear to present a lower hurdle to utilization. These results argue against the oft-cited "healthy immigrant effect" for the first generation of labor immigrants, who are now entering old age. These findings correspond with studies suggesting increased health concerns in immigrants. Focused research is needed to promote efforts in providing adequate and fair access to HPC for all people in Berlin.

  3. Test plan : Branson TRIP travel time/data accuracy

    DOT National Transportation Integrated Search

    2000-04-01

    In the mid 1990's the FHWA established a High Performance Concrete (HPC) program aimed at demonstrating the positive effects of utilizing HPC in bridges. Research on the benefits of using high performance concrete for bridges has shown a number of be...

  4. Fatigue and shear behavior of HPC bulb tee girders : LTRC technical summary report.

    DOT National Transportation Integrated Search

    2008-04-01

    The objectives of the research were (1) to provide assurance that full size, deep prestressed concrete girders made with HPC would perform satisfactorily under flexural fatigue, static shear, and static flexural loading conditions; (2) to determine i...

  5. Long-term monitoring of the HPC Charenton Canal Bridge : tech summary.

    DOT National Transportation Integrated Search

    2011-08-01

    In 1997, the Louisiana Department of Transportation and Development (LADOTD) began to design the : Charenton Canal Bridge using HPC for both the superstructure and the substructure. As a part of the project, : a research contract was awarded to assis...

  6. Long-term monitoring of the HPC Charenton Canal Bridge.

    DOT National Transportation Integrated Search

    2011-08-01

    The report contains long-term monitoring data collection and analysis of the first fully high : performance concrete (HPC) bridge in Louisiana, the Charenton Canal Bridge. The design of this : bridge started in 1997, and it was built and opened to tr...

  7. Kevin Regimbal | NREL

    Science.gov Websites

    Kevin Regimbal oversees NREL's High Performance Computing (HPC) Systems & Operations, engineering, and operations. Kevin is interested in data center design and computing, as well as data center integration and optimization. Professional experience: HPC oversight as program manager, project manager, center

  8. Design and performance of crack-free environmentally friendly concrete "crack-free eco-crete".

    DOT National Transportation Integrated Search

    2014-08-01

    High-performance concrete (HPC) is characterized by high content of cement and supplementary cementitious materials (SCMs). : Using high binder content, low water-to-cementitious material ratio (w/cm), and various chemical admixtures in the HPC can r...

  9. HEP Computing

    Science.gov Websites

    Visitors who do not need a HEP Linux account can use the wireless network with their laptops. To obtain a HEP Linux account, Step 1: submit the new account application. After submitting the application, you

  10. Spherical harmonic results for the 3D Kobayashi Benchmark suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, P N; Chang, B; Hanebutte, U R

    1999-03-02

    Spherical harmonic solutions are presented for the Kobayashi benchmark suite. The results were obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.

  11. Advancing Your Career at LLNL: Meet NIF’s Radiation Control Technicians

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarco, Judy; Gutierrez, Myrna; Beale, Richard

    2017-04-26

    Myrna Gutierrez and Judy Zarco took advantage of LLNL's legacy of encouraging continuing education to get the necessary degrees and training to advance their careers at the Lab. As Radiation Control Technicians, they help maintain safety at the National Ignition Facility.

  12. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
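    The abstract above mentions space-filling-curve-based data ordering for scalable I/O. As a rough illustration (not the authors' code), the Python sketch below computes Morton (Z-order) keys for 3D particle positions so that spatially nearby particles land near each other in a 1D output stream, which tends to improve compressibility; the box size and bit depth are arbitrary choices.

```python
# Hypothetical sketch of space-filling-curve ordering (Morton / Z-order),
# not the authors' implementation. Nearby particles in 3D receive nearby
# 1D keys, so sorting by key clusters them in the output stream.
import random

def interleave_bits(x: int, y: int, z: int, bits: int = 10) -> int:
    """Interleave the low `bits` bits of x, y, z into one Morton key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def morton_order(positions, box=100.0, bits=10):
    """Return particle indices sorted along a Z-order curve.

    `positions` holds (x, y, z) floats inside [0, box)^3.
    """
    scale = (2**bits - 1) / box
    keyed = [(interleave_bits(int(x * scale), int(y * scale), int(z * scale), bits), i)
             for i, (x, y, z) in enumerate(positions)]
    return [i for _, i in sorted(keyed)]

if __name__ == "__main__":
    pts = [tuple(random.uniform(0.0, 100.0) for _ in range(3)) for _ in range(8)]
    print(morton_order(pts))  # toy demonstration on eight random points
```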

  13. Predictive Model and Methodology for Heat Treatment Distortion Final Report CRADA No. TC-298-92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikkel, D. J.; McCabe, J.

    This project was a multi-lab, multi-partner CRADA involving LLNL, Los Alamos National Laboratory, Sandia National Laboratories, Oak Ridge National Laboratory, Martin Marietta Energy Systems and the industrial partner, The National Center of Manufacturing Sciences (NCMS). A number of member companies of NCMS participated, including General Motors Corporation, Ford Motor Company, The Torrington Company, Gear Research, the Illinois Institute of Technology Research Institute, and Deformation Control Technology. LLNL was the lead laboratory for metrology technology used for validation of the computational tool/methodology. LLNL was also the lead laboratory for the development of the software user interface for the computational tool. This report focuses on the participation of LLNL and NCMS. The purpose of the project was to develop a computational tool/methodology that engineers would use to predict the effects of heat treatment on the size and shape of industrial parts made of quench hardenable alloys. Initially, the target application of the tool was gears for automotive power trains.

  14. Alkylphosphocholines: influence of structural variation on biodistribution at antineoplastically active concentrations.

    PubMed

    Kötting, J; Berger, M R; Unger, C; Eibl, H

    1992-01-01

    Hexadecylphosphocholine (HPC) and octadecylphosphocholine (OPC) show very potent antitumor activity against autochthonous methylnitrosourea-induced mammary carcinomas in rats. The longer-chain and unsaturated homologue erucylphosphocholine (EPC) forms lamellar structures rather than micelles, but nonetheless exhibits antineoplastic activity. Methylnitrosourea was used in the present study to induce autochthonous mammary carcinomas in virgin Sprague-Dawley rats. At 6 and 11 days following oral therapy, the biodistribution of HPC, OPC and EPC was analyzed in the serum, tumor, liver, kidney, lung, small intestine, brain and spleen of rats by high-performance thin-layer chromatography. In contrast to the almost identical tumor response noted, the distribution of the three homologues differed markedly. The serum levels of 50 nmol/ml obtained for OPC and EPC were much lower than the value of 120 nmol/ml measured for HPC. Nevertheless, the quite different serum levels resulted in similar tumor concentrations of about 200 nmol/g for all three of the compounds. Whereas HPC preferably accumulated in the kidney (1 mumol/g), OPC was found at increased concentrations (400 nmol/g) in the spleen, kidney and lung. In spite of the high daily dose of 120 mumol/kg EPC as compared with 51 mumol/kg HPC or OPC, EPC concentrations (100-200 nmol/g) were low in most tissues. High EPC concentrations were found in the small intestine (628 nmol/g). Values of 170 nmol/g were found for HPC and OPC in the brain, whereas the EPC concentration was 120 nmol/g. Obviously, structural modifications in the alkyl chain strongly influence the distribution pattern of alkylphosphocholines in animals. Since EPC yielded the highest tissue-to-serum concentration ratio in tumor tissue (5.1) and the lowest levels in other organs, we conclude that EPC is the most promising candidate for drug development in cancer therapy.

  15. Impact of the early detection of esophageal neoplasms in hypopharyngeal cancer patients treated with concurrent chemoradiotherapy.

    PubMed

    Watanabe, Shigenobu; Ogino, Ichiro; Inayama, Yoshiaki; Sugiura, Madoka; Sakuma, Yasunori; Kokawa, Atsushi; Kunisaki, Chikara; Inoue, Tomio

    2017-04-01

    We examined the risk factors and prognostic factors for synchronous esophageal neoplasia (SEN) by comparing the characteristics of hypopharyngeal cancer (HPC) patients with and without SEN. We examined 183 patients who were treated with definitive radiotherapy for HPC. Lugol chromoendoscopy screening of the esophagus was performed in all patients before chemoradiotherapy. Thirty-six patients had SEN, 49 patients died of HPC and two died of esophageal cancer. The patients with SEN exhibited significantly higher alcohol consumption than those without SEN (P = 0.018). The 5-year overall survival (OS) rate of the 36 patients with SEN was lower than that of the other patients (36.2% vs 63.4%, P = 0.006). The SEN patients exhibited significantly shorter HPC cause-specific survival than the other patients (P = 0.039). Both the OS (P = 0.005) and the HPC cause-specific survival (P = 0.026) of the patients with SEN were significantly shorter than those of the patients without SEN in multivariate analysis. Category 4/T1 stage esophageal cancer was treated with concurrent chemoradiotherapy (CCRT), endoscopic treatment or chemotherapy. The 5-year survival rates for esophageal cancer recurrence for CCRT, endoscopic treatment and chemotherapy were 71.5, 43.7 and 0%, respectively. The median (range) survival time (months) of CCRT, endoscopic treatment and chemotherapy was 22.7 (7.5-90.6), 46.44 (17.3-136.7) and 7.98 (3.72-22.8), respectively. Advanced HPC patients with SEN might have a poorer prognosis than those without SEN even when the esophageal cancer is detected early and managed appropriately.

  16. System-Level Virtualization Research at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J

    2010-01-01

    System-level virtualization, originally a technique for sharing what were then considered large computing resources, faded from the spotlight as individual workstations gained popularity with a one machine, one user approach; today it is enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing a single computing box via server consolidation. However, industry is concentrating only on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This raises two fundamental questions: is the concept of virtualization (a machine-sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.

  17. Enabling parallel simulation of large-scale HPC network systems

    DOE PAGES

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...

    2016-04-07

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.

  18. Enabling parallel simulation of large-scale HPC network systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
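    To make the event-scheduling idea concrete, the toy Python sketch below runs a sequential discrete-event simulation of packets hopping around a ring network with a fixed per-hop delay. It is purely illustrative: CODES/ROSS operate at flit-level detail and use optimistic parallel event scheduling, which this sketch does not attempt, and the node count, hop delay, and packet parameters are invented.

```python
# Toy sequential discrete-event simulation of packets hopping around a ring.
# Illustrative only; it does not model flits, congestion, or optimistic
# parallel scheduling as CODES/ROSS do.
import heapq

NUM_NODES = 8
HOP_DELAY = 5.0  # arbitrary time units per link traversal

def simulate(num_packets=4, hops_per_packet=3):
    # Each event is (time, tie_breaker, packet_id, current_node, hops_left).
    events = []
    seq = 0
    for p in range(num_packets):
        heapq.heappush(events, (0.0, seq, p, p % NUM_NODES, hops_per_packet))
        seq += 1
    while events:
        t, _, pkt, node, hops = heapq.heappop(events)
        if hops == 0:
            print(f"t={t:6.1f}  packet {pkt} delivered at node {node}")
            continue
        seq += 1
        nxt = (node + 1) % NUM_NODES  # single hop along the ring
        heapq.heappush(events, (t + HOP_DELAY, seq, pkt, nxt, hops - 1))

if __name__ == "__main__":
    simulate()
```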

  19. Behavioral characterization of cereblon forebrain-specific conditional null mice: A model for human non-syndromic intellectual disability

    PubMed Central

    Rajadhyaksha, Anjali M.; Ra, Stephen; Kishinevsky, Sarah; Lee, Anni S.; Romanienko, Peter; DuBoff, Mariel; Yang, Chingwen; Zupan, Bojana; Byrne, Maureen; Daruwalla, Zeeba R.; Mark, Willie; Kosofsky, Barry E.; Toth, Miklos; Higgins, Joseph J.

    2018-01-01

    A nonsense mutation in the human cereblon gene (CRBN) causes a mild type of autosomal recessive non-syndromic intellectual disability (ID). Animal studies show that crbn is a cytosolic protein with abundant expression in the hippocampus (HPC) and neocortex (CTX). Its diverse functions include the developmental regulation of ion channels at the neuronal synapse, the mediation of developmental programs by ubiquitination, and a target for herpes simplex type I virus in HPC neurons. To test the hypothesis that anomalous CRBN expression leads to HPC-mediated memory and learning deficits, we generated germ-line crbn knock-out mice (crbn−/−). We also inactivated crbn in forebrain neurons in conditional knock-out mice in which crbn exons 3 and 4 are deleted by cre recombinase under the direction of the Ca2+/calmodulin-dependent protein kinase II alpha promoter (CamKIIcre/+, crbn−/−). crbn mRNA levels were negligible in the HPC, CTX, and cerebellum (CRBM) of the crbn−/− mice. In contrast, crbn mRNA levels were reduced 3- to 4-fold in the HPC and CTX but not in the CRBM in CamKIIcre/+, crbn−/− mice as compared to wild type (CamKIIcre/+, crbn+/+). Contextual fear conditioning showed a significant decrease in the percentage of freezing time in CamKIIcre/+, crbn−/− and crbn−/− mice while motor function, exploratory motivation, and anxiety-related behaviors were normal. These findings suggest that CamKIIcre/+, crbn−/− mice exhibit selective HPC-dependent deficits in associative learning and support the use of these mice as in vivo models to study the functional consequences of CRBN aberrations on memory and learning in humans. PMID:21995942

  20. Differential Acetylcholine Release in the Prefrontal Cortex and Hippocampus During Pavlovian Trace and Delay Conditioning

    PubMed Central

    Flesher, M. Melissa; Butt, Allen E.; Kinney-Hurd, Brandee L.

    2011-01-01

    Pavlovian trace conditioning critically depends on the medial prefrontal cortex (mPFC) and hippocampus (HPC), whereas delay conditioning does not depend on these brain structures. Given that the cholinergic basal forebrain system modulates activity in both the mPFC and HPC, it was reasoned that the level of acetylcholine (ACh) release in these regions would show distinct profiles during testing in trace and delay conditioning paradigms. To test this assumption, microdialysis probes were implanted unilaterally into the mPFC and HPC of rats that were pre-trained in appetitive trace and delay conditioning paradigms using different conditional stimuli in the two tasks. On the day of microdialysis testing, dialysate samples were collected during a quiet baseline interval before trials were initiated, and again during performance in separate blocks of trace and delay conditioning trials in each animal. ACh levels were quantified using high performance liquid chromatography and electrochemical detection techniques. Consistent with our hypothesis, results showed that ACh release in the mPFC was greater during trace conditioning than during delay conditioning. The level of ACh released during trace conditioning in the HPC was also greater than the levels observed during delay conditioning. While ACh efflux in both the mPFC and HPC selectively increased during trace conditioning, ACh levels in the mPFC during trace conditioning testing showed the greatest increases observed. These results demonstrate a dissociation in cholinergic activation of the mPFC and HPC during performance in trace but not delay appetitive conditioning, where this cholinergic activity may contribute to attentional mechanisms, adaptive response timing, or memory consolidation necessary for successful trace conditioning. PMID:21514394

  1. Comprehensive Angular Response Study of LLNL Panasonic Dosimeter Configurations and Artificial Intelligence Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, D. K.

    In April of 2016, the Lawrence Livermore National Laboratory External Dosimetry Program underwent a Department of Energy Laboratory Accreditation Program (DOELAP) on-site assessment. The assessment reported a concern that the 2013 study, "Angular Dependence Study Panasonic UD-802 and UD-810 Dosimeters LLNL Artificial Intelligence Algorithm," was incomplete. Only the responses at ±60° and 0° were evaluated, and independent dosimeter data were not used to evaluate the algorithm. Additionally, other configurations of LLNL dosimeters were not considered in that study, including nuclear accident dosimeters (NADs), which are placed in the wells surrounding the TLD in the dosimeter holder.

  2. Michael M. May: Working toward solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M.M.

    1993-07-01

    As part of LLNL's 40th anniversary celebration held during 1992, the six former Directors were asked to participate in a lecture series. Each of these men contributed in important ways toward making the Lawrence Livermore National Laboratory (LLNL) what it has become today. Each was asked to comment on some of the Laboratory's accomplishments, his career here, his view of the changing world, and where he sees the Laboratory going in the future. Michael M. May, LLNL's fifth Director and now a Director Emeritus, comments on a broad range of issues including arms control, nonproliferation, cooperative security, and the future role of the Laboratory.

  3. Optics & Materials Science & Technology (OMST) Organization at LLNL

    ScienceCinema

    Suratwala,; Tayyab,; Nguyen, Hoang; Bude, Jeff; Dylla-Spears, Rebecca

    2018-06-13

    The Optics and Materials Science & Technology (OMST) organization at Lawrence Livermore National Laboratory (LLNL) supplies optics, recycles optics, and performs the materials science and technology to advance optics and optical materials for high-power and high-energy lasers for a variety of missions. The organization is a core capability at LLNL. We have a strong partnership with many optical fabricators, universities and national laboratories to accomplish our goals. The organization has a long history of performing fundamental optical materials science, developing them into useful technologies, and transferring them into production both on-site and off-site. We are successfully continuing this same strategy today.

  4. Optics & Materials Science & Technology (OMST) Organization at LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suratwala,; Tayyab,; Nguyen, Hoang

    The Optics and Materials Science & Technology (OMST) organization at Lawrence Livermore National Laboratory (LLNL) supplies optics, recycles optics, and performs the materials science and technology to advance optics and optical materials for high-power and high-energy lasers for a variety of missions. The organization is a core capability at LLNL. We have a strong partnership with many optical fabricators, universities and national laboratories to accomplish our goals. The organization has a long history of performing fundamental optical materials science, developing them into useful technologies, and transferring them into production both on-site and off-site. We are successfully continuing this same strategy today.

  5. High-Performance Computing Systems and Operations | Computational Science |

    Science.gov Websites

    NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies. NREL's HPC capabilities include high-performance computing systems. We operate

  6. Investigation into shrinkage of high-performance concrete used for Iowa bridge decks and overlays.

    DOT National Transportation Integrated Search

    2013-09-01

    High-performance concrete (HPC) overlays have been used increasingly as an effective and economical method for bridge decks in Iowa and other states. However, due to its high cementitious material content, HPC often displays high shrinkage cracking p...

  7. 75 FR 67253 - Airworthiness Directives; Pratt & Whitney (PW) Models PW4074 and PW4077 Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-02

    ... high-pressure compressor (HPC) disks, part number (P/N) 55H615, installed. This proposed AD would... & Whitney (PW) PW4074 and PW4077 turbofan engines with 15th stage high-pressure compressor (HPC) disks, part...

  8. Economical and crack-free high-performance concrete for pavement and transportation infrastructure construction.

    DOT National Transportation Integrated Search

    2017-05-01

    The main objective of this research is to develop and validate the behavior of a new class of environmentally friendly and costeffective : high-performance concrete (HPC) referred to herein as Eco-HPC. The proposed project aimed at developing two cla...

  9. HETEROTROPHIC PLATE COUNT BACTERIA - WHAT IS THEIR SIGNIFICANCE IN DRINKING WATER?

    EPA Science Inventory

    The possible health significance of heterotrophic plate count (HPC) bacteria, also know in earlier terminology as standard plate count (SPC) bacteria, in drinking water has been debated for decades. While the literature documents the universal occurrence of HPC bacteria in soil, ...

  10. Preparation and Characterization of All-Biomass Soy Protein Isolate-Based Films Enhanced by Epoxy Castor Oil Acid Sodium and Hydroxypropyl Cellulose

    PubMed Central

    Wang, La; Li, Jianzhang; Zhang, Shifeng; Shi, Junyou

    2016-01-01

    All-biomass soy protein-based films were prepared using soy protein isolate (SPI), glycerol, hydroxypropyl cellulose (HPC) and epoxy castor oil acid sodium (ECOS). The effect of the incorporated HPC and ECOS on the properties of the SPI film was investigated. The experimental results showed that the tensile strength of the resultant films increased from 2.84 MPa (control) to 4.04 MPa and the elongation at break increased by 22.7% when the SPI was modified with 2% HPC and 10% ECOS. The increased tensile strength resulted from the reaction between the ECOS and SPI, which was confirmed by attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR), scanning electron microscopy (SEM) and X-ray diffraction analysis (XRD). It was found that ECOS and HPC effectively improved the performance of SPI-based films, which can provide a new method for preparing environmentally-friendly polymer films for a number of commercial applications. PMID:28773320

  11. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today’s genomic experiments have to process so-called “biological big data” that now reaches terabytes and petabytes in size. Processing this huge amount of data may take scientists weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature on the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  12. "Cor Occidere": a novel strategy of targeting the tumor core by radiosurgery in a radio- and chemo-resistant intracranial hemangiopericytoma.

    PubMed

    Li, You Quan; Chua, Eu Tiong; Chua, Kevin L M; Chua, Melvin L K

    2018-02-01

    Intracranial hemangiopericytomas (HPC) are chemotherapy- and radiotherapy (RT)-resistant. Here, we report on a novel stereotactic radiosurgery (SRS) technique, "Cor Occidere" (Latin), as a potential strategy for overcoming the radioresistance of HPC. A 36-year-old female presented to our clinic for consideration of a third course of RT for her recurrent cavernous sinus HPC, following previous cranial RT 13 and 5 years earlier and a failed 9-month trial of bevacizumab/temozolomide. The tumor-adjacent brain stem and carotid artery risked substantial damage given the cumulative RT doses to these organs. We therefore designed an SRS plan targeting only the tumor core with a 16 Gy single fraction. Despite underdosing the tumor margin, we achieved stable disease over 25 months, in contrast to her responses to systemic therapies. Achieving tumor control despite a suboptimal treatment that utilized high-dose ablation of the tumor core suggests novel biological mechanisms to overcome the radioresistance of HPC.

  13. High-performance computing with quantum processing units

    DOE PAGES

    Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.; ...

    2017-03-01

    The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.

  14. Preparation and Characterization of All-Biomass Soy Protein Isolate-Based Films Enhanced by Epoxy Castor Oil Acid Sodium and Hydroxypropyl Cellulose.

    PubMed

    Wang, La; Li, Jianzhang; Zhang, Shifeng; Shi, Junyou

    2016-03-15

    All-biomass soy protein-based films were prepared using soy protein isolate (SPI), glycerol, hydroxypropyl cellulose (HPC) and epoxy castor oil acid sodium (ECOS). The effect of the incorporated HPC and ECOS on the properties of the SPI film was investigated. The experimental results showed that the tensile strength of the resultant films increased from 2.84 MPa (control) to 4.04 MPa and the elongation at break increased by 22.7% when the SPI was modified with 2% HPC and 10% ECOS. The increased tensile strength resulted from the reaction between the ECOS and SPI, which was confirmed by attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR), scanning electron microscopy (SEM) and X-ray diffraction analysis (XRD). It was found that ECOS and HPC effectively improved the performance of SPI-based films, which can provide a new method for preparing environmentally-friendly polymer films for a number of commercial applications.

  15. High-performance computing with quantum processing units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.

    The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.

  16. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process so-called "biological big data" that now reaches terabytes and petabytes in size. Processing this huge amount of data may take scientists weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature on the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.
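    As a toy illustration of the embarrassingly parallel speedups the review discusses, the sketch below computes the GC content of many short sequences across local cores with Python's multiprocessing module. Real genomic workflows would distribute far larger datasets over cluster, grid, or cloud resources; the read length and counts here are arbitrary.

```python
# Toy illustration of parallelism for a genomics-style task: GC content of
# many reads computed across local cores. Not tied to any specific pipeline.
import random
from multiprocessing import Pool

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def random_read(length: int = 150) -> str:
    return "".join(random.choice("ACGT") for _ in range(length))

if __name__ == "__main__":
    reads = [random_read() for _ in range(10_000)]
    with Pool() as pool:                          # one worker per local core
        fractions = pool.map(gc_content, reads, chunksize=500)
    print(f"mean GC fraction over {len(reads)} reads: "
          f"{sum(fractions) / len(fractions):.3f}")
```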

  17. What Physicists Should Know About High Performance Computing - Circa 2002

    NASA Astrophysics Data System (ADS)

    Frederick, Donald

    2002-08-01

    High Performance Computing (HPC) is a dynamic, cross-disciplinary field that traditionally has involved applied mathematicians, computer scientists, and others primarily from the various disciplines that have been major users of HPC resources - physics, chemistry, engineering, with increasing use by those in the life sciences. The field's technological dynamic is powered by economic as well as technical innovations and developments. This talk will discuss practical ideas to be considered when developing numerical applications for research purposes. Even with the rapid pace of development in the field, the author believes that these concepts will not become obsolete for a while, and will be of use to scientists who either are considering, or who have already started down, the HPC path. These principles will be applied in particular to current parallel HPC systems, but there will also be references of value to desktop users. The talk will cover such topics as: computing hardware basics, single-CPU optimization, compilers, timing, numerical libraries, debugging and profiling tools, and the emergence of Computational Grids.
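    In the spirit of the talk's topics on timing and single-CPU optimization, the sketch below times a naive Python loop against a NumPy-vectorized kernel using wall-clock timing. The kernel, array size, and repeat count are arbitrary choices for illustration, not material from the talk.

```python
# Simple timing harness contrasting a naive loop with a vectorized kernel,
# the kind of single-CPU optimization and timing practice the talk covers.
import time
import numpy as np

def axpy_loop(a, x, y):
    out = np.empty_like(y)
    for i in range(len(x)):      # interpreted loop: slow
        out[i] = a * x[i] + y[i]
    return out

def axpy_vectorized(a, x, y):
    return a * x + y             # delegated to compiled NumPy code

def best_time(fn, *args, repeats=3):
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

if __name__ == "__main__":
    n = 1_000_000
    x, y = np.random.rand(n), np.random.rand(n)
    print(f"loop:       {best_time(axpy_loop, 2.0, x, y):.3f} s")
    print(f"vectorized: {best_time(axpy_vectorized, 2.0, x, y):.3f} s")
```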

  18. Continuous whole-system monitoring toward rapid understanding of production HPC applications and systems

    DOE PAGES

    Agelastos, Anthony; Allan, Benjamin; Brandt, Jim; ...

    2016-05-18

    A detailed understanding of HPC applications’ resource needs and their complex interactions with each other and HPC platform resources are critical to achieving scalability and performance. Such understanding has been difficult to achieve because typical application profiling tools do not capture the behaviors of codes under the potentially wide spectrum of actual production conditions and because typical monitoring tools do not capture system resource usage information with high enough fidelity to gain sufficient insight into application performance and demands. In this paper we present both system and application profiling results based on data obtained through synchronized system wide monitoring on a production HPC cluster at Sandia National Laboratories (SNL). We demonstrate analytic and visualization techniques that we are using to characterize application and system resource usage under production conditions for better understanding of application resource needs. Furthermore, our goals are to improve application performance (through understanding application-to-resource mapping and system throughput) and to ensure that future system capabilities match their intended workloads.
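    A minimal, hypothetical sketch of node-level sampling in the spirit of the whole-system monitoring described above is shown below. It periodically reads standard Linux /proc counters and prints timestamped CPU-busy and memory-available figures; the production monitoring at SNL is far more sophisticated and synchronized across the whole cluster.

```python
# Minimal node-level resource sampler using standard Linux /proc files.
# Hypothetical sketch only; not the monitoring stack described in the paper.
import time

def read_cpu_jiffies():
    """Return (total, idle) jiffies from the aggregate line of /proc/stat."""
    with open("/proc/stat") as f:
        vals = [int(v) for v in f.readline().split()[1:]]
    return sum(vals), vals[3]          # fourth field is 'idle'

def read_meminfo_kb(key="MemAvailable"):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])
    return 0

def sample(interval=1.0, count=5):
    prev_total, prev_idle = read_cpu_jiffies()
    for _ in range(count):
        time.sleep(interval)
        total, idle = read_cpu_jiffies()
        dt, di = total - prev_total, idle - prev_idle
        busy = 100.0 * (dt - di) / dt if dt else 0.0
        print(f"{time.time():.0f}  cpu_busy={busy:5.1f}%  "
              f"mem_avail={read_meminfo_kb() / 1024:.0f} MB")
        prev_total, prev_idle = total, idle

if __name__ == "__main__":
    sample()
```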

  19. Development of a HIPAA-compliant environment for translational research data and analytics.

    PubMed

    Bradford, Wayne; Hurdle, John F; LaSalle, Bernie; Facelli, Julio C

    2014-01-01

    High-performance computing centers (HPC) traditionally have far less restrictive privacy management policies than those encountered in healthcare. We show how an HPC can be re-engineered to accommodate clinical data while retaining its utility in computationally intensive tasks such as data mining, machine learning, and statistics. We also discuss deploying protected virtual machines. A critical planning step was to engage the university's information security operations and the information security and privacy office. Access to the environment requires a double authentication mechanism. The first level of authentication requires access to the university's virtual private network and the second requires that the users be listed in the HPC network information service directory. The physical hardware resides in a data center with controlled room access. All employees of the HPC and its users take the university's local Health Insurance Portability and Accountability Act training series. In the first 3 years, researcher count has increased from 6 to 58.
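    The abstract describes a second authentication layer based on the user being listed in the HPC network information service directory. The sketch below approximates such a check with Python's standard pwd/grp modules against the local NSS databases; the group name is a hypothetical placeholder, and a real deployment would query the site's actual directory service.

```python
# Hedged sketch of a directory-membership check before granting access to a
# protected environment. The group name below is hypothetical.
import grp
import pwd

PROTECTED_GROUP = "hipaa_env"   # placeholder for the site's gating group

def user_is_enrolled(username: str) -> bool:
    try:
        pwd.getpwnam(username)                    # account known to the directory
        members = grp.getgrnam(PROTECTED_GROUP).gr_mem
    except KeyError:
        return False
    return username in members

if __name__ == "__main__":
    import getpass
    user = getpass.getuser()
    print(f"{user} enrolled in {PROTECTED_GROUP}: {user_is_enrolled(user)}")
```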

  20. Hierarchically Porous Carbon Materials for CO 2 Capture: The Role of Pore Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estevez, Luis; Barpaga, Dushyant; Zheng, Jian

    2018-01-17

    With advances in porous carbon synthesis techniques, hierarchically porous carbon (HPC) materials are being utilized as relatively new porous carbon sorbents for CO2 capture applications. These HPC materials were used as a platform to prepare samples with differing textural properties and morphologies to elucidate structure-property relationships. It was found that high microporous content, rather than overall surface area, was of primary importance for predicting good CO2 capture performance. Two HPC materials were analyzed, each with near identical high surface area (~2700 m2/g) and colossally high pore volume (~10 cm3/g), but with different microporous content and pore size distributions, which led to dramatically different CO2 capture performance. Overall, large pore volumes obtained from distinct mesopores were found to significantly impact adsorption performance. From these results, an optimized HPC material was synthesized that achieved a high CO2 capacity of ~3.7 mmol/g at 25°C and 1 bar.

  1. Hippocampus-driven feed-forward inhibition of the prefrontal cortex mediates relapse of extinguished fear.

    PubMed

    Marek, Roger; Jin, Jingji; Goode, Travis D; Giustino, Thomas F; Wang, Qian; Acca, Gillian M; Holehonnur, Roopashri; Ploski, Jonathan E; Fitzgerald, Paul J; Lynagh, Timothy; Lynch, Joseph W; Maren, Stephen; Sah, Pankaj

    2018-03-01

    The medial prefrontal cortex (mPFC) has been implicated in the extinction of emotional memories, including conditioned fear. We found that ventral hippocampal (vHPC) projections to the infralimbic (IL) cortex recruited parvalbumin-expressing interneurons to counter the expression of extinguished fear and promote fear relapse. Whole-cell recordings ex vivo revealed that optogenetic activation of vHPC input to amygdala-projecting pyramidal neurons in the IL was dominated by feed-forward inhibition. Selectively silencing parvalbumin-expressing, but not somatostatin-expressing, interneurons in the IL eliminated vHPC-mediated inhibition. In behaving rats, pharmacogenetic activation of vHPC→IL projections impaired extinction recall, whereas silencing IL projectors diminished fear renewal. Intra-IL infusion of GABA receptor agonists or antagonists, respectively, reproduced these effects. Together, our findings describe a previously unknown circuit mechanism for the contextual control of fear, and indicate that vHPC-mediated inhibition of IL is an essential neural substrate for fear relapse.

  2. Educational Revolution on the Reservation: A Working Model.

    ERIC Educational Resources Information Center

    Murphy, Pete

    1993-01-01

    Since 1986, Navajo Community College (NCC) and Lawrence Livermore National Laboratory (LLNL) have collaborated to improve science and technical education on the Navajo Reservation through equipment loans, faculty exchanges, summer student work at LLNL, scholarships for NCC students, summer workshops for elementary science teachers, and classroom…

  3. Fusion/Astrophysics Teacher Research Academy

    NASA Astrophysics Data System (ADS)

    Correll, Donald

    2005-10-01

    In order to engage California high school science teachers in the area of plasma physics and fusion research, LLNL's Fusion Energy Program has partnered with the UC Davis Edward Teller Education Center, ETEC (http://etec.ucdavis.edu), the Stanford University Solar Center (http://solar-center.stanford.edu) and LLNL's Science / Technology Education Program, STEP (http://education.llnl.gov). A four-level "Fusion & Astrophysics Research Academy" has been designed to give teachers experience in conducting research using spectroscopy with their students. Spectroscopy, and its relationship to atomic physics and electromagnetism, provides an ideal plasma 'bridge' to the CA Science Education Standards (http://www.cde.ca.gov/be/st/ss/scphysics.asp). Teachers attend multiple-day professional development workshops to explore new research activities for use in the high school science classroom. A Level I, 3-day program consists of two days where teachers learn how plasma researchers use spectrometers, followed by instructions on how to use a research-grade spectrometer for their own investigations. A third day includes touring LLNL's SSPX (http://www.mfescience.org/sspx/) facility to see spectrometry being used to measure plasma properties. Spectrometry classroom kits are made available for loan to participating teachers. Level I workshop results (http://education.llnl.gov/fusion/astro/) will be presented along with plans being developed for Level II (one week advanced SKA's), Level III (pre-internship), and Level IV (summer internship) research academies.

  4. RELAP5-3D developmental assessment: Comparison of version 4.2.1i on Linux and Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Paul D.

    2014-06-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.2i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  5. RELAP5-3D Developmental Assessment. Comparison of Version 4.3.4i on Linux and Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Paul David

    2015-10-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.3i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  6. Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.

    PubMed

    Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William

    2018-05-08

    Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use case, and endorse the use of Linux containers for medical image analysis.
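    As a sketch of the reproducibility pattern the authors endorse, the snippet below pins an analysis step to a specific container image and invokes it with a read-only input mount and a writable output mount. The image tag, mount paths, and in-container command are placeholders, not a published pipeline.

```python
# Hedged sketch: run an analysis step inside a pinned Linux container image.
# Image name, paths, and the in-container command are placeholders.
import subprocess
from pathlib import Path

IMAGE = "example.org/neuro-pipeline:1.2.3"   # would be digest-pinned in practice

def run_containerized(input_dir: Path, output_dir: Path) -> int:
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{input_dir}:/data/in:ro",    # read-only input mount
        "-v", f"{output_dir}:/data/out",     # writable output mount
        IMAGE,
        "process", "--in", "/data/in", "--out", "/data/out",  # hypothetical entrypoint args
    ]
    return subprocess.run(cmd, check=False).returncode

if __name__ == "__main__":
    output = Path("/tmp/results")
    output.mkdir(exist_ok=True)
    print("exit code:", run_containerized(Path("/tmp/scans"), output))
```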

  7. Hierarchical porous carbon/MnO2 hybrids as supercapacitor electrodes.

    PubMed

    Lee, Min Eui; Yun, Young Soo; Jin, Hyoung-Joon

    2014-12-01

    Hybrid electrodes of hierarchical porous carbon (HPC) and manganese oxide (MnO2) were synthesized using a fast surface redox reaction of potassium permanganate under facile immersion methods. The HPC/MnO2 hybrids had a number of micropores and macropores and the MnO2 nanoparticles acted as a pseudocapacitive material. The synergistic effects of electric double-layer capacitor (EDLC)-induced capacitance and pseudocapacitance brought about a better electrochemical performance of the HPC/MnO2 hybrid electrodes compared to that obtained with a single component. The hybrids showed a specific capacitance of 228 F g(-1) and good cycle stability over 1000 cycles.

  8. New Challenges of the Computation of Multiple Sequence Alignments in the High-Throughput Era (2010 JGI/ANL HPC Workshop)

    ScienceCinema

    Notredame, Cedric

    2018-05-02

    Cedric Notredame from the Centre for Genomic Regulation gives a presentation on New Challenges of the Computation of Multiple Sequence Alignments in the High-Throughput Era at the JGI/Argonne HPC Workshop on January 26, 2010.

  9. Performance of high performance concrete (HPC) in low pH and sulfate environment.

    DOT National Transportation Integrated Search

    2013-05-01

    The goal of this research is to determine the impact of low pH and sulfate environment on high-performance concrete (HPC) and if the current structural and materials specifications provide adequate protections for concrete structures to meet the 75-y...

  10. ARC-2010-ACD10-0020-073

    NASA Image and Video Library

    2010-02-10

    Lawrence Livermore National Labs (LLNL), Navistar and the Department of Energy conduct tests in the NASA Ames National Full-scale Aerodynamic Complex 80- by 120-foot wind tunnel. The LLNL project is aimed at aerodynamic truck and trailer devices that can reduce fuel consumption at highway speed by 10 percent. Smoke test demo.

  11. ARC-2010-ACD10-0020-065

    NASA Image and Video Library

    2010-02-10

    Lawrence Livermore National Labs (LLNL), Navistar and the Department of Energy conduct tests in the NASA Ames National Full-scale Aerodynamic Complex 80- by 120-foot wind tunnel. The LLNL project is aimed at aerodynamic truck and trailer devices that can reduce fuel consumption at highway speed by 10 percent. Smoke test demo.

  12. Wide Area Recovery and Resiliency Program (WARRP) Knowledge Enhancement Events: CBR Workshop After Action Report

    DTIC Science & Technology

    2012-01-01

    Fragment of the workshop participant list (names, organizations, and email contacts), including participants from Walker Engineering Solutions, LLC, the Denver Office of Emergency Management, Lawrence Livermore National Laboratory, and Pacific Northwest National Laboratory.

  13. Fast Steering Mirror systems for the U-AVLIS program at LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, J.; Avicola, K.; Payne, A.

    1994-07-01

    We have successfully deployed several fast steering mirror systems in the Uranium Atomic Vapor Isotope Separation (U-AVLIS) facility at LLNL. These systems employ 2 mm to 150 mm optics and piezoelectric actuators to achieve microradian pointing accuracy with disturbance rejection bandwidths to a few hundred hertz.

  14. Critical Homeland Infrastructure Protection

    DTIC Science & Technology

    2007-01-01

    Fragment of the report's agenda and acronym list, including example capabilities (detection of surveillance activities; stand-off detection of chemical, biological, nuclear, radiation, and explosive threats), briefings by Mr. Roger Gibbs (DARPA) and Mr. Don Prosnitz (LLNL) with Sandia National Laboratories, and acronym definitions (Antiterrorism/Force Protection, CBRNE, CERT, CIA).

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, Lee; Gowardhan, Akshay; Lennox, Kristin

    In the interest of promoting the international exchange of technical expertise, the US Department of Energy’s Office of Emergency Operations (NA-40) and the French Commissariat à l'Energie Atomique et aux énergies alternatives (CEA) requested that the National Atmospheric Release Advisory Center (NARAC) of Lawrence Livermore National Laboratory (LLNL) in Livermore, California host a joint tabletop exercise with experts in emergency management and atmospheric transport modeling. In this tabletop exercise, LLNL and CEA compared each other’s flow and dispersion models. The goal of the comparison is to facilitate the exchange of knowledge, capabilities, and practices, and to demonstrate the utility of modeling dispersal at different levels of computational fidelity. Two modeling approaches were examined: a regional-scale modeling approach, appropriate for simple terrain and/or very large releases, and an urban-scale modeling approach, appropriate for small releases in a city environment. This report is a summary of LLNL and CEA modeling efforts from this exercise. Two different types of LLNL and CEA models were employed in the analysis: urban-scale models (Aeolus CFD at LLNL/NARAC and Parallel-Micro-SWIFT-SPRAY, PMSS, at CEA) for analysis of a 5,000 Ci radiological release and Lagrangian particle dispersion models (LODI at LLNL/NARAC and PSPRAY at CEA) for analysis of a much larger (500,000 Ci) regional radiological release. Two densely populated urban locations were chosen: Chicago, with its high-rise skyline and gridded street network, and Paris, with its more consistent, lower building height and complex unaligned street network. Each location was considered under early summer daytime and nighttime conditions. Different levels of fidelity were chosen for each scale: (1) a lower-fidelity mass-consistent diagnostic model, intermediate-fidelity Navier-Stokes RANS models, and higher-fidelity Navier-Stokes LES for urban-scale analysis, and (2) lower-fidelity single-profile meteorology versus a higher-fidelity three-dimensional gridded weather forecast for regional-scale analysis. Tradeoffs between computation time and the fidelity of the results are discussed for both scales. LES, for example, requires nearly 100 times more processor time than the mass-consistent diagnostic model or the RANS model, and seems better able to capture flow entrainment behind tall buildings. As anticipated, results obtained by LLNL and CEA at regional scale around Chicago and Paris look very similar in terms of both atmospheric dispersion of the radiological release and total effective dose. Both LLNL and CEA used the same meteorological data, Lagrangian particle dispersion models, and dose coefficients. LLNL and CEA urban-scale modeling results show consistent phenomenological behavior and predict similar impacted areas even though the detailed 3D flow patterns differ, particularly for the Chicago cases, where differences in vertical entrainment behind tall buildings are particularly notable. Although RANS and LES (LLNL) models incorporate more detailed physics than do mass-consistent diagnostic flow models (CEA), it is not possible to reach definite conclusions about the prediction fidelity of the various models because experimental measurements were not available for comparison. Stronger conclusions about the relative performances of the models involved, and evaluation of the tradeoffs involved in model simplification, could be made with a systematic benchmarking of urban-scale modeling. This could be the purpose of a future US/French collaborative exercise.

  16. Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshii, K.; Iskra, K.; Naik, H.

    2011-05-01

    We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.
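    Big Memory itself is a kernel-level change and is not reproduced here, but the kind of custom memory benchmark the abstract mentions can be sketched in a few lines. The Python snippet below is an illustrative bandwidth micro-benchmark (our own assumption of what such a benchmark might look like, not the paper's code): it times large sequential copies, the access pattern whose throughput suffers most when the memory subsystem incurs frequent TLB misses or page faults.

        # Illustrative memory-bandwidth micro-benchmark (not the Big Memory implementation).
        import time
        import numpy as np

        N = 32 * 1024 * 1024                 # 32 Mi float64 elements (~256 MB)
        src = np.ones(N, dtype=np.float64)   # touching pages up front keeps first-touch faults out of the timed loop
        dst = np.empty_like(src)

        best = float("inf")
        for _ in range(5):                   # repeat and keep the best (least disturbed) time
            t0 = time.perf_counter()
            np.copyto(dst, src)              # large sequential copy through memory
            best = min(best, time.perf_counter() - t0)

        bytes_moved = 2 * src.nbytes         # one read plus one write per element
        print(f"copy bandwidth: {bytes_moved / best / 1e9:.2f} GB/s")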

  17. A General Purpose High Performance Linux Installation Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wachsmann, Alf

    2002-06-17

    As Linux clusters grow in number and size, the question arises of how to install them. This paper addresses this question by proposing a solution using only standard software components. The installation infrastructure scales well to a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; it is not designed specifically for cluster installations, but is nevertheless highly performant. The proposed infrastructure uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to get IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256-node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation.
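    The infrastructure is built from standard services (DHCP, TFTP, PXE, kickstart over NFS); the per-node boot configuration that ties them together can be sketched as below. This is a hypothetical Python generator of PXELINUX config files keyed by MAC address; the node names, paths, and kickstart location are illustrative assumptions, not the SLAC configuration described in the abstract.

        # Hypothetical sketch: write one PXELINUX config per node, pointing each node at a
        # kernel, initrd, and an NFS-served kickstart file. Paths and names are illustrative.
        from pathlib import Path

        NODES = {                                  # MAC address -> node name (example data only)
            "00-11-22-33-44-01": "node001",
            "00-11-22-33-44-02": "node002",
        }

        TEMPLATE = """default linux
        label linux
          kernel vmlinuz
          append initrd=initrd.img ks=nfs:installserver:/export/kickstart/{node}.cfg
        """

        def write_pxe_configs(tftp_root="tftpboot/pxelinux.cfg"):
            """Write one config per node; files are named 01-<mac>, as PXELINUX expects for Ethernet."""
            cfg_dir = Path(tftp_root)
            cfg_dir.mkdir(parents=True, exist_ok=True)
            for mac, node in NODES.items():
                (cfg_dir / f"01-{mac}").write_text(TEMPLATE.format(node=node))

        if __name__ == "__main__":
            write_pxe_configs()

    In a real deployment, DHCP would hand each node an IP address and the location of the PXE bootloader on the TFTP server, and the kickstart file referenced here would drive the unattended Red Hat installation over NFS, as the abstract describes.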

  18. Towards anatomic scale agent-based modeling with a massively parallel spatially explicit general-purpose model of enteric tissue (SEGMEnT_HPC).

    PubMed

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic, and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.
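    SEGMEnT_HPC itself is a large parallel code; purely as an illustration of what "spatially explicit, agent-based, and domain-decomposed" means, the toy Python sketch below updates a small lattice of cell agents strip by strip in parallel. The update rule, grid size, and use of a process pool are invented for illustration and have no relation to the model's actual biology or its parallel implementation.

        # Toy spatially explicit agent grid, updated strip-by-strip in parallel to mimic
        # (at miniature scale) the domain decomposition of a massively parallel tissue model.
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def update_strip(strip):
            """Update one strip; its first and last rows are ghost rows and are dropped from the result."""
            nbr = (np.roll(strip, 1, 0) + np.roll(strip, -1, 0) +
                   np.roll(strip, 1, 1) + np.roll(strip, -1, 1))
            return np.where(nbr >= 2, 1, strip)[1:-1]   # a site activates once 2+ neighbours are active

        def step(grid, nstrips=4):
            n = grid.shape[0]
            strips = []
            for rows in np.array_split(np.arange(n), nstrips):
                ghosted = np.r_[rows[0] - 1, rows, (rows[-1] + 1) % n]   # attach periodic ghost rows
                strips.append(grid[ghosted])
            with ProcessPoolExecutor(max_workers=nstrips) as pool:
                return np.vstack(list(pool.map(update_strip, strips)))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            tissue = (rng.random((64, 64)) < 0.05).astype(np.int8)   # 5% of sites start 'active'
            for _ in range(10):
                tissue = step(tissue)
            print("active site fraction:", float(tissue.mean()))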

  19. Differential regulation of NMDA receptor-expressing neurons in the rat hippocampus and striatum following bilateral vestibular loss demonstrated using flow cytometry.

    PubMed

    Benoit, Alice; Besnard, Stephane; Guillamin, Maryline; Philoxene, Bruno; Sola, Brigitte; Le Gall, Anne; Machado, Marie-Laure; Toulouse, Joseph; Hitier, Martin; Smith, Paul F

    2018-06-21

    There is substantial evidence that loss of vestibular function impairs spatial learning and memory related to hippocampal (HPC) function, as well as increasing evidence that striatal (Str) plasticity is also implicated. Since the N-methyl-D-aspartate (NMDA) subtype of glutamate receptor is considered essential to spatial memory, previous studies have investigated whether the expression of HPC NMDA receptors changes following vestibular loss; however, the results have been contradictory. Here we used a novel flow cytometric method to quantify the number of neurons expressing NMDA receptors in the HPC and Str following bilateral vestibular loss (BVL) in rats. At 7 and 30 days post-op., there was a significant increase in the number of HPC neurons expressing NMDA receptors in the BVL animals, compared to sham controls (P ≤ 0.004 and P ≤ 0.0001, respectively). By contrast, in the Str, at 7 days there was a significant reduction in the number of neurons expressing NMDA receptors in the BVL group (P ≤ 0.05); however, this difference had disappeared by 30 days post-op. These results suggest that BVL causes differential changes in the number of neurons expressing NMDA receptors in the HPC and Str, which may be related to its long-term impairment of spatial memory. Copyright © 2018. Published by Elsevier B.V.

  20. Is health screening beneficial for early detection and prognostic improvement in pancreatic cancer?

    PubMed

    Kim, Eun Ran; Bae, Sun Youn; Lee, Kwang Hyuk; Lee, Kyu Taek; Son, Hee Jung; Rhee, Jong Chul; Lee, Jong Kyun

    2011-06-01

    The aim of this study was to evaluate the usefulness of health screening for early detection and improved prognosis in pancreatic cancer. Between 1995 and 2008, 176,361 examinees visited the Health Promotion Center (HPC). Twenty patients diagnosed with pancreatic cancer were enrolled. During the same period, 40 patients were randomly selected from 2,202 patients diagnosed with pancreatic cancer at the Out Patient Clinic (OPC) for comparison. Within the HPC group, 10 patients were initially suspected of having pancreatic cancer following abnormal ultrasonographic findings, and 9 patients had suspected cases following the detection of elevated serum CA 19-9. The curative resection rate was higher in the HPC group than in the OPC group (p=0.011). The median survival was longer in the HPC group than in the OPC group (p=0.000). However, there was no significant difference in the 3-year survival rate between the two groups. Asymptomatic patients (n=6/20) in the HPC group showed better curative resection and survival rates than symptomatic patients. However, the difference was not statistically significant. Health screening is somewhat helpful for improving the curative resection rate and median survival of patients with pancreatic cancer detected by screening tests. However, the benefit of this method in improving long-term survival is limited by how early the cancer is detected.

  1. Hydrogen postconditioning promotes survival of rat retinal ganglion cells against ischemia/reperfusion injury through the PI3K/Akt pathway.

    PubMed

    Wu, Jiangchun; Wang, Ruobing; Yang, Dianxu; Tang, Wenbin; Chen, Zeli; Sun, Qinglei; Liu, Lin; Zang, Rongyu

    2018-01-22

    Retinal ischemia/reperfusion injury (IRI) plays a crucial role in the pathophysiology of various ocular diseases. Our previous study has shown that postconditioning with inhaled hydrogen (H2) (HPC) can protect retinal ganglion cells (RGCs) in a rat model of retinal IRI. The present study aims to investigate the potential mechanisms underlying HPC-induced protection. Retinal IRI was performed on the right eyes of rats and was followed by inhalation of 67% H2 mixed with 33% oxygen immediately after ischemia for 1 h daily for one week. RGC density was assessed using haematoxylin and eosin (HE) staining, retrograde labelling with cholera toxin beta (CTB), and TUNEL staining. Visual function was assessed using flash visual evoked potentials (FVEP) and pupillary light reflex (PLR). Phosphorylated Akt was analysed by RT-PCR and western blot. The results showed that administration of HPC significantly inhibited the apoptosis of RGCs and protected visual function. Simultaneously, HPC treatment markedly increased the phosphorylation of Akt. Blockade of PI3K activity with the inhibitor LY294002 dramatically abolished the anti-apoptotic effect and lowered both visual function and Akt phosphorylation levels. Taken together, our results demonstrate that HPC appears to confer neuroprotection against retinal IRI via the PI3K/Akt pathway. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    NASA Astrophysics Data System (ADS)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from desktop clusters and institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large-scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  3. Theta variation and spatiotemporal scaling along the septotemporal axis of the hippocampus

    PubMed Central

    Long, Lauren L.; Bunce, Jamie G.; Chrobak, James J.

    2015-01-01

    Hippocampal theta has been related to locomotor speed, attention, anxiety, sensorimotor integration and memory among other emergent phenomena. One difficulty in understanding the function of theta is that the hippocampus (HPC) modulates voluntary behavior at the same time that it processes sensory input. Both functions are correlated with characteristic changes in theta indices. The current review highlights a series of studies examining theta local field potential (LFP) signals across the septotemporal or longitudinal axis of the HPC. While the theta signal is coherent throughout the entirety of the HPC, the amplitude, but not the frequency, of theta varies significantly across its three-dimensional expanse. We suggest that the theta signal offers a rich vein of information about how distributed neuronal ensembles support emergent function. Further, we speculate that emergent function across the long axis varies with respect to spatiotemporal scale. Thus, septal HPC processes details of the proximal spatiotemporal environment while more temporal aspects process larger spaces and wider time-scales. The degree to which emergent functions are supported by the synchronization of theta across the septotemporal axis is an open question. Our working model is that theta synchrony serves to bind ensembles representing varying resolutions of spatiotemporal information at interdependent septotemporal areas of the HPC. Such synchrony and cooperative interactions along the septotemporal axis likely support memory formation and subsequent consolidation and retrieval. PMID:25852496

  4. Effect of crospovidone and hydroxypropyl cellulose on carbamazepine in high-dose tablet formulation.

    PubMed

    Flicker, Felicia; Betz, Gabriele

    2012-06-01

    The aim of this study was to develop a high-dose tablet formulation of the poorly soluble carbamazepine (CBZ) with sufficient tablet hardness and immediate drug release. A further aim was to investigate the influence of various commercial CBZ raw materials on the optimized tablet formulation. Hydroxypropyl cellulose (HPC-SL) was selected as a dry binder and crospovidone (CrosPVP) as a superdisintegrant. A direct-compaction tablet formulation of 70% CBZ was optimized by a 3² full factorial design with two input variables, HPC (0-10%) and CrosPVP (0-5%). Response variables included disintegration time, amount of drug released at 15 and 60 min, and tablet hardness, all analyzed according to USP 31. Increasing HPC-SL together with CrosPVP not only increased tablet hardness but also reduced disintegration time. The optimal condition was achieved in the range of 5-9% HPC and 3-5% CrosPVP, where tablet properties were at least 70 N tablet hardness, less than 1 min disintegration, and within the USP requirements for drug release. When the optimized formulation was tested with four different commercial CBZ samples, variability among the raw materials was still observed. Nonetheless, all formulations conformed to the USP specifications. With the excipients CrosPVP and HPC-SL, an immediate-release tablet formulation was successfully formulated for high-dose CBZ of various commercial sources.
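    For readers unfamiliar with the terminology, a 3² full factorial design simply crosses three levels of each of the two factors, giving nine formulation runs. The level values in the short Python sketch below (low, middle, and high points of the ranges quoted above) are an assumption for illustration; the study's exact level settings are not given here.

        # Enumerate a 3x3 full factorial design for the two formulation factors (assumed levels).
        from itertools import product

        hpc_levels = [0.0, 5.0, 10.0]      # HPC-SL content, % w/w (assumed low/mid/high of 0-10%)
        crospvp_levels = [0.0, 2.5, 5.0]   # crospovidone content, % w/w (assumed low/mid/high of 0-5%)

        for run, (hpc, cpvp) in enumerate(product(hpc_levels, crospvp_levels), start=1):
            print(f"run {run}: HPC-SL {hpc:4.1f}%  CrosPVP {cpvp:3.1f}%")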

  5. Contact Us | High-Performance Computing | NREL

    Science.gov Websites

    Contact form for NREL's high-performance computing (HPC) systems. Fields include the system (Peregrine, Merlin, WinHPC), the allocation project handle (if requesting an HPC account), and a description of the request. If you select "SEND REQUEST" and nothing happens, it most likely means you forgot to provide information in a required field; you may need to scroll up to see what required information is missing.

  6. System Resource Allocations | High-Performance Computing | NREL

    Science.gov Websites

    To use NREL's high-performance computing (HPC) resources, users request allocations of compute hours on NREL HPC systems, including Peregrine and Eagle, and of storage space (in terabytes) on Peregrine, Eagle and Gyrfalcon. Allocations are principally made in response to an annual call for allocation

  7. WinHPC System Programming | High-Performance Computing | NREL

    Science.gov Websites

    Learn how to build and run an MPI (Message Passing Interface) program on the WinHPC system, including where the MPI header (mpi.h) and library (msmpi.lib) are located. To build from the command line, run... Start > Intel Software Development Tools > Intel C++ Compiler Professional... > C++ Build Environment for applications running...
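    The page above refers to the C/MSMPI toolchain on WinHPC; as a language-neutral illustration of the same message-passing model, here is a minimal MPI program written with mpi4py (the availability of mpi4py is an assumption of this sketch, not part of the NREL instructions). It would typically be launched with something like "mpiexec -n 4 python hello_mpi.py".

        # hello_mpi.py - minimal MPI example (assumes mpi4py and an MPI runtime are installed)
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()               # this process's rank within the communicator
        size = comm.Get_size()               # total number of ranks

        print(f"hello from rank {rank} of {size}")

        hosts = comm.gather(MPI.Get_processor_name(), root=0)   # collect host names on rank 0
        if rank == 0:
            print("ranks ran on hosts:", sorted(set(hosts)))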

  8. Process for Managing and Customizing HPC Operating Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, David ML

    2014-04-02

    A process for maintaining a custom HPC operating system was developed at the Environmental Molecular Sciences Laboratory (EMSL) over the past ten years. The process is generic and flexible enough to manage continuous change and keep systems updated, while managing communication through well-defined pieces of software.

  9. Advanced Biomedical Computing Center (ABCC) | DSITP

    Cancer.gov

    The Advanced Biomedical Computing Center (ABCC), located in Frederick, Maryland, provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, engage in collaborative research, and conduct in-house research in various areas of computational biology and biomedical research.

  10. National Energy Research Scientific Computing Center

    Science.gov Websites

    Website navigation snippet for the National Energy Research Scientific Computing Center (NERSC), listing site sections such as the NERSC mission, history, stakeholders, HPC requirements reviews, HPC achievement awards, user-submitted research citations, resources, live status, and getting-started information for users.

  11. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  12. End-of-life care for immigrants in Germany. An epidemiological appraisal of Berlin

    PubMed Central

    Behzadi, Asita

    2017-01-01

    Background: Since the late 1950s, a steadily increasing immigrant population in Germany has produced a subpopulation of aging immigrants. The German health care system needs to adjust its services, linguistically, culturally, and medically, for this subpopulation of patients. Immigrants make up over 20% of the population in Germany, yet the majority receive inadequate medical care. Although many of the labor immigrants of the 1960s and 1970s are now in need of hospice and palliative care (HPC), little is known about this specialized care for immigrants. This epidemiological study presents the utilization of HPC facilities in Berlin with a focus on different immigrant groups. Methods: A validated questionnaire was used to collect data from patients at 34 HPC institutions in Berlin over 20 months. All newly admitted patients were recruited. Anonymized data were coded, analyzed using SPSS, and compared with the population statistics of Berlin. Results: 4118 questionnaires were completed and included in the analysis. At 11.4%, the proportion of immigrants accessing HPC was significantly (p<0.001) below their proportion in the general Berlin population. This difference was especially pronounced in the age groups of 51-60 years (21.46% immigrants in the Berlin population vs. 17.7% in the HPC population) and 61-70 years (16.9% vs. 13.1%). The largest ethnic groups were Turks, Russians, and Poles, with a different weighting than in the general population: Turkish immigrants made up 24% of all Berlin immigrants, but only 13.6% of the study immigrant population (OR: 0.23, 95% CI: 0.18-0.29, p<0.001). Russian and Polish immigrants account for 5.6% and 9.2% of the population, but 11.5% and 24.8% of the study population, respectively (Russian: OR 0.88, 95% CI: 0.66-1.16; Polish: OR 1.17, 95% CI: 0.97-1.42). Palliative care wards (PC) were used most often (immigrants made up 16.7% of all PC patients); outpatient hospice services were used least often by immigrants (11.4%). Median age at first admission to HPC was younger in immigrants than in non-immigrants: 61-70 vs. 71-80 years, p = 0.03. Conclusions: Immigrants are underrepresented in Berlin's HPC, and immigrants on average make use of care at a younger age than non-immigrants. Turkish immigrants in particular have the poorest utilization of HPC; since they represent the largest immigrant group, these results should prompt research on access barriers for Turkish immigrants. The underutilization may be due to a lack of cultural sensitivity among care providers and a lack of knowledge about HPC among immigrants. Comparing the kinds of institutions, immigrants are less likely to access outpatient hospice services than PC wards; PC wards appear to present a smaller hurdle to utilization. These results argue against the oft-cited "healthy immigrant effect" for the first generation of labor immigrants, who are now entering old age, and correspond with studies suggesting increased health concerns among immigrants. Focused research is needed to promote efforts to provide adequate and fair access to HPC for all people in Berlin. PMID:28763469

  13. An integrated pipeline of open source software adapted for multi-CPU architectures: use in the large-scale identification of single nucleotide polymorphisms.

    PubMed

    Jayashree, B; Hanspal, Manindra S; Srinivasan, Rajgopal; Vigneshwaran, R; Varshney, Rajeev K; Spurthi, N; Eshwar, K; Ramesh, N; Chandra, S; Hoisington, David A

    2007-01-01

    The large amounts of EST sequence data available from a single species of an organism, as well as for several species within a genus, provide a ready source for the identification of intra- and interspecies single nucleotide polymorphisms (SNPs). In the case of model organisms, the available data are extensive, given the degree of redundancy in the deposited EST data. There are several available bioinformatics tools that can be used to mine this data; however, using them requires a certain level of expertise: the tools have to be used sequentially with accompanying format conversion, and steps like clustering and assembly of sequences become time-intensive jobs even for moderately sized datasets. We report here a pipeline of open source software extended to run on multiple CPU architectures that can be used to mine large EST datasets for SNPs and identify restriction sites for assaying the SNPs, so that cost-effective CAPS assays can be developed for SNP genotyping in genetics and breeding applications. At the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), the pipeline has been implemented to run on a Paracel high-performance system consisting of four dual AMD Opteron processors running Linux with MPICH. The pipeline can be accessed through user-friendly web interfaces at http://hpc.icrisat.cgiar.org/PBSWeb and is available on request for academic use. We have validated the developed pipeline by mining chickpea ESTs for interspecies SNPs, development of CAPS assays for SNP genotyping, and confirmation of restriction digestion patterns at the sequence level.
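    The pipeline chains existing open-source tools, but its central step (scanning aligned EST sequences for columns where the bases disagree) can be sketched directly. The toy alignment and the simple minimum-support rule in the Python snippet below are illustrative assumptions, not the pipeline's actual filtering criteria.

        # Toy SNP scan over a gapless multiple alignment of EST consensus sequences.
        # Real pipelines apply coverage and quality filters; the threshold here is illustrative.
        from collections import Counter

        alignment = [            # toy aligned EST consensus sequences from two species
            "ATGCTACGATCG",
            "ATGCTACGATCG",
            "ATGTTACGATAG",
            "ATGTTACGATAG",
        ]

        def candidate_snps(seqs, min_minor=2):
            """Return (position, base_counts) for columns with at least two alleles,
            each supported by at least `min_minor` sequences."""
            snps = []
            for pos, column in enumerate(zip(*seqs)):
                counts = Counter(b for b in column if b in "ACGT")
                if len(counts) >= 2 and sorted(counts.values())[-2] >= min_minor:
                    snps.append((pos, dict(counts)))
            return snps

        print(candidate_snps(alignment))   # e.g. [(3, {'C': 2, 'T': 2}), (10, {'C': 2, 'A': 2})]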

  14. Grid-based HPC astrophysical applications at INAF Catania.

    NASA Astrophysics Data System (ADS)

    Costa, A.; Calanducci, A.; Becciani, U.; Capuzzo Dolcetta, R.

    The research activity on grid area at INAF Catania has been devoted to two main goals: the integration of a multiprocessor supercomputer (IBM SP4) within INFN-GRID middleware and the development of a web portal, Astrocomp-G, for the submission of astrophysical jobs into the grid infrastructure. Most of the existing grid infrastructure is based on commodity hardware, i.e., i386-architecture machines (Intel Celeron, Pentium III and IV, AMD Duron and Athlon) running Red Hat Linux. We were the first institute to integrate a totally different machine, an IBM SP with RISC architecture and AIX OS, as a powerful Worker Node inside a grid infrastructure. We identified and ported to AIX the grid components dealing with job monitoring and execution and properly tuned the Computing Element to deliver jobs into this special Worker Node. For testing purposes we used MARA, an astrophysical application for the analysis of light curve sequences. Astrocomp-G is a user-friendly front end to our grid site. Users who want to submit the astrophysical applications already available in the portal need to own a valid personal X509 certificate in addition to a username and password issued by the grid portal webmaster. The personal X509 certificate is a prerequisite for the creation of a short- or long-term proxy certificate that allows the grid infrastructure services to verify whether the owner of the job has permission to use resources and data. X509 and proxy certificates are part of GSI (Grid Security Infrastructure), a standard security tool adopted by all major grid sites around the world.

  15. [Study for lung sound acquisition module based on ARM and Linux].

    PubMed

    Lu, Qiang; Li, Wenfeng; Zhang, Xixue; Li, Junmin; Liu, Longqing

    2011-07-01

    An acquisition module with ARM and Linux at its core was developed. This paper presents the hardware configuration and the software design. It is shown that the module can extract human lung sound reliably and effectively.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verce, M. F.; Schwartz, L. I.

    This was a collaborative effort between LLNL and STE to investigate the use of vaporized hydrogen peroxide (VHP®) to decontaminate spore-contaminated heating, ventilation, and air conditioning (HVAC) systems in a trailer-sized room. LLNL's effort under this CRADA was funded by DOE's Chemical and Biological National Security Program (CBNP), which later became part of the Department of Homeland Security in 2004.

  17. The Next Linear Collider Program

    Science.gov Websites

    Website navigation snippet for the Next Linear Collider (NLC) program, listing laboratory phone and web directories (SLAC, FNAL, LLNL, LBNL, KEK) and NLC website search links.

  18. ARC-2010-ACD10-0020-013

    NASA Image and Video Library

    2010-01-14

    Lawrence Livermore National Laboratory (LLNL), Navistar and the Department of Energy conduct tests in the NASA Ames National Full-Scale Aerodynamics Complex 80- by 120-foot wind tunnel. The LLNL project is aimed at aerodynamic truck and trailer devices that can reduce fuel consumption at highway speed by 10 percent. Cab being lifted into the tunnel.

  19. ARC-2010-ACD10-0020-023

    NASA Image and Video Library

    2010-02-03

    Lawrence Livermore National Laboratory (LLNL), Navistar and the Department of Energy conduct tests in the NASA Ames National Full-Scale Aerodynamics Complex 80- by 120-foot wind tunnel. The LLNL project is aimed at aerodynamic truck and trailer devices that can reduce fuel consumption at highway speed by 10 percent. Trailer being lifted into the tunnel.

  20. Automated System for Aneuploidy Detection in Sperm Final Report CRADA No. TC-1364-96: Phase I SBIR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wyrobek, A. J.; Dunlay, R. T.

    This project was a collaboration between Lawrence Livermore National Laboratory (LLNL) and Biological Detection, Inc. (now known as Cellomics, Inc.). It was funded as a Phase I SBIR grant from the National Institutes of Health (NIH) awarded to Cellomics, Inc., with a subcontract to LLNL.

  1. ARC-2010-ACD10-0020-082

    NASA Image and Video Library

    2010-02-10

    Lawrence Livermore National Laboratory (LLNL), Navistar and the Department of Energy conduct tests in the NASA Ames National Full-Scale Aerodynamics Complex 80- by 120-foot wind tunnel. The LLNL project is aimed at aerodynamic truck and trailer devices that can reduce fuel consumption at highway speed by 10 percent. Smoke test demo with Ron Schoon, Navistar.

  2. ARC-2010-ACD10-0020-079

    NASA Image and Video Library

    2010-02-10

    Lawrence Livermore National Laboratory (LLNL), Navistar and the Department of Energy conduct tests in the NASA Ames National Full-Scale Aerodynamics Complex 80- by 120-foot wind tunnel. The LLNL project is aimed at aerodynamic truck and trailer devices that can reduce fuel consumption at highway speed by 10 percent. Smoke test demo with Ron Schoon, Navistar.

  3. Trip Report United Arab Emirates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, K; Rodgers, A

    2004-10-06

    Keith Nakanishi and Arthur Rodgers traveled to the United Arab Emirates in February 2004 to continue an ongoing technical collaboration with UAE University and to service the two temporary LLNL seismic stations. Nakanishi and Rodgers then participated in the Gulf Seismic Forum, which was organized by LLNL and sponsored by the University of Sharjah.

  4. Development of Diagnostics for the Livermore DPF Devices

    NASA Astrophysics Data System (ADS)

    Mitrani, James; Prasad, Rahul R.; Podpaly, Yuri A.; Cooper, Christopher M.; Chapman, Steven F.; Shaw, Brian H.; Povilus, Alexander P.; Schmidt, Andrea

    2017-10-01

    LLNL is commissioning several new diagnostics to understand and optimize ion and neutron production in its dense plasma focus (DPF) systems. Gas fills used in DPF devices at LLNL are deuterium (D2), and He accelerated onto a Be target, for the production of neutrons. Neutron yields are currently measured with helium-3 tubes, and development of yttrium-based activation detectors is currently underway. Neutron time-of-flight (nTOF) signals from prompt neutrons will be measured with gadolinium-doped liquid scintillators. An ion energy analyzer will be used to diagnose the energy distribution of D+ and He2+ ions. Additionally, a fast-frame ICCD camera has been used to image the plasma sheath during the rundown and pinch phases. Sheath velocity will be measured with an array of discrete photodiodes with nanosecond time responses. A discussion of our results will be presented. Prepared by LLNL under Contract DE-AC52-07NA27344, and supported by the Laboratory Directed Research and Development Program (15-ERD-034) at LLNL and the Office of Defense Nuclear Nonproliferation Research and Development within the U.S. Department of Energy.

  5. Computer Security Awareness Guide for Department of Energy Laboratories, Government Agencies, and others for use with Lawrence Livermore National Laboratory's (LLNL) Computer Security Short Subjects videos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Lonnie Moore, the Computer Security Manager, CSSM/CPPM at Lawrence Livermore National Laboratory (LLNL), and Gale Warshawsky, the Coordinator for Computer Security Education & Awareness at LLNL, wanted to share topics such as computer ethics, software piracy, privacy issues, and protecting information in a format that would capture and hold an audience's attention. Four Computer Security Short Subject videos were produced, ranging from 1-3 minutes each. These videos are very effective education and awareness tools that can be used to generate discussions about computer security concerns and good computing practices. Leaders may incorporate the Short Subjects into presentations. After talking about a subject area, one of the Short Subjects may be shown to highlight that subject matter. Another method for sharing them could be to show a Short Subject first and then lead a discussion about its topic. The cast of characters and a bit of information about their personalities in the LLNL Computer Security Short Subjects is included in this report.

  6. PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.

    PubMed

    Thomson, Robert C

    2009-07-30

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  7. PhyLIS: A Simple GNU/Linux Distribution for Phylogenetics and Phyloinformatics

    PubMed Central

    Thomson, Robert C.

    2009-01-01

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/. PMID:19812729

  8. Synchronous Papillary Carcinoma and Hemangiopericytoma with Lung Metastases

    PubMed Central

    Malagutti, Nicola; Iannini, Valeria; Rocchi, Andrea; Stomeo, Francesco; Frassoldati, Antonio; Borin, Michela; Pelucchi, Stefano

    2013-01-01

    Hemangiopericytomas (HPCs) are uncommon tumors that originate from the perivascular cells of capillary vessels. HPCs account for about 1% of all vascular tumors, and 16% to 33% of cases are found in the head and neck region. HPC is a neoplasm of uncertain malignant potential; it can behave as an aggressive tumor with metastases and increased mitotic activity or as a relatively benign neoplasm with only local growth. In this paper we describe a case of cervically located hemangiopericytoma of uncertain malignant potential associated with a concomitant papillary thyroid carcinoma and lung metastases of unknown origin; this case led us to follow a specific and uncommon diagnostic and therapeutic strategy. PMID:24368958

  9. Current Best Practices for Sexual and Gender Minorities in Hospice and Palliative Care Settings.

    PubMed

    Maingi, Shail; Bagabag, Arthur E; O'Mahony, Sean

    2018-05-01

    Although several publications document the health care disparities experienced by sexual and gender minorities (SGMs), including lesbian, gay, bisexual, and transgender (LGBT) individuals,1-4 less is known about the experiences and outcomes for SGM families and individuals in hospice and palliative care (HPC) settings. This article provides a brief overview of issues pertaining to SGMs in HPC settings, highlighting gaps in knowledge and research. Current and best practices for SGM individuals and their families in HPC settings are described, as are recommendations for improving the quality of such care. Copyright © 2018 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  10. Role of adult neurogenesis in hippocampal-cortical memory consolidation

    PubMed Central

    2014-01-01

    Acquired memory is initially dependent on the hippocampus (HPC) for permanent memory formation. This hippocampal dependency of memory recall progressively decays with time, a process that is associated with a gradual increase in dependency upon cortical structures. This process is commonly referred to as systems consolidation theory. In this paper, we first review how memory becomes hippocampal dependent to cortical dependent with an emphasis on the interactions that occur between the HPC and cortex during systems consolidation. We also review the mechanisms underlying the gradual decay of HPC dependency during systems consolidation from the perspective of memory erasures by adult hippocampal neurogenesis. Finally, we discuss the relationship between systems consolidation and memory precision. PMID:24552281

  11. Screening Program Reduced Melanoma Mortality at the Lawrence Livermore National Laboratory, 1984-1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, MD, J S; II, PhD, D; MD, PhD, M

    Worldwide incidence of cutaneous malignant melanoma has increased substantially, and no screening program has yet demonstrated a reduction in mortality. We evaluated the education, self-examination and targeted screening campaign at the Lawrence Livermore National Laboratory (LLNL) from its beginning in July 1984 through 1996. The thickness and crude incidence of melanoma from the years before the campaign were compared to those obtained during the 13 years of screening. Melanoma mortality during the 13-year period was based on a National Death Index search. Expected yearly deaths from melanoma among LLNL employees were calculated by using California mortality data matched by age, sex, and race/ethnicity and adjusted to exclude deaths from melanoma diagnosed before the program began or before employment at LLNL. After the program began, the crude incidence of melanoma thicker than 0.75 mm decreased from 18 to 4 cases per 100,000 person-years (p = 0.02), while melanoma thinner than 0.75 mm remained stable and in situ melanoma increased substantially. No eligible melanoma deaths occurred among LLNL employees during the screening period compared with a calculated 3.39 expected deaths (p = 0.034). Education, self-examination and selective screening for melanoma at LLNL significantly decreased the incidence of melanoma thicker than 0.75 mm and reduced the melanoma-related mortality rate to zero. This significant decrease in mortality rate persisted for at least 3 years after employees retired or otherwise left the laboratory.
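    The reported significance of zero observed deaths can be reproduced under a Poisson assumption: the probability of observing no deaths when 3.39 are expected is exp(-3.39) ≈ 0.034, which matches the quoted p-value. The one-line Python check below assumes a Poisson model; the paper's exact method is not stated here.

        # Probability of observing 0 events when 3.39 are expected, assuming a Poisson model
        import math
        print(round(math.exp(-3.39), 3))   # -> 0.034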

  12. Training and qualification of health and safety technicians at a national laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egbert, W.F.; Trinoskey, P.A.

    1994-10-01

    Over the last 30 years, Lawrence Livermore National Laboratory (LLNL) has successfully implemented the concept of a multi-disciplined technician. LLNL Health and Safety Technicians have responsibilities in industrial hygiene, industrial safety, and health physics, as well as fire, explosive, and criticality safety. One of the major benefits of this approach is the cost-effective use of workers who display an ownership of health and safety issues that is sometimes lacking when responsibilities are divided. Although LLNL has always promoted the concept of a multi-discipline technician, this concept is gaining interest within the Department of Energy (DOE) community. In November 1992, individuals from Oak Ridge Institute of Science and Education (ORISE) and RUST Geotech, joined by LLNL, established a committee to address the issues of Health and Safety Technicians. In 1993, the DOE Office of Environment, Safety and Health, in response to Defense Nuclear Facilities Safety Board Recommendation 91-6, stated that DOE projects, particularly environmental restoration, typically present hazards other than radiation (such as chemicals, explosives, and complex construction activities) that require additional expertise from Radiological Control Technicians. They followed with a commitment that a training guide would be issued. The trend in the last two decades has been toward greater specialization in the areas of health and safety. In contrast, LLNL has moved toward a generalist approach, integrating the once-separate functions of the industrial hygiene and health physics technician into one.

  13. Historic Context and Building Assessments for the Lawrence Livermore National Laboratory Built Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ullrich, R. A.; Sullivan, M. A.

    2007-09-14

    This document was prepared to support U.S. Department of Energy/National Nuclear Security Administration (DOE/NNSA) compliance with Sections 106 and 110 of the National Historic Preservation Act (NHPA). Lawrence Livermore National Laboratory (LLNL) is a DOE/NNSA laboratory and is engaged in determining the historic status of its properties at both its main site in Livermore, California, and Site 300, its test site located eleven miles from the main site. LLNL contracted with the authors, via Sandia National Laboratories (SNL), to prepare a historic context statement for properties at both sites and to provide assessments of those properties of potential historic interest. The report contains an extensive historic context statement and assessments of individual properties and groups of properties determined, via criteria established in the context statement, to be of potential interest. The historic context statement addresses the four contexts within which LLNL falls: Local History, World War II (WWII) History, Cold War History, and Post-Cold War History. Appropriate historic preservation themes relevant to LLNL's history are delineated within each context. In addition, thresholds are identified for historic significance within each of the contexts based on the explication and understanding of the Secretary of the Interior's Guidelines for determining eligibility for the National Register of Historic Places. The report identifies specific research areas and events in LLNL's history that are of interest and the portions of the built environment in which they occurred. Based on that discussion, properties of potential interest are identified and assessments of them are provided. Twenty individual buildings and three areas of potential historic interest were assessed. The final recommendation is that, of these, LLNL has five individual historic buildings, two sets of historic objects, and two historic districts eligible for the National Register. All are eligible within the Cold War History context. They are listed in the table below, along with the Cold War preservation theme, period of significance, and criterion under which they are eligible.

  14. LLNL-G3Dv3: Global P wave tomography model for improved regional and teleseismic travel time prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, N. A.; Myers, S. C.; Johannesson, G.

    We develop a global-scale P wave velocity model (LLNL-G3Dv3) designed to accurately predict seismic travel times at regional and teleseismic distances simultaneously. The model provides a new image of Earth's interior, but the underlying practical purpose of the model is to provide enhanced seismic event location capabilities. The LLNL-G3Dv3 model is based on ~2.8 million P and Pn arrivals that are re-processed using our global multiple-event locator called Bayesloc. We construct LLNL-G3Dv3 within a spherical-tessellation-based framework, allowing for explicit representation of undulating and discontinuous layers, including the crust and transition zone layers. Using a multiscale inversion technique, regional trends as well as fine details are captured where the data allow. LLNL-G3Dv3 exhibits large-scale structures including cratons and superplumes, as well as numerous complex details in the upper mantle, including within the transition zone. In particular, the model reveals new details of a vast network of subducted slabs trapped within the transition zone beneath much of Eurasia, including beneath the Tibetan Plateau. We demonstrate the impact of Bayesloc multiple-event location on the resulting tomographic images through comparison with images produced without the benefit of multiple-event constraints (single-event locations). We find that the multiple-event locations allow for better reconciliation of the large set of direct P phases recorded at 0-97° distance and yield a smoother and more continuous image relative to the single-event locations. Travel times predicted from a 3-D model are also found to be strongly influenced by the initial locations of the input data, even when an iterative inversion/relocation technique is employed.

  15. Non-Invasive Pneumothorax Detector Final Report CRADA No. TC02110.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, J. T.; Purcell, R.

    This was a collaborative effort between Lawrence Livermore National Security, LLC, as manager and operator of Lawrence Livermore National Laboratory (LLNL), and ElectroSonics Medical Inc. (formerly known as BIOMEC, Inc.), to develop a non-invasive pneumothorax detector based upon the micropower impulse radar technology invented at LLNL. Under a Work for Others Subcontract (L-9248), LLNL and ElectroSonics successfully demonstrated the feasibility of a novel device for non-invasive detection of pneumothorax for emergency and long-term monitoring. The device is based on Micropower Impulse Radar (MIR) Ultra Wideband (UWB) technology. Phase I experimental results were promising, showing that a pneumothorax volume even as small as 30 ml was clearly detectable from the MIR signals. Phase I results contributed to the award of a National Institutes of Health (NIH) SBIR Phase II grant to support further research and development. The Phase II award led to the establishment of an LLNL/ElectroSonics CRADA related to Case No. TC02045.0. Under the subsequent CRADA, LLNL and ElectroSonics successfully demonstrated the feasibility of pneumothorax detection in human subject research trials. Under this current CRADA TC02110.0, also referred to as Phase II Type II, the project scope consisted of seven tasks in Project Year 1, five tasks in Project Year 2, and four tasks in Project Year 3. Year 1 tasks were aimed at the delivery of the pneumothorax detector design package for the pre-production of the miniaturized CompactFlash dockable version of the system. The tasks in Project Years 2 and 3 critically depended upon the accomplishments of Task 1. Since LLNL's task was to provide subject matter expertise and performance verification, much of the timeline of engagement by the LLNL staff depended upon the overall project milestones as determined by the lead organization, ElectroSonics. The scope of effort was subsequently adjusted to be commensurate with funding availability.

  16. Lawrence Livermore National Laboratory Environmental Report 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Henry E.; Armstrong, Dave; Blake, Rick G.

    Lawrence Livermore National Laboratory (LLNL) is a premier research laboratory that is part of the National Nuclear Security Administration (NNSA) within the U.S. Department of Energy (DOE). As a national security laboratory, LLNL is responsible for ensuring that the nation's nuclear weapons remain safe, secure, and reliable. The Laboratory also meets other pressing national security needs, including countering the proliferation of weapons of mass destruction and strengthening homeland security, and conducting major research in atmospheric, earth, and energy sciences; bioscience and biotechnology; and engineering, basic science, and advanced technology. The Laboratory is managed and operated by Lawrence Livermore National Security, LLC (LLNS), and serves as a scientific resource to the U.S. government and a partner to industry and academia. LLNL operations have the potential to release a variety of constituents into the environment via atmospheric, surface water, and groundwater pathways. Some of the constituents, such as particles from diesel engines, are common at many types of facilities while others, such as radionuclides, are unique to research facilities like LLNL. All releases are highly regulated and carefully monitored. LLNL strives to maintain a safe, secure and efficient operational environment for its employees and neighboring communities. Experts in environment, safety and health (ES&H) support all Laboratory activities. LLNL's radiological control program ensures that radiological exposures and releases are reduced to as low as reasonably achievable to protect the health and safety of its employees, contractors, the public, and the environment. LLNL is committed to enhancing its environmental stewardship and managing the impacts its operations may have on the environment through a formal Environmental Management System. The Laboratory encourages the public to participate in matters related to the Laboratory's environmental impact on the community by soliciting citizens' input on matters of significant public interest and through various communications. The Laboratory also provides public access to information on its ES&H activities. LLNL consists of two sites: an urban site in Livermore, California, referred to as the "Livermore Site," which occupies 1.3 square miles, and a rural Experimental Test Site, referred to as "Site 300," near Tracy, California, which occupies 10.9 square miles. In 2012 the Laboratory had a staff of approximately 7000.

  17. Lawrence Livermore National Laboratory Environmental Report 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, H. E.; Bertoldo, N. A.; Blake, R. G.

    Lawrence Livermore National Laboratory (LLNL) is a premier research laboratory that is part of the National Nuclear Security Administration (NNSA) within the U.S. Department of Energy (DOE). As a national security laboratory, LLNL is responsible for ensuring that the nation's nuclear weapons remain safe, secure, and reliable. The Laboratory also meets other pressing national security needs, including countering the proliferation of weapons of mass destruction and strengthening homeland security, and conducting major research in atmospheric, earth, and energy sciences; bioscience and biotechnology; and engineering, basic science, and advanced technology. The Laboratory is managed and operated by Lawrence Livermore National Security, LLC (LLNS), and serves as a scientific resource to the U.S. government and a partner to industry and academia. LLNL operations have the potential to release a variety of constituents into the environment via atmospheric, surface water, and groundwater pathways. Some of the constituents, such as particles from diesel engines, are common at many types of facilities while others, such as radionuclides, are unique to research facilities like LLNL. All releases are highly regulated and carefully monitored. LLNL strives to maintain a safe, secure and efficient operational environment for its employees and neighboring communities. Experts in environment, safety and health (ES&H) support all Laboratory activities. LLNL's radiological control program ensures that radiological exposures and releases are reduced to as low as reasonably achievable to protect the health and safety of its employees, contractors, the public, and the environment. LLNL is committed to enhancing its environmental stewardship and managing the impacts its operations may have on the environment through a formal Environmental Management System. The Laboratory encourages the public to participate in matters related to the Laboratory's environmental impact on the community by soliciting citizens' input on matters of significant public interest and through various communications. The Laboratory also provides public access to information on its ES&H activities. LLNL consists of two sites: an urban site in Livermore, California, referred to as the "Livermore Site," which occupies 1.3 square miles, and a rural Experimental Test Site, referred to as "Site 300," near Tracy, California, which occupies 10.9 square miles. In 2013 the Laboratory had a staff of approximately 6,300.

  18. Development and construction of low-cracking high-performance concrete (LC-HPC) bridge decks: free shrinkage, moisture optimization and concrete production: final report.

    DOT National Transportation Integrated Search

    2009-08-01

    The development and evaluation of low-cracking high-performance concrete (LC-HPC) for use in bridge decks is described based on laboratory test results and experience gained during the construction of 14 bridges. This report emphasizes the materi...

  19. NREL Evaluates Aquarius Liquid-Cooled High-Performance Computing Technology

    Science.gov Websites

    This snippet describes NREL's evaluation of Aquarius liquid-cooled high-performance computing technology, intended to influence modern data center designers toward adoption of liquid cooling. Aquila and Sandia chose NREL's HPC Data Center for the initial installation and evaluation because the data center is configured for liquid cooling, along with the required instrumentation to...

  20. User Account Passwords | High-Performance Computing | NREL

    Science.gov Websites

    For NREL's high-performance computing (HPC) systems, learn about user account password requirements and how to set up, log in, and change passwords. Logging in the first time: after you request an HPC user account, you'll receive a temporary password. Set...

  1. Expanding HPC and Research Computing--The Sustainable Way

    ERIC Educational Resources Information Center

    Grush, Mary

    2009-01-01

    Increased demands for research and high-performance computing (HPC)--along with growing expectations for cost and environmental savings--are putting new strains on the campus data center. More and more, CIOs like the University of Notre Dame's (Indiana) Gordon Wishon are seeking creative ways to build more sustainable models for data center and…

  2. HPC Aspects of Variable-Resolution Global Climate Modeling using a Multi-scale Convection Parameterization

    EPA Science Inventory

    High-performance computing (HPC) requirements for the new generation of variable grid resolution (VGR) global climate models differ from those of traditional global models. A VGR global model with 15 km grids over the CONUS stretching to 60 km grids elsewhere will have about 2.5 tim...

  3. Data Security Policy | High-Performance Computing | NREL

    Science.gov Websites

    To use NREL's high-performance computing (HPC) systems, users must follow the data security policy. NREL HPC systems are operated as research systems and may only contain data related to scientific research. These systems are categorized as low; data may be sensitive or non-sensitive. One example of sensitive data would be personally identifiable information (PII

  4. Creep Shrinkage and CTE Evaluation: MoDOT's New Bridge Deck Mix Companion Testing to HPC Bridge Deck.

    DOT National Transportation Integrated Search

    2005-02-01

    MoDOT RDT Research Project R-I00-002, HPC for Bridge A6130, Route 412, Pemiscot County, was recently completed in June 2004 [Myers and Yang, 2004]. Among other research tasks, part of this research study investigated the creep, shrinkage and...

  5. Development and construction of low-cracking high-performance concrete (LC-HPC) bridge decks: free shrinkage, moisture optimization and concrete production: summary report.

    DOT National Transportation Integrated Search

    2009-08-01

    The development and evaluation of low-cracking high-performance concrete (LC-HPC) for use in bridge decks is described based on laboratory test results and experience gained during the construction of 14 bridges. This report emphasizes the materi...

  6. Shared Storage Usage Policy | High-Performance Computing | NREL

    Science.gov Websites

    To use NREL's high-performance computing (HPC) systems, you must abide by the Shared Storage Usage Policy. NREL HPC allocations include storage space in the /projects filesystem. However, /projects is a shared resource and project

  7. Business Models of High Performance Computing Centres in Higher Education in Europe

    ERIC Educational Resources Information Center

    Eurich, Markus; Calleja, Paul; Boutellier, Roman

    2013-01-01

    High performance computing (HPC) service centres are a vital part of the academic infrastructure of higher education organisations. However, despite their importance for research and the necessary high capital expenditures, business research on HPC service centres is mostly missing. From a business perspective, it is important to find an answer to…

  8. High-Performance Computing User Facility | Computational Science | NREL

    Science.gov Websites

    Website snippet describing NREL's High-Performance Computing (HPC) User Facility, which includes the Peregrine supercomputer and the Gyrfalcon Mass Storage System, with links to learn more about these systems and how to access them.

  9. Scaling GDL for Multi-cores to Process Planck HFI Beams Monte Carlo on HPC

    NASA Astrophysics Data System (ADS)

    Coulais, A.; Schellens, M.; Duvert, G.; Park, J.; Arabas, S.; Erard, S.; Roudier, G.; Hivon, E.; Mottet, S.; Laurent, B.; Pinter, M.; Kasradze, N.; Ayad, M.

    2014-05-01

    After reviewing the major progress made in GDL (now at version 0.9.4) in performance and plotting capabilities since the ADASS XXI paper (Coulais et al. 2012), we detail how a large code for the Planck HFI beams Monte Carlo was successfully ported from IDL to GDL on HPC.

  10. CFD Ventilation Study for the Human Powered Centrifuge at the International Space Station

    NASA Technical Reports Server (NTRS)

    Son, Chang H.

    2011-01-01

    The Human Powered Centrifuge (HPC) is a hyper-gravity facility that will be installed on board the International Space Station (ISS) to enable crew exercise under artificial gravity conditions. The HPC equipment includes a bicycle for long-term exercise by a crewmember, which provides the power to rotate the HPC at a speed of 30 rpm. A crewmember exercising vigorously on the centrifuge generates several times more carbon dioxide than a crewmember under ordinary conditions. The goal of the study is to analyze the airflow and carbon dioxide distribution within the Pressurized Multipurpose Module (PMM) cabin. The 3D computational model included the PMM cabin. A fully unsteady formulation was used for airflow and CO2 transport modeling with the so-called sliding mesh concept: the region around the HPC is treated in a rotating reference frame while the rest of the cabin volume is treated in a stationary reference frame. The localized effects of carbon dioxide dispersion are examined. A strong influence of the rotating HPC equipment on the CO2 distribution is detected and discussed.

  11. Investigation of vasculogenic mimicry in intracranial hemangiopericytoma.

    PubMed

    Zhang, Zhen; Han, Yun; Zhang, Keke; Teng, Liangzhu

    2011-01-01

    Vasculogenic mimicry (VM) has increasingly been recognized as a form of angiogenesis. Previous studies have shown that the existence of VM is associated with poor clinical prognosis in certain malignant tumors. However, whether VM is present and clinically significant in intracranial hemangiopericytoma (HPC) is unknown. The present study was therefore designed to examine the expression of VM in intracranial HPC and its correlation with matrix metalloprotease-2 (MMP-2) and vascular endothelial growth factor (VEGF). A total of 17 intracranial HPC samples, along with complete clinical and pathological data, were collected for our study. Immunohistochemistry was performed to stain tissue sections for CD34, periodic acid-Schiff, VEGF and MMP-2. The levels of VEGF and MMP-2 were compared between tumor samples with and without VM. The results showed that VM existed in 12 of 17 (70.6%) intracranial HPC samples. The presence of VM in tumors was associated with tumor recurrence (P<0.05) and expression of MMP-2 (P<0.05). However, there was no difference in the expression of VEGF between groups with and without VM.

  12. CD271 Defines a Stem Cell-Like Population in Hypopharyngeal Cancer

    PubMed Central

    Imai, Takayuki; Tamai, Keiichi; Oizumi, Sayuri; Oyama, Kyoko; Yamaguchi, Kazunori; Sato, Ikuro; Satoh, Kennichi; Matsuura, Kazuto; Saijo, Shigeru; Sugamura, Kazuo; Tanaka, Nobuyuki

    2013-01-01

    Cancer stem cells contribute to the malignant phenotypes of a variety of cancers, but markers to identify human hypopharyngeal cancer (HPC) stem cells remain poorly understood. Here, we report that the CD271+ population sorted from xenotransplanted HPCs possesses an enhanced tumor-initiating capability in immunodeficient mice. Tumors generated from the CD271+ cells contained both CD271+ and CD271− cells, indicating that the population could undergo differentiation. Immunohistological analyses of the tumors revealed that the CD271+ cells localized to a perivascular niche near CD34+ vasculature, to invasive fronts, and to the basal layer. In accordance with these characteristics, a stemness marker, Nanog, and matrix metalloproteinases (MMPs), which are implicated in cancer invasion, were significantly up-regulated in the CD271+ compared to the CD271− cell population. Furthermore, using primary HPC specimens, we demonstrated that high CD271 expression was correlated with a poor prognosis for patients. Taken together, our findings indicate that CD271 is a novel marker for HPC stem-like cells and for HPC prognosis. PMID:23626764

  13. Exploring the capabilities of support vector machines in detecting silent data corruptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo

    As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs), or silent errors, are one of the major sources that corrupt the execution results of HPC applications without being detected. In this paper, we explore a set of novel SDC detectors, leveraging epsilon-insensitive support vector machine regression, to detect SDCs that occur in HPC applications. The key contributions are threefold. (1) Our exploration takes temporal, spatial, and spatiotemporal features into account and analyzes different detectors based on different features. (2) We provide an in-depth study of detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that support-vector-machine-based detectors can achieve detection sensitivity (i.e., recall) of up to 99% while suffering a false positive rate of less than 1% in most cases. Our detectors incur low performance overhead, 5% on average, for all benchmarks studied in this work.

  14. Exploring the capabilities of support vector machines in detecting silent data corruptions

    DOE PAGES

    Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo; ...

    2018-02-01

    As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs), or silent errors, are one of the major sources that corrupt the execution results of HPC applications without being detected. In this paper, we explore a set of novel SDC detectors, leveraging epsilon-insensitive support vector machine regression, to detect SDCs that occur in HPC applications. The key contributions are threefold. (1) Our exploration takes temporal, spatial, and spatiotemporal features into account and analyzes different detectors based on different features. (2) We provide an in-depth study of detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that support-vector-machine-based detectors can achieve detection sensitivity (i.e., recall) of up to 99% while suffering a false positive rate of less than 1% in most cases. Our detectors incur low performance overhead, 5% on average, for all benchmarks studied in this work.
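
    As a rough illustration of the technique named above (not the authors' implementation), the sketch below trains an epsilon-insensitive support vector regression on a sliding window of a variable's recent history and flags a time step as a possible SDC when the observed value deviates from the one-step-ahead prediction by more than a relative threshold; the window length and threshold are illustrative assumptions.

      # Hedged sketch of SVR-based one-step-ahead SDC detection (temporal features only).
      # Assumes scikit-learn; window size and threshold are illustrative, not tuned values.
      import numpy as np
      from sklearn.svm import SVR

      def detect_sdc(series, window=8, threshold=0.05):
          """Flag indices whose value deviates from the SVR prediction by more
          than `threshold` (relative error), using the previous `window` values."""
          series = np.asarray(series, dtype=float)
          flagged = []
          for t in range(2 * window, len(series)):
              # Training pairs (window of past values -> next value) from the history before t.
              X = np.array([series[i:i + window] for i in range(t - window)])
              y = series[window:t]
              model = SVR(kernel="rbf", epsilon=0.01).fit(X, y)
              pred = model.predict(series[t - window:t].reshape(1, -1))[0]
              if abs(series[t] - pred) / max(abs(pred), 1e-12) > threshold:
                  flagged.append(t)
          return flagged

      # Toy usage: a smooth signal with one injected corruption around index 120.
      signal = (np.sin(np.linspace(0, 6, 200)) + 2.0).tolist()
      signal[120] += 5.0
      print(detect_sdc(signal))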

  15. User-level framework for performance monitoring of HPC applications

    NASA Astrophysics Data System (ADS)

    Hristova, R.; Goranov, G.

    2013-10-01

    HP-SEE is an infrastructure that links the existing HPC facilities in South East Europe into a common infrastructure. Analysis of performance monitoring data for High-Performance Computing (HPC) applications in the infrastructure can serve the end user as a diagnostic of the overall performance of their applications. The existing monitoring tools for HP-SEE provide the end user with only aggregated information for all applications. Usually, the user does not have permission to select only the information relevant to them and their applications. In this article we present a framework for performance monitoring of HPC applications in the HP-SEE infrastructure. The framework provides standardized performance metrics, which every user can use to monitor their applications. Furthermore, as part of the framework, a programming interface has been developed. The interface allows the user to publish metrics data from their application and to read and analyze the gathered information. Publishing and reading through the framework is possible only with a grid certificate valid for the infrastructure; therefore, the user is authorized to access only the data for their applications.
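
    The abstract does not show the interface itself, so the following is only an illustrative sketch of what publishing and reading metrics through such a framework could look like from Python; the endpoint URL, field names, and certificate paths are assumptions, and the grid certificate is passed as a client certificate so that access is limited to the user's own data.

      # Hypothetical metric-publishing client; nothing below is the real HP-SEE API.
      import json
      import time
      import requests

      MONITOR_URL = "https://monitor.example.org/api/metrics"        # hypothetical endpoint
      GRID_CERT = ("/path/to/usercert.pem", "/path/to/userkey.pem")  # user's grid credentials

      def publish_metric(app_id, name, value):
          """Publish one performance metric for the user's own application."""
          payload = {"application": app_id, "metric": name,
                     "value": value, "timestamp": time.time()}
          r = requests.post(MONITOR_URL, data=json.dumps(payload),
                            headers={"Content-Type": "application/json"},
                            cert=GRID_CERT, timeout=10)
          r.raise_for_status()

      def read_metrics(app_id):
          """Read gathered metrics; the certificate limits access to the user's data."""
          r = requests.get(MONITOR_URL, params={"application": app_id},
                           cert=GRID_CERT, timeout=10)
          r.raise_for_status()
          return r.json()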

  16. AMD3100 ameliorates cigarette smoke-induced emphysema-like manifestations in mice.

    PubMed

    Barwinska, Daria; Oueini, Houssam; Poirier, Christophe; Albrecht, Marjorie E; Bogatcheva, Natalia V; Justice, Matthew J; Saliba, Jacob; Schweitzer, Kelly S; Broxmeyer, Hal E; March, Keith L; Petrache, Irina

    2018-05-10

    We have shown that cigarette smoke (CS)-induced pulmonary emphysema-like manifestations are preceded by marked suppression of the number and function of bone marrow hematopoietic progenitor cells (HPC). To investigate if a limited availability of HPC may contribute to CS-induced lung injury, we used an FDA-approved antagonist of the interactions of SDF-1 with its receptor CXCR4 to promote intermittent HPC mobilization and tested its ability to limit emphysema-like injury following chronic CS. We administered AMD3100 (5 mg/kg) to mice during a chronic CS exposure protocol of up to 24 weeks. AMD3100 treatment did not affect either lung SDF-1 levels, which were reduced by CS, or lung inflammatory cell counts. However, AMD3100 markedly improved CS-induced bone marrow HPC suppression and significantly ameliorated emphysema-like endpoints such as alveolar airspace size, lung volumes, and lung static compliance. These results suggest that antagonism of SDF-1 binding to CXCR4 is associated with protection of both bone marrow and lungs during chronic CS exposure, thus encouraging future studies of potential therapeutic benefit of AMD3100 in emphysema.

  17. Integrating the Apache Big Data Stack with HPC for Big Data

    NASA Astrophysics Data System (ADS)

    Fox, G. C.; Qiu, J.; Jha, S.

    2014-12-01

    There is perhaps a broad consensus as to the important issues in practical parallel computing as applied to large-scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development. However, the same is not so true for data-intensive computing, even though commercial clouds devote far more resources to data analytics than supercomputers devote to simulations. We look at a sample of over 50 big data applications to identify characteristics of data-intensive applications and to deduce needed runtimes and architectures. We suggest a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks and use these to identify a few key classes of hardware/software architectures. Our analysis builds on combining HPC with ABDS, the Apache Big Data Stack that is widely used in modern cloud computing. Initial results on clouds and HPC systems are encouraging. We propose the development of SPIDAL (Scalable Parallel Interoperable Data Analytics Library), built on system and data abstractions suggested by the HPC-ABDS architecture. We discuss how it can be used in several application areas including Polar Science.

  18. Neurocircuitry of fear extinction in adult and juvenile rats.

    PubMed

    Ganella, Despina E; Nguyen, Ly Dao; Lee-Kardashyan, Luba; Kim, Leah E; Paolini, Antonio G; Kim, Jee Hyun

    2018-06-10

    In contrast to adult rodents, juvenile rodents fail to show relapse following extinction of conditioned fear. Using different retrograde tracers injected into the infralimbic cortex (IL) and the ventral hippocampus (vHPC) in conjunction with c-Fos and parvalbumin (PV) immunochemistry, we investigated the neurocircuitry of extinction in juvenile and adult rats. Regardless of fear extinction or retrieval, juvenile rats had more c-Fos+ neurons in the basolateral amygdala (BLA) compared to adults, and showed a higher proportion of c-Fos+ IL-projecting neurons. Adult rats had more activated vHPC-projecting BLA neurons following extinction compared to retrieval, a difference not observed in juvenile rats. The number of activated vHPC- or IL-projecting BLA neurons was significantly correlated with freezing levels in adult, but not juvenile, rats. We also identified neurons in the BLA that simultaneously project to the IL and vHPC activated in the retrieval groups at both ages. This study provides novel insight into the neural process underlying extinction, especially in the juvenile period. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. SCEAPI: A unified Restful Web API for High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving toward collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment, which integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management for creating, submitting and monitoring jobs, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
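
    To make the authentication / file transfer / job management flow concrete, here is a hedged sketch of what a client of such a RESTful HPC API could look like; the base URL, routes, payload fields and token handling are assumptions made for illustration, not the documented SCEAPI interface.

      # Illustrative REST client in the spirit of SCEAPI; all endpoints are hypothetical.
      import requests

      BASE = "https://sceapi.example.cn/v1"   # hypothetical base URL

      def login(username, password):
          r = requests.post(f"{BASE}/auth/tokens",
                            json={"username": username, "password": password})
          r.raise_for_status()
          return r.json()["token"]

      def submit_job(token, script_path):
          headers = {"Authorization": f"Bearer {token}"}
          with open(script_path, "rb") as f:           # file transfer step
              requests.post(f"{BASE}/files/job.sh", data=f, headers=headers).raise_for_status()
          r = requests.post(f"{BASE}/jobs",            # job creation and submission
                            json={"script": "job.sh", "cores": 64}, headers=headers)
          r.raise_for_status()
          return r.json()["job_id"]

      def job_state(token, job_id):                    # job monitoring
          r = requests.get(f"{BASE}/jobs/{job_id}",
                           headers={"Authorization": f"Bearer {token}"})
          r.raise_for_status()
          return r.json()["state"]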

  20. Combined Performance of Polypropylene Fibre and Weld Slag in High Performance Concrete

    NASA Astrophysics Data System (ADS)

    Ananthi, A.; Karthikeyan, J.

    2017-12-01

    The effect of polypropylene fibre and weld slag on the mechanical properties of High Performance Concrete (HPC) containing silica fume as a mineral admixture was experimentally verified in this study. Sixteen series of HPC mixtures (70 MPa) were designed with varying fibre fractions and Weld Slag (WS). Fibre was added at different proportions (0, 0.1, 0.3 and 0.6%) by weight of cement. Weld slag was substituted for fine aggregate (0, 10, 20 and 30%) by volume. The addition of fibre decreases the slump by 5, 9 and 14%, whereas the substitution of weld slag decreases it by about 3, 11 and 21% with respect to the control mixture. Mechanical properties such as compressive strength, split tensile strength, flexural strength, Ultrasonic Pulse Velocity (UPV) and bond strength were tested. Durability studies such as water absorption and sorptivity tests were conducted to check the absorption of water in HPC. HPC with 10% weld slag and a fibre dosage of 0.3% attains the maximum strength, and hence this combination is the most favourable for structural applications.

  1. Amygdala inputs to the ventral hippocampus bidirectionally modulate social behavior.

    PubMed

    Felix-Ortiz, Ada C; Tye, Kay M

    2014-01-08

    Impairments in social interaction represent a core symptom of a number of psychiatric disease states, including autism, schizophrenia, depression, and anxiety. Although the amygdala has long been linked to social interaction, little is known about the functional role of connections between the amygdala and downstream regions in noncompetitive social behavior. In the present study, we used optogenetic and pharmacological tools in mice to study the role of projections from the basolateral complex of the amygdala (BLA) to the ventral hippocampus (vHPC) in two social interaction tests: the resident-juvenile-intruder home-cage test and the three chamber sociability test. BLA pyramidal neurons were transduced using adeno-associated viral vectors (AAV5) carrying either channelrhodopsin-2 (ChR2) or halorhodopsin (NpHR), under the control of the CaMKIIα promoter to allow for optical excitation or inhibition of amygdala axon terminals. Optical fibers were chronically implanted to selectively manipulate BLA terminals in the vHPC. NpHR-mediated inhibition of BLA-vHPC projections significantly increased social interaction in the resident-juvenile intruder home-cage test as shown by increased intruder exploration. In contrast, ChR2-mediated activation of BLA-vHPC projections significantly reduced social behaviors as shown in the resident-juvenile intruder procedure as seen by decreased time exploring the intruder and in the three chamber sociability test by decreased time spent in the social zone. These results indicate that BLA inputs to the vHPC are capable of modulating social behaviors in a bidirectional manner.

  2. Human Adipose-derived Stem Cells Ameliorate Cigarette Smoke-induced Murine Myelosuppression via TSG-6

    PubMed Central

    Xie, Jie; Broxmeyer, Hal E.; Feng, Dongni; Schweitzer, Kelly S.; Yi, Ru; Cook, Todd G.; Chitteti, Brahmananda R.; Barwinska, Daria; Traktuev, Dmitry O.; Van Demark, Mary J.; Justice, Matthew J.; Ou, Xuan; Srour, Edward F.; Prockop, Darwin J.; Petrache, Irina; March, Keith L.

    2015-01-01

    Objective: Bone marrow-derived hematopoietic stem and progenitor cells (HSC/HPC) are critical to homeostasis and tissue repair. The aims of this study were to delineate the myelotoxicity of cigarette smoking (CS) in a murine model, to explore human adipose-derived stem cells (hASC) as a novel approach to mitigate this toxicity, and to identify key mediating factors for ASC activities. Methods: C57BL/6 mice were exposed to CS with or without i.v. injection of regular or siRNA-transfected hASC. For in vitro experiments, cigarette smoke extract (CSE) was used to mimic the toxicity of CS exposure. Analyses of bone marrow hematopoietic progenitor cells (HPC) were performed both by flow cytometry and colony forming unit assays. Results: In this study, we demonstrate that as few as three days of CS exposure result in marked cycling arrest and diminished clonogenic capacity of HPC, followed by depletion of phenotypically-defined HSC/HPC. Intravenous injection of hASC substantially ameliorated both acute and chronic CS-induced myelosuppression. This effect was specifically dependent on the anti-inflammatory factor TSG-6, which is induced from xenografted hASC, primarily located in the lung and capable of responding to host inflammatory signals. Gene expression analysis within bone marrow HSC/HPC revealed several specific signaling molecules altered by CS and normalized by hASC. Conclusion: Our results suggest that systemic administration of hASC or TSG-6 may be novel approaches to reverse cigarette smoking-induced myelosuppression. PMID:25329668

  3. Phenotypical and molecular distinctness of sinonasal haemangiopericytoma compared to solitary fibrous tumour of the sinonasal tract.

    PubMed

    Agaimy, Abbas; Barthelmeß, Sarah; Geddert, Helene; Boltze, Carsten; Moskalev, Evgeny A; Koch, Michael; Wiemann, Stefan; Hartmann, Arndt; Haller, Florian

    2014-11-01

    Sinonasal haemangiopericytoma (SN-HPC) is a rare sinonasal mesenchymal neoplasm of perivascular myoid cell origin. Solitary fibrous tumour (SFT) occurs only very rarely in the sinonasal tract. SFT and soft tissue HPC have been considered a single entity. Recently, recurrent gene fusions involving NAB2-STAT6 resulting in differential expression of STAT6 were characterized as central molecular events in SFT. However, no data exist for NAB2-STAT6 status or STAT6 expression in SN-HPC. We examined six SN-HPCs and two sinonasal SFTs by immunohistochemistry and RT-PCR for NAB2-STAT6 fusions. SN-HPC affected three females and three males (mean age: 72 years). They expressed smooth muscle actin, lacked strong CD34 reactivity and were negative for nuclear STAT6 expression. RT-PCR analysis confirmed the absence of NAB2-STAT6 fusions in all cases. Conversely, both sinonasal SFTs (in males aged 39 and 52 years) displayed classical features of pleuropulmonary and soft-tissue SFTs (uniformly CD34-positive with strong nuclear expression of STAT6). RT-PCR revealed NAB2-STAT6 fusions in both cases. These findings confirm the molecular and phenotypical distinctness of these two entities. While SN-HPC is a site-specific sinonasal neoplasm of as yet unknown molecular pathogenesis, sinonasal SFTs show phenotypical and molecular identity to their pleural/extrapleural counterparts. © 2014 John Wiley & Sons Ltd.

  4. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Koo, Michelle; Cao, Yu

    Big data is prevalent in HPC. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods to the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.
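
    The framework itself is built on big data engines, but the kind of feature extraction it performs can be sketched at small scale; in the toy example below the column names, the per-node aggregation and the simple z-score outlier rule are assumptions used only to illustrate the idea of mining performance measurements for bottlenecks.

      # Toy feature-extraction sketch (pandas), not the framework's actual pipeline.
      import pandas as pd

      def find_slow_nodes(log_csv, threshold=3.0):
          """Aggregate per-node I/O throughput from a measurement log and flag
          nodes whose mean throughput is an outlier (|z-score| > threshold)."""
          df = pd.read_csv(log_csv)            # assumed columns: node, timestamp, mb_per_s
          per_node = df.groupby("node")["mb_per_s"].mean()
          z = (per_node - per_node.mean()) / per_node.std(ddof=0)
          return per_node[z.abs() > threshold].sort_values()

      # Example: print(find_slow_nodes("io_measurements.csv"))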

  5. MARIANE: MApReduce Implementation Adapted for HPC Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan

    MapReduce is increasingly becoming a popular framework and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file system functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
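
    The design point can be illustrated with a few lines of code: because the input already sits on a globally shared POSIX filesystem (e.g. NFS or GPFS), every worker can read its split directly and no HDFS staging is needed. The sketch below is a generic word-count MapReduce written under that assumption; it is not MARIANE code, and the input path is hypothetical.

      # Minimal shared-filesystem MapReduce sketch (word count).
      from collections import Counter
      from multiprocessing import Pool
      from pathlib import Path

      def map_wordcount(path):
          counts = Counter()
          for line in Path(path).read_text(errors="ignore").splitlines():
              counts.update(line.split())
          return counts

      def reduce_counts(partials):
          total = Counter()
          for c in partials:
              total.update(c)
          return total

      if __name__ == "__main__":
          # Splits are ordinary files on the shared filesystem; no copy into HDFS.
          splits = sorted(Path("/shared/project/input").glob("*.txt"))
          with Pool() as pool:
              result = reduce_counts(pool.map(map_wordcount, splits))
          print(result.most_common(10))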

  6. An Efficient Silent Data Corruption Detection Method with Error-Feedback Control and Even Sampling for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Berrocal, Eduardo; Cappello, Franck

    The silent data corruption (SDC) problem is attracting increasing attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction problem, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error-feedback control model that can reduce the prediction errors of different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the Fault Tolerance Interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications in a real cluster environment. Experiments show that our error-feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected at bit positions in the range [20,30], without any degradation of detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in detection sensitivity.
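
    A minimal sketch of the detection idea, under stated assumptions: the next value is predicted by linear extrapolation, the prediction is corrected by a smoothed history of its own recent errors (the error feedback), and a point is flagged when the residual exceeds a bound that tracks the typical error magnitude. The smoothing factor and threshold multiplier below are illustrative, not the paper's calibrated values.

      # One-step-ahead linear prediction with error-feedback correction.
      def detect_with_feedback(series, alpha=0.5, k=4.0):
          flagged = []
          fb = 0.0        # smoothed prediction error (feedback term)
          bound = 1e-9    # running scale of typical prediction errors
          for t in range(2, len(series)):
              raw = 2.0 * series[t - 1] - series[t - 2]   # linear extrapolation
              pred = raw + fb                             # error-feedback correction
              err = series[t] - pred
              if abs(err) > k * bound:
                  flagged.append(t)                       # suspected silent corruption
              else:
                  fb = alpha * fb + (1.0 - alpha) * err
                  bound = alpha * bound + (1.0 - alpha) * abs(err)
          return flagged

      # Toy usage: smooth data with one corrupted entry at index 150.
      data = [x * 0.01 for x in range(300)]
      data[150] += 10.0
      print(detect_with_feedback(data))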

  7. Decarboxylation of 6-nitrobenzisoxazole-3-carboxylate in mixed micelles of zwitterionic and positively charged surfactants.

    PubMed

    Maximiano, Flavio A; Chaimovich, Hernan; Cuccovia, Iolanda M

    2006-09-12

    The rate of decarboxylation of 6-nitrobenzisoxazole-3-carboxylate, NBOC, was determined in micelles of N-hexadecyl-N,N,N-trimethylammonium bromide or chloride (CTAB or CTAC), N-hexadecyl-N,N-dimethyl-3-ammonium-1-propanesulfonate (HPS), N-dodecyl-N,N-dimethyl-3-ammonium-1-propanesulfonate (DPS), N-dodecyl-N,N,N-trimethylammonium bromide (DTAB), hexadecylphosphocholine (HPC), and their mixtures. Quantitative analysis of the effect of micelles on the velocity of NBOC decarboxylation allowed the estimation of the rate constants in the micellar pseudophase, k(m), for the pure surfactants and their mixtures. The extent of micellar catalysis for NBOC decarboxylation, expressed as the ratio k(m)/k(w), where k(w) is the rate constant in water, varied from 240 for HPS to 62 for HPC. With HPS or DPS, k(m) decreased linearly with CTAB(C) mole fraction, suggesting ideal mixing. With HPC, k(m) increased to a maximum at a CTAB(C) mole fraction of ca. 0.5 and then decreased at higher CTAB(C). Addition of CTAB(C) to HPC, where the negative charge of the surfactant is close to the hydrophobic core, produces tight ion pairs at the interface and, consequently, decreases interfacial water contents. Interfacial dehydration at the surface in equimolar HPC/CTAB(C) mixtures, and the interfacial solubilization site of the substrate, can explain the observed catalytic synergy, since the rate of NBOC decarboxylation increases markedly with the decrease in hydrogen bonding to the carboxylate group.

  8. Towards Anatomic Scale Agent-Based Modeling with a Massively Parallel Spatially Explicit General-Purpose Model of Enteric Tissue (SEGMEnT_HPC)

    PubMed Central

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis. PMID:25806784
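
    As a purely conceptual toy (not SEGMEnT_HPC, and far simpler than its rule set), the fragment below updates a small lattice of epithelial "agents" in which damaged cells heal when surrounded by enough healthy neighbours; the real model evaluates much richer cell-for-cell rules over an anatomic-scale lattice decomposed across many HPC nodes.

      # Toy lattice agent-based update; the grid size and healing rule are arbitrary.
      import numpy as np

      rng = np.random.default_rng(0)
      tissue = np.ones((64, 64), dtype=int)           # 1 = healthy, 0 = damaged
      tissue[rng.random(tissue.shape) < 0.2] = 0      # initial injury

      def step(grid):
          # Count healthy von Neumann neighbours (periodic boundary for simplicity).
          healthy_neighbours = sum(np.roll(grid, shift, axis)
                                   for shift in (-1, 1) for axis in (0, 1))
          healed = (grid == 0) & (healthy_neighbours >= 3)
          return np.where(healed, 1, grid)

      for _ in range(20):
          tissue = step(tissue)
      print("healthy fraction:", tissue.mean())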

  9. High-pressure coolant effect on the surface integrity of machining titanium alloy Ti-6Al-4V: a review

    NASA Astrophysics Data System (ADS)

    Liu, Wentao; Liu, Zhanqiang

    2018-03-01

    Machinability improvement of titanium alloy Ti-6Al-4V is a challenging task in academic and industrial applications owing to its low thermal conductivity, low elasticity modulus and high chemical affinity at high temperatures. Surface integrity of titanium alloy Ti-6Al-4V is prominent in estimating the quality of machined components. The surface topography (surface defects and surface roughness) and the residual stress induced by machining Ti-6Al-4V play pivotal roles in the sustainability of Ti-6Al-4V components. High-pressure coolant (HPC) is a potential choice for meeting the requirements of the manufacture and application of Ti-6Al-4V. This paper reviews progress towards improvements in Ti-6Al-4V surface integrity under HPC. Various studies of surface integrity characteristics have been reported. In particular, surface roughness, surface defects, residual stress and work hardening are investigated in order to evaluate machined surface quality. Several coolant parameters (including coolant type, coolant pressure and injection position) deserve investigation to provide guidance for a satisfactory machined surface. The review also provides a clear roadmap for applications of HPC in machining Ti-6Al-4V. Experimental studies and analyses are reviewed to better understand the surface integrity under the HPC machining process. A distinct discussion is presented regarding the limitations and prospects for machining Ti-6Al-4V under HPC.

  10. A harmonic polynomial cell (HPC) method for 3D Laplace equation with application in marine hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.

    2014-10-01

    We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by the linear superposition of a complete set of harmonic polynomials, which are the elementary solutions of the Laplace equation. By its definition, the method is named the Harmonic Polynomial Cell (HPC) method. The characteristics of the accuracy and efficiency of the HPC method are demonstrated by studying analytical cases. Comparisons will be made with some other existing boundary element based methods, e.g. the Quadratic Boundary Element Method (QBEM), the Fast Multipole Accelerated QBEM (FMA-QBEM) and a fourth order Finite Difference Method (FDM). To demonstrate the applications of the method, it is applied to some studies relevant for marine hydrodynamics. Sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves are studied. The comparisons with the experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.
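
    The cell-local idea can be sketched in two dimensions (the paper works in 3D): within one overlapping cell the potential is written as a combination of harmonic polynomials, the coefficients follow from the values at the cell's surrounding nodes, and the same expansion then gives the value at the cell's centre. The basis order, cell layout and test potential below are illustrative choices, not the paper's discretization.

      # 2-D toy of a harmonic-polynomial cell interpolation.
      import numpy as np

      def harmonic_basis(x, y):
          # Real/imaginary parts of (x + i*y)**n for n = 0..3; each satisfies Laplace's equation.
          return np.array([np.ones_like(x), x, y,
                           x**2 - y**2, 2*x*y,
                           x**3 - 3*x*y**2, 3*x**2*y - y**3]).T

      # One 3x3 cell: 8 boundary nodes around a centre node at the origin.
      h = 0.1
      bx, by = np.meshgrid([-h, 0.0, h], [-h, 0.0, h])
      mask = ~((bx == 0.0) & (by == 0.0))
      xb, yb = bx[mask], by[mask]

      exact = lambda x, y: x**2 - y**2 + 3*x          # a harmonic test potential
      A = harmonic_basis(xb, yb)                      # 8 equations, 7 coefficients
      coeffs, *_ = np.linalg.lstsq(A, exact(xb, yb), rcond=None)
      centre = harmonic_basis(np.array([0.0]), np.array([0.0])) @ coeffs
      print(float(centre[0]), "vs exact", exact(0.0, 0.0))   # the two should agree closely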

  11. Development of Evaluation Indicators for Hospice and Palliative Care Professionals Training Programs in Korea.

    PubMed

    Kang, Jina; Park, Kyoung-Ok

    2017-01-01

    The importance of training for Hospice and Palliative Care (HPC) professionals has been increasing with the systemization of HPC in Korea. Hence, the need and importance of training quality for HPC professionals are growing. This study evaluated the construct validity and reliability of the Evaluation Indicators for standard Hospice and Palliative Care Training (EIHPCT) program. As a framework to develop evaluation indicators, an invented theoretical model combining Stufflebeam's CIPP (Context-Input-Process-Product) evaluation model with PRECEDE-PROCEED model was used. To verify the construct validity of the EIHPCT program, a structured survey was performed with 169 professionals who were the HPC training program administrators, trainers, and trainees. To examine the validity of the areas of the EIHPCT program, exploratory factor analysis and confirmatory factor analysis were conducted. First, in the exploratory factor analysis, the indicators with factor loadings above 0.4 were chosen as desirable items, and some cross-loaded items that loaded at 0.4 or higher on two or more factors were adjusted as the higher factor. Second, the model fit of the modified EIHPCT program was quite good in the confirmatory factor analysis (Goodness-of-Fit Index > 0.70, Comparative Fit Index > 0.80, Normed Fit Index > 0.80, Root Mean square of Residuals < 0.05). The modified model of the EIHPCT comprised 4 areas, 13 subdomains, and 61 indicators. The evaluation indicators of the modified model will be valuable references for improving the HPC professional training program.

  12. H(C)P and H(P)C triple-resonance experiments at natural abundance employing long-range couplings.

    PubMed

    Malon, Michal; Koshino, Hiroyuki

    2007-09-01

    Modified two-dimensional (2D) triple-resonance H(C)P and H(P)C experiments based on INEPT/HMQC and double-INEPT schemes are applied to the study of organophosphorus compounds at natural abundances. The implementation of effective (1)H--(13)C gradient selection, additional purging pulsed field gradients, spinlock pulses, and improved phase cycling is demonstrated to allow weak correlation signals based on long-range couplings to be readily observed. Through the combination of two heteronuclear long-range coupling constants, (n)J(CH) and (n)J(PC) in H(C)P experiments or (n)J(PH) and (n)J(PC) in H(P)C experiments, protons can be correlated to a second heteronucleus through 4-7 chemical bonds. These experiments thus overcome the inherent limitations of classical (1)H-X HMBC experiments, which require a nonzero value of the heteronuclear coupling constant (n)J(XH). Ultra-broadband inversion composite pulses are successfully employed in the H(P)C INEPT/HMQC and H(P)C double-INEPT pulse sequences to increase the utility of the experiments and the quality of the obtained spectra. This work extends and completes a set of 2D phase-sensitive triple-resonance experiments applicable at natural abundances, and also offers insight into the methodology of triple-resonance experiments and the application of pulsed field gradients. A one-dimensional triple-resonance experiment employing carbon detection is suggested for accurate determination of small (n)J(PC).

  13. Influence of temperature and relative humidity conditions on the pan coating of hydroxypropyl cellulose molded capsules.

    PubMed

    Macchi, Elena; Zema, Lucia; Pandey, Preetanshu; Gazzaniga, Andrea; Felton, Linda A

    2016-03-01

    In a previous study, hydroxypropyl cellulose (HPC)-based capsular shells prepared by injection molding and intended for pulsatile release were successfully coated with 10 mg/cm(2) Eudragit® L film. The suitability of HPC capsules for the development of a colon delivery platform based on a time dependent approach was demonstrated. In the present work, data logging devices (PyroButton®) were used to monitor the microenvironmental conditions, i.e. temperature (T) and relative humidity (RH), during coating processes performed under different spray rates (1.2, 2.5 and 5.5 g/min). As HPC-based capsules present special features, a preliminary study was conducted on commercially available gelatin capsules for comparison purposes. By means of PyroButton data-loggers it was possible to acquire information about the impact of the effective T and RH conditions experienced by HPC substrates during the process on the technological properties and release performance of the coated systems. The use of increasing spray rates seemed to promote a tendency of the HPC shells to slightly swell at the beginning of the spraying process; moreover, capsules coated under spray rates of 1.2 and 2.5 g/min showed the desired release performance, i.e. ability to withstand the acidic media followed by the pulsatile release expected for uncoated capsules. Preliminary stability studies seemed to show that coating conditions might also influence the release performance of the system upon storage. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Effect of rice husk ash and fly ash on the compressive strength of high performance concrete

    NASA Astrophysics Data System (ADS)

    Van Lam, Tang; Bulgakov, Boris; Aleksandrova, Olga; Larsen, Oksana; Anh, Pham Ngoc

    2018-03-01

    The usage of industrial and agricultural wastes in building materials production plays an important role in improving the environment and the economy by preserving natural materials and land resources, reducing land, water and air pollution, and lowering waste handling and storage costs. This study focuses on mathematical modeling of the dependence of the compressive strength of high performance concrete (HPC) at the ages of 3, 7 and 28 days on the amounts of rice husk ash (RHA) and fly ash (FA) added to the concrete mixtures, using a central composite rotatable design. The study provides the second-order regression equation for each objective function, the response surfaces and corresponding contours of the regression equations, and the optimal points of HPC compressive strength. These objective functions, the compressive strength values of HPC at the ages of 3, 7 and 28 days, depend on two input variables: x1 (amount of RHA) and x2 (amount of FA). The Maple 13 program, solving the second-order regression equation, determines the optimum composition of the concrete mixture for obtaining high performance concrete and gives the maximum value of the HPC compressive strength at the age of 28 days. The results give a maximum 28-day compressive strength of 76.716 MPa when RHA = 0.1251 and FA = 0.3119 by mass of Portland cement.
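
    The form of the fitted model can be sketched with made-up numbers (the design points and strengths below are fictitious, not the paper's data): a second-order response surface y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2 is fitted by least squares and then searched for its maximum over the design region.

      # Illustrative second-order response-surface fit; all numbers are fictitious.
      import numpy as np

      x1 = np.array([0.05, 0.05, 0.20, 0.20, 0.125, 0.125, 0.125, 0.02, 0.23])   # RHA dose
      x2 = np.array([0.15, 0.45, 0.15, 0.45, 0.30, 0.09, 0.51, 0.30, 0.30])      # FA dose
      y  = np.array([68.0, 70.5, 71.0, 69.5, 76.0, 72.0, 71.5, 70.0, 72.5])      # MPa (made up)

      X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
      b, *_ = np.linalg.lstsq(X, y, rcond=None)

      def strength(v1, v2):
          return b @ np.array([1.0, v1, v2, v1**2, v2**2, v1 * v2])

      # Brute-force search for the predicted optimum dose over the design region.
      best = max((strength(v1, v2), v1, v2)
                 for v1 in np.linspace(0.02, 0.23, 50)
                 for v2 in np.linspace(0.09, 0.51, 50))
      print(best)   # (predicted max strength, RHA dose, FA dose)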

  15. High-energy supercapacitors based on hierarchical porous carbon with an ultrahigh ion-accessible surface area in ionic liquid electrolytes

    NASA Astrophysics Data System (ADS)

    Zhong, Hui; Xu, Fei; Li, Zenghui; Fu, Ruowen; Wu, Dingcai

    2013-05-01

    A very important yet really challenging issue to address is how to greatly increase the energy density of supercapacitors to approach or even exceed those of batteries without sacrificing the power density. Herein we report the fabrication of a new class of ultrahigh surface area hierarchical porous carbon (UHSA-HPC) based on the pore formation and widening of polystyrene-derived HPC by KOH activation, and highlight its superior ability for energy storage in supercapacitors with ionic liquid (IL) as electrolyte. The UHSA-HPC with a surface area of more than 3000 m2 g-1 shows an extremely high energy density, i.e., 118 W h kg-1 at a power density of 100 W kg-1. This is ascribed to its unique hierarchical nanonetwork structure with a large number of small-sized nanopores for IL storage and an ideal meso-/macroporous network for IL transfer.

  16. Fine-needle aspiration cytology of hemangiopericytoma: A report of five cases.

    PubMed

    Chhieng, D; Cohen, J M; Waisman, J; Fernandez, G; Cangiarella, J

    1999-08-25

    Hemangiopericytoma (HPC) is a relatively rare neoplasm, accounting for approximately 2.5% of all soft tissue tumors. Its histopathology has been well documented but to the authors' knowledge reports regarding its fine-needle aspiration (FNA) cytology rarely are encountered. In the current study the authors report the cytologic findings in FNA specimens from nine confirmed cases of HPC and attempt to correlate the cytologic features with the biologic outcomes. FNA was performed with or without radiologic guidance. Corresponding sections of tissue were reviewed in conjunction with the cytologic preparations. Nine FNAs were performed in 5 patients (3 men and 2 women) with an age range of 38-77 years (mean, 56 years). Two lesions were primary soft tissue lesions arising in the lower extremities; seven were recurrent or metastatic lesions from bone (one lesion), kidney (one lesion), pelvic fossa (one lesion), lower extremities (two lesions), trunk (one lesion), and breast (one lesion). All aspirates were cellular and were comprised of single and tightly packed clusters of oval to spindle-shaped cells aggregated around branched capillaries. Basement membrane material was observed in 6 cases (67%). The nuclei were uniform and oval, with finely granular chromatin and inconspicuous nucleoli in all cases except one. No mitotic figures or areas of necrosis were identified. A correct diagnosis of HPC was made on one primary lesion and all recurrent or metastatic lesions. HPCs show a spindle cell pattern in cytologic preparations and must be distinguished from more common spindle cell lesions. The presence of branched capillaries and abundant basement membrane material supports a diagnosis of HPC. Immunohistochemistry and electron microscopy performed on FNA samples may be helpful in the differential diagnosis. FNA is a useful and accurate tool with which to confirm recurrent or metastatic HPC; however, prediction of the biologic behavior of HPC based on cytologic features is not feasible. Cancer (Cancer Cytopathol) Copyright 1999 American Cancer Society.

  17. Inflammation and vascular remodeling in the ventral hippocampus contributes to vulnerability to stress.

    PubMed

    Pearson-Leary, J; Eacret, D; Chen, R; Takano, H; Nicholas, B; Bhatnagar, S

    2017-06-27

    During exposure to chronic stress, some individuals engage in active coping behaviors that promote resiliency to stress. Other individuals engage in passive coping that is associated with vulnerability to stress and with anxiety and depression. In an effort to identify novel molecular mechanisms that underlie vulnerability or resilience to stress, we used nonbiased analyses of microRNAs in the ventral hippocampus (vHPC) to identify those miRNAs differentially expressed in active (long-latency (LL)/resilient) or passive (short-latency (SL)/vulnerable) rats following chronic social defeat. In the vHPC of active coping rats, miR-455-3p level was increased, while miR-30e-3p level was increased in the vHPC of passive coping rats. Pathway analyses identified inflammatory and vascular remodeling pathways as enriched by genes targeted by these microRNAs. Utilizing several independent markers for blood vessels, inflammatory processes and neural activity in the vHPC, we found that SL/vulnerable rats exhibit increased neural activity, vascular remodeling and inflammatory processes that include both increased blood-brain barrier permeability and increased number of microglia in the vHPC relative to control and resilient rats. To test the relevance of these changes for the development of the vulnerable phenotype, we used pharmacological approaches to determine the contribution of inflammatory processes in mediating vulnerability and resiliency. Administration of the pro-inflammatory cytokine vascular endothelial growth factor-164 increased vulnerability to stress, while the non-steroidal anti-inflammatory drug meloxicam attenuated vulnerability. Collectively, these results show that vulnerability to stress is determined by a re-designed neurovascular unit characterized by increased neural activity, vascular remodeling and pro-inflammatory mechanisms in the vHPC. These results suggest that dampening inflammatory processes by administering anti-inflammatory agents reduces vulnerability to stress. These results have translational relevance as they suggest that administration of anti-inflammatory agents may reduce the impact of stress or trauma in vulnerable individuals.

  18. Clearing your Desk! Software and Data Services for Collaborative Web Based GIS Analysis

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Gichamo, T.; Yildirim, A. A.; Liu, Y.

    2015-12-01

    Can your desktop computer crunch the large GIS datasets that are becoming increasingly common across the geosciences? Do you have access to or the know-how to take advantage of advanced high performance computing (HPC) capability? Web based cyberinfrastructure takes work off your desk or laptop computer and onto infrastructure or "cloud" based data and processing servers. This talk will describe the HydroShare collaborative environment and web based services being developed to support the sharing and processing of hydrologic data and models. HydroShare supports the upload, storage, and sharing of a broad class of hydrologic data including time series, geographic features and raster datasets, multidimensional space-time data, and other structured collections of data. Web service tools and a Python client library provide researchers with access to HPC resources without requiring them to become HPC experts. This reduces the time and effort spent in finding and organizing the data required to prepare the inputs for hydrologic models and facilitates the management of online data and execution of models on HPC systems. This presentation will illustrate the use of web based data and computation services from both the browser and desktop client software. These web-based services implement the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation, generation of hydrology-based terrain information, and preparation of hydrologic model inputs. They allow users to develop scripts on their desktop computer that call analytical functions that are executed completely in the cloud, on HPC resources using input datasets stored in the cloud, without installing specialized software, learning how to use HPC, or transferring large datasets back to the user's desktop. These cases serve as examples for how this approach can be extended to other models to enhance the use of web and data services in the geosciences.
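
    The abstract describes calling cloud/HPC-hosted analysis functions from a desktop script; since the actual client library is not shown here, the following is only a hedged sketch of that pattern with hypothetical endpoints and parameters (upload a DEM, request a TauDEM-based watershed delineation, poll until the remote run finishes).

      # Hypothetical web-service client; not the HydroShare or TauDEM API.
      import time
      import requests

      SERVICE = "https://hydro.example.org/api"   # hypothetical service root

      def delineate(dem_path, outlet_lat, outlet_lon, token):
          headers = {"Authorization": f"Bearer {token}"}
          with open(dem_path, "rb") as f:         # upload the DEM to cloud storage
              up = requests.post(f"{SERVICE}/datasets", files={"file": f}, headers=headers)
          up.raise_for_status()
          job = requests.post(f"{SERVICE}/jobs",  # ask the service to run the analysis remotely
                              json={"tool": "taudem_watershed",
                                    "dataset": up.json()["id"],
                                    "outlet": [outlet_lat, outlet_lon]},
                              headers=headers)
          job.raise_for_status()
          job_id = job.json()["id"]
          while True:                             # poll; the heavy lifting happens on HPC/cloud
              state = requests.get(f"{SERVICE}/jobs/{job_id}", headers=headers).json()
              if state["status"] in ("finished", "failed"):
                  return state
              time.sleep(10)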

  19. Inflammation and vascular remodeling in the ventral hippocampus contributes to vulnerability to stress

    PubMed Central

    Pearson-Leary, J; Eacret, D; Chen, R; Takano, H; Nicholas, B; Bhatnagar, S

    2017-01-01

    During exposure to chronic stress, some individuals engage in active coping behaviors that promote resiliency to stress. Other individuals engage in passive coping that is associated with vulnerability to stress and with anxiety and depression. In an effort to identify novel molecular mechanisms that underlie vulnerability or resilience to stress, we used nonbiased analyses of microRNAs in the ventral hippocampus (vHPC) to identify those miRNAs differentially expressed in active (long-latency (LL)/resilient) or passive (short-latency (SL)/vulnerable) rats following chronic social defeat. In the vHPC of active coping rats, miR-455-3p level was increased, while miR-30e-3p level was increased in the vHPC of passive coping rats. Pathway analyses identified inflammatory and vascular remodeling pathways as enriched by genes targeted by these microRNAs. Utilizing several independent markers for blood vessels, inflammatory processes and neural activity in the vHPC, we found that SL/vulnerable rats exhibit increased neural activity, vascular remodeling and inflammatory processes that include both increased blood–brain barrier permeability and increased number of microglia in the vHPC relative to control and resilient rats. To test the relevance of these changes for the development of the vulnerable phenotype, we used pharmacological approaches to determine the contribution of inflammatory processes in mediating vulnerability and resiliency. Administration of the pro-inflammatory cytokine vascular endothelial growth factor-164 increased vulnerability to stress, while the non-steroidal anti-inflammatory drug meloxicam attenuated vulnerability. Collectively, these results show that vulnerability to stress is determined by a re-designed neurovascular unit characterized by increased neural activity, vascular remodeling and pro-inflammatory mechanisms in the vHPC. These results suggest that dampening inflammatory processes by administering anti-inflammatory agents reduces vulnerability to stress. These results have translational relevance as they suggest that administration of anti-inflammatory agents may reduce the impact of stress or trauma in vulnerable individuals. PMID:28654094

  20. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
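
    The spatiotemporal indexing idea can be shown with a toy relational table (the schema, variable name and HDFS paths below are illustrative, not the NCCS implementation): each record maps a variable's time interval and spatial bounding box to the location of the block holding it, so a query reads only the blocks it actually needs.

      # Toy spatiotemporal chunk index in SQLite.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("""CREATE TABLE chunk_index (
          variable TEXT, t0 REAL, t1 REAL,
          lat0 REAL, lat1 REAL, lon0 REAL, lon1 REAL,
          location TEXT)""")
      db.executemany("INSERT INTO chunk_index VALUES (?,?,?,?,?,?,?,?)", [
          ("T2M", 0, 24, -90, 90, -180,   0, "hdfs:///merra/T2M/blk_000"),
          ("T2M", 0, 24, -90, 90,    0, 180, "hdfs:///merra/T2M/blk_001"),
          ("T2M", 24, 48, -90, 90, -180, 180, "hdfs:///merra/T2M/blk_002"),
      ])

      def blocks_for(variable, t, lat, lon):
          rows = db.execute("""SELECT location FROM chunk_index
              WHERE variable=? AND t0<=? AND ?<t1
                AND lat0<=? AND ?<=lat1 AND lon0<=? AND ?<=lon1""",
              (variable, t, t, lat, lat, lon, lon))
          return [r[0] for r in rows]

      print(blocks_for("T2M", 12.0, 40.0, -105.0))   # -> ['hdfs:///merra/T2M/blk_000']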

  1. Selected results from LLNL-Hughes RAR for West Coast Scotland Experiment 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S.K.; Johnston, B.; Twogood, R.

    1993-01-05

    The joint US-UK 1992 West Coast Scotland Experiment (WCSEX) was held in the Sound of Sleat from June 6 to 25. The LLNL-Hughes team fielded a fully polarimetric X-band hill-side real aperture radar to collect internal wave wake data. We present here a sample data set of the best radar runs.

  2. Development of Operational Free-Space-Optical (FSO) Laser Communication Systems Final Report CRADA No. TC02093.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggiero, A.; Orgren, A.

    This project was a collaborative effort between Lawrence Livermore National Security, LLC (formerly The Regents of the University of California)/Lawrence Livermore National Laboratory (LLNL) and LGS Innovations, LLC (formerly Lucent Technologies, Inc.), to develop long-range and mobile operational free-space optical (FSO) laser communication systems for specialized government applications. LLNL and LGS Innovations (formerly Lucent Bell Laboratories Government Communications Systems) performed this work for a United States Government (USG) Intelligence Work for Others (I-WFO) customer, also referred to as the "Government Customer", "Customer" or "Government Sponsor." The CRADA was a critical and required part of the LLNL technology transfer plan for the customer.

  3. Lawrence Livermore National Laboratory Campus Capability Plan for 2018-2028

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, C.; Arsenlis, T.; Bailey, A.

    Lawrence Livermore National Laboratory Campus Capability Plan for 2018-2028. Lawrence Livermore National Laboratory (LLNL) is one of three national laboratories that are part of the National Nuclear Security Administration. LLNL provides critical expertise to strengthen U.S. security through development and application of world-class science and technology that: Ensures the safety, reliability, and performance of the U.S. nuclear weapons stockpile; Promotes international nuclear safety and nonproliferation; Reduces global danger from weapons of mass destruction; Supports U.S. leadership in science and technology. Essential to the execution and continued advancement of these mission areas are responsive infrastructure capabilities. This report showcases each LLNL capability area and describes the mission, science, and technology efforts enabled by LLNL infrastructure, as well as future infrastructure plans.

  4. Switching the JLab Accelerator Operations Environment from an HP-UX Unix-based to a PC/Linux-based environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mcguckin, Theodore

    2008-10-01

    The Jefferson Lab Accelerator Controls Environment (ACE) was predominantly based on the HP-UX Unix platform from 1987 through the summer of 2004. During this period the Accelerator Machine Control Center (MCC) underwent a major renovation which included introducing Redhat Enterprise Linux machines, first as specialized process servers and then gradually as general login servers. As computer programs and scripts required to run the accelerator were modified, and inherent problems with the HP-UX platform compounded, more development tools became available for use with Linux and the MCC began to be converted over. In May 2008 the last HP-UX Unix login machine was removed from the MCC, leaving only a few Unix-based remote-login servers still available. This presentation will explore the process of converting an operational Control Room environment from the HP-UX to Linux platform as well as the many hurdles that had to be overcome throughout the transition period (including a discussion of ...)

  5. Real Time Linux - The RTOS for Astronomy?

    NASA Astrophysics Data System (ADS)

    Daly, P. N.

    The BoF was attended by about 30 participants, and a free CD of real time Linux (based upon RedHat 5.2) was available. There was a detailed presentation on the nature of real time Linux and the variants for hard real time: New Mexico Tech's RTL and DIAPM's RTAI. Comparison tables between standard Linux and real time Linux responses to time interval generation and interrupt response latency were presented (see elsewhere in these proceedings). The present recommendations are to use RTL for UP machines running the 2.0.x kernels and RTAI for SMP machines running the 2.2.x kernel. Support, both academic and commercial, is available. Some known limitations were presented and the solutions reported, e.g., debugging and hardware support. The features of RTAI (scheduler, fifos, shared memory, semaphores, message queues, and RPCs) were described. Typical performance statistics were presented: Pentium-based oneshot tasks running at > 30 kHz, 486-based oneshot tasks running at ~10 kHz, and periodic timer tasks running in excess of 90 kHz with average zero jitter peaking to ~13 μs (UP) and ~30 μs (SMP). Some detail on kernel module programming, including coding examples, was presented, showing a typical data acquisition system generating simulated (random) data and writing to a shared memory buffer and a fifo buffer to communicate between real time Linux and user space. All coding examples were complete and tested under RTAI v0.6 and the 2.2.12 kernel. Finally, arguments were raised in support of real time Linux: it is open source, free under the GPL, enables rapid prototyping, has good support, and offers a fully functioning workstation capable of co-existing hard real time performance. The counterweights (the negatives) were also discussed: the lack of platforms (x86 and PowerPC only at present), lack of board support, promiscuous root access, and the danger of ignorance of real time programming issues. See ftp://orion.tuc.noao.edu/pub/pnd/rtlbof.tgz for the StarOffice overheads for this presentation.
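
    The abstract describes, but does not reproduce, the RTAI kernel-module coding examples. The sketch below is a minimal illustration of that pattern (a periodic task pushing simulated samples to a real-time FIFO read from user space); it is not the BoF's own code, and it uses the kernel-module API names of later RTAI releases, so exact headers and signatures may differ from the RTAI v0.6 / 2.2.12-kernel versions discussed.

      /*
       * Sketch of an RTAI-style periodic data-acquisition task that pushes
       * simulated samples to a real-time FIFO (read from user space via a
       * device such as /dev/rtf0).  Illustrative only; API names follow
       * later RTAI releases and are assumptions for the v0.6 era.
       */
      #include <linux/module.h>
      #include <rtai.h>
      #include <rtai_sched.h>
      #include <rtai_fifos.h>

      #define FIFO_NR    0          /* user space reads /dev/rtf0            */
      #define PERIOD_NS  1000000    /* 1 ms period, i.e. 1 kHz sampling rate */

      static RT_TASK acq_task;

      static void acq_loop(long arg)
      {
          int sample = 0;
          while (1) {
              sample++;                                 /* stand-in for a real ADC read */
              rtf_put(FIFO_NR, &sample, sizeof(sample));
              rt_task_wait_period();                    /* sleep until the next period  */
          }
      }

      int init_module(void)
      {
          RTIME period;

          rtf_create(FIFO_NR, 4096);                    /* 4 kB FIFO to user space */
          rt_set_periodic_mode();
          period = start_rt_timer(nano2count(PERIOD_NS));
          rt_task_init(&acq_task, acq_loop, 0, 2048, 0, 0, NULL);
          rt_task_make_periodic(&acq_task, rt_get_time() + period, period);
          return 0;
      }

      void cleanup_module(void)
      {
          stop_rt_timer();
          rt_task_delete(&acq_task);
          rtf_destroy(FIFO_NR);
      }

      MODULE_LICENSE("GPL");

    A user-space reader would simply open /dev/rtf0 and read fixed-size records, which is the shared-memory/FIFO hand-off the BoF examples demonstrated.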

  6. The impact of the U.S. supercomputing initiative will be global

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Dona

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). However, this bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

  7. System Resource Allocation Requests | High-Performance Computing | NREL

    Science.gov Websites

    An HPC User Account is required to utilize the online allocation request system. If you need an HPC User Account, please request one online: visit User Accounts, click the green "Request Account" button (this will direct …), and follow the online instructions provided in the DocuSign form. Write "Need HPC User Account to use …

  8. High Performance Computing (HPC) Innovation Service Portal Pilots Cloud Computing (HPC-ISP Pilot Cloud Computing)

    DTIC Science & Technology

    2011-08-01

    Figure captions recoverable from the report: "Architectural diagram of running Blender on Amazon EC2 through Nimbis" (Figure 4); "Classification of streaming data. Example input images (top left). All digit prototypes (cluster centers) found, with size proportional to frequency (top …".

  9. 75 FR 80112 - Designation of Three Individuals and Seven Entities Pursuant to Executive Order 13224

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-21

    ... INDUSTRIES; a.k.a. GOLFRATE HPC INDUSTRIES; a.k.a. GOLFRATE PAINTS (TINTAS DE DYRUP)), Avenida 4 de Fevereiro... Distribution, Golfrate Food Industries, Golfrate HPC Industries and Golfrate Paints (Tintas de Dyrup) are..., Banjul, The Gambia; Pipeline Road, Banjul, The Gambia [SDGT] 9. OVLAS TRADING S.A. (a.k.a. OVLAS TRADING...

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klitsner, Tom

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia (the NNSA ASC program and Sandia's Institutional HPC Program) are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  11. Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Canon, Shane

    2018-01-24

    DOE JGI's Zhong Wang, chair of the High-performance Computing session, gives a brief introduction before Berkeley Lab's Shane Canon talks about "Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  12. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Allcock, William; Beggio, Chris

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  13. Comparison of cryopreservation bags for hematopoietic progenitor cells using a WBC-enriched product.

    PubMed

    Dijkstra-Tiekstra, Margriet J; Hazelaar, Sandra; Gkoumassi, Effimia; Weggemans, Margienus; de Wildt-Eggen, Janny

    2015-04-01

    Hematopoietic progenitor cells (HPC) are stored in cryopreservation bags that are resistant to liquid nitrogen. Since Cryocyte bags of Baxter (B-bags) are no longer available, an alternative bag was sought. Also, the influence of freezing volume was studied. Miltenyi Biotec (MB)- and MacoPharma (MP)-bags passed the integrity tests without failure. Comparing MB- and MP-bags with B-bags, no difference in WBC recovery or viability was found when using a WBC-enriched product as a "dummy" HPC product. Further, a freezing volume of 30 mL resulted in better WBC recovery and viability than 60 mL. Additional studies using real HPC might be necessary. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Thrust Area Report, Engineering Research, Development and Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langland, R. T.

    1997-02-01

    The mission of the Engineering Research, Development, and Technology Program at Lawrence Livermore National Laboratory (LLNL) is to develop the knowledge base, process technologies, specialized equipment, tools and facilities to support current and future LLNL programs. Engineering's efforts are guided by a strategy that results in dual benefit: first, in support of Department of Energy missions, such as national security through nuclear deterrence; and second, in enhancing the nation's economic competitiveness through our collaboration with U.S. industry in pursuit of the most cost-effective engineering solutions to LLNL programs. To accomplish this mission, the Engineering Research, Development, and Technology Program has two important goals: (1) identify key technologies relevant to LLNL programs where we can establish unique competencies, and (2) conduct high-quality research and development to enhance our capabilities and establish ourselves as the world leaders in these technologies. To focus Engineering's efforts, technology "thrust areas" are identified and technical leaders are selected for each area. The thrust areas are comprised of integrated engineering activities, staffed by personnel from the nine electronics and mechanical engineering divisions, and from other LLNL organizations. This annual report, organized by thrust area, describes Engineering's activities for fiscal year 1996. The report provides timely summaries of objectives, methods, and key results from eight thrust areas: Computational Electronics and Electromagnetics; Computational Mechanics; Microtechnology; Manufacturing Technology; Materials Science and Engineering; Power Conversion Technologies; Nondestructive Evaluation; and Information Engineering. Readers desiring more information are encouraged to contact the individual thrust area leaders or authors. 198 refs., 206 figs., 16 tabs.

  15. Report on the B-Fields at NIF Workshop Held at LLNL October 12-13, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, K. B.; Moody, J. D.

    2015-12-13

    A national ICF laboratory workshop on requirements for a magnetized target capability on NIF was held by NIF at LLNL on October 12 and 13, attended by experts from LLNL, SNL, LLE, LANL, GA, and NRL. Advocates for indirect drive (LLNL), magnetic (Z) drive (SNL), polar direct drive (LLE), and basic science needing applied B fields (many institutions) presented and discussed requirements for the magnetized target capabilities they would like to see. A 30 T capability was most frequently requested. A phased operation increasing the field in steps experimentally can be envisioned. The NIF management will take the inputs from the scientific community represented at the workshop and recommend pulse-powered magnet parameters for NIF that best meet the collective user requests. In parallel, LLNL will continue investigating magnets for future generations that might be powered by compact laser-B-field generators (Moody, Fujioka, Santos, Woolsey, Pollock). The NIF facility engineers will start to analyze compatibility of the recommended pulsed magnet parameters (size, field, rise time, materials) with NIF chamber constraints, diagnostic access, and final optics protection against debris in FY16. The objective of this assessment will be to develop a schedule for achieving an initial B-field capability. Based on an initial assessment, room temperature magnetized gas capsules will be fielded on NIF first. Magnetized cryo-ice-layered targets will take longer (more compatibility issues). Magnetized wetted foam DT targets (Olson) may have somewhat fewer compatibility issues, making them a more likely choice for the first cryo-ice-layered target fielded with applied Bz.

  16. LINC Modeling of August 19, 2004 Queen City Barrel Company Fire In Cincinnati, OH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillon, M B; Nasstrom, J S; Baskett, R L

    This report details the information received, assumptions made, actions taken, and products delivered by the Lawrence Livermore National Laboratory (LLNL) during the August 19, 2004 fire at the Queen City Barrel Company (QCB) in Cincinnati, OH. During the course of the event, LLNL provided four sets of plume model products to various Cincinnati emergency response organizations.

  17. Gas Atomization Equipment Statement of Work and Specification for Engineering design, Fabrication, Testing, and Installation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutaleb, T.; Pluschkell, T. P.

    The Gas Atomization Equipment will be used to fabricate metallic powder suitable for powder bed fusion additive manufacturing to support Lawrence Livermore National Laboratory (LLNL) research and development. The project will modernize LLNL's capabilities to develop spherical reactive, refractory, and radioactive powders in the 10-75 μm diameter size range.

  18. Silicon microelectronic field-emissive devices for advanced display technology

    NASA Astrophysics Data System (ADS)

    Morse, J. D.

    1993-03-01

    Field-emission displays (FEDs) offer the potential advantages of high luminous efficiency, low power consumption, and low cost compared to AMLCD or CRT technologies. An LLNL team has developed silicon-point field emitters for vacuum triode structures and has also used thin-film processing techniques to demonstrate planar edge-emitter configurations. LLNL is interested in contributing its experience in this and other FED-related technologies to collaborations for commercial FED development. At LLNL, FED development is supported by computational capabilities in charge transport and surface/interface modeling in order to develop smaller, low-work-function field emitters using a variety of materials and coatings. Thin-film processing, microfabrication, and diagnostic/test labs permit experimental exploration of emitter and resistor structures. High-field standoff technology is an area of long-standing expertise that guides development of low-cost spacers for FEDs. Vacuum sealing facilities are available to complete the FED production engineering process. Drivers constitute a significant fraction of the cost of any flat-panel display. LLNL has an advanced packaging group that can provide chip-on-glass technologies and three-dimensional interconnect generation permitting driver placement on either the front or the back of the display substrate.

  19. Advanced Analog Signal Processing for Fuzing Final Report CRADA No. TC-1306-96

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, C. Y.; Spencer, D.

    The purpose of this CRADA between LLNL and Kaman Aerospace/Raymond Engineering Operations (Raymond) was to demonstrate the feasibility of using Analog/Digital Neural Network (ANN) technology for advanced signal processing, fuzing, and other applications. This cooperation sought to leverage the expertise and capabilities of both parties: Raymond to develop the signature recognition hardware system, using Raymond's extensive experience in the area of system development plus Raymond's knowledge of military applications, and LLNL to apply ANN and related technologies to an area of significant interest to the United States government. This CRADA effort was anticipated to be a three-year project consisting of three phases: Phase I, Proof-of-Principle Demonstration; Phase II, Proof-of-Design, involving the development of a form-factored integrated sensor and ANN technology processor; and Phase III, Final Design and Release of the integrated sensor and ANN fabrication process. Under Phase I, to be conducted during calendar year 1996, Raymond was to deliver to LLNL an architecture (design) for an ANN chip. LLNL was to translate the design into a stepper mask and to produce and test a prototype chip from the Raymond design.

  20. 2013 R&D 100 Award: New tech could mean more power for fiber lasers

    ScienceCinema

    Dawson, Jay

    2018-01-16

    An LLNL team of six physicists has developed a new technology that is a stepping stone toward overcoming some of the limitations of high-power fiber lasers. Their technology, dubbed "Efficient Mode-Converters for High-Power Fiber Amplifiers," allows the power of fiber lasers to be increased while maintaining high beam quality. Currently, fiber lasers are used in machining, on factory floors and in a number of defense applications, and can produce tens of kilowatts of power. The conventional fiber laser design features a circular core and has fundamental limitations that make it impractical to allow higher laser power unless the core area is increased. LLNL researchers have pioneered a design to increase the laser's core area along the axis of the ribbon fiber. Their design makes it difficult to use a conventional laser beam, so the LLNL team converted the beam into a profile that propagates into the ribbon fiber and is converted back once it is amplified. The use of this LLNL technology will permit the construction of higher power lasers for lower costs and increase the power of fiber lasers from tens of kilowatts to about 100 kilowatts and potentially even higher.

  1. Linux Incident Response Volatile Data Analysis Framework

    ERIC Educational Resources Information Center

    McFadden, Matthew

    2013-01-01

    Cyber incident response is an emphasized subject area in cybersecurity in information technology with increased need for the protection of data. Due to ongoing threats, cybersecurity imposes many challenges and requires new investigative response techniques. In this study a Linux Incident Response Framework is designed for collecting volatile data…

  2. Onboard Flow Sensing For Downwash Detection and Avoidance On Small Quadrotor Helicopters

    DTIC Science & Technology

    2015-01-01

    … onboard computers, one for flight stabilization and a Linux computer for sensor integration and control calculations. The Linux computer runs Robot … Cited reference: Hirokawa, D. Kubo, S. Suzuki, J. Meguro, and T. Suzuki, "Small UAV for immediate hazard map generation," AIAA Infotech@Aerospace Conf., May 2007.

  3. Cross platform development using Delphi and Kylix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, J.L.; Nishimura, H.; Timossi, C.

    2002-10-08

    A cross-platform component for EPICS Simple Channel Access (SCA) has been developed for use with Delphi on Windows and Kylix on Linux. An EPICS controls GUI application developed on Windows runs on Linux by simply rebuilding it, and vice versa. This paper describes the technical details of the component.
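
    As a rough illustration of what such a Simple Channel Access wrapper hides from the GUI developer, the C sketch below performs a single blocking read of a process variable using the standard EPICS Channel Access client library. It is not the Delphi/Kylix component itself; the PV name "llnl:demo:temperature" is a placeholder, and ca_context_create assumes an EPICS base of 3.14 or later.

      /*
       * Minimal EPICS Channel Access read, sketching the call sequence a
       * Simple Channel Access style wrapper typically hides.  Link with
       * -lca from an EPICS base installation; the PV name is hypothetical.
       */
      #include <stdio.h>
      #include <cadef.h>

      int main(void)
      {
          chid chan;
          double value;

          ca_context_create(ca_disable_preemptive_callback);
          ca_create_channel("llnl:demo:temperature", NULL, NULL, 0, &chan);
          if (ca_pend_io(5.0) != ECA_NORMAL) {          /* wait for connection */
              fprintf(stderr, "channel did not connect\n");
              return 1;
          }
          ca_get(DBR_DOUBLE, chan, &value);             /* queue the read ...  */
          ca_pend_io(5.0);                              /* ... and flush it    */
          printf("PV value: %f\n", value);

          ca_clear_channel(chan);
          ca_context_destroy();
          return 0;
      }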

  4. Nuclear β-Catenin Expression is Frequent in Sinonasal Hemangiopericytoma and Its Mimics.

    PubMed

    Jo, Vickie Y; Fletcher, Christopher D M

    2017-06-01

    Sinonasal hemangiopericytoma (HPC) is a tumor showing pericytic myoid differentiation that arises in the nasal cavity and paranasal sinuses. CTNNB1 mutations appear to be a consistent aberration in sinonasal HPC, and nuclear expression of β-catenin has been reported. Our aim was to evaluate the frequency of β-catenin expression in sinonasal HPC and its histologic mimics in the upper aerodigestive tract. Cases were retrieved from the surgical pathology and consultation files. Immunohistochemical staining for β-catenin was performed on 50 soft tissue tumors arising in the sinonasal tract or oral cavity, and nuclear staining was recorded semiquantitatively by extent and intensity. Nuclear reactivity for β-catenin was present in 19/20 cases of sinonasal HPC; 17 showed moderate-to-strong multifocal or diffuse staining, and 2 had moderate focal nuclear reactivity. All solitary fibrous tumors (SFT) (10/10) showed focal-to-multifocal nuclear staining, varying from weak to strong in intensity. Most cases of synovial sarcoma (9/10) showed nuclear β-catenin expression in the spindle cell component, ranging from focal-weak to strong-multifocal. No cases of myopericytoma (0/10) showed any nuclear β-catenin expression. β-catenin expression is prevalent in sinonasal HPC, but is also frequent in SFT and synovial sarcoma. Our findings indicate that β-catenin is not a useful diagnostic tool in the evaluation of spindle cell tumors with a prominent hemangiopericytoma-like vasculature in the sinonasal tract and oral cavity, and that definitive diagnosis relies on the use of a broader immunohistochemical panel.

  5. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need of a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
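
    The parallel detection phase described above is more elaborate than can be shown here, but the hedged C/OpenMP sketch below conveys the general shape of distributing a z-score-style scan over many cores: compute global statistics in one parallel pass, then flag suspicious events in a second. It is a generic illustration, not the authors' iterative z algorithm or their information-based detectors.

      /*
       * Generic parallel z-score outlier flagging with OpenMP.  Compile
       * with, e.g., cc -fopenmp -O2 zflag.c -lm.  Illustrative only.
       */
      #include <math.h>
      #include <stdio.h>
      #include <omp.h>

      /* Flag events whose |z-score| exceeds thresh; returns count flagged. */
      static int flag_outliers(const double *x, int n, double thresh, int *flags)
      {
          double sum = 0.0, sumsq = 0.0;
          int i, count = 0;

          /* Pass 1: global statistics, reduced across threads. */
          #pragma omp parallel for reduction(+:sum,sumsq)
          for (i = 0; i < n; i++) {
              sum   += x[i];
              sumsq += x[i] * x[i];
          }

          double mean = sum / n;
          double var  = sumsq / n - mean * mean;
          double sd   = sqrt(var > 0.0 ? var : 0.0);

          /* Pass 2: flag events far from the mean. */
          #pragma omp parallel for reduction(+:count)
          for (i = 0; i < n; i++) {
              flags[i] = (sd > 0.0) && (fabs(x[i] - mean) / sd > thresh);
              count += flags[i];
          }
          return count;
      }

      int main(void)
      {
          double x[] = { 1.0, 1.1, 0.9, 1.05, 0.95, 12.0, 1.02, 0.98 };
          int flags[8];
          /* With threshold 2.5 the 12.0 sample is flagged. */
          int n = flag_outliers(x, 8, 2.5, flags);
          printf("flagged %d outlier(s)\n", n);
          return 0;
      }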

  6. Towards Cloud-based Asynchronous Elasticity for Iterative HPC Applications

    NASA Astrophysics Data System (ADS)

    da Rosa Righi, Rodrigo; Facco Rodrigues, Vinicius; André da Costa, Cristiano; Kreutz, Diego; Heiss, Hans-Ulrich

    2015-10-01

    Elasticity is one of the key features of cloud computing. It allows applications to dynamically scale computing and storage resources, avoiding over- and under-provisioning. In high performance computing (HPC), initiatives are normally modeled to handle bag-of-tasks or key-value applications through a load balancer and a loosely-coupled set of virtual machine (VM) instances. In the joint field of Message Passing Interface (MPI) and tightly-coupled HPC applications, we observe the need for rewriting source code, prior knowledge of the application, and/or stop-reconfigure-and-go approaches to address cloud elasticity. Besides, there are problems related to how to profit from this new feature in the HPC scope, since in MPI 2.0 applications the programmers need to handle communicators by themselves, and a sudden consolidation of a VM, together with a process, can compromise the entire execution. To address these issues, we propose a PaaS-based elasticity model, named AutoElastic. It acts as a middleware that allows iterative HPC applications to take advantage of dynamic resource provisioning of cloud infrastructures without any major modification. AutoElastic provides a new concept denoted here as asynchronous elasticity, i.e., it provides a framework to allow applications to either increase or decrease their computing resources without blocking the current execution. The feasibility of AutoElastic is demonstrated through a prototype that runs a CPU-bound numerical integration application on top of the OpenNebula middleware. The results showed a saving of about 3 min at each scaling-out operation, emphasizing the contribution of the new concept in contexts where seconds are precious.
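
    The communicator-management burden mentioned above comes from MPI-2 dynamic process management. The hedged C sketch below shows the bare mechanism a middleware such as AutoElastic would otherwise have to orchestrate by hand: spawning extra workers and merging them into a new intracommunicator. The "./worker" executable name is a placeholder, and the spawned program would itself call MPI_Init and MPI_Comm_get_parent.

      /*
       * Sketch of MPI-2 dynamic scaling: the running job spawns 4 extra
       * worker processes and merges parents and children into one
       * intracommunicator so ordinary collectives keep working afterwards.
       */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          MPI_Comm workers, everyone;
          int world_size, merged_size;

          MPI_Init(&argc, &argv);
          MPI_Comm_size(MPI_COMM_WORLD, &world_size);

          /* Scale out: collectively ask the runtime for 4 more processes. */
          MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                         0, MPI_COMM_WORLD, &workers, MPI_ERRCODES_IGNORE);

          /* Merge the intercommunicator into a single intracommunicator. */
          MPI_Intercomm_merge(workers, 0, &everyone);
          MPI_Comm_size(everyone, &merged_size);
          printf("grew from %d to %d processes\n", world_size, merged_size);

          MPI_Comm_free(&everyone);
          MPI_Comm_disconnect(&workers);
          MPI_Finalize();
          return 0;
      }

    Shrinking is the harder half: consolidating a VM that still hosts a process invalidates communicators mid-run, which is exactly the failure mode asynchronous elasticity is designed to avoid.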

  7. On the dynamic nature of the engram: evidence for circuit-level reorganization of object memory traces following reactivation.

    PubMed

    Winters, Boyer D; Tucci, Mark C; Jacklin, Derek L; Reid, James M; Newsome, James

    2011-11-30

    Research has implicated the perirhinal cortex (PRh) in several aspects of object recognition memory. The specific role of the hippocampus (HPC) remains controversial, but its involvement in object recognition may pertain to processing contextual information in relation to objects rather than object representation per se. Here we investigated the roles of the PRh and HPC in object memory reconsolidation using the spontaneous object recognition task for rats. Intra-PRh infusions of the protein synthesis inhibitor anisomycin immediately following memory reactivation prevented object memory reconsolidation. Similar deficits were observed when a novel object or a salient contextual change was introduced during the reactivation phase. Intra-HPC infusions of anisomycin, however, blocked object memory reconsolidation only when a contextual change was introduced during reactivation. Moreover, disrupting functional interaction between the HPC and PRh by infusing anisomycin unilaterally into each structure in opposite hemispheres also impaired reconsolidation when reactivation was done in an altered context. These results show for the first time that the PRh is critical for reconsolidation of object memory traces and provide insight into the dynamic process of object memory storage; the selective requirement for hippocampal involvement following reactivation in an altered context suggests a substantial circuit level object trace reorganization whereby an initially PRh-dependent object memory becomes reliant on both the HPC and PRh and their interaction. Such trace reorganization may play a central role in reconsolidation-mediated memory updating and could represent an important aspect of lingering consolidation processes proposed to underlie long-term memory modulation and stabilization.

  8. Applying Science and Technology to Combat WMD Terrorism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wuest, C R; Werne, R W; Colston, B W

    2006-05-04

    Lawrence Livermore National Laboratory (LLNL) is developing and fielding advanced strategies that dramatically improve the nation's capabilities to prevent, prepare for, detect, and respond to terrorist use of chemical, biological, radiological, nuclear, and explosive (CBRNE) weapons. The science, technology, and integrated systems we provide are informed by and developed with key partners and end users. LLNL's long-standing role as one of the two principal U.S. nuclear weapons design laboratories has led to significant resident expertise for health effects of exposure to radiation, radiation detection technologies, characterization of radioisotopes, and assessment and response capabilities for terrorist nuclear weapons use. This paper provides brief overviews of a number of technologies developed at LLNL that are being used to address national security needs to confront the growing threats of CBRNE terrorism.

  9. Applying science and technology to combat WMD terrorism

    NASA Astrophysics Data System (ADS)

    Wuest, Craig R.; Werne, Roger W.; Colston, Billy W.; Hartmann-Siantar, Christine L.

    2006-05-01

    Lawrence Livermore National Laboratory (LLNL) is developing and fielding advanced strategies that dramatically improve the nation's capabilities to prevent, prepare for, detect, and respond to terrorist use of chemical, biological, radiological, nuclear, and explosive (CBRNE) weapons. The science, technology, and integrated systems we provide are informed by and developed with key partners and end users. LLNL's long-standing role as one of the two principal U.S. nuclear weapons design laboratories has led to significant resident expertise for health effects of exposure to radiation, radiation detection technologies, characterization of radioisotopes, and assessment and response capabilities for terrorist nuclear weapons use. This paper provides brief overviews of a number of technologies developed at LLNL that are being used to address national security needs to confront the growing threats of CBRNE terrorism.

  10. The Activity of Thalamic Nucleus Reuniens Is Critical for Memory Retrieval, but Not Essential for the Early Phase of "Off-Line" Consolidation

    ERIC Educational Resources Information Center

    Mei, Hao; Logothetis, Nikos K.; Eschenko, Oxana

    2018-01-01

    Spatial navigation depends on the hippocampal function, but also requires bidirectional interactions between the hippocampus (HPC) and the prefrontal cortex (PFC). The cross-regional communication is typically regulated by critical nodes of a distributed brain network. The thalamic nucleus reuniens (RE) is reciprocally connected to both HPC and…

  11. Combined Therapy against Recurrent Hemangiopericytoma: A Case Report

    PubMed Central

    Li, Xiao-dong; Jiang, Jing-ting; Wu, Chang-ping

    2012-01-01

    A patient with a seven-year history of recurrent metastatic hemangiopericytoma (HPC) was admitted. During his treatment, he received surgical resection, radiotherapy, radiofrequency hyperthermia and chemotherapy using combined doxorubicin, dacarbazin, vincristine, ginsenoside Rg3, and recombinant human endostatin. This synergistic method provides an encouraging model for treating HPC. PMID:23691471

  12. Effects of herbaceous and woody plant control on longleaf pine growth and understory plant cover

    Treesearch

    James D. Haywood

    2013-01-01

    To determine whether herbaceous or woody plants are more competitive with longleaf pine (Pinus palustris Mill.) trees, four vegetation management treatments (check, herbaceous plant control (HPC), woody plant control (WPC), and HPC+WPC) were applied in newly established longleaf pine plantings in a randomized complete block design in two studies …

  13. WinHPC System Policies | High-Performance Computing | NREL

    Science.gov Websites

    Jobs requiring high CPU utilization or large amounts of memory should be run on the worker nodes; WinHPC02 is not … Associated data are removed when NREL worker status is discontinued; users should make arrangements to save … Licenses are returned to the license pool when other users close the application or after …

  14. An Application-Based Performance Evaluation of NASA's Nebula Cloud Computing Platform

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing as it promises high potential. In this paper, we examine the feasibility, performance, and scalability of production quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  15. Fine-needle aspiration cytology of malignant hemangiopericytomas with ultrastructural and flow cytometric analyses.

    PubMed

    Geisinger, K R; Silverman, J F; Cappellari, J O; Dabbs, D J

    1990-07-01

    A hemangiopericytoma (HPC) is an uncommon soft-tissue neoplasm that may arise in many body sites. The cytologic features of fine-needle aspirates (FNAs) of HPCs have only rarely been described in the literature. We examined FNAs of malignant HPCs from the head and neck region (three) and the retroperitoneum (one) in four adults (aged 38 to 83 years). All four FNAs yielded cellular specimens that consisted of uninuclear tumor cells with high nuclear-cytoplasmic ratios. The cytomorphological spectrum included nuclei that were oval to elongate and had very finely granular, evenly distributed chromatin with one or two small but distinct nucleoli. Hemangiopericytomas yield aspirates that may be considered malignant and may suggest sarcoma. Histologically, all four neoplasms manifested high mitotic activity. The ultrastructural features of all four tumors were supportive of the diagnosis of HPC. Although a specific primary diagnosis of HPC on FNA of a soft-tissue mass is unlikely, cytologic analysis may allow diagnosis of recurrent or metastatic HPC. We were able to perform flow cytometric determinations of tumor DNA content on three of the resected neoplasms. In two, an aneuploid pattern was found, including the neoplasm with the most marked pleomorphism in the FNA. The third was diploid.

  16. Trends in data locality abstractions for HPC systems

    DOE PAGES

    Unat, Didem; Dubey, Anshu; Hoefler, Torsten; ...

    2017-05-10

    The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. Furthermore, this paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.
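
    For readers unfamiliar with what these abstractions are meant to automate, the C sketch below shows the kind of hand-written locality management (cache blocking of a matrix transpose) that tiled data structures, locality-aware loop constructs, and similar abstractions aim to express without hard-coded tile sizes. It is a generic example, not code from the surveyed literature, and the tile size B is an assumption that would normally be tuned to the cache.

      /*
       * Cache-blocked (tiled) matrix transpose: work on one BxB tile at a
       * time so both source and destination stay resident in cache,
       * reducing data movement through the memory hierarchy.
       */
      #include <stdio.h>

      #define N 1024
      #define B 32              /* tile edge; assumed to divide N evenly */

      static double a[N][N], t[N][N];

      static void transpose_tiled(void)
      {
          for (int ii = 0; ii < N; ii += B)
              for (int jj = 0; jj < N; jj += B)
                  for (int i = ii; i < ii + B; i++)
                      for (int j = jj; j < jj + B; j++)
                          t[j][i] = a[i][j];
      }

      int main(void)
      {
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++)
                  a[i][j] = i * N + j;
          transpose_tiled();
          printf("t[3][7] = %.0f\n", t[3][7]);   /* expect a[7][3] = 7171 */
          return 0;
      }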

  17. 'Papillary' solitary fibrous tumor/hemangiopericytoma with nuclear STAT6 expression and NAB2-STAT6 fusion.

    PubMed

    Ishizawa, Keisuke; Tsukamoto, Yoshitane; Ikeda, Shunsuke; Suzuki, Tomonari; Homma, Taku; Mishima, Kazuhiko; Nishikawa, Ryo; Sasaki, Atsushi

    2016-04-01

    This report describes clinicopathological findings, including genetic data of STAT6, in a solitary fibrous tumor (SFT)/hemangiopericytoma (HPC) of the central nervous system in an 83-year-old woman with a bulge in the left forehead. She noticed it about 5 months before, and it had grown rapidly for the past 1 month. Neuroradiological studies disclosed a well-demarcated tumor that accompanied the destruction of the skull. The excised tumor showed a prominent papillary structure, where atypical cells were compactly arranged along the fibrovascular core ('pseudopapillary'). There was rich vasculature, some of which resembled 'staghorn' vessels. Mitotic figures were occasionally found. Whorls, psammoma bodies, or intra-nuclear pseudoinclusions were not identified. By immunohistochemistry, CD34 was strongly positive in the tumor cells, and STAT6 was localized in their nuclei. By reverse transcription-polymerase chain reaction (RT-PCR), an NAB2-STAT6 fusion gene, NAB2 exon6-STAT6 exon17, was detected, establishing a definite diagnosis of SFT/HPC. 'Papillary' SFT/HPC needs to be recognized as a possible morphological variant of SFT/HPC, and should be borne in mind in its diagnostic practice.

  18. Hierarchical Pore-Patterned Carbon Electrodes for High-Volumetric Energy Density Micro-Supercapacitors.

    PubMed

    Kim, Cheolho; Moon, Jun Hyuk

    2018-06-13

    Micro-supercapacitors (MSCs) are attractive for applications in next-generation mobile and wearable devices and have the potential to complement or even replace lithium batteries. However, many previous MSCs have often exhibited a low volumetric energy density with high-loading electrodes because of the nonuniform pore structure of the electrodes. To address this issue, we introduced a uniform-pore carbon electrode fabricated by 3D interference lithography. Furthermore, a hierarchical pore-patterned carbon (hPC) electrode was formed by introducing a micropore by chemical etching into the macropore carbon skeleton. The hPC electrodes were applied to solid-state MSCs. We achieved a constant volumetric capacitance and a corresponding volumetric energy density for electrodes of various thicknesses. The hPC MSC reached a volumetric energy density of approximately 1.43 mW h/cm³. The power density of the hPC MSC was 1.69 W/cm³. We could control the capacitance and voltage additionally by connecting the unit MSC cells in series or parallel, and we confirmed the operation of a light-emitting diode. We believe that our pore-patterned electrodes will provide a new platform for compact but high-performance energy storage devices.

  19. Pathogenic features of heterotrophic plate count bacteria from drinking-water boreholes.

    PubMed

    Horn, Suranie; Pieters, Rialet; Bezuidenhout, Carlos

    2016-12-01

    Evidence suggests that heterotrophic plate count (HPC) bacteria may be hazardous to humans with weakened health. We investigated the pathogenic potential of HPC bacteria from untreated borehole water, consumed by humans, for their haemolytic properties; the production of extracellular enzymes such as DNase, proteinase, lipase, lecithinase, hyaluronidase and chondroitinase; the effect simulated gastric fluid (SGF) has on their survival; and the bacteria's antibiotic-susceptibility profiles. HuTu-80 cells acted as a model for the human intestine and were exposed to the HPC isolates to determine their effects on the viability of the cells. Several HPC isolates were α- or β-haemolytic, produced two or more extracellular enzymes, survived the SGF treatment, and showed resistance against selected antibiotics. The isolates were also harmful to the human intestinal cells to varying degrees. A novel pathogen score was calculated for each isolate. Bacillus cereus had the highest pathogen index; the pathogenicity of the other bacteria declined as follows: Aeromonas taiwanensis > Aeromonas hydrophila > Bacillus thuringiensis > Alcaligenes faecalis > Pseudomonas sp. > Bacillus pumilus > Brevibacillus sp. > Bacillus subtilis > Bacillus sp. These results demonstrated that the prevailing standards for HPCs in drinking water may expose humans with compromised immune systems to undue risk.

  20. Extreme I/O on HPC for HEP using the Burst Buffer at NERSC

    NASA Astrophysics Data System (ADS)

    Bhimji, Wahid; Bard, Debbie; Burleigh, Kaylan; Daley, Chris; Farrell, Steve; Fasel, Markus; Friesen, Brian; Gerhardt, Lisa; Liu, Jialin; Nugent, Peter; Paul, Dave; Porter, Jeff; Tsulaia, Vakho

    2017-10-01

    In recent years there has been increasing use of HPC facilities for HEP experiments. This has initially focussed on less I/O intensive workloads such as generator-level or detector simulation. We now demonstrate the efficient running of I/O-heavy analysis workloads on HPC facilities at NERSC, for the ATLAS and ALICE LHC collaborations as well as astronomical image analysis for DESI and BOSS. To do this we exploit a new 900 TB NVRAM-based storage system recently installed at NERSC, termed a Burst Buffer. This is a novel approach to HPC storage that builds on-demand filesystems on all-SSD hardware that is placed on the high-speed network of the new Cori supercomputer. We describe the hardware and software involved in this system, and give an overview of its capabilities, before focusing in detail on how the ATLAS, ALICE and astronomical workflows were adapted to work on this system. We describe these modifications and the resulting performance results, including comparisons to other filesystems. We demonstrate that we can meet the challenging I/O requirements of HEP experiments and scale to many thousands of cores accessing a single shared storage system.
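
    As a sketch of how an application might target such an on-demand filesystem, the C example below writes its output under the mount point exported by the batch system. It assumes the Cray DataWarp/SLURM integration exports the striped-allocation path in the DW_JOB_STRIPED environment variable, as is typical for Burst Buffer jobs on Cori; the variable name and the corresponding job-script directives should be confirmed against facility documentation.

      /*
       * Write analysis output to the Burst Buffer allocation if one is
       * present, falling back to the working directory otherwise.  The
       * DW_JOB_STRIPED variable and output file name are assumptions.
       */
      #include <stdio.h>
      #include <stdlib.h>

      int main(void)
      {
          const char *bb = getenv("DW_JOB_STRIPED");   /* burst buffer mount point */
          char path[4096];

          if (bb == NULL) {
              fprintf(stderr, "no burst buffer allocation; falling back to ./\n");
              bb = ".";
          }
          snprintf(path, sizeof(path), "%s/analysis_output.dat", bb);

          FILE *f = fopen(path, "w");
          if (f == NULL) { perror("fopen"); return 1; }
          fprintf(f, "results go to the SSD-backed filesystem\n");
          fclose(f);
          printf("wrote %s\n", path);
          return 0;
      }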
