Sample records for fastest supercomputers released

  1. TOP500 Supercomputers for June 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  2. TOP500 Supercomputers for November 2003

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  3. TOP500 Supercomputers for June 2003

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  4. TOP500 Supercomputers for November 2002

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2002-11-15

    20th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer, installed earlier this year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second), retains the number one position. The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems are built by Hewlett-Packard and based on the AlphaServer SC computer system.

  5. TOP500 Sublist for November 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack J.

    2001-11-09

    18th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, GERMANY; KNOXVILLE, TENN.; BERKELEY, CALIF. In what has become a much-anticipated event in the world of high-performance computing, the 18th edition of the TOP500 list of the world's fastest supercomputers was released today (November 9, 2001). The latest edition of the twice-yearly ranking finds IBM as the leader in the field, with 32 percent in terms of installed systems and 37 percent in terms of total performance of all the installed systems. In a surprise move Hewlett-Packard captured the second place with 30 percent of the systems. Most of these systems are smaller in size and as a consequence HP's share of installed performance is smaller with 15 percent. This is still enough for second place in this category. SGI, Cray and Sun follow in the number of TOP500 systems with 41 (8 percent), 39 (8 percent), and 31 (6 percent) respectively. In the category of installed performance Cray Inc. keeps the third position with 11 percent ahead of SGI (8 percent) and Compaq (8 percent).

  6. OpenMP Performance on the Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest supercomputers, providing 61 TFLOPs (as of 10/20/04). Columbia was conceived, designed, built, and deployed in just 120 days. It is a 20-node supercomputer built on proven 512-processor nodes and is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors. It provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2,048 processors) with 88% efficiency, and it tops the scalar systems on the Top500 list.

  7. NASA's Pleiades Supercomputer Crunches Data For Groundbreaking Analysis and Visualizations

    NASA Image and Video Library

    2016-11-23

    The Pleiades supercomputer at NASA's Ames Research Center, recently named the 13th fastest computer in the world, provides scientists and researchers high-fidelity numerical modeling of complex systems and processes. By using detailed analyses and visualizations of large-scale data, Pleiades is helping to advance human knowledge and technology, from designing the next generation of aircraft and spacecraft to understanding the Earth's climate and the mysteries of our galaxy.

  8. Science on Sequoia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertsch, Adam; Draeger, Erik; Richards, David

    2017-01-12

    With Sequoia at Lawrence Livermore National Laboratory, researchers explore grand challenge problems and are generating results at scales never before achieved. Sequoia is the first computer to have more than one million processors and is one of the fastest supercomputers in the world.

  9. U.S. DOE Office of Science Success Stories (2011)

    Science.gov Websites

    Gibson, Kerry; ACC0400; 2011-04-01. Thin Sheet of Diamond Has Worlds of Uses; Sagoff, Jared; ACC0399. Firm Uses DOE's Fastest Supercomputer to Streamline Long-Haul Trucks; ACC0391; 2011-03-28.

  10. Next Generation Security for the 10,240 Processor Columbia System

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas; Kolano, Paul; Shaw, Derek; Keller, Chris; Tweton, Dave; Welch, Todd; Liu, Wen (Betty)

    2005-01-01

    This presentation includes a discussion of the Columbia 10,240-processor system located at the NASA Advanced Supercomputing (NAS) division at the NASA Ames Research Center, which supports each of NASA's four missions: science, exploration systems, aeronautics, and space operations. It comprises 20 Silicon Graphics nodes, each consisting of 512 Itanium 2 processors. A 64-processor Columbia front-end system supports users as they prepare their jobs and then submits them to the PBS system. Columbia nodes and front-end systems use the Linux OS. Prior to SC04, the Columbia system was used to attain a processing speed of 51.87 TeraFlops, which made it number two on the Top 500 list of the world's supercomputers and the world's fastest "operational" supercomputer since it was fully engaged in supporting NASA users.

  11. Enabling Computational Dynamics in Distributed Computing Environments Using a Heterogeneous Computing Template

    DTIC Science & Technology

    2011-08-09

    fastest 10 supercomputers in the world. Both systems rely on GPU co-processing, one using AMD cards, the second, called Nebulae, using NVIDIA Tesla... Despite a capability of almost 3 petaflop/s, the highest in TOP500, Nebulae only holds the No. 2 position on the TOP500 list.

  12. Predicting Protein Structure Using Parallel Genetic Algorithms.

    DTIC Science & Technology

    1994-12-01

    Molecular dynamics attempts to simulate the protein folding process. However, the time steps required for this simulation are on the order of one... harmonics. These two factors have limited molecular dynamics simulations to less than a few nanoseconds (10⁻⁹ sec), even on today's fastest supercomputers...

  13. A performance comparison of scalar, vector, and concurrent vector computers including supercomputers for modeling transport of reactive contaminants in groundwater

    NASA Astrophysics Data System (ADS)

    Tripathi, Vijay S.; Yeh, G. T.

    1993-06-01

    Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.

  14. Building Columbia from the SysAdmin View

    NASA Technical Reports Server (NTRS)

    Chan, David

    2005-01-01

    Project Columbia was built at NASA Ames Research Center in partnership with SGI and Intel. Columbia consists of 20 512-processor Altix machines with 440 TB of storage and achieved 51.87 TeraFlops, ranking second fastest on the Top 500 list at SuperComputing 2004. Columbia was delivered, installed, and put into production in 3 months. On average, a new Columbia node was brought into production in less than a week. Columbia's configuration, installation, and future plans will be discussed.

  15. Harrison Ford Tapes Climate Change Show at Ames (Reporter Package)

    NASA Image and Video Library

    2014-04-11

    Hollywood legend Harrison Ford made a special visit to NASA's Ames Research Center to shoot an episode for a new documentary series about climate change called 'Years of Living Dangerously.' After being greeted by Center Director Pete Worden, Ford was filmed meeting with NASA climate scientists and discussed global temperature prediction data processed using one of the world's fastest supercomputers at Ames. Later he flew in the co-pilot seat in a jet used to gather data for NASA air quality studies.

  16. The Q continuum simulation: Harnessing the power of GPU accelerated supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris

    2015-08-01

    Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the "Q Continuum" cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)³ and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≃ 1.5 × 10⁸ M_⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching in a large, cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan's GPU accelerators.

  17. A History of High-Performance Computing

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.

  18. Fortran for the nineties

    NASA Technical Reports Server (NTRS)

    Himer, J. T.

    1992-01-01

    Fortran has largely enjoyed prominence for the past few decades as the computer programming language of choice for numerically intensive scientific, engineering, and process control applications. Fortran's well understood static language syntax has allowed resulting parsers and compiler optimizing technologies to often generate among the most efficient and fastest run-time executables, particularly on high-end scalar and vector supercomputers. Computing architectures and paradigms have changed considerably since the last ANSI/ISO Fortran release in 1978, and while FORTRAN 77 has more than survived, its aging features provide only partial functionality for today's demanding computing environments. The simple block procedural languages have been necessarily evolving, or giving way, to specialized supercomputing, network resource, and object-oriented paradigms. To address these new computing demands, ANSI has worked for the last 12 years with three international public reviews to deliver Fortran 90. Fortran 90 has superseded and replaced ISO FORTRAN 77 internationally as the sole Fortran standard; while in the US, Fortran 90 is expected to be adopted as the ANSI standard this summer, coexisting with ANSI FORTRAN 77 until at least 1996. The development path and current state of Fortran will be briefly described, highlighting the many new Fortran 90 syntactic and semantic additions which support (among others): free form source; array syntax; new control structures; modules and interfaces; pointers; derived data types; dynamic memory; enhanced I/O; operator overloading; data abstraction; user optional arguments; new intrinsics for array, bit manipulation, and system inquiry; and enhanced portability through better generic control of underlying system arithmetic models. Examples from dynamical astronomy, signal and image processing will attempt to illustrate Fortran 90's applicability to today's general scalar, vector, and parallel scientific and engineering requirements and object oriented programming paradigms. Time permitting, current work proceeding on the future development of Fortran 2000 and collateral standards will be introduced.

  19. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  20. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Pope, Adrian; Finkel, Hal

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the ‘Dark Universe’, dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC’s design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meneses, Esteban; Ni, Xiang; Jones, Terry R

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
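
    The cross-correlation of failure records with job records described above can be pictured with a toy sketch: for each job, count how many recorded failures fall inside its execution window. The paper's log formats are not reproduced here; all timestamps and job records below are invented.

    ```python
    # Toy sketch of cross-correlating machine failures with executing jobs:
    # for each job, count how many recorded failures fall inside its window.
    # Timestamps and job records are invented; real Titan logs are far richer.
    failures = [105, 230, 231, 400]          # failure times (arbitrary units)
    jobs = [                                 # (job id, start time, end time)
        ("jobA",   0, 200),
        ("jobB", 150, 350),
        ("jobC", 300, 500),
    ]

    hits = {job_id: sum(start <= t <= end for t in failures)
            for job_id, start, end in jobs}
    print(hits)    # {'jobA': 1, 'jobB': 2, 'jobC': 1}
    ```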

  2. Discovery of 4 ms and 7 ms Pulsars in M15 (F & H)

    NASA Astrophysics Data System (ADS)

    Middleditch, J.

    1992-12-01

    Observations of M15 taken during Oct. 23-Nov. 1 1991 with the Arecibo 305-m telescope at 430 MHz, which were analyzed using 2-billion point Fourier transforms on supercomputers at Los Alamos National Laboratory, reveal two new ms pulsars in the globular cluster, M15. The sixth and fastest yet discovered in this cluster, M15F, has a spin rate of 248.3 Hz, while the eighth and latest to be discovered in this cluster has a spin rate of 148.3 Hz, the only one known so far in the frequency interval of 100-200 Hz. Further details and implications of these discoveries will be discussed.

  3. Computing at the speed limit (supercomputers)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhard, R.

    1982-07-01

    The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers, about 1,000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.

  4. Hardware accelerator for molecular dynamics: MDGRAPE-2

    NASA Astrophysics Data System (ADS)

    Susukita, Ryutaro; Ebisuzaki, Toshikazu; Elmegreen, Bruce G.; Furusawa, Hideaki; Kato, Kenya; Kawai, Atsushi; Kobayashi, Yoshinao; Koishi, Takahiro; McNiven, Geoffrey D.; Narumi, Tetsu; Yasuoka, Kenji

    2003-10-01

    We developed MDGRAPE-2, a hardware accelerator that calculates forces at high speed in molecular dynamics (MD) simulations. MDGRAPE-2 is connected to a PC or a workstation as an extension board. The sustained performance of one MDGRAPE-2 board is 15 Gflops, roughly equivalent to the peak performance of the fastest supercomputer processing element. One board is able to calculate all forces between 10,000 particles in 0.28 s (i.e., 310,000 time steps per day). If 16 boards are connected to one computer and operated in parallel, this calculation speed becomes ~10 times faster. In addition to MD, MDGRAPE-2 can be applied to gravitational N-body simulations, the vortex method, and smoothed particle hydrodynamics in computational fluid dynamics.
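
    The throughput quoted above follows from simple arithmetic: at 0.28 s per full force evaluation, one day of wall-clock time yields roughly 86,400 / 0.28 ≈ 310,000 steps. A minimal check, using only numbers taken from the abstract:

    ```python
    # Back-of-the-envelope check of the MDGRAPE-2 throughput quoted above.
    SECONDS_PER_DAY = 24 * 60 * 60           # 86,400 s
    time_per_step = 0.28                     # s per all-pairs force evaluation (from the abstract)

    steps_per_day = SECONDS_PER_DAY / time_per_step
    print(f"~{steps_per_day:,.0f} time steps per day")                 # ~308,571, i.e. roughly 310,000

    # With 16 boards operating in parallel (~10x faster per the abstract):
    print(f"~{steps_per_day * 10:,.0f} time steps per day with 16 boards")
    ```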

  5. Community Detection on the GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naim, Md; Manne, Fredrik; Halappanavar, Mahantesh

    We present and evaluate a new GPU algorithm based on the Louvain method for community detection. Our algorithm is the first for this problem that parallelizes the access to individual edges. In this way we can fine tune the load balance when processing networks with nodes of highly varying degrees. This is achieved by scaling the number of threads assigned to each node according to its degree. Extensive experiments show that we obtain speedups up to a factor of 270 compared to the sequential algorithm. The algorithm consistently outperforms other recent shared memory implementations and is only one order of magnitude slower than the current fastest parallel Louvain method running on a Blue Gene/Q supercomputer using more than 500K threads.
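
    The published algorithm is a CUDA implementation; the following is only a plain-Python sketch of the load-balancing idea described in the abstract, in which the amount of work assigned to a vertex scales with its degree so that edges, not vertices, are divided evenly among workers. The CSR row-pointer array and the edges_per_chunk parameter are illustrative assumptions.

    ```python
    # Sketch: degree-proportional work splitting for neighbor scans (CSR graph).
    # Each vertex is split into ceil(degree / edges_per_chunk) chunks so that
    # high-degree vertices receive proportionally more workers, mimicking the
    # per-edge parallelism described in the abstract. Purely illustrative.
    from math import ceil

    def build_work_items(row_ptr, edges_per_chunk=32):
        """Return (vertex, first_edge, last_edge) chunks of roughly equal size."""
        work = []
        for v in range(len(row_ptr) - 1):
            start, end = row_ptr[v], row_ptr[v + 1]
            degree = end - start
            for c in range(max(1, ceil(degree / edges_per_chunk))):
                lo = start + c * edges_per_chunk
                hi = min(end, lo + edges_per_chunk)
                if lo < hi or degree == 0:
                    work.append((v, lo, hi))
        return work

    # Tiny example: vertex 0 has 5 neighbors, vertex 1 has 70, vertex 2 has 2.
    row_ptr = [0, 5, 75, 77]
    for item in build_work_items(row_ptr, edges_per_chunk=32):
        print(item)
    ```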

  6. Real science at the petascale.

    PubMed

    Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V

    2009-06-28

    We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32,768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65,536 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.

  7. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
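
    The per-star cost implied by the abstract can be checked with a few lines of arithmetic. The figures for injections per core-hour, injections per star, the 16% fraction, and the ~200-hour wall-clock time come from the abstract; the total number of Kepler target stars and the derived core count are assumptions for illustration only.

    ```python
    # Rough cost estimate for the "shallow" FLTI experiment, using the figures
    # quoted in the abstract. The total number of Kepler target stars (~200,000)
    # and the resulting core count are assumptions for illustration only.
    injections_per_core_hour = 16          # from the abstract
    injections_per_star      = 2000        # "shallow" experiment, from the abstract
    total_target_stars       = 200_000     # assumption (order of magnitude)
    fraction_of_stars        = 0.16        # 16% of targets, from the abstract
    wall_clock_hours         = 200         # from the abstract

    core_hours_per_star = injections_per_star / injections_per_core_hour   # 125
    stars = total_target_stars * fraction_of_stars                          # 32,000
    total_core_hours = core_hours_per_star * stars                          # 4.0 million

    cores_needed = total_core_hours / wall_clock_hours                      # ~20,000 cores
    print(f"{core_hours_per_star:.0f} core-hours/star, "
          f"{total_core_hours:,.0f} core-hours total, "
          f"~{cores_needed:,.0f} cores to finish in {wall_clock_hours} h")
    ```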

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallarno, George; Rogers, James H; Maxwell, Don E

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  9. B-MIC: An Ultrafast Three-Level Parallel Sequence Aligner Using MIC.

    PubMed

    Cui, Yingbo; Liao, Xiangke; Zhu, Xiaoqian; Wang, Bingqiang; Peng, Shaoliang

    2016-03-01

    Sequence alignment is the central process for sequence analysis, in which raw sequencing data are mapped to a reference genome. The large amount of data generated by NGS is far beyond the processing capabilities of existing alignment tools. Consequently, sequence alignment becomes the bottleneck of sequence analysis. Intensive computing power is required to address this challenge. Intel recently announced the MIC coprocessor, which can provide massive computing power. Tianhe-2, currently the world's fastest supercomputer, is equipped with three MIC coprocessors per compute node. A key feature of sequence alignment is that different reads are independent. Considering this property, we proposed a MIC-oriented three-level parallelization strategy to speed up BWA, a widely used sequence alignment tool, and developed our ultrafast parallel sequence aligner: B-MIC. B-MIC contains three levels of parallelization: firstly, parallelization of data IO and reads alignment by a three-stage parallel pipeline; secondly, parallelization enabled by MIC coprocessor technology; thirdly, inter-node parallelization implemented by MPI. In this paper, we demonstrate that B-MIC outperforms BWA by a combination of those techniques using an Inspur NF5280M server and the Tianhe-2 supercomputer. To the best of our knowledge, B-MIC is the first sequence alignment tool to run on Intel MIC and it can achieve more than fivefold speedup over the original BWA while maintaining the alignment precision.
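
    B-MIC's first level of parallelism, the three-stage read/align/write pipeline, can be sketched in a few lines of plain Python with queues standing in for the real implementation. The "alignment" step below is a placeholder rather than real BWA, and the MIC offload and MPI levels are not shown.

    ```python
    # Sketch of a three-stage pipeline (read -> align -> write) in which stages
    # overlap via queues, illustrating the first of B-MIC's three levels of
    # parallelism. The "align" function here is a placeholder, not real BWA.
    import threading, queue

    def reader(batches, q_in):
        for batch in batches:
            q_in.put(batch)
        q_in.put(None)                       # sentinel: no more input

    def aligner(q_in, q_out):
        while (batch := q_in.get()) is not None:
            q_out.put([read[::-1] for read in batch])   # placeholder "alignment"
        q_out.put(None)

    def writer(q_out, results):
        while (batch := q_out.get()) is not None:
            results.extend(batch)

    batches = [["ACGT", "TTAG"], ["GGCA"]]
    q_in, q_out, results = queue.Queue(4), queue.Queue(4), []
    threads = [threading.Thread(target=reader,  args=(batches, q_in)),
               threading.Thread(target=aligner, args=(q_in, q_out)),
               threading.Thread(target=writer,  args=(q_out, results))]
    for t in threads: t.start()
    for t in threads: t.join()
    print(results)
    ```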

  10. Heuristic Scheduling in Grid Environments: Reducing the Operational Energy Demand

    NASA Astrophysics Data System (ADS)

    Bodenstein, Christian

    In a world where more and more businesses seem to trade in an online market, the supply of online services to the ever-growing demand could quickly reach its capacity limits. Online service providers may find themselves maxed out at peak operation levels during high-traffic timeslots but facing too little demand during low-traffic timeslots, although the latter is becoming less frequent. At this point, deciding which user is allocated what level of service becomes essential. The concept of Grid computing could offer a meaningful alternative to conventional supercomputing centres. Not only can Grids reach the same computing speeds as some of the fastest supercomputers, but distributed computing harbors a great energy-saving potential. When scheduling projects in such a Grid environment, however, simply assigning one process to a system becomes so complex in calculation that schedules are often too late to execute, rendering their optimizations useless. Current schedulers attempt to maximize the utility, given some sort of constraint, often reverting to heuristics. This optimization often comes at the cost of environmental impact, in this case CO2 emissions. This work proposes an alternate model of energy-efficient scheduling while keeping a respectable amount of economic incentives untouched. Using this model, it is possible to reduce the total energy consumed by a Grid environment using 'just-in-time' flowtime management, paired with ranking nodes by efficiency.
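
    The paper's scheduling model is not reproduced here; the sketch below only illustrates the general idea it builds on, ranking nodes by energy efficiency (flops per watt) and filling the most efficient nodes first. All node and job figures are invented.

    ```python
    # Minimal greedy sketch: rank nodes by energy efficiency (gflops per watt)
    # and place each job on the most efficient node with spare capacity. All
    # node and job numbers are invented; the paper's actual model (flowtime
    # management, economic incentives) is more elaborate than this.
    nodes = [  # (name, gflops capacity, watts at full load)
        ("A", 500, 2000), ("B", 400, 1000), ("C", 300, 1500),
    ]
    jobs = [("j1", 250), ("j2", 300), ("j3", 100)]   # (name, gflops required)

    ranked = sorted(nodes, key=lambda n: n[1] / n[2], reverse=True)  # best first
    free = {name: cap for name, cap, _ in ranked}

    placement = {}
    for job, demand in sorted(jobs, key=lambda j: -j[1]):   # biggest jobs first
        for name, cap, watts in ranked:
            if free[name] >= demand:
                placement[job] = name
                free[name] -= demand
                break

    print(placement)   # {'j2': 'B', 'j1': 'A', 'j3': 'B'}
    ```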

  11. DALiuGE: A graph execution framework for harnessing the astronomical data deluge

    NASA Astrophysics Data System (ADS)

    Wu, C.; Tobar, R.; Vinsen, K.; Wicenec, A.; Pallot, D.; Lao, B.; Wang, R.; An, T.; Boulton, M.; Cooper, I.; Dodson, R.; Dolensky, M.; Mei, Y.; Wang, F.

    2017-07-01

    The Data Activated Liu Graph Engine (DALiuGE) is an execution framework for processing large astronomical datasets at a scale required by the Square Kilometre Array Phase 1 (SKA1). It includes an interface for expressing complex data reduction pipelines consisting of both datasets and algorithmic components and an implementation run-time to execute such pipelines on distributed resources. By mapping the logical view of a pipeline to its physical realisation, DALiuGE separates the concerns of multiple stakeholders, allowing them to collectively optimise large-scale data processing solutions in a coherent manner. The execution in DALiuGE is data-activated, where each individual data item autonomously triggers the processing on itself. Such decentralisation also makes the execution framework very scalable and flexible, supporting pipeline sizes ranging from less than ten tasks running on a laptop to tens of millions of concurrent tasks on the second fastest supercomputer in the world. DALiuGE has been used in production for reducing interferometry datasets from the Karl E. Jansky Very Large Array and the Mingantu Ultrawide Spectral Radioheliograph; and is being developed as the execution framework prototype for the Science Data Processor (SDP) consortium of the Square Kilometre Array (SKA) telescope. This paper presents a technical overview of DALiuGE and discusses case studies from the CHILES and MUSER projects that use DALiuGE to execute production pipelines. In a companion paper, we provide in-depth analysis of DALiuGE's scalability to very large numbers of tasks on two supercomputing facilities.

  12. Multibillion-atom Molecular Dynamics Simulations of Plasticity, Spall, and Ejecta

    NASA Astrophysics Data System (ADS)

    Germann, Timothy C.

    2007-06-01

    Modern supercomputing platforms, such as the IBM BlueGene/L at Lawrence Livermore National Laboratory and the Roadrunner hybrid supercomputer being built at Los Alamos National Laboratory, are enabling large-scale classical molecular dynamics simulations of phenomena that were unthinkable just a few years ago. Using either the embedded atom method (EAM) description of simple (close-packed) metals, or modified EAM (MEAM) models of more complex solids and alloys with mixed covalent and metallic character, simulations containing billions to trillions of atoms are now practical, reaching volumes in excess of a cubic micron. In order to obtain any new physical insights, however, it is equally important that the analysis of such systems be tractable. This is in fact possible, in large part due to our highly efficient parallel visualization code, which enables the rendering of atomic spheres, Eulerian cells, and other geometric objects in a matter of minutes, even for tens of thousands of processors and billions of atoms. After briefly describing the BlueGene/L and Roadrunner architectures, and the code optimization strategies that were employed, results obtained thus far on BlueGene/L will be reviewed, including: (1) shock compression and release of a defective EAM Cu sample, illustrating the plastic deformation accompanying void collapse as well as the subsequent void growth and linkup upon release; (2) solid-solid martensitic phase transition in shock-compressed MEAM Ga; and (3) Rayleigh-Taylor fluid instability modeled using large-scale direct simulation Monte Carlo (DSMC) simulations. I will also describe our initial experiences utilizing Cell Broadband Engine processors (developed for the Sony PlayStation 3), and planned simulation studies of ejecta and spall failure in polycrystalline metals that will be carried out when the full Petaflop Opteron/Cell Roadrunner supercomputer is assembled in mid-2008.

  13. Parallel Computation of the Regional Ocean Modeling System (ROMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P; Song, Y T; Chao, Y

    2005-04-05

    The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
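
    ROMS's actual MPI implementation is not shown in the abstract; as a generic illustration of the underlying idea, the sketch below splits a 2D model grid into per-process tiles with a one-cell halo. The grid size and process layout are arbitrary assumptions.

    ```python
    # Sketch: split a 2D ocean-model grid into tiles for an np_x x np_y process
    # grid, with a one-cell halo on interior edges. Grid size and layout are
    # illustrative; this mirrors the generic decomposition idea, not ROMS itself.
    def tile_bounds(n, parts, p):
        """Index range [lo, hi) of part p when n points are split into `parts`."""
        base, extra = divmod(n, parts)
        lo = p * base + min(p, extra)
        hi = lo + base + (1 if p < extra else 0)
        return lo, hi

    def decompose(nx, ny, np_x, np_y, halo=1):
        tiles = {}
        for px in range(np_x):
            for py in range(np_y):
                x0, x1 = tile_bounds(nx, np_x, px)
                y0, y1 = tile_bounds(ny, np_y, py)
                tiles[(px, py)] = (max(0, x0 - halo), min(nx, x1 + halo),
                                   max(0, y0 - halo), min(ny, y1 + halo))
        return tiles

    for rank, bounds in decompose(nx=100, ny=80, np_x=2, np_y=2).items():
        print(rank, bounds)
    ```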

  14. Seeing the forest for the trees: Networked workstations as a parallel processing computer

    NASA Technical Reports Server (NTRS)

    Breen, J. O.; Meleedy, D. M.

    1992-01-01

    Unlike traditional 'serial' processing computers in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms to select nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.

  15. Ada compiler validation summary report. Certificate number: 891116W1.10191. Intel Corporation, IPSC/2 Ada, Release 1.1, IPSC/2 parallel supercomputer, system resource manager host and IPSC/2 parallel supercomputer, CX-1 nodes target

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1989-11-16

    This VSR documents the results of the validation testing performed on an Ada compiler. Testing was carried out for the following purposes: To attempt to identify any language constructs supported by the compiler that do not conform to the Ada Standard; To attempt to identify any language constructs not supported by the compiler but required by the Ada Standard; and To determine that the implementation-dependent behavior is allowed by the Ada Standard. Testing of this compiler was conducted by SofTech, Inc. under the direction of the AVF according to procedures established by the Ada Joint Program Office and administered by the Ada Validation Organization (AVO). On-site testing was completed 16 November 1989 at Aloha, OR.

  16. Superconductor Digital Electronics: -- Current Status, Future Prospects

    NASA Astrophysics Data System (ADS)

    Mukhanov, Oleg

    2011-03-01

    Two major applications of superconductor electronics, communications and supercomputing, will be presented. These areas hold a significant promise of a large impact on electronics state-of-the-art for the defense and commercial markets stemming from the fundamental advantages of superconductivity: simultaneous high speed and low power, lossless interconnect, natural quantization, and high sensitivity. The availability of relatively small cryocoolers lowered the foremost market barrier for cryogenically-cooled superconductor electronic systems. These fundamental advantages enabled a novel Digital-RF architecture - a disruptive technological approach changing wireless communications, radar, and surveillance system architectures dramatically. Practical results were achieved for Digital-RF systems in which wide-band, multi-band radio frequency signals are directly digitized and the digital domain is expanded throughout the entire system. Digital-RF systems combine digital and mixed signal integrated circuits based on Rapid Single Flux Quantum (RSFQ) technology, superconductor analog filter circuits, and semiconductor post-processing circuits. The demonstrated cryocooled Digital-RF systems are the world's first and fastest directly digitizing receivers operating with live satellite signals, enabling multi-net data links, and performing signal acquisition from HF to L-band with 30 GHz clock frequencies. In supercomputing, superconductivity leads to the highest energy efficiencies per operation. Superconductor technology based on manipulation and ballistic transfer of magnetic flux quanta provides a superior low-power alternative to CMOS and other charge-transfer based device technologies. The fundamental energy consumption in SFQ circuits is set by the flux quantum energy of about 2 × 10⁻¹⁹ J. Recently, a novel energy-efficient zero-static-power SFQ technology, eSFQ/ERSFQ, was invented, which retains all advantages of standard RSFQ circuits: high speed, dc power, internal memory. The voltage bias regulation, determined by the SFQ clock, enables the zero-power at zero-activity regimes, indispensable for sensor and quantum bit readout.

  17. Formulation and characterization of a compacted multiparticulate system for modified release of water-soluble drugs--Part II theophylline and cimetidine.

    PubMed

    Cantor, Stuart L; Hoag, Stephen W; Augsburger, Larry L

    2009-05-01

    The purpose was to investigate the effectiveness of an ethylcellulose (EC) bead matrix and different film-coating polymers in delaying drug release from compacted multiparticulate systems. Formulations containing theophylline or cimetidine granulated with Eudragit RS 30D were developed and beads were produced by extrusion-spheronization. Drug beads were coated using 15% wt/wt Surelease or Eudragit NE 30D and were evaluated for true density, particle size, and sphericity. Lipid-based placebo beads and drug beads were blended together and compacted on an instrumented Stokes B2 rotary tablet press. Although placebo beads were significantly less spherical, their true density of 1.21 g/cm³ and size of 855 μm were quite close to Surelease-coated drug beads. Curing improved the crushing strength and friability values for theophylline tablets containing Surelease-coated beads; 5.7 ± 1.0 kP and 0.26 ± 0.07%, respectively. Dissolution profiles showed that the EC matrix only provided 3 h of drug release. Although tablets containing Surelease-coated theophylline beads released drug fastest overall (t(44.2%) = 8 h), profiles showed that coating damage was still minimal. Size and density differences indicated a minimal segregation potential during tableting for blends containing Surelease-coated drug beads. Although modified release profiles >8 h were achievable in tablets for both drugs using either coating polymer, Surelease-coated theophylline beads released drug fastest overall. This is likely because of the increased solubility of theophylline and the intrinsic properties of the Surelease films. Furthermore, the lipid-based placebos served as effective cushioning agents by protecting coating integrity of drug beads under a number of different conditions while tableting.

  18. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    NASA Astrophysics Data System (ADS)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to the significant idle time of computational resources, and, in turn, to the decrease in speed of scientific research. This paper presents three approaches to study the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which allows to identify different typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in the supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are being detected. For each approach, the results obtained in practice in the Supercomputer Center of Moscow State University are demonstrated.
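
    The abstract does not spell out how abnormal jobs are detected; the following is only a toy sketch of one simple approach, flagging jobs whose monitored metric deviates strongly from the job-flow average. The utilisation values and the threshold are invented.

    ```python
    # Toy sketch: flag "abnormal" jobs whose monitored CPU utilisation deviates
    # strongly from the overall job flow (simple z-score test). The metric values
    # are invented; the paper's actual detection methods are not described here.
    from statistics import mean, stdev

    jobs = {"job1": 0.92, "job2": 0.88, "job3": 0.05, "job4": 0.90, "job5": 0.85}

    mu, sigma = mean(jobs.values()), stdev(jobs.values())
    threshold = 1.5   # flag anything more than 1.5 standard deviations away

    abnormal = {j: round((v - mu) / sigma, 2)
                for j, v in jobs.items() if abs(v - mu) > threshold * sigma}
    print(abnormal)   # {'job3': -1.78} -- job3 is abnormally inefficient
    ```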

  19. Understanding the I/O Performance Gap Between Cori KNL and Haswell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jialin; Koziol, Quincey; Tang, Houjun

    2017-05-01

    The Cori system at NERSC has two compute partitions with different CPU architectures: a 2,004-node Haswell partition and a 9,688-node KNL partition, which ranked as the 5th most powerful and fastest supercomputer on the November 2016 Top 500 list. The compute partitions share a common storage configuration, and understanding the IO performance gap between them is important, impacting not only NERSC/LBNL users and other national labs, but also the relevant hardware vendors and software developers. In this paper, we have analyzed performance of single core and single node IO comprehensively on the Haswell and KNL partitions, and have discovered the major bottlenecks, which include CPU frequencies and memory copy performance. We have also extended our performance tests to multi-node IO and revealed the IO cost difference caused by network latency, buffer size, and communication cost. Overall, we have developed a strong understanding of the IO gap between Haswell and KNL nodes and the lessons learned from this exploration will guide us in designing optimal IO solutions in the many-core era.
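
    The paper's benchmark suite is not reproduced here; the sketch below only times single-process writes with different user-space buffer sizes, one of the parameters the study varies. The file location, write sizes, and buffer values are arbitrary assumptions.

    ```python
    # Minimal single-process write benchmark: time writing the same data with
    # different user-space buffer sizes. This is only a sketch of the kind of
    # parameter sweep discussed above, not the paper's actual benchmark suite.
    import os, time, tempfile

    payload = os.urandom(4096)                # 4 KiB per write call
    n_writes = 16384                          # 64 MiB total per run
    total_mib = n_writes * len(payload) / 2**20
    path = os.path.join(tempfile.gettempdir(), "io_sketch.bin")

    for buf_size in (4 << 10, 64 << 10, 1 << 20, 8 << 20):   # 4 KiB .. 8 MiB
        start = time.perf_counter()
        with open(path, "wb", buffering=buf_size) as f:
            for _ in range(n_writes):
                f.write(payload)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
        print(f"buffer {buf_size >> 10:5d} KiB: {total_mib / elapsed:7.1f} MiB/s")

    os.remove(path)
    ```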

  20. The accurate particle tracer code

    NASA Astrophysics Data System (ADS)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  1. Simulation of LHC events on a million threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2015-12-01

    Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
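
    Alpgen itself is a Fortran code; the sketch below is only a toy Python illustration of the generic accept/reject unweighting step that, as described above, is coupled directly to event generation in the same job so weighted events never need to be written to the filesystem. The toy event, its weight function, and the weight bound are assumptions for illustration.

    ```python
    # Toy sketch of coupling event generation and unweighting in one pass,
    # so weighted events never hit the filesystem. The "event" here is just a
    # random number with a weight; real Alpgen events are far more complex.
    import random

    random.seed(1)

    def generate_weighted_event():
        x = random.random()
        return x, 1.0 + x          # (event, weight); weights lie in [1, 2]

    W_MAX = 2.0                    # known upper bound on the weight

    def unweighted_events(n_target):
        """Accept/reject unweighting applied directly to the generator stream."""
        accepted = []
        while len(accepted) < n_target:
            event, w = generate_weighted_event()
            if random.random() < w / W_MAX:     # accept with probability w / w_max
                accepted.append(event)
        return accepted

    print(len(unweighted_events(1000)), "unweighted events produced in memory")
    ```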

  2. Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernández, José Jesús

    2012-01-01

    Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768

  3. Optimization of Supercomputer Use on EADS II System

    NASA Technical Reports Server (NTRS)

    Ahmed, Ardsher

    1998-01-01

    The main objective of this research was to optimize supercomputer use to achieve better throughput and utilization of supercomputers and to help facilitate the movement of non-supercomputing (inappropriate for supercomputer) codes to mid-range systems for better use of Government resources at Marshall Space Flight Center (MSFC). This work involved the survey of architectures available on EADS II and monitoring customer (user) applications running on a CRAY T90 system.

  4. Supercomputer applications in molecular modeling.

    PubMed

    Gund, T M

    1988-01-01

    An overview of the functions performed by molecular modeling is given. Molecular modeling techniques benefiting from supercomputing are described, namely, conformational search, deriving bioactive conformations, pharmacophoric pattern searching, receptor mapping, and electrostatic properties. The use of supercomputers for problems that are computationally intensive, such as protein structure prediction, protein dynamics and reactivity, protein conformations, and energetics of binding is also examined. The current status of supercomputing and supercomputer resources is discussed.

  5. Eliminating blister rust cankers from sugar pine by pruning.

    Treesearch

    G. L. Hayes; William I. Stein

    1957-01-01

    Well-stocked patches of vigorous advance reproduction are found in many deteriorating old-growth stands in southwestern Oregon. If carefully released from the overstory, this reproduction can shorten the rotation length of the next crop by many years. Often sugar pine is the fastest-growing component of the reproduction, but it is frequently infected with blister rust...

  6. The role of graphics super-workstations in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Levin, E.

    1989-01-01

    A new class of very powerful workstations has recently become available which integrate near supercomputer computational performance with very powerful and high quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing of the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding of results, communication of results), and by real time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).

  7. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  8. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  9. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  10. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  11. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  12. Data-intensive computing on numerically-insensitive supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, James P; Fasel, Patricia K; Habib, Salman

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  13. Computer Electromagnetics and Supercomputer Architecture

    NASA Technical Reports Server (NTRS)

    Cwik, Tom

    1993-01-01

    The dramatic increase in performance over the last decade for microprocessor computations is compared with that for supercomputer computations. This performance, the projected performance, and a number of other issues such as cost and the inherent physical limitations in current supercomputer technology have naturally led to parallel supercomputers and ensembles of interconnected microprocessors.

  14. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy

    2014-02-06

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  15. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    ScienceCinema

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2018-01-16

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  16. Status report of the end-to-end ASKAP software system: towards early science operations

    NASA Astrophysics Data System (ADS)

    Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew

    2016-08-01

    The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope currently in the commissioning phase and located in the midwest region of Western Australia. It comprises 36 x 12 m diameter reflector antennas, each equipped with state-of-the-art and award-winning Phased Array Feed (PAF) technology. The PAFs provide a wide, 30 square degree field-of-view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas equipped with first-generation PAF technology (Mark I), named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, and along the way they have been producing some great science results. Commissioning of ASKAP Array Release 1, that is, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: the ingest nodes (a 16-node cluster), the fast temporary storage (1 PB Lustre file system) and the processing supercomputer (200 TFlop system). This High-Performance Computing (HPC) platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more "traditional" or user-interactive mode. But this is about to change: integration and verification of the online ingest pipeline starts in early 2016, which is required to support the full 300 MHz bandwidth for Array Release 1; this will be followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and is integrated into the end-to-end ASKAP data flow system. This paper describes the current status of the "end-to-end" data flow software system from preparing observations to data acquisition, processing and archiving, and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities and the challenges ahead in preparation for the start of the Early Science program.

  17. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  18. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  19. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  20. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  1. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  2. Automatic discovery of the communication network topology for building a supercomputer model

    NASA Astrophysics Data System (ADS)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
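
    The abstract does not give implementation details; purely as an illustration of the idea, the sketch below folds per-device neighbour reports (hypothetical names and data, of the kind that could be gathered from LLDP queries or switch forwarding tables) into a simple adjacency map of nodes and switches.

    ```python
    from collections import defaultdict

    # Hypothetical neighbour reports: (device, local_port, neighbour_device).
    # In a real system these would come from LLDP queries or switch MIBs.
    neighbor_reports = [
        ("node-001", "eth0", "switch-A"),
        ("node-002", "eth0", "switch-A"),
        ("switch-A", "uplink1", "switch-core"),
        ("switch-B", "uplink1", "switch-core"),
        ("node-003", "eth0", "switch-B"),
    ]

    def build_topology(reports):
        """Fold neighbour reports into an undirected adjacency map."""
        graph = defaultdict(set)
        for device, _port, neighbor in reports:
            graph[device].add(neighbor)
            graph[neighbor].add(device)
        return graph

    topology = build_topology(neighbor_reports)
    for device, peers in sorted(topology.items()):
        print(device, "->", ", ".join(sorted(peers)))
    ```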

  3. Automotive applications of supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ginsberg, M.

    1987-01-01

    These proceedings compile papers on supercomputers in the automobile industry. Titles include: An automotive engineer's guide to the effective use of scalar, vector, and parallel computers; fluid mechanics, finite elements, and supercomputers; and Automotive crashworthiness performance on a supercomputer.

  4. Active food packaging based on molecularly imprinted polymers: study of the release kinetics of ferulic acid.

    PubMed

    Otero-Pazos, Pablo; Rodríguez-Bernaldo de Quirós, Ana; Sendón, Raquel; Benito-Peña, Elena; González-Vallejo, Victoria; Moreno-Bondi, M Cruz; Angulo, Immaculada; Paseiro-Losada, Perfecto

    2014-11-19

    A novel active packaging based on molecularly imprinted polymer (MIP) was developed for the controlled release of ferulic acid. The release kinetics of ferulic acid from the active system to food simulants (10, 20, and 50% ethanol (v/v), 3% acetic acid (w/v), and vegetable oil), substitutes (95% ethanol (v/v) and isooctane), and real food samples at different temperatures were studied. The key parameters of the diffusion process were calculated using a mathematical model based on Fick's second law. The ferulic acid release was affected by temperature as well as by the ethanol content of the simulant. The fastest release occurred in 95% ethanol (v/v) at 20 °C. The diffusion coefficients (D) obtained ranged between 1.8 × 10⁻¹¹ and 4.2 × 10⁻⁹ cm²/s. A very good correlation between experimental and estimated data was obtained, and consequently the model could be used to predict the release of ferulic acid into food simulants and real food samples.
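
    The abstract invokes Fick's second law but does not reproduce the model; purely as a reminder of the usual form of such fits (assuming a thin film of thickness L releasing from both faces, not necessarily the authors' exact equations), Crank's series solution and its early-time square-root-of-time limit are:

    ```latex
    % Crank's solution of Fick's second law for a plane sheet of thickness L
    % (release from both faces); M_t / M_inf is the fractional release at time t.
    \frac{M_t}{M_\infty} = 1 - \sum_{n=0}^{\infty} \frac{8}{(2n+1)^2\pi^2}
        \exp\!\left(-\frac{(2n+1)^2\pi^2 D t}{L^2}\right)
    % Early-time limit (M_t/M_inf below roughly 0.6): square-root-of-time release
    \frac{M_t}{M_\infty} \approx 4\sqrt{\frac{D t}{\pi L^2}}
    ```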

  5. The accurate particle tracer code

    DOE PAGES

    Wang, Yulei; Liu, Jian; Qin, Hong; ...

    2017-07-20

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world’s fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  6. Development of a Computing Cluster At the University of Richmond

    NASA Astrophysics Data System (ADS)

    Carbonneau, J.; Gilfoyle, G. P.; Bunn, E. F.

    2010-11-01

    The University of Richmond has developed a computing cluster to support the massive simulation and data analysis requirements for programs in intermediate-energy nuclear physics and cosmology. It is a 20-node, 240-core system running Red Hat Enterprise Linux 5. We have built and installed the physics software packages (Geant4, gemc, MADmap...) and developed shell and Perl scripts for running those programs on the remote nodes. The system has a theoretical processing peak of about 2500 GFLOPS. Testing with the High Performance Linpack (HPL) benchmarking program (one of the standard benchmarks used by the TOP500 list of fastest supercomputers) resulted in speeds of over 900 GFLOPS. The difference between the maximum and measured speeds is due to limitations in the communication speed among the nodes, creating a bottleneck for large memory problems. As HPL sends data between nodes, the gigabit Ethernet connection cannot keep up with the processing power. We will show how both the theoretical and actual performance of the cluster compares with other current and past clusters, as well as the cost per GFLOP. We will also examine the scaling of the performance when distributed to increasing numbers of nodes.
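
    As a rough cross-check of the quoted numbers (the per-core clock and flops-per-cycle below are assumptions for illustration, not values from the abstract), the 900 GFLOPS HPL result corresponds to roughly 36% of the ~2500 GFLOPS theoretical peak:

    ```python
    # Back-of-the-envelope peak/efficiency arithmetic for a small cluster.
    # The per-core clock and flops-per-cycle are assumed values for illustration.
    nodes = 20
    cores_per_node = 12
    clock_ghz = 2.6            # assumed core clock
    flops_per_cycle = 4        # assumed (e.g. SSE: 2 adds + 2 muls per cycle)

    peak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle
    measured_gflops = 900.0    # HPL result reported in the abstract

    print(f"theoretical peak ~ {peak_gflops:.0f} GFLOPS")
    print(f"HPL efficiency   ~ {measured_gflops / peak_gflops:.0%}")
    ```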

  7. The accurate particle tracer code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yulei; Liu, Jian; Qin, Hong

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world’s fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  8. Improved Access to Supercomputers Boosts Chemical Applications.

    ERIC Educational Resources Information Center

    Borman, Stu

    1989-01-01

    Supercomputing is described in terms of computing power and abilities. The increase in availability of supercomputers for use in chemical calculations and modeling are reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)

  9. Scientific Visualization in High Speed Network Environments

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kutler, Paul (Technical Monitor)

    1997-01-01

    In several cases, new visualization techniques have vastly increased the researcher's ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical, and their importance continues to grow faster than the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks in providing an important link to support efficient supercomputing is identified. The two technologies are driven by the increasing complexities and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is described. Current capabilities supported by high speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation Facility (NAS) at NASA Ames Research Center are described. Applied research in providing a supercomputer visualization environment to support future computational requirements is summarized.

  10. Towards Efficient Supercomputing: Searching for the Right Efficiency Metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, Chung-Hsing; Kuehn, Jeffery A; Poole, Stephen W

    2012-01-01

    The efficiency of supercomputing has traditionally been measured in execution time. In the early 2000s, the concept of total cost of ownership was re-introduced, with efficiency measures extended to include aspects such as energy and space. Yet the supercomputing community has never agreed upon a metric that can cover these aspects altogether and also provide a fair basis for comparison. This paper examines the metrics that have been proposed in the past decade, and proposes a vector-valued metric for efficient supercomputing. Using this metric, the paper presents a study of where the supercomputing industry has been and how it stands today with respect to efficient supercomputing.

  11. Fastest Rotating Star Found in Neighboring Galaxy

    NASA Image and Video Library

    2017-12-08

    NASA image release December 5, 2011 This is an artist's concept of the fastest rotating star found to date. The massive, bright young star, called VFTS 102, rotates at a million miles per hour, or 100 times faster than our Sun does. Centrifugal forces from this dizzying spin rate have flattened the star into an oblate shape and spun off a disk of hot plasma, seen edge on in this view from a hypothetical planet. The star may have "spun up" by accreting material from a binary companion star. The rapidly evolving companion later exploded as a supernova. The whirling star lies 160,000 light-years away in the Large Magellanic Cloud, a satellite galaxy of our Milky Way. The team will use NASA's Hubble Space Telescope to make precise measurements of the star's proper motion across space. To read more go to: hubblesite.org/newscenter/archive/releases/2011/39/full/ Image Type: Artwork Credit: NASA, ESA, and G. Bacon (STScI)

  12. NASA's supercomputing experience

    NASA Technical Reports Server (NTRS)

    Bailey, F. Ron

    1990-01-01

    A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamic Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.

  13. Keeping an Eye on the Prize

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazi, A U

    2007-02-06

    Setting performance goals is part of the business plan for almost every company. The same is true in the world of supercomputers. Ten years ago, the Department of Energy (DOE) launched the Accelerated Strategic Computing Initiative (ASCI) to help ensure the safety and reliability of the nation's nuclear weapons stockpile without nuclear testing. ASCI, which is now called the Advanced Simulation and Computing (ASC) Program and is managed by DOE's National Nuclear Security Administration (NNSA), set an initial 10-year goal to obtain computers that could process up to 100 trillion floating-point operations per second (teraflops). Many computer experts thought the goal was overly ambitious, but the program's results have proved them wrong. Last November, a Livermore-IBM team received the 2005 Gordon Bell Prize for achieving more than 100 teraflops while modeling the pressure-induced solidification of molten metal. The prestigious prize, which is named for a founding father of supercomputing, is awarded each year at the Supercomputing Conference to innovators who advance high-performance computing. Recipients for the 2005 prize included six Livermore scientists--physicists Fred Streitz, James Glosli, and Mehul Patel and computer scientists Bor Chan, Robert Yates, and Bronis de Supinski--as well as IBM researchers James Sexton and John Gunnels. This team produced the first atomic-scale model of metal solidification from the liquid phase with results that were independent of system size. The record-setting calculation used Livermore's domain decomposition molecular-dynamics (ddcMD) code running on BlueGene/L, a supercomputer developed by IBM in partnership with the ASC Program. BlueGene/L reached 280.6 teraflops on the Linpack benchmark, the industry standard used to measure computing speed. As a result, it ranks first on the list of Top500 Supercomputer Sites released in November 2005. To evaluate the performance of nuclear weapons systems, scientists must understand how materials behave under extreme conditions. Because experiments at high pressures and temperatures are often difficult or impossible to conduct, scientists rely on computer models that have been validated with obtainable data. Of particular interest to weapons scientists is the solidification of metals. "To predict the performance of aging nuclear weapons, we need detailed information on a material's phase transitions," says Streitz, who leads the Livermore-IBM team. For example, scientists want to know what happens to a metal as it changes from molten liquid to a solid and how that transition affects the material's characteristics, such as its strength.

  14. Supercomputer networking for space science applications

    NASA Technical Reports Server (NTRS)

    Edelson, B. I.

    1992-01-01

    The initial design of a supercomputer network topology is covered, including the design of the communications nodes and the communications interface hardware and software. Several space science applications proposed as experiments by GSFC and JPL for a supercomputer network using the NASA ACTS satellite are also reported.

  15. Most Social Scientists Shun Free Use of Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1998-01-01

    Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…

  16. A fault tolerant spacecraft supercomputer to enable a new class of scientific discovery

    NASA Technical Reports Server (NTRS)

    Katz, D. S.; McVittie, T. I.; Silliman, A. G., Jr.

    2000-01-01

    The goal of the Remote Exploration and Experimentation (REE) Project is to move supercomputing into space in a cost-effective manner and to allow the use of inexpensive, state-of-the-art, commercial-off-the-shelf components and subsystems in these space-based supercomputers.

  17. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node, have improved performance and scalability, enhanced accuracy and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  18. Distributed user services for supercomputers

    NASA Technical Reports Server (NTRS)

    Sowizral, Henry A.

    1989-01-01

    User-service operations at supercomputer facilities are examined. The question is whether a single, possibly distributed, user-services organization could be shared by NASA's supercomputer sites in support of a diverse, geographically dispersed, user community. A possible structure for such an organization is identified as well as some of the technologies needed in operating such an organization.

  19. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU and memory intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  20. Full speed ahead for software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, A.

    1986-03-10

    Supercomputing software is moving into high gear, spurred by the rapid spread of supercomputers into new applications. The critical challenge is how to develop tools that will make it easier for programmers to write applications that take advantage of vectorizing in the classical supercomputer and the parallelism that is emerging in supercomputers and minisupercomputers. Writing parallel software is a challenge that every programmer must face because parallel architectures are springing up across the range of computing. Cray is developing a host of tools for programmers. Tools to support multitasking (in supercomputer parlance, multitasking means dividing up a single program to run on multiple processors) are high on Cray's agenda. On tap for multitasking is Premult, dubbed a microtasking tool. As a preprocessor for Cray's CFT77 FORTRAN compiler, Premult will provide fine-grain multitasking.

  1. Will Moore's law be sufficient?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik P.

    2004-07-01

    It seems well understood that supercomputer simulation is an enabler for scientific discoveries, weapons, and other activities of value to society. It also seems widely believed that Moore's Law will make progressively more powerful supercomputers over time and thus enable more of these contributions. This paper seeks to add detail to these arguments, revealing them to be generally correct but not a smooth and effortless progression. This paper will review some key problems that can be solved with supercomputer simulation, showing that more powerful supercomputers will be useful up to a very high yet finite limit of around 10²¹ FLOPS (1 Zettaflops). The review will also show the basic nature of these extreme problems. This paper will review work by others showing that the theoretical maximum supercomputer power is very high indeed, but will explain how a straightforward extrapolation of Moore's Law will lead to technological maturity in a few decades. The power of a supercomputer at the maturity of Moore's Law will be very high by today's standards at 10¹⁶-10¹⁹ FLOPS (100 Petaflops to 10 Exaflops), depending on architecture, but distinctly below the level required for the most ambitious applications. Having established that Moore's Law will not be the last word in supercomputing, this paper will explore the nearer term issue of what a supercomputer will look like at maturity of Moore's Law. Our approach will quantify the maximum performance as permitted by the laws of physics for extension of current technology and then find a design that approaches this limit closely. We study a 'multi-architecture' for supercomputers that combines a microprocessor with other 'advanced' concepts and find it can reach the limits as well. This approach should be quite viable in the future because the microprocessor would provide compatibility with existing codes and programming styles while the 'advanced' features would provide a boost to the limits of performance.
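
    For a rough sense of the time scale implied (the starting point and doubling time below are illustrative assumptions, not figures from the abstract), extrapolating from a machine of order 10¹⁴ FLOPS with performance doubling every 1.5 years reaches the quoted maturity range in roughly one to two and a half decades:

    ```latex
    % Illustrative only: assume a machine of order 10^{14} FLOPS and a doubling time of 1.5 yr.
    t_{10^{16}} \approx 1.5\,\text{yr} \times \log_2\!\left(10^{16}/10^{14}\right) \approx 10\ \text{yr}
    \qquad
    t_{10^{19}} \approx 1.5\,\text{yr} \times \log_2\!\left(10^{19}/10^{14}\right) \approx 25\ \text{yr}
    ```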

  2. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilk, Todd

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL and the Green500 benchmark, and offers notes on our experience meeting the Green500's reporting requirements.

  3. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE PAGES

    Yilk, Todd

    2018-02-17

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL and the Green500 benchmark, and offers notes on our experience meeting the Green500's reporting requirements.

  4. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M

    2009-01-01

    This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
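
    For reference, the non-preconditioned Conjugate Gradient iteration that the record benchmarks has the textbook form sketched below (a plain NumPy version for a symmetric positive-definite system, not the Cell or FPGA implementation).

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for symmetric positive-definite A (non-preconditioned CG)."""
        x = np.zeros_like(b)
        r = b - A @ x              # initial residual
        p = r.copy()               # initial search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    # Small self-check on a random symmetric positive-definite system.
    n = 50
    M = np.random.rand(n, n)
    A = M @ M.T + n * np.eye(n)
    b = np.random.rand(n)
    x = conjugate_gradient(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))
    ```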

  5. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M

    2009-03-10

    This work presents a detailed implementation of a double precision, Non-Preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.

  6. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  7. Supercomputer Provides Molecular Insight into Cellulose (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2011-02-01

    Groundbreaking research at the National Renewable Energy Laboratory (NREL) has used supercomputing simulations to calculate the work that enzymes must do to deconstruct cellulose, which is a fundamental step in biomass conversion technologies for biofuels production. NREL used the new high-performance supercomputer Red Mesa to conduct several million central processing unit (CPU) hours of simulation.

  8. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
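
    A quick check of the quoted figures: 14 Gflops at 185 W works out to roughly 76 Mflops per watt.

    ```latex
    \frac{14\ \text{Gflops}}{185\ \text{W}} \approx 0.076\ \text{Gflops/W} \approx 76\ \text{Mflops/W}
    ```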

  9. Release of chemical permeation enhancers from drug-in-adhesive transdermal patches.

    PubMed

    Qvist, Michael H; Hoeck, Ulla; Kreilgaard, Bo; Madsen, Flemming; Frokjaer, Sven

    2002-01-14

    There is only limited knowledge of how chemical permeation enhancers release from transdermal drug delivery systems of the drug-in-adhesive type. In this study, the release of eight commonly known enhancers from eight types of polymer adhesives was evaluated using Franz diffusion cells. It was shown that all the enhancers released completely from the adhesives and followed square-root-of-time kinetics (Higuchi law). Using a statistical analysis it was shown that the release rate was more dependent on the type of enhancer than on the type of polymers. The mean release rates were in the range from 2.2 to 11.1%/√t for the slowest and fastest releasing enhancers, which correspond to a 50% release within 500 and 20 min, respectively. Furthermore, the release rates were inversely proportional to the cube root of the molal volumes of the enhancers and to their logarithmic partition coefficients between the polymer adhesive and the receptor fluid. It was found that the observed release rates were probably due to a high diffusion coefficient of the enhancers rather than due to an inhomogeneous embedment of the enhancers in the adhesives. The type of adhesive showed only a minor influence on the release rate; in particular, no difference was seen among the acrylic polymers. However, compared to the acrylic adhesives, the polyisobutylene adhesive showed slower release rates, while the silicone adhesive showed slightly faster release rates.
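
    A quick consistency check of the quoted figures under the square-root-of-time (Higuchi) law, taking the release rate constant k in %/√min, reproduces the reported 50% release times:

    ```latex
    % Higuchi law: cumulative release Q(t) = k * sqrt(t), with k in %/sqrt(min)
    Q(t) = k\sqrt{t}
    \quad\Rightarrow\quad
    t_{50\%} = \left(\tfrac{50}{k}\right)^{2}
    % slowest enhancer: k = 2.2  => t_50 ~ (50/2.2)^2  ~ 517 min  (reported: ~500 min)
    % fastest enhancer: k = 11.1 => t_50 ~ (50/11.1)^2 ~  20 min  (reported: ~20 min)
    ```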

  10. Input/output behavior of supercomputing applications

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.

  11. Optimizing the inner loop of the gravitational force interaction on modern processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Michael S

    2010-12-08

    We have achieved superior performance on multiple generations of the fastest supercomputers in the world with our hashed oct-tree N-body code (HOT), spanning almost two decades and garnering multiple Gordon Bell Prizes for significant achievement in parallel processing. Execution time for our N-body code is largely influenced by the force calculation in the inner loop. Improvements to the inner loop using SSE3 instructions have enabled the calculation of over 200 million gravitational interactions per second per processor on a 2.6 GHz Opteron, for a computational rate of over 7 Gflops in single precision (70% of peak). We obtain optimal performance on some processors (including the Cell) by decomposing the reciprocal square root function required for a gravitational interaction into a table lookup, Chebychev polynomial interpolation, and Newton-Raphson iteration, using the algorithm of Karp. By unrolling the loop by a factor of six, and using SPU intrinsics to compute on vectors, we obtain performance of over 16 Gflops on a single Cell SPE. Aggregated over the 8 SPEs on a Cell processor, the overall performance is roughly 130 Gflops. In comparison, the ordinary C version of our inner loop only obtains 1.6 Gflops per SPE with the spuxlc compiler.
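
    The Karp-style decomposition mentioned above, a seed estimate refined by Newton-Raphson iteration for the reciprocal square root, is the numerically interesting step; the sketch below illustrates only that iteration in scalar Python (a crude power-of-two seed stands in for the paper's table lookup and Chebyshev interpolation, and none of the SSE/SPU vectorisation is reproduced).

    ```python
    import math

    def rsqrt(x, iterations=5):
        """Approximate 1/sqrt(x) via a crude seed plus Newton-Raphson refinement.

        In the paper's scheme the seed comes from a table lookup followed by
        Chebyshev interpolation; here a cheap power-of-two estimate is used instead.
        """
        # Crude seed: 2**(-floor(log2(x))/2), within a factor of ~sqrt(2) of the answer.
        e = math.floor(math.log2(x))
        y = 2.0 ** (-e / 2.0)
        # Newton-Raphson for f(y) = 1/y**2 - x:  y <- y * (1.5 - 0.5 * x * y * y)
        for _ in range(iterations):
            y = y * (1.5 - 0.5 * x * y * y)
        return y

    for v in (0.25, 2.0, 1.0e6):
        print(v, rsqrt(v), 1.0 / math.sqrt(v))
    ```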

  12. Climate@Home: Crowdsourcing Climate Change Research

    NASA Astrophysics Data System (ADS)

    Xu, C.; Yang, C.; Li, J.; Sun, M.; Bambacus, M.

    2011-12-01

    Climate change deeply impacts human wellbeing. Significant amounts of resources have been invested in building supercomputers that are capable of running advanced climate models, which help scientists understand climate change mechanisms and predict its trend. Although climate change influences all human beings, the general public is largely excluded from the research. On the other hand, scientists are eagerly seeking communication media for effectively enlightening the public on climate change and its consequences. The Climate@Home project is devoted to connecting the two ends with an innovative solution: crowdsourcing climate computing to the general public by harvesting volunteered computing resources from the participants. A distributed web-based computing platform will be built to support climate computing, and the general public can 'plug in' their personal computers to participate in the research. People contribute the spare computing power of their computers to run a computer model, which is used by scientists to predict climate change. Traditionally, only supercomputers could handle such a large computing load. By orchestrating massive numbers of personal computers to perform atomized data processing tasks, investment in new supercomputers, energy consumed by supercomputers, and carbon release from supercomputers are reduced. Meanwhile, the platform forms a social network of climate researchers and the general public, which may be leveraged to raise climate awareness among the participants. A portal is to be built as the gateway to the Climate@Home project. Three types of roles and the corresponding functionalities are designed and supported. The end users include the citizen participants, climate scientists, and project managers. Citizen participants connect their computing resources to the platform by downloading and installing a computing engine on their personal computers. Computer climate models are defined at the server side. Climate scientists configure computer model parameters through the portal user interface. After model configuration, scientists then launch the computing task. Next, data is atomized and distributed to computing engines that are running on citizen participants' computers. Scientists will receive notifications on the completion of computing tasks, and examine modeling results via visualization modules of the portal. Computing tasks, computing resources, and participants are managed by project managers via portal tools. A portal prototype has been built for proof of concept. Three forums have been set up for different groups of users to share information on the science, technology, and educational outreach aspects of the project. A Facebook account has been set up to distribute messages via the most popular social networking platform. New threads are synchronized from the forums to Facebook. A mapping tool displays geographic locations of the participants and the status of tasks on each client node. A group of users have been invited to test functions such as forums, blogs, and computing resource monitoring.

  13. Prospects for Boiling of Subcooled Dielectric Liquids for Supercomputer Cooling

    NASA Astrophysics Data System (ADS)

    Zeigarnik, Yu. A.; Vasil'ev, N. V.; Druzhinin, E. A.; Kalmykov, I. V.; Kosoi, A. S.; Khodakov, K. A.

    2018-02-01

    It is shown experimentally that forced-convection boiling of the dielectric coolant Novec 649, subcooled relative to the saturation temperature, makes it possible to remove heat fluxes of up to 100 W/cm² from a modern supercomputer chip interface. This creates prerequisites for the application of dielectric liquids in cooling systems of modern supercomputers with increased requirements for operating reliability.

  14. National Test Facility civilian agency use of supercomputers not feasible

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-12-01

    Based on interviews with civilian agencies cited in the House report (DOE, DoEd, HHS, FEMA, NOAA), none would be able to make effective use of NTF's excess supercomputing capabilities. These agencies stated they could not use the resources primarily because (1) NTF's supercomputers are older machines whose performance and costs cannot match those of more advanced computers available from other sources and (2) some agencies have not yet developed applications requiring supercomputer capabilities or do not have funding to support such activities. In addition, future support for the hardware and software at NTF is uncertain, making any investment by an outside user risky.

  15. Kriging for Spatial-Temporal Data on the Bridges Supercomputer

    NASA Astrophysics Data System (ADS)

    Hodgess, E. M.

    2017-12-01

    Currently, kriging of spatial-temporal data is slow and limited to relatively small vector sizes. We have developed a method on the Bridges supercomputer, at the Pittsburgh Supercomputing Center, which uses a combination of the tools R, Fortran, the Message Passing Interface (MPI), OpenACC, and special R packages for big data. This combination of tools now permits us to complete tasks that previously could not be completed or that took hours to complete. We ran simulation studies from a laptop against the supercomputer. We also look at "real world" data sets, such as the Irish wind data, and some weather data. We compare the timings. We note that the timings are surprisingly good.
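
    The abstract names the tooling (R, Fortran, MPI, OpenACC, big-data R packages) rather than the algorithm; as a reminder of what the core kriging computation looks like, here is a minimal ordinary-kriging sketch in NumPy with an assumed exponential covariance model and toy data, not the Bridges workflow.

    ```python
    import numpy as np

    def exp_cov(h, sill=1.0, length=2.0):
        """Assumed exponential covariance model C(h) = sill * exp(-h / length)."""
        return sill * np.exp(-h / length)

    def ordinary_kriging(xy, z, xy0):
        """Predict z at location xy0 from observations (xy, z) by ordinary kriging."""
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)  # pairwise distances
        # Kriging system: covariance block bordered by the unbiasedness constraint.
        K = np.ones((n + 1, n + 1))
        K[:n, :n] = exp_cov(d)
        K[n, n] = 0.0
        rhs = np.ones(n + 1)
        rhs[:n] = exp_cov(np.linalg.norm(xy - xy0, axis=1))
        weights = np.linalg.solve(K, rhs)[:n]
        return weights @ z

    # Toy data: five observations, predict at the origin.
    xy = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0], [-1.0, -1.0], [0.5, 0.5]])
    z = np.array([1.2, 0.8, 1.9, 0.4, 1.0])
    print("kriged value at (0, 0):", ordinary_kriging(xy, z, np.array([0.0, 0.0])))
    ```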

  16. Multiple DNA and protein sequence alignment on a workstation and a supercomputer.

    PubMed

    Tajima, K

    1988-11-01

    This paper describes a multiple alignment method using a workstation and a supercomputer. The method is based on aligning a set of already aligned sequences with a new sequence, and applies this alignment step recursively. The alignment executes in reasonable computation time on platforms ranging from a workstation to a supercomputer, in terms of both alignment quality and the computational speed gained through parallel processing. The application of the algorithm is illustrated by several examples of multiple alignment of 12 amino acid and DNA sequences of HIV (human immunodeficiency virus) env genes. Colour graphic programs on a workstation and parallel processing on a supercomputer are discussed.
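
    The record describes aligning a new sequence against a set of already aligned sequences and recursing; the dynamic-programming core is the same as in pairwise global alignment, sketched below in a minimal Needleman-Wunsch form (simple match/mismatch/gap scores are assumed here, not the authors' scoring scheme or profile handling).

    ```python
    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
        """Global pairwise alignment by dynamic programming (Needleman-Wunsch)."""
        n, m = len(a), len(b)
        # Score matrix, initialised with gap penalties along the borders.
        S = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            S[i][0] = i * gap
        for j in range(1, m + 1):
            S[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = S[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                S[i][j] = max(diag, S[i - 1][j] + gap, S[i][j - 1] + gap)
        # Traceback to recover one optimal alignment.
        out_a, out_b = [], []
        i, j = n, m
        while i > 0 or j > 0:
            if i > 0 and j > 0 and S[i][j] == S[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch):
                out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
            elif i > 0 and S[i][j] == S[i - 1][j] + gap:
                out_a.append(a[i - 1]); out_b.append("-"); i -= 1
            else:
                out_a.append("-"); out_b.append(b[j - 1]); j -= 1
        return "".join(reversed(out_a)), "".join(reversed(out_b)), S[n][m]

    print(needleman_wunsch("GATTACA", "GCATGCT"))
    ```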

  17. Supercomputing in Aerospace

    NASA Technical Reports Server (NTRS)

    Kutler, Paul; Yee, Helen

    1987-01-01

    Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics.

  18. The study on the entrapment efficiency and in vitro release of puerarin submicron emulsion.

    PubMed

    Yue, Peng-Fei; Lu, Xiu-Yun; Zhang, Zeng-Zhu; Yuan, Hai-Long; Zhu, Wei-Feng; Zheng, Qin; Yang, Ming

    2009-01-01

    The entrapment efficiency (EE) and in vitro release are very important physicochemical characteristics of puerarin submicron emulsion (SME). In this paper, the performance of ultrafiltration (UF), ultracentrifugation (UC), and microdialysis (MD) for determining the EE of the SME was evaluated. The in vitro release of puerarin from the SME was studied by using MD and pressure UF technology. The EE of the SME was 86.5%, 72.8%, and 55.8% as determined by MD, UF, and UC, respectively. MD was not suitable for EE measurements of the puerarin submicron oil droplets, as it could only determine the total EE of the submicron oil droplets and liposome micelles, but it could be applied to determine the amount of free drug in SMEs. Although UC was the fastest and simplest to use, its results were the least reliable. UF was still the relatively accurate method for EE determination of puerarin SME. The release of puerarin SME could be evaluated by using MD and pressure UF, but MD seemed to be more suitable for the release study of puerarin emulsion. The drug release from puerarin SME at three drug concentrations was initially rapid, but reached a plateau value within 30 min. Drug release of puerarin from the SME occurred via burst release.

  19. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will utilize the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon Machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  20. NAS technical summaries: Numerical aerodynamic simulation program, March 1991 - February 1992

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefiting other supercomputer centers in Government and industry. This report contains selected scientific results from the 1991-92 NAS Operational Year, March 4, 1991 to March 3, 1992, which is the fifth year of operation. During this year, the scientific community was given access to a Cray-2 and a Cray Y-MP. The Cray-2, the first generation supercomputer, has four processors, 256 megawords of central memory, and a total sustained speed of 250 million floating point operations per second. The Cray Y-MP, the second generation supercomputer, has eight processors and a total sustained speed of one billion floating point operations per second. Additional memory was installed this year, doubling capacity from 128 to 256 megawords of solid-state storage-device memory. Because of its higher performance, the Cray Y-MP delivered approximately 77 percent of the total number of supercomputer hours used during this year.

  1. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    NASA Astrophysics Data System (ADS)

    Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration

    2017-10-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.

  2. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  3. Desktop supercomputer: what can it do?

    NASA Astrophysics Data System (ADS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are now available to most researchers. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies were first widely introduced. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  4. Picosecond activation of the DEACM photocage unravelled by VIS-pump-IR-probe spectroscopy.

    PubMed

    van Wilderen, L J G W; Neumann, C; Rodrigues-Correia, A; Kern-Michler, D; Mielke, N; Reinfelds, M; Heckel, A; Bredenbeck, J

    2017-03-01

    The light-induced ultrafast uncaging process of the [7-(diethylamino)coumarin-4-yl]methyl (DEACM) cage is measured by time-resolved visible-pump-infrared-probe spectroscopy, and supported by steady-state absorption spectroscopy in the visible and infrared spectral regions. Understanding the uncaging process is important because its favorable properties make DEACM an interesting case for chemical and biological applications. It has a convenient absorption in the visible spectral range, and is relatively easily modified to carry leaving groups (LGs) such as nucleotides, substrates or inhibitors, which are inactive when bound and active when released. Previous work suggested a lower limit for the uncaging rate, which places it among the fastest available cages. Here, we determine the photodissociation directly to occur on the picosecond time scale by monitoring the appearance of the released LG in the infrared spectral region. In the present study, azide (N3) is chosen as an LG to monitor photodissociation because its vibrational mode is spectrally isolated (hence easy to follow) and its absorption wavenumber is sensitive to local structural rearrangements. The uncaging process is recorded up to 3 nanoseconds and compared to the collected steady-state spectra. The free LG appears on a picosecond time scale, rendering this one of the fastest known cages. No evidence is found for a tight-ion pair (TIP) preceding the free LG. The uncaging mechanism is found to be slowed down upon the addition of water to acetonitrile.

  5. Demonstration of NICT Space Weather Cloud --Integration of Supercomputer into Analysis and Visualization Environment--

    NASA Astrophysics Data System (ADS)

    Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.

    2010-12-01

    In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations keeps increasing thanks to the tremendous advancement of supercomputers. A further advance is Grid computing, which integrates distributed computational resources to provide scalable computing capacity. Simulation research is most effective when a researcher can design the physical model, run the calculations on a supercomputer, and analyze and visualize the results with familiar tools. A supercomputer, however, is usually far removed from the analysis and visualization environment: researchers typically analyze and visualize on a local workstation (WS), where installing and operating software is easy, so data must be copied manually from the supercomputer to the WS. The time needed to transfer data over a long-delay network is a real obstacle to high-accuracy simulations. In terms of usability, it is therefore important to integrate a supercomputer and an analysis and visualization environment seamlessly, using the researcher's familiar methods. NICT has been developing a cloud computing environment, the NICT Space Weather Cloud. In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs used for data analysis and visualization, and they are connected to JGN2plus, a high-speed network for research and development. A distributed virtual high-capacity storage system is built with Grid Datafarm (Gfarm v2). Large data sets output from the supercomputer are transferred to this virtual storage over JGN2plus, so a researcher can concentrate on research with familiar methods, regardless of the distance between the supercomputer and the analysis and visualization environment. Currently, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected over JGN2plus and constitute 1 PB (physical size) of virtual storage under Gfarm v2. These disk servers are connected with the supercomputers of NICT and Osaka University, and a system has been built that automatically transfers supercomputer output to the virtual storage. The measured transfer rate is about 50 GB/hour, which is estimated to be adequate for a representative simulation and analysis task, the reconstruction of coronal magnetic fields. This work serves as an experiment with the system, and verification of its practicality is proceeding in parallel. Here we give an overview of the space weather cloud system developed so far and demonstrate several scientific results obtained with it. We also introduce several web applications offered as a service of the space weather cloud, named "e-SpaceWeather" (e-SW), which provides a variety of online space weather services.

  6. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Klimentov, A

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the Mira supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
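
    To make the wrapper idea above concrete, here is a minimal, hypothetical sketch (not the actual PanDA pilot code) of how a light-weight MPI wrapper can fan single-threaded payloads out across the cores of a leadership-class worker node: each MPI rank launches one independent payload, and MPI is used only for startup and a final status gather. The run_simulation script name and per-rank input files are illustrative assumptions.

      # Hypothetical light-weight MPI wrapper: one single-threaded payload per rank.
      from mpi4py import MPI
      import subprocess

      def main():
          comm = MPI.COMM_WORLD
          rank = comm.Get_rank()          # index of this payload within the allocation
          size = comm.Get_size()          # total number of single-threaded payloads

          # Hypothetical per-rank work unit prepared in advance by the pilot.
          input_file = f"events_{rank:04d}.dat"
          cmd = ["./run_simulation", "--input", input_file]

          # Each rank runs its payload independently; MPI is only used so that a
          # single batch job fills a whole multi-core node and ends cleanly.
          result = subprocess.run(cmd, capture_output=True, text=True)
          statuses = comm.gather(result.returncode, root=0)

          if rank == 0:
              failed = sum(1 for s in statuses if s != 0)
              print(f"{size - failed}/{size} payloads succeeded")

      if __name__ == "__main__":
          main()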

  7. Color graphics, interactive processing, and the supercomputer

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, Rudeen

    1987-01-01

    The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.

  8. Automated Help System For A Supercomputer

    NASA Technical Reports Server (NTRS)

    Callas, George P.; Schulbach, Catherine H.; Younkin, Michael

    1994-01-01

    Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.

  9. NASA Advanced Supercomputing (NAS) User Services Group

    NASA Technical Reports Server (NTRS)

    Pandori, John; Hamilton, Chris; Niggley, C. E.; Parks, John W. (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides an overview of NAS (NASA Advanced Supercomputing), its goals, and its mainframe computer assets. Also covered are its functions, including systems monitoring and technical support.

  10. Redox-sensitive shell-crosslinked polypeptide-block-polysaccharide micelles for efficient intracellular anticancer drug delivery.

    PubMed

    Zhang, Aiping; Zhang, Zhe; Shi, Fenghua; Xiao, Chunsheng; Ding, Jianxun; Zhuang, Xiuli; He, Chaoliang; Chen, Li; Chen, Xuesi

    2013-09-01

    Redox-responsive SCMs based on amphiphilic PBLG-b-dextran with good biocompatibility are synthesized and used for efficient intracellular drug delivery. The molecular structures and SCMs characteristics are characterized by 1H NMR, FT-IR, TEM, and DLS. The hydrodynamic radius of SCMs increases gradually in PBS due to the cleavage of disulfide bond in micellar shell caused by the presence of GSH. The encapsulation efficiency and release kinetics of DOX are investigated. The fastest DOX release is observed under intracellular-mimicking reductive environments. An MTT assay demonstrates that DOX-loaded SCMs show higher cellular proliferation inhibition against GSH-OEt pretreated HeLa and HepG2 than that of the non-pretreated and BSO-pretreated ones. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. NSF Commits to Supercomputers.

    ERIC Educational Resources Information Center

    Waldrop, M. Mitchell

    1985-01-01

    The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)

  12. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2018-02-13

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  13. Adventures in Computational Grids

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Sometimes one supercomputer is not enough. Or your local supercomputers are busy, or not configured for your job. Or you don't have any supercomputers. You might be trying to simulate worldwide weather changes in real time, requiring more compute power than you could get from any one machine. Or you might be collecting microbiological samples on an island, and need to examine them with a special microscope located on the other side of the continent. These are the times when you need a computational grid.

  14. Mira: Argonne's 10-petaflops supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papka, Michael; Coghlan, Susan; Isaacs, Eric

    2013-07-03

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  15. Breakthrough: NETL's Simulation-Based Engineering User Center (SBEUC)

    ScienceCinema

    Guenther, Chris

    2018-05-23

    The National Energy Technology Laboratory relies on supercomputers to develop many novel ideas that become tomorrow's energy solutions. Supercomputers provide a cost-effective, efficient platform for research and usher technologies into widespread use faster to bring benefits to the nation. In 2013, Secretary of Energy Dr. Ernest Moniz dedicated NETL's new supercomputer, the Simulation Based Engineering User Center, or SBEUC. The SBEUC is dedicated to fossil energy research and is a collaborative tool for all of NETL and our regional university partners.

  16. A high level language for a high performance computer

    NASA Technical Reports Server (NTRS)

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  17. Technology advances and market forces: Their impact on high performance architectures

    NASA Technical Reports Server (NTRS)

    Best, D. R.

    1978-01-01

    Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture and application with greater attention to testability, maintainability, reliability, and usability than supercomputer development programs of the past.

  18. Floating point arithmetic in future supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
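
    As an illustration of the recommended format, the following short sketch (standard Python only; not taken from the NASA report) unpacks a 64-bit IEEE double into the 1 sign bit, 11 exponent bits, and 52 mantissa bits described above.

      # Decompose a 64-bit IEEE double into sign, unbiased exponent, and mantissa bits.
      import struct

      def decompose(x: float):
          bits = struct.unpack(">Q", struct.pack(">d", x))[0]   # raw 64-bit pattern
          sign     = bits >> 63                  # 1 sign bit
          exponent = (bits >> 52) & 0x7FF        # 11 exponent bits, biased by 1023
          mantissa = bits & ((1 << 52) - 1)      # 52 mantissa bits, implicit leading 1
          return sign, exponent - 1023, mantissa

      print(decompose(1.0))    # (0, 0, 0): value = +1.0 * 2**0
      print(decompose(-6.5))   # (1, 2, ...): value = -1.625 * 2**2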

  19. Breakthrough: NETL's Simulation-Based Engineering User Center (SBEUC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guenther, Chris

    The National Energy Technology Laboratory relies on supercomputers to develop many novel ideas that become tomorrow's energy solutions. Supercomputers provide a cost-effective, efficient platform for research and usher technologies into widespread use faster to bring benefits to the nation. In 2013, Secretary of Energy Dr. Ernest Moniz dedicated NETL's new supercomputer, the Simulation Based Engineering User Center, or SBEUC. The SBEUC is dedicated to fossil energy research and is a collaborative tool for all of NETL and our regional university partners.

  20. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.

  1. Tracing Scientific Facilities through the Research Literature Using Persistent Identifiers

    NASA Astrophysics Data System (ADS)

    Mayernik, M. S.; Maull, K. E.

    2016-12-01

    Tracing persistent identifiers to their source publications is an easy task when authors use them, since it is a simple matter of matching the persistent identifier to the specific text string of the identifier. However, determining whether a publication uses the resource behind an identifier when that identifier is not referenced explicitly is a harder task. In this research, we explore the effectiveness of alternative strategies for associating publications with uses of the resource referenced by an identifier when it may not be explicit. This project is explored within the context of the NCAR supercomputer, where we are broadly interested in the science that can be traced to the usage of the NCAR supercomputing facility, by way of the peer-reviewed research publications that utilize and reference it. In this project we explore several ways of drawing linkages between publications and the NCAR supercomputing resources. Peer-reviewed publications related to NCAR supercomputer usage are identified and compiled via three sources: 1) user-supplied publications gathered through a community survey, 2) publications identified via manual searching of the Google Scholar search index, and 3) publications associated with National Science Foundation (NSF) grants extracted from a public NSF database. These three sources represent three styles of collecting information about publications that likely imply usage of the NCAR supercomputing facilities. Each source has strengths and weaknesses, so our discussion explores how our publication identification and analysis methods vary in terms of accuracy, reliability, and effort. We will also discuss strategies for enabling more efficient tracing of the research impacts of supercomputing facilities going forward through the assignment of a persistent web identifier to the NCAR supercomputer. While this solution has the potential to greatly enhance our ability to trace the use of the facility through publications, authors must cite the facility consistently. It is therefore necessary to provide recommendations for citation and attribution behavior, and we will conclude our discussion with how such recommendations have improved tracing of the supercomputer facility, allowing for more consistent and widespread measurement of its impact.
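
    The "easy case" described above, matching an explicit identifier string in publication full text, can be sketched in a few lines. The identifier value below is a made-up placeholder, not NCAR's actual persistent identifier.

      # Minimal sketch: detect verbatim occurrences of a facility identifier in text.
      import re

      facility_id = "doi:10.9999/EXAMPLE-HPC"      # hypothetical facility identifier
      pattern = re.compile(re.escape(facility_id), re.IGNORECASE)

      def cites_facility(fulltext: str) -> bool:
          """True if the publication text contains the identifier verbatim."""
          return bool(pattern.search(fulltext))

      print(cites_facility("Simulations used the facility (doi:10.9999/EXAMPLE-HPC)."))  # True
      print(cites_facility("Computations were performed on a local cluster."))            # False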

  2. New insights on poly(vinyl acetate)-based coated floating tablets: characterisation of hydration and CO2 generation by benchtop MRI and its relation to drug release and floating strength.

    PubMed

    Strübing, Sandra; Abboud, Tâmara; Contri, Renata Vidor; Metz, Hendrik; Mäder, Karsten

    2008-06-01

    The purpose of this study was to investigate the mechanism of floating and drug release behaviour of poly(vinyl acetate)-based floating tablets with membrane controlled drug delivery. Propranolol HCl containing tablets with Kollidon SR as an excipient for direct compression and different Kollicoat SR 30 D/Kollicoat IR coats varying from 10 to 20mg polymer/cm2 were investigated regarding drug release in 0.1N HCl. Furthermore, the onset of floating, the floating duration and the floating strength of the device were determined. In addition, benchtop MRI studies of selected samples were performed. Coated tablets with 10mg polymer/cm2 SR/IR, 8.5:1.5 coat exhibited the shortest lag times prior to drug release and floating onset, the fastest increase in and highest maximum values of floating strength. The drug release was delayed efficiently within a time interval of 24 h by showing linear drug release characteristics. Poly(vinyl acetate) proved to be an appropriate excipient to ensure safe and reliable drug release. Floating strength measurements offered the possibility to quantify the floating ability of the developed systems and thus to compare different formulations more efficiently. Benchtop MRI studies allowed a deeper insight into drug release and floating mechanisms noninvasively and continuously.

  3. Energy Efficient Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anypas, Katie

    2014-10-17

    Katie Anypas, Head of NERSC's Services Department discusses the Lab's research into developing increasingly powerful and energy efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  4. Energy Efficient Supercomputing

    ScienceCinema

    Anypas, Katie

    2018-05-07

    Katie Anypas, Head of NERSC's Services Department discusses the Lab's research into developing increasingly powerful and energy efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  5. Job Management Requirements for NAS Parallel Systems and Clusters

    NASA Technical Reports Server (NTRS)

    Saphir, William; Tanner, Leigh Ann; Traversat, Bernard

    1995-01-01

    A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.

  6. Fully accelerating quantum Monte Carlo simulations of real materials on GPU clusters

    NASA Astrophysics Data System (ADS)

    Esler, Kenneth

    2011-03-01

    Quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles, combining very high accuracy with extreme parallel scalability. By solving the many-body Schrödinger equation through a stochastic projection, it achieves greater accuracy than mean-field methods and better scaling with system size than quantum chemical methods, enabling scientific discovery across a broad spectrum of disciplines. In recent years, graphics processing units (GPUs) have provided a high-performance and low-cost new approach to scientific computing, and GPU-based supercomputers are now among the fastest in the world. The multiple forms of parallelism afforded by QMC algorithms make the method an ideal candidate for acceleration in the many-core paradigm. We present the results of porting the QMCPACK code to run on GPU clusters using the NVIDIA CUDA platform. Using mixed precision on GPUs and MPI for intercommunication, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core CPUs alone, while reproducing the double-precision CPU results within statistical error. We discuss the algorithm modifications necessary to achieve good performance on this heterogeneous architecture and present the results of applying our code to molecules and bulk materials. Supported by the U.S. DOE under Contract No. DOE-DE-FG05-08OR23336 and by the NSF under No. 0904572.

  7. Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patchett, John M; Ahrens, James P; Lo, Li - Ta

    2010-10-15

    Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software based ray tracing, software based rasterization and hardware accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU and CPU based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software based ray-tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
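
    For readers unfamiliar with the two scaling measures used in such case studies, the following small sketch (with placeholder timings, not the paper's measurements) shows how strong-scaling speedup and efficiency and weak-scaling efficiency are typically computed.

      # Strong scaling: fixed problem size, growing process count.
      # Weak scaling: problem size grows proportionally with the process count.
      procs    = [1, 2, 4, 8, 16]
      t_strong = [100.0, 52.0, 27.0, 15.0, 9.0]    # seconds, hypothetical timings
      t_weak   = [10.0, 10.4, 11.1, 12.0, 13.5]    # seconds, hypothetical timings

      for p, t in zip(procs, t_strong):
          speedup = t_strong[0] / t
          print(f"strong: {p:2d} procs  speedup {speedup:5.2f}  efficiency {speedup / p:.2f}")

      for p, t in zip(procs, t_weak):
          print(f"weak:   {p:2d} procs  efficiency {t_weak[0] / t:.2f}")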

  8. Supercomputing Drives Innovation - Continuum Magazine | NREL

    Science.gov Websites

    NREL scientists have used supercomputers to simulate 3D models of primary enzymes and of wind plant aerodynamics, showing low-velocity wakes and their impact.

  9. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarje, Abhinav; Jacobsen, Douglas W.; Williams, Samuel W.

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards the exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe our experiences exploiting threading in a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  10. A mass storage system for supercomputers based on Unix

    NASA Technical Reports Server (NTRS)

    Richards, J.; Kummell, T.; Zarlengo, D. G.

    1988-01-01

    The authors present the design, implementation, and utilization of a large mass storage subsystem (MSS) for the numerical aerodynamics simulation. The MSS supports a large networked, multivendor Unix-based supercomputing facility. The MSS at Ames Research Center provides all processors on the numerical aerodynamics system processing network, from workstations to supercomputers, the ability to store large amounts of data in a highly accessible, long-term repository. The MSS uses Unix System V and is capable of storing hundreds of thousands of files ranging from a few bytes to 2 Gb in size.

  11. Supercomputer algorithms for efficient linear octree encoding of three-dimensional brain images.

    PubMed

    Berger, S B; Reis, D J

    1995-02-01

    We designed and implemented algorithms for three-dimensional (3-D) reconstruction of brain images from serial sections using two important supercomputer architectures, vector and parallel. These architectures were represented by the Cray YMP and Connection Machine CM-2, respectively. The programs operated on linear octree representations of the brain data sets, and achieved 500-800 times acceleration when compared with a conventional laboratory workstation. As the need for higher resolution data sets increases, supercomputer algorithms may offer a means of performing 3-D reconstruction well above current experimental limits.
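
    A linear octree stores occupied leaf nodes as sorted location codes rather than as a pointer-based tree. The sketch below illustrates that general idea with Morton (interleaved-bit) keys for voxels above a threshold; it omits the merging of full octants into coarser nodes and is not the authors' algorithm.

      # Minimal linear-octree-style encoding of a 3-D volume using Morton keys.
      import numpy as np

      def morton_key(x: int, y: int, z: int, bits: int = 8) -> int:
          """Interleave the bits of (x, y, z) into a single octree location code."""
          key = 0
          for i in range(bits):
              key |= ((x >> i) & 1) << (3 * i)
              key |= ((y >> i) & 1) << (3 * i + 1)
              key |= ((z >> i) & 1) << (3 * i + 2)
          return key

      def linear_octree(volume: np.ndarray, threshold: float) -> np.ndarray:
          """Return sorted Morton keys of all voxels above threshold (finest-level leaves)."""
          xs, ys, zs = np.nonzero(volume > threshold)
          keys = np.array([morton_key(int(x), int(y), int(z)) for x, y, z in zip(xs, ys, zs)])
          return np.sort(keys)   # sorted keys allow binary-search lookups and merging

      # Example: a tiny 8x8x8 "image" with one bright region.
      vol = np.zeros((8, 8, 8))
      vol[2:4, 2:4, 2:4] = 1.0
      print(linear_octree(vol, 0.5))   # 8 keys, one per occupied voxel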

  12. Intelligent supercomputers: the Japanese computer sputnik

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, G.

    1983-11-01

    Japan's government-supported fifth-generation computer project has had a pronounced effect on the American computer and information systems industry. The US firms are intensifying their research on and production of intelligent supercomputers, a combination of computer architecture and artificial intelligence software programs. While the present generation of computers is built for the processing of numbers, the new supercomputers will be designed specifically for the solution of symbolic problems and the use of artificial intelligence software. This article discusses new and exciting developments that will increase computer capabilities in the 1990s. 4 references.

  13. Introducing Mira, Argonne's Next-Generation Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-03-19

    Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.

  14. Green Supercomputing at Argonne

    ScienceCinema

    Pete Beckman

    2017-12-09

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently.

  15. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.

  16. Advanced Computing for Manufacturing.

    ERIC Educational Resources Information Center

    Erisman, Albert M.; Neves, Kenneth W.

    1987-01-01

    Discusses ways that supercomputers are being used in the manufacturing industry, including the design and production of airplanes and automobiles. Describes problems that need to be solved in the next few years for supercomputers to assume a major role in industry. (TW)

  17. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Maeno, T

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.

  18. Supercomputers Join the Fight against Cancer – U.S. Department of Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The Department of Energy has some of the best supercomputers in the world. Now, they’re joining the fight against cancer. Learn about our new partnership with the National Cancer Institute and GlaxoSmithKline Pharmaceuticals.

  19. NAS-current status and future plans

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.

    1987-01-01

    The Numerical Aerodynamic Simulation (NAS) has met its first major milestone, the NAS Processing System Network (NPSN) Initial Operating Configuration (IOC). The program has met its goal of providing a national supercomputer facility capable of greatly enhancing the Nation's research and development efforts. Furthermore, the program is fulfilling its pathfinder role by defining and implementing a paradigm for supercomputing system environments. The IOC is only the beginning, and the NAS Program will aggressively continue to develop and implement emerging supercomputer, communications, storage, and software technologies to strengthen computations as a critical element in supporting the Nation's leadership role in aeronautics.

  20. CRAY mini manual. Revision D

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.

    1993-01-01

    This document briefly describes the use of the CRAY supercomputers that are an integral part of the Supercomputing Network Subsystem of the Central Scientific Computing Complex at LaRC. Features of the CRAY supercomputers are covered, including: FORTRAN, C, PASCAL, architectures of the CRAY-2 and CRAY Y-MP, the CRAY UNICOS environment, batch job submittal, debugging, performance analysis, parallel processing, utilities unique to CRAY, and documentation. The document is intended for all CRAY users as a ready reference to frequently asked questions and to more detailed information contained in the vendor manuals. It is appropriate for both the novice and the experienced user.

  1. Scaling of data communications for an advanced supercomputer network

    NASA Technical Reports Server (NTRS)

    Levin, E.; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of NASA's Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations and by remote communication to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. The implications of a projected 20-fold increase in processing power on the data communications requirements are described.

  2. A Biomechanical Comparison of the Long Snap in Football Between High School and University Football Players.

    PubMed

    Chizewski, Michael G; Alexander, Marion J L

    2015-08-01

    Limited previous research was located that examined the technique of the long snap in football. The purpose of the study was to compare the joint movements, joint velocities, and body positions used to perform fast and accurate long snaps in high school (HS) and university (UNI) athletes. Ten HS and 10 UNI subjects were recruited for filming, each performing 10 snaps at a target, with the fastest and most accurate trial selected for analysis. Eighty-three variables were measured using Dartfish Team Pro 4.5.2 video analysis software, with statistical analysis performed using Microsoft Excel and SPSS 16.0. Several significant between-group differences in long-snapping technique were noted during analysis; however, the body position and movement variables at release showed the greatest number of significant differences. The UNI athletes demonstrated significantly higher release velocity and left elbow extension velocity, with significantly lower release height and release angle than the HS group. Total snap time (release time + total flight time) was determined to have the strongest correlation to release velocity for the HS group (r = -0.915) and UNI group (r = -0.918). The study suggests HS long snappers may benefit from less elbow flexion and more knee flexion in the backswing (set position) to increase release velocity. University long snappers may benefit from increased left elbow extension range of motion during force production and decreased shoulder flexion at the critical instant to increase long snap release velocity.

  3. Roadrunner Supercomputer Breaks the Petaflop Barrier

    ScienceCinema

    Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin

    2017-12-09

    At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. The sustained performance makes Roadrunner more than twice as fast as the current number 1 system.

  4. QCD on the BlueGene/L Supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  5. Supercomputer Issues from a University Perspective.

    ERIC Educational Resources Information Center

    Beering, Steven C.

    1984-01-01

    Discusses issues related to the access of and training of university researchers in using supercomputers, considering National Science Foundation's (NSF) role in this area, microcomputers on campuses, and the limited use of existing telecommunication networks. Includes examples of potential scientific projects (by subject area) utilizing…

  6. In vitro-in vivo correlation for nevirapine extended release tablets.

    PubMed

    Macha, Sreeraj; Yong, Chan-Loi; Darrington, Todd; Davis, Mark S; MacGregor, Thomas R; Castles, Mark; Krill, Steven L

    2009-12-01

    An in vitro-in vivo correlation (IVIVC) for four nevirapine extended release tablets with varying polymer contents was developed. The pharmacokinetics of extended release formulations were assessed in a parallel group study with healthy volunteers and compared with corresponding in vitro dissolution data obtained using a USP apparatus type 1. In vitro samples were analysed using HPLC with UV detection and in vivo samples were analysed using a HPLC-MS/MS assay; the IVIVC analyses comparing the two results were performed using WinNonlin. A Double Weibull model optimally fits the in vitro data. A unit impulse response (UIR) was assessed using the fastest ER formulation as a reference. The deconvolution of the in vivo concentration time data was performed using the UIR to estimate an in vivo drug release profile. A linear model with a time-scaling factor clarified the relationship between in vitro and in vivo data. The predictability of the final model was consistent based on internal validation. Average percent prediction errors for pharmacokinetic parameters were <10% and individual values for all formulations were <15%. Therefore, a Level A IVIVC was developed and validated for nevirapine extended release formulations providing robust predictions of in vivo profiles based on in vitro dissolution profiles. Copyright 2009 John Wiley & Sons, Ltd.
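
    As a rough illustration of the in vitro model mentioned above, the following sketch fits a double Weibull function (a weighted sum of two Weibull terms) to hypothetical dissolution data with SciPy. It is not the authors' WinNonlin workflow, and the data points and starting values are placeholders.

      # Fit a double Weibull dissolution model F(t) to fraction-released data.
      import numpy as np
      from scipy.optimize import curve_fit

      def double_weibull(t, f, a1, b1, a2, b2):
          """Fraction dissolved: weighted sum of two Weibull terms."""
          return (f * (1 - np.exp(-(t / a1) ** b1))
                  + (1 - f) * (1 - np.exp(-(t / a2) ** b2)))

      # Hypothetical dissolution profile: time (h) vs. fraction released.
      t = np.array([0.5, 1, 2, 4, 8, 12, 16, 20, 24])
      frac = np.array([0.05, 0.10, 0.22, 0.40, 0.65, 0.78, 0.87, 0.93, 0.97])

      p0 = [0.5, 2.0, 1.0, 10.0, 1.0]                        # initial guess
      bounds = ([0, 0.1, 0.1, 0.1, 0.1], [1, 50, 5, 50, 5])  # keep parameters physical
      params, _ = curve_fit(double_weibull, t, frac, p0=p0, bounds=bounds)
      print("fitted parameters:", params)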

  7. Cs diffusion in SiC high-energy grain boundaries

    NASA Astrophysics Data System (ADS)

    Ko, Hyunseok; Szlufarska, Izabela; Morgan, Dane

    2017-09-01

    Cesium (Cs) is a radioactive fission product whose release is of concern for Tristructural-Isotropic fuel particles. In this work, Cs diffusion through high energy grain boundaries (HEGBs) of cubic-SiC is studied using an ab-initio based kinetic Monte Carlo (kMC) model. The HEGB environment was modeled as an amorphous SiC, and Cs defect energies were calculated using the density functional theory (DFT). From defect energies, it was suggested that the fastest diffusion mechanism is the diffusion of Cs interstitial in an amorphous SiC. The diffusion of Cs interstitial was simulated using a kMC model, based on the site and transition state energies sampled from the DFT. The Cs HEGB diffusion exhibited an Arrhenius type diffusion in the range of 1200-1600 °C. The comparison between HEGB results and the other studies suggests not only that the GB diffusion dominates the bulk diffusion but also that the HEGB is one of the fastest grain boundary paths for the Cs diffusion. The diffusion coefficients in HEGB are clearly a few orders of magnitude lower than the reported diffusion coefficients from in- and out-of-pile samples, suggesting that other contributions are responsible, such as radiation enhanced diffusion.
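
    The Arrhenius-type behaviour reported above can be summarised as D = D0 exp(-Ea / kB T). A minimal sketch of extracting D0 and Ea from a set of diffusion coefficients follows; the temperatures and diffusivities below are placeholders, not values from the paper.

      # Extract Arrhenius parameters from diffusion coefficients via a linear fit.
      import numpy as np

      k_B = 8.617e-5                    # Boltzmann constant, eV/K
      T = np.array([1473.0, 1573.0, 1673.0, 1773.0, 1873.0])   # K (roughly 1200-1600 C)
      D = np.array([1e-18, 5e-18, 2e-17, 6e-17, 1.5e-16])      # m^2/s, hypothetical

      # ln D = ln D0 - Ea / (k_B * T): a straight line in (1/T, ln D).
      slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
      Ea = -slope * k_B                 # activation energy in eV
      D0 = np.exp(intercept)            # pre-exponential factor in m^2/s
      print(f"Ea = {Ea:.2f} eV, D0 = {D0:.2e} m^2/s")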

  8. LUMA: A many-core, Fluid-Structure Interaction solver based on the Lattice-Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Harwood, Adrian R. G.; O'Connor, Joseph; Sanchez Muñoz, Jonathan; Camps Santasmasas, Marta; Revell, Alistair J.

    2018-01-01

    The Lattice-Boltzmann Method at the University of Manchester (LUMA) project was commissioned to build a collaborative research environment in which researchers of all abilities can study fluid-structure interaction (FSI) problems in engineering applications from aerodynamics to medicine. It is built on the principles of accessibility, simplicity and flexibility. The LUMA software at the core of the project is a capable FSI solver with turbulence modelling and many-core scalability as well as a wealth of input/output and pre- and post-processing facilities. The software has been validated and several major releases benchmarked on supercomputing facilities internationally. The software architecture is modular and arranged logically using a minimal amount of object-orientation to maintain a simple and accessible software.

  9. A Performance Evaluation of the Cray X1 for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Borrill, Julian; Canning, Andrew; Carter, Jonathan; Djomehri, M. Jahed; Shan, Hongzhang; Skinner, David

    2004-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors used to build high-end computing platforms, largely because of their scalability and cost effectiveness. However, the recent development of massively parallel vector systems is having a significant effect on the supercomputing landscape. In this paper, we compare the performance of the recently released Cray X1 vector system with that of the cacheless NEC SX-6 vector machine, and the superscalar cache-based IBM Power3 and Power4 architectures for scientific applications. Overall results demonstrate that the X1 is quite promising, but performance improvements are expected as the hardware, systems software, and numerical libraries mature. Code reengineering to effectively utilize the complex architecture may also lead to significant efficiency enhancements.

  10. Advances and issues from the simulation of planetary magnetospheres with recent supercomputer systems

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2016-12-01

    Planetary magnetospheres are very large, while phenomena within them occur on meso- and micro-scales, ranging from tens of planetary radii down to kilometers. To understand the dynamics of these multi-scale systems, numerical simulations have been performed on supercomputer systems. We have long studied the magnetospheres of Earth, Jupiter and Saturn using 3-dimensional magnetohydrodynamic (MHD) simulations; however, we have not yet captured phenomena near the limits of the MHD approximation, in particular the meso-scale phenomena that MHD can in principle address. Recently we performed an MHD simulation of Earth's magnetosphere on the K computer, the first 10-PFlops supercomputer, and obtained multi-scale flow vorticity for both northward and southward IMF. Furthermore, we have access to supercomputer systems with Xeon, SPARC64, and vector-type CPUs and can compare simulation results across the different systems. Finally, we have compared the results of a parameter survey of the magnetosphere with observations from the HISAKI spacecraft. We have encountered a number of difficulties in using the latest supercomputer systems effectively. First, the size of simulation output has grown greatly: a simulation group now produces over 1 PB of output, and storing and analyzing this much data is difficult. The traditional approach is to move the results to the investigator's home computer, which takes over three months on an end-to-end 10 Gbps network; in practice, problems at some nodes, such as firewalls, can increase the transfer time to over a year. Another issue is post-processing: handling even a few TB of simulation output is hard because of the memory limitations of a post-processing computer. To overcome these issues, we have developed and introduced parallel network storage, a highly efficient network protocol, and CUI-based visualization tools. In this study, we will show the latest simulation results obtained with petascale supercomputers and discuss the problems arising from the use of these systems.
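
    A back-of-the-envelope calculation makes the transfer problem above concrete: even at the nominal 10 Gbps line rate, 1 PB takes over a week to move, and a realistic end-to-end efficiency of around 10 percent (an assumption, not a measured figure) stretches that to roughly three months.

      # Rough transfer-time estimate for ~1 PB of simulation output over a 10 Gb/s path.
      PB = 1e15                     # bytes
      link = 10e9 / 8               # 10 Gb/s expressed in bytes per second

      ideal_days = PB / link / 86400
      print(f"at full line rate: {ideal_days:.1f} days")                  # ~9.3 days

      # Assuming ~10% effective end-to-end throughput (TCP tuning, firewalls, shared links),
      # the same transfer stretches to roughly three months.
      print(f"at 10% efficiency: {ideal_days / 0.10 / 30:.1f} months")    # ~3.1 months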

  11. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The.LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility. Current approach utilizes modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for ALICE and ATLAS experiments and it is in full pro duction for the ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.

  12. Drug Release as a function of bioactivity, incubation regime, liquid, and initial load: Release of bortezomib from calcium phosphate-containing silica/collagen xerogels.

    PubMed

    Kruppke, Benjamin; Hose, Dirk; Schnettler, Reinhard; Seckinger, Anja; Rößler, Sina; Hanke, Thomas; Heinemann, Sascha

    2018-04-01

    The ability of silica-/collagen-based composite xerogels to act as drug delivery systems was evaluated by taking into account the initial drug concentration, bioactivity of the xerogels, liquid, and incubation regime. The proteasome inhibitor bortezomib was chosen as a model drug, used for the systemic treatment of multiple myeloma. Release during 14 days of incubation in phosphate-buffered saline (PBS) or simulated body fluid (SBF) showed a weak initial burst and was identified to be of first order, with subsequent release independent of the initial load of 0.1 or 0.2 mg bortezomib per 60 mg monolithic sample. Faster drug release occurred during incubation in SBF compared to PBS, and during static incubation without changing the liquid, compared to dynamic incubation with daily liquid changes. Drug-loaded xerogels with hydroxyapatite as a third component exhibited enhanced bioactivity retarding drug release, explained by formation of a surface calcium phosphate layer. The fastest release of 50% of the total drug load was observed for biphasic xerogels after 7 days during dynamic incubation in SBF. As a result, the presented concept is suitable for the intended combination of the advantageous bone substitution properties of xerogels and local application of drugs exemplified by bortezomib. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 1165-1173, 2018.
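
    The first-order behaviour noted above is why the release profile, expressed as a fraction of the load, is the same for the 0.1 and 0.2 mg samples. A minimal sketch with a placeholder rate constant:

      # First-order release: the released fraction depends only on the rate constant.
      import numpy as np

      k = 0.25                          # 1/day, hypothetical first-order rate constant
      t = np.arange(0, 15)              # incubation time in days
      fraction = 1 - np.exp(-k * t)     # released fraction, identical for any initial load
      for load_mg in (0.1, 0.2):
          mg_released = load_mg * fraction
          print(f"{load_mg} mg load -> {mg_released[-1]:.3f} mg released by day 14")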

  13. Finite element methods on supercomputers - The scatter-problem

    NASA Technical Reports Server (NTRS)

    Loehner, R.; Morgan, K.

    1985-01-01

    Certain problems arise in connection with the use of supercomputers for the implementation of finite-element methods. These problems are related to the desirability of utilizing the power of the supercomputer as fully as possible for the rapid execution of the required computations, taking into account the gain in speed possible through pipelining of operations. For the finite-element method, the time-consuming operations may be divided into three categories. The first two present no problems, while the third type of operation can be a cause of inefficient performance in finite-element programs. Two possibilities for overcoming these difficulties are proposed, with attention given to the scatter process.
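
    The scatter problem referred to above arises in the assembly step, where several elements add contributions to the same global node, so indirectly indexed writes collide and defeat simple vector pipelining. The numpy sketch below only illustrates the gather/scatter pattern; the paper's remedies (such as reordering or colouring elements) are not reproduced here.

      # Scatter-add of element contributions into a global vector.
      import numpy as np

      n_nodes = 6
      # Element connectivity: each element touches 3 global nodes; nodes are shared.
      elems = np.array([[0, 1, 2],
                        [1, 2, 3],
                        [2, 3, 4],
                        [3, 4, 5]])
      elem_contrib = np.ones(elems.shape)        # per-element nodal contributions

      # np.add.at handles repeated indices correctly; those repeated indices are
      # exactly what makes a naive vectorised scatter unsafe on vector hardware.
      global_vec = np.zeros(n_nodes)
      np.add.at(global_vec, elems, elem_contrib)
      print(global_vec)    # [1. 2. 3. 3. 2. 1.]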

  14. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  15. NSF Establishes First Four National Supercomputer Centers.

    ERIC Educational Resources Information Center

    Lepkowski, Wil

    1985-01-01

    The National Science Foundation (NSF) has awarded support for supercomputer centers at Cornell University, Princeton University, University of California (San Diego), and University of Illinois. These centers are to be the nucleus of a national academic network for use by scientists and engineers throughout the United States. (DH)

  16. Library Services in a Supercomputer Center.

    ERIC Educational Resources Information Center

    Layman, Mary

    1991-01-01

    Describes library services that are offered at the San Diego Supercomputer Center (SDSC), which is located at the University of California at San Diego. Topics discussed include the user population; online searching; microcomputer use; electronic networks; current awareness programs; library catalogs; and the slide collection. A sidebar outlines…

  17. Probing the cosmic causes of errors in supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Cosmic rays from outer space are causing errors in supercomputers. The neutrons that pass through the CPU may cause binary data to flip, leading to incorrect calculations. Los Alamos National Laboratory has developed detectors to determine how much data is being corrupted by these cosmic particles.

  18. The Sky's the Limit When Super Students Meet Supercomputers.

    ERIC Educational Resources Information Center

    Trotter, Andrew

    1991-01-01

    In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…

  19. NSF Says It Will Support Supercomputer Centers in California and Illinois.

    ERIC Educational Resources Information Center

    Strosnider, Kim; Young, Jeffrey R.

    1997-01-01

    The National Science Foundation will increase support for supercomputer centers at the University of California, San Diego and the University of Illinois, Urbana-Champaign, while leaving unclear the status of the program at Cornell University (New York) and a cooperative Carnegie-Mellon University (Pennsylvania) and University of Pittsburgh…

  20. Access to Supercomputers. Higher Education Panel Report 69.

    ERIC Educational Resources Information Center

    Holmstrom, Engin Inel

    This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…

  1. NOAA announces significant investment in next generation of supercomputers

    Science.gov Websites

    Today, NOAA announced the next phase in the agency's efforts to increase supercomputing capacity, which in turn will lead to more timely, accurate, and reliable weather forecasts.

  2. Developments in the simulation of compressible inviscid and viscous flow on supercomputers

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Buning, P. G.

    1985-01-01

    In anticipation of future supercomputers, finite difference codes are rapidly being extended to simulate three-dimensional compressible flow about complex configurations. Some of these developments are reviewed. The importance of computational flow visualization and diagnostic methods to three-dimensional flow simulation is also briefly discussed.

  3. Computing and data processing

    NASA Technical Reports Server (NTRS)

    Smarr, Larry; Press, William; Arnett, David W.; Cameron, Alastair G. W.; Crutcher, Richard M.; Helfand, David J.; Horowitz, Paul; Kleinmann, Susan G.; Linsky, Jeffrey L.; Madore, Barry F.

    1991-01-01

    The applications of computers and data processing to astronomy are discussed. Among the topics covered are the emerging national information infrastructure, workstations and supercomputers, supertelescopes, digital astronomy, astrophysics in a numerical laboratory, community software, archiving of ground-based observations, dynamical simulations of complex systems, plasma astrophysics, and the remote control of fourth dimension supercomputers.

  4. Preparation and characterization of oxybenzone-loaded gelatin microspheres for enhancement of sunscreening efficacy.

    PubMed

    Patel, M; Jain, Sunil K; Yadav, Awesh K; Gogna, D; Agrawal, G P

    2006-01-01

    The objective of the present study was to prepare and evaluate gelatin microspheres of oxybenzone to enhance its sunscreening efficacy. The gelatin microspheres of oxybenzone were prepared by an emulsion method, and process parameters were analyzed to optimize the formulation. The in vitro drug release study was performed at pH 7.4 using a cellulose acetate membrane. Microspheres prepared with an oxybenzone:gelatin ratio of 1:6 showed the slowest drug release, and those prepared with an oxybenzone:gelatin ratio of 1:2 showed the fastest drug release. The gelatin microspheres of oxybenzone were incorporated in aloe vera gel. A sun exposure method using sodium nitroprusside solution was used for in vitro sunscreen efficacy testing. Formulation C5, containing oxybenzone-bearing gelatin microspheres in aloe vera gel, showed the best sunscreen efficacy. The formulations were evaluated for skin irritation in human volunteers and for sun protection factor and minimum erythema dose in albino rats. These studies revealed that incorporating sunscreening agent-loaded microspheres into aloe vera gel increased the efficacy of the sunscreen formulation more than fourfold.

  5. Accelerating cardiac bidomain simulations using graphics processing units.

    PubMed

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.
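
    As a back-of-the-envelope reading of the figures quoted above (an assumption about how to interpret them, not a result from the paper), the CPU-core equivalence of one GPU in the fastest run works out to roughly 24 cores:

        cpu_cores = 476    # cores needed to match the fastest GPU run (from the abstract)
        gpus = 20
        print(f"about {cpu_cores / gpus:.1f} CPU cores per GPU")   # ~23.8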

  6. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867

  7. 2008 ALCF annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drugan, C.

    2009-12-07

    The word 'breakthrough' aptly describes the transformational science and milestones achieved at the Argonne Leadership Computing Facility (ALCF) throughout 2008. The number of research endeavors undertaken at the ALCF through the U.S. Department of Energy's (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program grew from 9 in 2007 to 20 in 2008. The allocation of computer time awarded to researchers on the Blue Gene/P also spiked significantly - from nearly 10 million processor hours in 2007 to 111 million in 2008. To support this research, we expanded the capabilities of Intrepid, an IBM Blue Gene/P system at the ALCF, to 557 teraflops (TF) for production use. Furthermore, we enabled breakthrough levels of productivity and capability in visualization and data analysis with Eureka, a powerful installation of NVIDIA Quadro Plex S4 external graphics processing units. Eureka delivered a quantum leap in visual compute density, providing more than 111 TF and more than 3.2 terabytes of RAM. On April 21, 2008, the dedication of the ALCF realized DOE's vision to bring the power of the Department's high performance computing to open scientific research. In June, the IBM Blue Gene/P supercomputer at the ALCF debuted as the world's fastest for open science and third fastest overall. There is no question that the science benefited from this growth and system improvement. Four research projects spearheaded by Argonne National Laboratory computer scientists and ALCF users were named to the list of top ten scientific accomplishments supported by DOE's Advanced Scientific Computing Research (ASCR) program. Three of the top ten projects used extensive grants of computing time on the ALCF's Blue Gene/P to model the molecular basis of Parkinson's disease, design proteins at atomic scale, and create enzymes. As the year came to a close, the ALCF was recognized with several prestigious awards at SC08 in November. We provided resources for Linear Scaling Divide-and-Conquer Electronic Structure Calculations for Thousand Atom Nanostructures, a collaborative effort between Argonne, Lawrence Berkeley National Laboratory, and Oak Ridge National Laboratory that received the ACM Gordon Bell Prize Special Award for Algorithmic Innovation. The ALCF also was named a winner in two of the four categories in the HPC Challenge best performance benchmark competition.

  8. Supercomputer use in orthopaedic biomechanics research: focus on functional adaptation of bone.

    PubMed

    Hart, R T; Thongpreda, N; Van Buskirk, W C

    1988-01-01

    The authors describe two biomechanical analyses carried out using numerical methods. One is an analysis of the stress and strain in a human mandible, and the other analysis involves modeling the adaptive response of a sheep bone to mechanical loading. The computing environment required for the two types of analyses is discussed. It is shown that a simple stress analysis of a geometrically complex mandible can be accomplished using a minicomputer. However, more sophisticated analyses of the same model with dynamic loading or nonlinear materials would require supercomputer capabilities. A supercomputer is also required for modeling the adaptive response of living bone, even when simple geometric and material models are used.

  9. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  10. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
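
    The loop restructuring and loop collapsing mentioned above can be sketched with a small, hypothetical value-function update; this is illustrative only and not the authors' vectorized code.

        import numpy as np

        nx, ny = 200, 300                       # hypothetical 2-D state grid
        cost = np.random.rand(nx, ny)
        value = np.random.rand(nx, ny)

        # Nested-loop form: short inner loops map poorly onto vector pipelines.
        out = np.empty_like(value)
        for i in range(nx):
            for j in range(ny):
                out[i, j] = cost[i, j] + 0.9 * value[i, j]

        # "Collapsed" form: one long vector operation over the flattened grid,
        # the kind of restructuring used to keep vector units busy.
        out_collapsed = (cost.ravel() + 0.9 * value.ravel()).reshape(nx, ny)
        assert np.allclose(out, out_collapsed)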

  11. Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1988-01-01

    A considerable volume of large computational computer codes was developed for NASA over the past twenty-five years. This code represents algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software. This opportunity arises primarily from architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for conversion to the Cray X-MP vector supercomputer is also described.

  12. NAS Technical Summaries, March 1993 - February 1994

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1993-94 operational year concluded with 448 high-speed processor projects and 95 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.

  13. NAS technical summaries. Numerical aerodynamic simulation program, March 1992 - February 1993

    NASA Technical Reports Server (NTRS)

    1994-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1992-93 operational year concluded with 399 high-speed processor projects and 91 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.

  14. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  15. The Age of the Supercomputer Gives Way to the Age of the Super Infrastructure.

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    1997-01-01

    In October 1997, the National Science Foundation will discontinue financial support for two university-based supercomputer facilities to concentrate resources on partnerships led by facilities at the University of California, San Diego and the University of Illinois, Urbana-Champaign. The reconfigured program will develop more user-friendly and…

  16. The ChemViz Project: Using a Supercomputer To Illustrate Abstract Concepts in Chemistry.

    ERIC Educational Resources Information Center

    Beckwith, E. Kenneth; Nelson, Christopher

    1998-01-01

    Describes the Chemistry Visualization (ChemViz) Project, a Web venture maintained by the University of Illinois National Center for Supercomputing Applications (NCSA) that enables high school students to use computational chemistry as a technique for understanding abstract concepts. Discusses the evolution of computational chemistry and provides a…

  17. An efficient MPI/OpenMP parallelization of the Hartree–Fock–Roothaan method for the first generation of Intel® Xeon Phi™ processor architecture

    DOE PAGES

    Mironov, Vladimir; Moskovsky, Alexander; D’Mello, Michael; ...

    2017-10-04

    The Hartree-Fock (HF) method in the quantum chemistry package GAMESS represents one of the most irregular algorithms in computation today. Major steps in the calculation are the irregular computation of electron repulsion integrals (ERIs) and the building of the Fock matrix. These are the central components of the main Self Consistent Field (SCF) loop, the key hotspot in Electronic Structure (ES) codes. By threading the MPI ranks in the official release of the GAMESS code, we not only speed up the main SCF loop (4x to 6x for large systems), but also achieve a significant (>2x) reduction in the overall memory footprint. These improvements are a direct consequence of memory access optimizations within the MPI ranks. We benchmark our implementation against the official release of the GAMESS code on the Intel® Xeon Phi™ supercomputer. Here, scaling numbers are reported on up to 7,680 cores on Intel Xeon Phi coprocessors.
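
    A rough illustration of why threading the MPI ranks shrinks the memory footprint: data that would be replicated in every single-threaded rank is instead shared by the threads within a rank. The node and buffer sizes below are hypothetical, not figures from the paper.

        # Hypothetical per-node configuration (illustrative numbers only).
        ranks_pure_mpi = 64          # one single-threaded MPI rank per core
        threads_per_rank = 8         # hybrid threaded-rank configuration
        ranks_hybrid = ranks_pure_mpi // threads_per_rank
        replicated_buffers_gb = 1.5  # e.g. integral/Fock work buffers per rank

        print("pure MPI   :", ranks_pure_mpi * replicated_buffers_gb, "GB/node")
        print("MPI+threads:", ranks_hybrid * replicated_buffers_gb, "GB/node")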

  18. Open release of the DCA++ project

    NASA Astrophysics Data System (ADS)

    Haehner, Urs; Solca, Raffaele; Staar, Peter; Alvarez, Gonzalo; Maier, Thomas; Summers, Michael; Schulthess, Thomas

    We present the first open release of the DCA++ project, a highly scalable and efficient research code to solve quantum many-body problems with cutting edge quantum cluster algorithms. The implemented dynamical cluster approximation (DCA) and its DCA+ extension with a continuous self-energy capture nonlocal correlations in strongly correlated electron systems thereby allowing insight into high-Tc superconductivity. With the increasing heterogeneity of modern machines, DCA++ provides portable performance on conventional and emerging new architectures, such as hybrid CPU-GPU and Xeon Phi, sustaining multiple petaflops on ORNL's Titan and CSCS' Piz Daint. Moreover, we will describe how best practices in software engineering can be applied to make software development sustainable and scalable in a research group. Software testing and documentation not only prevent productivity collapse, but more importantly, they are necessary for correctness, credibility and reproducibility of scientific results. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) awarded by the INCITE program, and of the Swiss National Supercomputing Center. OLCF is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  19. Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas C.

    2013-03-01

    The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.

  20. Extracting the Textual and Temporal Structure of Supercomputing Logs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, S; Singh, I; Chandra, A

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
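
    One common way to recover the syntactic structure of log messages is to mask variable fields (numbers, hex addresses, node IDs) and group messages by the resulting template. The sketch below uses made-up log lines and a simple masking rule; it illustrates the idea only and is not the clustering algorithm used in the paper.

        import re
        from collections import defaultdict

        logs = [
            "node c3-0c1s2n1 link error count 17",      # hypothetical messages
            "node c1-0c0s7n3 link error count 4",
            "memory ECC corrected at address 0x7f3a",
            "memory ECC corrected at address 0x1b40",
        ]

        def template(msg):
            msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)   # mask hex addresses
            return re.sub(r"\d+", "<NUM>", msg)             # mask decimal numbers

        groups = defaultdict(list)
        for line in logs:
            groups[template(line)].append(line)

        for tpl, members in groups.items():
            print(f"{len(members):2d}  {tpl}")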

  1. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    NASA Astrophysics Data System (ADS)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3  +  1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.

  2. [Technological and pharmacotherapeutic properties of selected drugs with modified release of diclofenac sodium].

    PubMed

    Kołodziejczyk, Michał Krzysztof; Kołodziejska, Justyna; Zgoda, Marian Mikołaj

    2012-01-01

    Diclofenac and its sodium salt are among the best-known and most popular therapeutic agents from the group of NSAIDs, used in medicine in many different pharmaceutical forms. Therapeutic products containing diclofenac sodium salt in doses of 100 mg and 75 mg, with a qualitatively and quantitatively diversified share of excipients and a variable dosage form of the drug (solid capsules, tablets with modified release), were subjected to technological and pharmaceutical analysis. The effect of polymeric formulation components forming the core and the coating of the pharmaceutical form on the disintegration time and pharmaceutical availability in pharmacopoeial receptor fluids was estimated. For marketed therapeutic products with diclofenac sodium in doses of 75 mg and 100 mg, technological analysis of the drug dosage form was conducted, the disintegration time of the solid oral dosage forms was examined, and the pharmaceutical availability of diclofenac sodium salt from the tested products was studied using the acid phase and the buffer phase according to the FP standards for delayed-release enteral dosage forms. The experimental data were supplemented with statistical analysis. There are three formulations in the form of solid capsules and one formulation in the form of a coated tablet. All therapeutic products have the features of a modified-release dosage form of diclofenac sodium salt, frequently a delayed-release formula targeting the duodenum or the small intestine in order to limit the typical undesirable effects of NSAIDs. Considerable diversity between the solid capsules and the modified-release tablet during disintegration, hydration, and swelling was observed. In a receptor fluid of purified water (pH = 7), the capsule Dicloberl retard disintegrates fastest, in 5.49 minutes, followed by DicloDuo 75 mg (8.13 minutes) and Olfen 100 SR (11.27 minutes). The degree of hydration of the gelatin capsule walls depends on the pH of the receptor fluid. The availability of diclofenac sodium salt in the given receptor fluids confirms that the clinical effectiveness of the tested pharmaceutical forms is strongly connected with the hydrogen ion activity (pH) of the environment in which the therapeutic products are found and with the excipients used to make the pharmaceutical phase. The tested therapeutic products with diclofenac sodium salt are differentiated by the type of dosage form. Dicloberl retard contains the minimally indispensable number of simple, commonly used excipients. The research on disintegration time applies only to the products Dicloberl retard, Olfen 100 SR, and DicloDuo 75 mg, treating it as the time of deformation and disintegration of a capsule. In all three types of receptor fluids, the capsule Dicloberl retard has the fastest disintegration rate. The "acid phase" demonstrated the stability of the products, with slight dissolution of diclofenac sodium salt at the level of 1.3-4.18% of the Q release coefficient. In artificial intestinal juice, Dicloberl retard is more effective, releasing larger amounts of diclofenac sodium salt over 4 hours of exposure (differences of 10% to 14% in the Q release coefficient).

  3. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    NASA Astrophysics Data System (ADS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-09-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.

  4. Performance differences between sexes in 50-mile to 3,100-mile ultramarathons.

    PubMed

    Zingg, Matthias A; Knechtle, Beat; Rosemann, Thomas; Rüst, Christoph A

    2015-01-01

    Anecdotal reports have assumed that women would be able to outrun men in long-distance running. The aim of this study was to test this assumption by investigating the changes in performance difference between sexes in the best ultramarathoners in 50-mile, 100-mile, 200-mile, 1,000-mile, and 3,100-mile events held worldwide between 1971 and 2012. The sex differences in running speed for the fastest runners ever were analyzed using one-way analysis of variance with subsequent Tukey-Kramer posthoc analysis. Changes in sex difference in running speed of the annual fastest were analyzed using linear and nonlinear regression analyses, correlation analyses, and mixed-effects regression analyses. The fastest men ever were faster than the fastest women ever in 50-mile (17.5%), 100-mile (17.4%), 200-mile (9.7%), 1,000-mile (20.2%), and 3,100-mile (18.6%) events. For the ten fastest finishers ever, men were faster than women in 50-mile (17.1%±1.9%), 100-mile (19.2%±1.5%), and 1,000-mile (16.7%±1.6%) events. No correlation existed between sex difference and running speed for the fastest ever (r2=0.0039, P=0.91) and the ten fastest ever (r2=0.15, P=0.74) for all distances. For the annual fastest, the sex difference in running speed decreased linearly in 50-mile events from 14.6% to 8.9%, remained unchanged in 100-mile (18.0%±8.4%) and 1,000-mile (13.7%±9.1%) events, and increased in 3,100-mile events from 12.5% to 16.9%. For the annual ten fastest runners, the performance difference between sexes decreased linearly in 50-mile events from 31.6%±3.6% to 8.9%±1.8% and in 100-mile events from 26.0%±4.4% to 24.7%±0.9%. To summarize, the fastest men were ~17%-20% faster than the fastest women for all distances from 50 miles to 3,100 miles. The linear decrease in sex difference for 50-mile and 100-mile events may suggest that women are reducing the sex gap for these distances.

  5. Performance differences between sexes in 50-mile to 3,100-mile ultramarathons

    PubMed Central

    Zingg, Matthias A; Knechtle, Beat; Rosemann, Thomas; Rüst, Christoph A

    2015-01-01

    Anecdotal reports have assumed that women would be able to outrun men in long-distance running. The aim of this study was to test this assumption by investigating the changes in performance difference between sexes in the best ultramarathoners in 50-mile, 100-mile, 200-mile, 1,000-mile, and 3,100-mile events held worldwide between 1971 and 2012. The sex differences in running speed for the fastest runners ever were analyzed using one-way analysis of variance with subsequent Tukey–Kramer posthoc analysis. Changes in sex difference in running speed of the annual fastest were analyzed using linear and nonlinear regression analyses, correlation analyses, and mixed-effects regression analyses. The fastest men ever were faster than the fastest women ever in 50-mile (17.5%), 100-mile (17.4%), 200-mile (9.7%), 1,000-mile (20.2%), and 3,100-mile (18.6%) events. For the ten fastest finishers ever, men were faster than women in 50-mile (17.1%±1.9%), 100-mile (19.2%±1.5%), and 1,000-mile (16.7%±1.6%) events. No correlation existed between sex difference and running speed for the fastest ever (r2=0.0039, P=0.91) and the ten fastest ever (r2=0.15, P=0.74) for all distances. For the annual fastest, the sex difference in running speed decreased linearly in 50-mile events from 14.6% to 8.9%, remained unchanged in 100-mile (18.0%±8.4%) and 1,000-mile (13.7%±9.1%) events, and increased in 3,100-mile events from 12.5% to 16.9%. For the annual ten fastest runners, the performance difference between sexes decreased linearly in 50-mile events from 31.6%±3.6% to 8.9%±1.8% and in 100-mile events from 26.0%±4.4% to 24.7%±0.9%. To summarize, the fastest men were ~17%–20% faster than the fastest women for all distances from 50 miles to 3,100 miles. The linear decrease in sex difference for 50-mile and 100-mile events may suggest that women are reducing the sex gap for these distances. PMID:25653567

  6. A Performance Evaluation of the Cray X1 for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Borrill, Julian; Canning, Andrew; Carter, Jonathan; Djomehri, M. Jahed; Shan, Hongzhang; Skinner, David

    2003-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end capability and capacity computers because of their generality, scalability, and cost effectiveness. However, the recent development of massively parallel vector systems is having a significant effect on the supercomputing landscape. In this paper, we compare the performance of the recently-released Cray X1 vector system with that of the cacheless NEC SX-6 vector machine, and the superscalar cache-based IBM Power3 and Power4 architectures for scientific applications. Overall results demonstrate that the X1 is quite promising, but performance improvements are expected as the hardware, systems software, and numerical libraries mature. Code reengineering to effectively utilize the complex architecture may also lead to significant efficiency enhancements.

  7. Influence of Polymer Type on the Physical Properties and Release Profile of Papaverine Hydrochloride From Hard Gelatin Capsules.

    PubMed

    Polski, Andrzej; Iwaniak, Karol; Kasperek, Regina; Modrzewska, Joanna; Sobótka-Polska, Karolina; Sławińska, Karolina; Poleszak, Ewa

    2015-01-01

    The capsule is one of the most important solid dosage forms in the pharmaceutical industry. It is easier and faster to produce than a tablet, because it requires fewer excipients. Generally, capsules are easy to swallow and mask any unpleasant taste of the substances used while their release profiles can be easily modified. Papaverine hydrochloride was used as a model substance to show different release profiles using different excipients. The main aim of the study was to analyze the impact of using different polymers on the release profile of papaverine hydrochloride from hard gelatin capsules. Six series of hard gelatin capsules containing papaverine hydrochloride as a model drug and different excipients were made. Then, the angle of repose, flow rate, mass flow rate and volume flow rate of the powders used for capsule production were analyzed. The uniform weight and disintegration time of the capsules were studied. The dissolution study was performed in a basket apparatus, while the amount of papaverine hydrochloride released was determined spectrophotometrically at 251 nm. Only one formula of powder had satisfactory flow properties, while all formulas had good Hausner ratios. The best properties were from powder containing polyvinylpyrrolidone 10k. The disintegration time of capsules varied from 1:30 min to 2:00 min. As required by Polish Pharmacopoeia X, 80% of the active substance in all cases was released within 15 minutes. The capsules with polyvinylpyrrolidone 10k were characterized by the longest release. On the other hand, capsules containing microcrystalline cellulose had the fastest release profile. Using 10% of different polymers, without changing the other excipients, had a significant impact on the physical properties of the powders and papaverine hydrochloride release profile. The two most preferred capsule formulations contained either polyvinylpyrrolidone 10k or microcrystalline cellulose.

  8. The impact of the U.S. supercomputing initiative will be global

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Dona

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

  9. Parallel-vector solution of large-scale structural analysis problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1989-01-01

    A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
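
    For reference, the Choleski (Cholesky) factor-and-solve step that the paper parallelizes and vectorizes looks like this on a small, randomly generated symmetric positive definite system; this is an illustration only, not the paper's Force/FORTRAN implementation.

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        rng = np.random.default_rng(0)
        A = rng.standard_normal((500, 500))
        K = A @ A.T + 500 * np.eye(500)      # stand-in for a stiffness matrix (SPD)
        f = rng.standard_normal(500)         # stand-in for a load vector

        c, low = cho_factor(K)               # Choleski factorization K = L L^T
        u = cho_solve((c, low), f)           # forward/back substitution
        print(np.allclose(K @ u, f))         # True: K u = f is satisfied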

  10. Predicting Hurricanes with Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-01-01

    Hurricane Emily, formed in the Atlantic Ocean on July 10, 2005, was the strongest hurricane ever to form before August. By checking computer models against the actual path of the storm, researchers can improve hurricane prediction. In 2010, NOAA researchers were awarded 25 million processor-hours on Argonne's BlueGene/P supercomputer for the project. Read more at http://go.usa.gov/OLh

  11. Supercomputers Of The Future

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.

  12. Advances in petascale kinetic plasma simulation with VPIC and Roadrunner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, Kevin J; Albright, Brian J; Yin, Lin

    2009-01-01

    VPIC, a first-principles 3D electromagnetic charge-conserving relativistic kinetic particle-in-cell (PIC) code, was recently adapted to run on Los Alamos's Roadrunner, the first supercomputer to break a petaflop (10^15 floating point operations per second) in the TOP500 supercomputer performance rankings. They give a brief overview of the modeling capabilities and optimization techniques used in VPIC and the computational characteristics of petascale supercomputers like Roadrunner. They then discuss three applications enabled by VPIC's unprecedented performance on Roadrunner: modeling laser plasma interaction in upcoming inertial confinement fusion experiments at the National Ignition Facility (NIF), modeling short pulse laser GeV ion acceleration and modeling reconnection in magnetic confinement fusion experiments.
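
    For orientation, the core particle-in-cell loop (deposit charge, solve for the field, gather and push particles) can be sketched in one dimension. This is a schematic electrostatic toy with made-up parameters, not VPIC, which is a 3D relativistic electromagnetic code with charge-conserving deposition.

        import numpy as np

        ng, n_part, dx, dt, qm = 64, 10_000, 1.0, 0.1, -1.0   # toy parameters
        rng = np.random.default_rng(1)
        x = rng.uniform(0, ng * dx, n_part)     # particle positions
        v = rng.normal(0.0, 1.0, n_part)        # particle velocities

        for _ in range(10):
            cells = (x / dx).astype(int) % ng
            # deposit charge to the grid (nearest-grid-point weighting)
            rho = np.bincount(cells, minlength=ng).astype(float)
            rho -= rho.mean()                   # neutralizing background
            # solve dE/dx = rho spectrally for the electric field
            k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
            k[0] = 1.0
            E = np.fft.ifft(-1j * np.fft.fft(rho) / k).real
            # gather the field to particles, then leapfrog push
            v += qm * E[cells] * dt
            x = (x + v * dt) % (ng * dx)

        print("mean kinetic energy:", 0.5 * np.mean(v ** 2))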

  13. Supercomputing Sheds Light on the Dark Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Heitmann, Katrin

    2012-11-15

    At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named a finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.

  14. Surprise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curran, L.

    1988-03-03

    Interest has been building in recent months over the imminent arrival of a new class of supercomputer, called the "supercomputer on a desk" or the single-user model. Most observers expected the first such product to come from either of two startups, Ardent Computer Corp. or Stellar Computer Inc. But a surprise entry has shown up. Apollo Computer Inc. is launching a new workstation this week that racks up an impressive list of industry firsts as it puts supercomputer power at the disposal of a single user. The new Series 10000 from the Chelmsford, Mass., company is built around a reduced-instruction-set architecture that the company calls Prism, for parallel reduced-instruction-set multiprocessor. This article describes the 10000 and Prism.

  15. Progress and supercomputing in computational fluid dynamics; Proceedings of U.S.-Israel Workshop, Jerusalem, Israel, December 1984

    NASA Technical Reports Server (NTRS)

    Murman, E. M. (Editor); Abarbanel, S. S. (Editor)

    1985-01-01

    Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.

  16. The effect of superdisintegrants on the properties and dissolution profiles of liquisolid tablets containing rosuvastatin.

    PubMed

    Vraníková, Barbora; Gajdziok, Jan; Doležel, Petr

    2017-03-01

    The preparation of liquisolid systems (LSS) represents a promising method for enhancing the dissolution rate and bioavailability of poorly soluble drugs. The release of the drug from LSS tablets is affected by many factors, including the disintegration time. The aim was to evaluate differences among LSS containing varying amounts and types of commercially used superdisintegrants (Kollidon® CL-F, Vivasol®, and Explotab®). LSS were prepared by spraying rosuvastatin solution onto Neusilin® US2 and further processing into tablets. Varying amounts of superdisintegrants were used and the differences among LSS were evaluated. The multiple scatter plot method was used to visualize the relationships within the obtained data. None of the disintegrants showed a negative effect on the flow properties of the powder blends. The type and concentration of superdisintegrant had an impact on the disintegration time and dissolution profiles of tablets. Tablets with Explotab® showed the longest disintegration time and the smallest amount of released drug. The fastest disintegration and dissolution rates were observed in tablets containing Kollidon® CL-F (≥2.5% w/w). Tablets with Vivasol® (2.5-4.0% w/w) also showed fast disintegration and complete drug release. Kollidon® CL-F and Vivasol® at concentrations ≥2.5% are suitable superdisintegrants for LSS with enhanced release of drug.

  17. 33 CFR 100.110 - World's Fastest Lobster Boat Race, Jonesport, ME.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false World's Fastest Lobster Boat Race, Jonesport, ME. 100.110 Section 100.110 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY REGATTAS AND MARINE PARADES SAFETY OF LIFE ON NAVIGABLE WATERS § 100.110 World's Fastest Lobster...

  18. Effective genetic modification and differentiation of hMSCs upon controlled release of rAAV vectors using alginate/poloxamer composite systems.

    PubMed

    Díaz-Rodríguez, P; Rey-Rico, A; Madry, H; Landin, M; Cucchiarini, M

    2015-12-30

    Viral vectors are common tools in gene therapy to deliver foreign therapeutic sequences in a specific target population via their natural cellular entry mechanisms. Incorporating such vectors in implantable systems may provide strong alternatives to conventional gene transfer procedures. The goal of the present study was to generate different hydrogel structures based on alginate (AlgPH155) and poloxamer PF127 as new systems to encapsulate and release recombinant adeno-associated viral (rAAV) vectors. Inclusion of rAAV in such polymeric capsules revealed an influence of the hydrogel composition and crosslinking temperature upon the vector release profiles, with alginate (AlgPH155) structures showing the fastest release profiles early on while over time vector release was more effective from AlgPH155+PF127 [H] capsules crosslinked at a high temperature (50°C). Systems prepared at room temperature (AlgPH155+PF127 [C]) allowed instead to achieve a more controlled release profile. When tested for their ability to target human mesenchymal stem cells, the different systems led to high transduction efficiencies over time and to gene expression levels in the range of those achieved upon direct vector application, especially when using AlgPH155+PF127 [H]. No detrimental effects were reported on either cell viability or on the potential for chondrogenic differentiation. Inclusion of PF127 in the capsules was also capable of delaying undesirable hypertrophic cell differentiation. These findings are of promising value for the further development of viral vector controlled release strategies. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Ice Storm Supercomputer

    ScienceCinema

    None

    2018-05-01

    A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed "Ice Storm" this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen. For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.

  20. Open Skies Project Computational Fluid Dynamic Analysis

    DTIC Science & Technology

    1994-03-01

    [OCR fragment of the report's table of contents and acknowledgments; the legible text acknowledges the assistance of Mrs. Mary Ann Mages at the Kirtland Supercomputer Center (PL/SCPR) in setting a precedent for a supercomputer account.]

  1. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
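
    The many-task pattern that sub-jobs and main-wrap address can be illustrated generically: run a whole ensemble of small, independent tasks inside a single resource allocation instead of submitting each as a separate scheduler job. The sketch below uses a plain process pool and a made-up task; it is not the Swift/Cobalt mechanism itself.

        from multiprocessing import Pool

        def small_job(params):
            # Hypothetical stand-in for one small ensemble member.
            seed, steps = params
            state = seed
            for _ in range(steps):
                state = (1103515245 * state + 12345) % 2**31
            return seed, state

        if __name__ == "__main__":
            tasks = [(seed, 10_000) for seed in range(64)]
            with Pool(processes=8) as pool:          # one allocation, many tasks
                for seed, value in pool.imap_unordered(small_job, tasks):
                    print(seed, value)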

  2. Effect of food characteristics, storage conditions, and electron beam irradiation on active agent release from polyamide-coated LDPE films.

    PubMed

    Han, J; Castell-Perez, M E; Moreira, R G

    2008-03-01

    We investigated the effect of electron beam irradiation, storage conditions, and model food pH on the release characteristics of trans-cinnamaldehyde incorporated into polyamide-coated low-density polyethylene (LDPE) films. Active agent release rate on irradiated films (up to 20.0 kGy) decreased by 69% compared with the nonirradiated controls, from 0.252 to 0.086 microg/mL/h. Storage temperature (4, 21, and 35 degrees C) and pH (4, 7, and 10) of the food simulant solutions (10% aqueous ethanol) affected the release rate of trans-cinnamaldehyde. As expected, antimicrobial release rate decreased to 0.013 microg/mL/h at the refrigerated temperature (4 degrees C) compared to the higher temperatures (0.029 and 0.035 microg/mL/h at 21 and 35 degrees C). The fastest release rate occurred when exposed to the acidic food simulant solution (pH 4). In aqueous solution, trans-cinnamaldehyde was highly unstable to ionizing radiation, with loss in concentration from 24.50 to 1.36 microg/mL after exposure to 2.0 kGy. Fourier transform infrared (FTIR) analysis revealed that exposure to ionizing radiation up to 10.0 kGy did not affect the structural conformation of LDPE/polyamide films and the trans-cinnamaldehyde in the films, though it induced changes in the functional group of trans-cinnamaldehyde when dose increased up to 20.0 kGy. Studies with a radiation-stable compound (naphthalene) showed that ionizing radiation induced the crosslinking in polymer networks of LDPE/polyamide film and caused slow and gradual release of the compound. This study demonstrated that irradiation serves as a controlling factor for release of active compounds, with potential applications in the development of antimicrobial packaging systems.

  3. Kinetics of Germination of Individual Spores of Geobacillus stearothermophilus as Measured by Raman Spectroscopy and Differential Interference Contrast Microscopy

    PubMed Central

    Zhou, Tingting; Dong, Zhiyang; Setlow, Peter; Li, Yong-qing

    2013-01-01

    Geobacillus stearothermophilus is a gram-positive, thermophilic bacterium, spores of which are very heat resistant. Raman spectroscopy and differential interference contrast microscopy were used to monitor the kinetics of germination of individual spores of G. stearothermophilus at different temperatures, and major conclusions from this work were as follows. 1) The CaDPA level of individual G. stearothermophilus spores was similar to that of Bacillus spores. However, the Raman spectra of protein amide bands suggested there are differences in protein structure in spores of G. stearothermophilus and Bacillus species. 2) During nutrient germination of G. stearothermophilus spores, CaDPA was released beginning after a lag time (Tlag) between addition of nutrient germinants and initiation of CaDPA release. CaDPA release was complete at Trelease, and ΔTrelease (Trelease – Tlag) was 1–2 min. 3) Activation by heat or sodium nitrite was essential for efficient nutrient germination of G. stearothermophilus spores, primarily by decreasing Tlag values. 4) Values of Tlag and Trelease were heterogeneous among individual spores, but ΔTrelease values were relatively constant. 5) Temperature had major effects on nutrient germination of G. stearothermophilus spores, as at temperatures below 65°C, average Tlag values increased significantly. 6) G. stearothermophilus spore germination with exogenous CaDPA or dodecylamine was fastest at 65°C, with longer Tlag values at lower temperatures. 7) Decoating of G. stearothermophilus spores slowed nutrient germination slightly and CaDPA germination significantly, but increased dodecylamine germination markedly. These results indicate that the dynamics and heterogeneity of the germination of individual G. stearothermophilus spores are generally similar to that of Bacillus species. PMID:24058645

  4. The fastest spreader in SIS epidemics on networks

    NASA Astrophysics Data System (ADS)

    He, Zhidong; Van Mieghem, Piet

    2018-05-01

    Identifying the fastest spreaders in epidemics on a network helps to ensure efficient spreading. By ranking the average spreading time for different spreaders, we show that the fastest spreader may change with the effective infection rate of an SIS epidemic process, which means that the time-dependent influence of a node is usually strongly coupled to the dynamic process and the underlying network. With increasing effective infection rate, we illustrate that the fastest spreader changes from the node with the largest degree to the node with the shortest flooding time. (The flooding time is the minimum time needed to reach all other nodes if the process is reduced to a flooding process.) Furthermore, by taking the local topology around the spreader and the average flooding time into account, we propose the spreading efficiency as a metric to quantify the efficiency of a spreader and identify the fastest spreader, which is adaptive to different infection rates in general networks.
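
    The flooding-time notion above is easy to make concrete. The Python sketch below is not taken from the paper; it simply computes a node's flooding time as its breadth-first eccentricity on an unweighted graph (assuming one unit of time per hop) and then picks the node with the shortest flooding time. The adjacency-list format and function names are illustrative assumptions.

      # Minimal sketch: flooding time = BFS eccentricity (unit time per hop).
      from collections import deque

      def flooding_time(adj, s):
          dist = {s: 0}
          queue = deque([s])
          while queue:
              u = queue.popleft()
              for v in adj[u]:
                  if v not in dist:
                      dist[v] = dist[u] + 1
                      queue.append(v)
          return max(dist.values())  # hops needed to flood the reachable component

      def node_with_shortest_flooding_time(adj):
          return min(adj, key=lambda s: flooding_time(adj, s))

      # toy graph as a dict of neighbour sets
      adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4}, 4: {3}}
      print(node_with_shortest_flooding_time(adj))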

  5. STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.

    PubMed

    Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X

    2009-08-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.

  6. Japanese project aims at supercomputer that executes 10 gflops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burskey, D.

    1984-05-03

    Dubbed Supercom by its multicompany design team, the decade-long project's goal is an engineering supercomputer that can execute 10 billion floating-point operations/s, about 20 times faster than today's supercomputers. The project, guided by Japan's Ministry of International Trade and Industry (MITI) and the Agency of Industrial Science and Technology, encompasses three parallel research programs, all aimed at some aspect of the supercomputer. One program should lead to superfast logic and memory circuits, another to a system architecture that will afford the best performance, and the last to the software that will ultimately control the computer. The work on logic and memory chips is based on GaAs circuits, Josephson junction devices, and high-electron-mobility transistor structures. The architecture will involve parallel processing.

  7. 2009 ALCF annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, P.; Martin, D.; Drugan, C.

    2010-11-23

    This year the Argonne Leadership Computing Facility (ALCF) delivered nearly 900 million core hours of science. The research conducted at their leadership-class facility touched our lives in both minute and massive ways - whether it was studying the catalytic properties of gold nanoparticles, predicting protein structures, or unearthing the secrets of exploding stars. The authors remained true to their vision to act as the forefront computational center in extending science frontiers by solving pressing problems for our nation. Our success in this endeavor was due mainly to the Department of Energy's (DOE) INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program. The program awards significant amounts of computing time to computationally intensive, unclassified research projects that can make high-impact scientific advances. This year, DOE allocated 400 million hours of time to 28 research projects at the ALCF. Scientists from around the world conducted the research, representing such esteemed institutions as the Princeton Plasma Physics Laboratory, National Institute of Standards and Technology, and European Center for Research and Advanced Training in Scientific Computation. Argonne also provided Director's Discretionary allocations for research challenges, addressing such issues as reducing aerodynamic noise, critical for next-generation 'green' energy systems. Intrepid - the ALCF's 557-teraflops IBM Blue Gene/P supercomputer - enabled astounding scientific solutions and discoveries. Intrepid went into full production five months ahead of schedule. As a result, the ALCF nearly doubled the days of production computing available to the DOE Office of Science, INCITE awardees, and Argonne projects. One of the fastest supercomputers in the world for open science, the energy-efficient system uses about one-third as much electricity as a machine of comparable size built with more conventional parts. In October 2009, President Barack Obama recognized the excellence of the entire Blue Gene series by awarding it the National Medal of Technology and Innovation. Other noteworthy achievements included the ALCF's collaboration with the National Energy Research Scientific Computing Center (NERSC) to examine cloud computing as a potential new computing paradigm for scientists. Named Magellan, the DOE-funded initiative will explore which science application programming models work well within the cloud, as well as evaluate the challenges that come with this new paradigm. The ALCF obtained approval for its next-generation machine, a 10-petaflops system to be delivered in 2012. This system will allow us to resolve ever more pressing problems even more expeditiously through breakthrough science in the years to come.

  8. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    NASA Astrophysics Data System (ADS)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of program states that included dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.

  9. Enzymatic Conversion of CO2 to Bicarbonate in Functionalized Mesoporous Silica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yuehua; Chen, Baowei; Qi, Wen N.

    2012-05-01

    We report here that carbonic anhydrase (CA), the fastest enzyme that can convert carbon dioxide to bicarbonate, can be spontaneously entrapped in functionalized mesoporous silica (FMS) with super-high loading density (up to 0.5 mg of protein/mg of FMS) due to the dominant electrostatic interaction. The binding of CA to HOOC-FMS can result in conformational change of the protein compared to the free enzyme in solution, but this can be overcome with increased protein loading density: the higher the protein loading density, the less the conformational change, and hence the higher the enzymatic activity and enzyme immobilization efficiency. The electrostatically bound CA can be released by changing pH. The released enzyme still displayed the native conformational structure and the same high enzymatic activity as prior to entrapment. This work opens up a new approach to converting carbon dioxide to bicarbonate in a biomimetic nanoconfiguration that can be integrated with other parts of the biosynthesis process for the assimilation of carbon dioxide.

  10. Evaluation of gallic acid loaded zein sub-micron electrospun fibre mats as novel active packaging materials.

    PubMed

    Neo, Yun Ping; Swift, Simon; Ray, Sudip; Gizdavic-Nikolaidis, Marija; Jin, Jianyong; Perera, Conrad O

    2013-12-01

    The applicability of gallic acid loaded zein (Ze-GA) electrospun fibre mats as a potential active food packaging material was evaluated. The surface chemistry of the electrospun fibre mats was determined using X-ray photoelectron spectroscopy (XPS). The electrospun fibre mats showed low water activity and whitish colour. Thermogravimetric analysis (TGA) and Attenuated Total Reflectance-Fourier Transform Infrared (ATR-FTIR) spectroscopy revealed the stability of the fibre mats over time. The Ze-GA fibre mats displayed similar, rapid release profiles, with Ze-GA 20% exhibiting the fastest release rate in water compared to the others. Gallic acid diffuses from the electrospun fibres by Fickian diffusion, and the data obtained exhibited a better fit to the Higuchi model. L929 fibroblast cells were cultured on the electrospun fibres to demonstrate the absence of cytotoxicity. Overall, the Ze-GA fibre mats demonstrated antibacterial activity and properties consistent with those considered desirable for active packaging material in the food industry. Copyright © 2013 Elsevier Ltd. All rights reserved.
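
    As a concrete illustration of the Higuchi fit mentioned above (a sketch under assumptions, not the authors' analysis): the Higuchi model states that cumulative release grows with the square root of time, Q(t) = kH·sqrt(t), so kH can be estimated by a least-squares fit of release data against sqrt(t). The time points and release fractions in the Python snippet below are made-up placeholders.

      # Fit cumulative release Q(t) to the Higuchi model Q = kH * sqrt(t).
      import numpy as np

      t = np.array([5, 10, 20, 40, 60, 120], dtype=float)   # minutes (hypothetical)
      q = np.array([12, 18, 26, 37, 45, 63], dtype=float)   # % released (hypothetical)

      x = np.sqrt(t)
      kH = np.sum(x * q) / np.sum(x * x)          # least-squares slope through the origin
      q_pred = kH * x
      r2 = 1 - np.sum((q - q_pred) ** 2) / np.sum((q - q.mean()) ** 2)
      print(f"Higuchi constant kH = {kH:.2f} %/min^0.5, R^2 = {r2:.3f}")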

  11. Can lagrangian models reproduce the migration time of European eel obtained from otolith analysis?

    NASA Astrophysics Data System (ADS)

    Rodríguez-Díaz, L.; Gómez-Gesteira, M.

    2017-12-01

    European eel can be found in the Bay of Biscay after a long migration across the Atlantic. The duration of migration, which takes place at the larval stage, is of primary importance to understand eel ecology and, hence, its survival. This duration is still a controversial matter since it can range from 7 months to > 4 years depending on the method used to estimate duration. The minimum migration duration estimated from our Lagrangian model is similar to the duration obtained from the microstructure of eel otoliths, which is typically on the order of 7-9 months. The Lagrangian model proved to be sensitive to different conditions such as spatial and time resolution, release depth, release area and initial distribution. In general, migration was faster when decreasing the release depth and increasing the resolution of the model. On average, the fastest migration was obtained when only advective horizontal movement was considered. However, even faster migration was obtained in some cases when locally oriented random migration was taken into account.

  12. Japanese supercomputer technology.

    PubMed

    Buzbee, B L; Ewald, R H; Worlton, W J

    1982-12-17

    Under the auspices of the Ministry for International Trade and Industry the Japanese have launched a National Superspeed Computer Project intended to produce high-performance computers for scientific computation and a Fifth-Generation Computer Project intended to incorporate and exploit concepts of artificial intelligence. If these projects are successful, which appears likely, advanced economic and military research in the United States may become dependent on access to supercomputers of foreign manufacture.

  13. Supercomputer Simulations Help Develop New Approach to Fight Antibiotic Resistance

    ScienceCinema

    Zgurskaya, Helen; Smith, Jeremy

    2018-06-13

    ORNL leveraged powerful supercomputing to support research led by University of Oklahoma scientists to identify chemicals that seek out and disrupt bacterial proteins called efflux pumps, known to be a major cause of antibiotic resistance. By running simulations on Titan, the team selected molecules most likely to target and potentially disable the assembly of efflux pumps found in E. coli bacteria cells.

  14. Aviation Research and the Internet

    NASA Technical Reports Server (NTRS)

    Scott, Antoinette M.

    1995-01-01

    The Internet is a network of networks. It was originally funded by the Defense Advanced Research Projects Agency (DOD/DARPA) and evolved in part from the connection of supercomputer sites across the United States. The National Science Foundation (NSF) made the most of their supercomputers by connecting the sites to each other. This made the supercomputers more efficient and now allows scientists, engineers and researchers to access the supercomputers from their own labs and offices. The high speed networks that connect the NSF supercomputers form the backbone of the Internet. The World Wide Web (WWW) is a menu system. It gathers Internet resources from all over the world into a series of screens that appear on your computer. The WWW is also a distributed system: it stores data and information on many computers (servers), and these servers can go out and get data when you ask for it. Hypermedia is the base of the WWW. One can 'click' on a section and visit other hypermedia (pages). Our approach to demonstrating the importance of aviation research through the Internet began with learning how to put pages on the Internet (on-line) ourselves. We were assigned two aviation companies: Vision Micro Systems Inc. and Innovative Aerodynamic Technologies (IAT). We developed home pages for these SBIR companies. The equipment used to create the pages included UNIX and Macintosh machines. HTML Supertext software was used to write the pages and the Sharp JX600S scanner to scan the images. As a result, with the use of the UNIX, Macintosh, Sun, PC, and AXIL machines, we were able to present our home pages to over 800,000 visitors.

  15. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputing needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
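
    To illustrate the kind of VM provisioning the paper describes (libvirt/QEMU driving KVM guests on compute nodes), here is a hedged Python sketch using the libvirt bindings. It is not the authors' tooling; the domain XML is a trimmed, hypothetical example, and a real deployment would need full disk, network and emulator sections plus a running libvirtd.

      # Hedged sketch: define and start a transient KVM guest via libvirt.
      import libvirt

      DOMAIN_XML = """
      <domain type='kvm'>
        <name>vcluster-node0</name>
        <memory unit='GiB'>4</memory>
        <vcpu>4</vcpu>
        <os><type arch='x86_64'>hvm</type></os>
      </domain>
      """

      conn = libvirt.open("qemu:///system")   # connect to the local QEMU/KVM hypervisor
      dom = conn.createXML(DOMAIN_XML, 0)     # create and start the transient guest
      print("started", dom.name(), "state:", dom.state())
      conn.close()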

  16. Parallel simulation of tsunami inundation on a large-scale supercomputer

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the finite difference calculation, (2) communication between adjacent layers for the calculations to connect each layer, and (3) global communication to obtain the time step which satisfies the CFL condition in the whole domain. A preliminary test on the K computer showed the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency will be considerably improved by applying a 2-D domain decomposition instead of the present 1-D domain decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest resolution layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used with the resolution ratio of 1/3 between nested layers. The finest resolution region has 5 m resolution and covers most of the coastal region of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation. To complete the same simulation on 1024 cores of the K computer, it took 45 minutes which is more than two times faster than real-time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer when considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
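
    Two of the three communication patterns listed above are illustrated in the minimal mpi4py sketch below: (1) halo exchange of boundary rows between adjacent subdomains of the 1-D decomposition, and (3) a global reduction to pick the time step satisfying the CFL condition over the whole domain. This is an assumption-laden stand-in, not the authors' code; array shapes and parameter values are arbitrary.

      # Sketch of halo exchange and global CFL time-step selection with mpi4py.
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # local strip of water depth with one ghost row on each side
      h = np.full((102, 200), 10.0)

      up = rank - 1 if rank > 0 else MPI.PROC_NULL
      down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # (1) exchange boundary rows with neighbouring subdomains
      comm.Sendrecv(h[1, :], dest=up, recvbuf=h[-1, :], source=down)
      comm.Sendrecv(h[-2, :], dest=down, recvbuf=h[0, :], source=up)

      # (3) global CFL time step: dt <= dx / sqrt(g * h_max) everywhere
      dx, g = 405.0, 9.81
      dt_local = dx / np.sqrt(g * h[1:-1, :].max())
      dt = comm.allreduce(dt_local, op=MPI.MIN)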

  17. Characteristics of Operational Space Weather Forecasting: Observations and Models

    NASA Astrophysics Data System (ADS)

    Berger, Thomas; Viereck, Rodney; Singer, Howard; Onsager, Terry; Biesecker, Doug; Rutledge, Robert; Hill, Steven; Akmaev, Rashid; Milward, George; Fuller-Rowell, Tim

    2015-04-01

    In contrast to research observations, models and ground support systems, operational systems are characterized by real-time data streams and run schedules, with redundant backup systems for most elements of the system. We review the characteristics of operational space weather forecasting, concentrating on the key aspects of ground- and space-based observations that feed models of the coupled Sun-Earth system at the NOAA Space Weather Prediction Center (SWPC). Building on the infrastructure of the National Weather Service, SWPC is working toward a fully operational system based on the GOES weather satellite system (constant real-time operation with back-up satellites), the newly launched DSCOVR satellite at L1 (constant real-time data network with AFSCN backup), and operational models of the heliosphere, magnetosphere, and ionosphere/thermosphere/mesosphere systems run on the Weather and Climate Operational Supercomputing System (WCOSS), one of the world's largest and fastest operational computer systems, which will be upgraded to a dual 2.5 Pflop system in 2016. We review plans for further operational space weather observing platforms being developed in the context of the Space Weather Operations Research and Mitigation (SWORM) task force in the Office of Science and Technology Policy (OSTP) at the White House. We also review the current operational model developments at SWPC, concentrating on the differences between the research codes and the modified real-time versions that must run with zero fault tolerance on the WCOSS systems. Understanding the characteristics and needs of the operational forecasting community is key to producing research into the coupled Sun-Earth system with maximal societal benefit.

  18. CFD applications: The Lockheed perspective

    NASA Technical Reports Server (NTRS)

    Miranda, Luis R.

    1987-01-01

    The Numerical Aerodynamic Simulator (NAS) epitomizes the coming of age of supercomputing and opens exciting horizons in the world of numerical simulation. An overview of supercomputing at Lockheed Corporation in the area of Computational Fluid Dynamics (CFD) is presented. This overview focuses on developments and applications of CFD as an aircraft design tool and attempts to present an assessment, within this context, of the state of the art in CFD methodology.

  19. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  20. A Layered Solution for Supercomputing Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  1. Achieving supercomputer performance for neural net simulation with an array of digital signal processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, U.A.; Baumle, B.; Kohler, P.

    1992-10-01

    Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.

  2. A Heterogeneous High-Performance System for Computational and Computer Science

    DTIC Science & Technology

    2016-11-15

    ...team of research faculty from the departments of computer science and natural science at Bowie State University. The supercomputer is not only to...accelerated HPC systems. The supercomputer is also ideal for the research conducted in the Department of Natural Science, as research faculty work on

  3. LLMapReduce: Multi-Lingual Map-Reduce for Supercomputing Environments

    DTIC Science & Technology

    2015-11-20

    1990s. Popularized by Google [36] and Apache Hadoop [37], map-reduce has become a staple technology of the ever-growing big data community... The map-reduce parallel programming model has become extremely popular in the big data community. Many big data ...to big data users running on a supercomputer. LLMapReduce dramatically simplifies map-reduce programming by providing simple parallel programming

  4. Advanced Numerical Techniques of Performance Evaluation. Volume 1

    DTIC Science & Technology

    1990-06-01

    system scheduling thread. The scheduling thread then runs any other ready thread that can be found. A thread can only sleep or switch out on itself... C.D. Polychronopoulos and D.J. Kuck. Guided Self-Scheduling: A Practical Scheduling Scheme for Parallel Supercomputers. IEEE Transactions on Computers C...

  5. Tailoring controlled-release oral dosage forms by combining inkjet and flexographic printing techniques.

    PubMed

    Genina, Natalja; Fors, Daniela; Vakili, Hossein; Ihalainen, Petri; Pohjala, Leena; Ehlers, Henrik; Kassamakov, Ivan; Haeggström, Edward; Vuorela, Pia; Peltonen, Jouko; Sandler, Niklas

    2012-10-09

    We combined conventional inkjet printing technology with flexographic printing to fabricate drug delivery systems with accurate doses and tailored drug release. Riboflavin sodium phosphate (RSP) and propranolol hydrochloride (PH) were used as water-soluble model drugs. Three different paper substrates, A (uncoated woodfree paper), B (triple-coated inkjet paper) and C (double-coated sheet-fed offset paper), were used as porous model carriers for drug delivery. Active pharmaceutical ingredient (API) containing solutions were printed onto 1 cm × 1 cm substrate areas using an inkjet printer. The printed APIs were coated with water-insoluble polymeric films of different thickness using flexographic printing. All substrates were characterized with respect to wettability, surface roughness, air permeability, and cell toxicity. In addition, content uniformity and release profiles of the produced solid dosage forms before and after coating were studied. The substrates were nontoxic for the human cell line assayed. Substrate B was the smoothest and least porous. The properties of substrates B and C were similar, whereas those of substrate A differed significantly from those of B and C. The release kinetics of both printed APIs were slowest from substrate B before and after coating with the water-insoluble polymer film, followed by substrate C, whereas substrate A showed the fastest release. The release rate decreased with increasing polymer coating film thickness. The printed solid dosage forms showed excellent content uniformity. Thus, combining the two printing technologies allowed the fabrication of controlled-release oral dosage forms that are challenging to produce using a single technique. The approach opens up new perspectives in the manufacture of flexible doses and tailored drug-delivery systems. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. OPserver: opacities and radiative accelerations on demand

    NASA Astrophysics Data System (ADS)

    Mendoza, C.; González, J.; Seaton, M. J.; Buerger, P.; Bellorín, A.; Meléndez, M.; Rodríguez, L. S.; Delahaye, F.; Zeippen, C. J.; Palacios, E.; Pradhan, A. K.

    2009-05-01

    We report on developments carried out within the Opacity Project (OP) to upgrade atomic database services to comply with e-infrastructure requirements. We give a detailed description of an interactive, online server for astrophysical opacities, referred to as OPserver, to be used in sophisticated stellar modelling where Rosseland mean opacities and radiative accelerations are computed at every depth point and each evolution cycle. This is crucial, for instance, in chemically peculiar stars and in the exploitation of the new asteroseismological data. OPserver, downloadable with the new OPCD_3.0 release from the Centre de Données Astronomiques de Strasbourg, France, computes mean opacities and radiative data for arbitrary chemical mixtures from the OP monochromatic opacities. It is essentially a client-server network restructuring and optimization of the suite of codes included in the earlier OPCD_2.0 release. The server can be installed locally or, alternatively, accessed remotely from the Ohio Supercomputer Center, Columbus, Ohio, USA. The client is an interactive web page or a subroutine library that can be linked to the user code. The suitability of this scheme in grid computing environments is emphasized, and its extension to other atomic database services for astrophysical purposes is discussed.
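
    For reference (the standard textbook definition, not a detail specific to OPserver), the Rosseland mean opacity that such services compute from monochromatic opacities is the harmonic mean weighted by the temperature derivative of the Planck function B_ν:

      \frac{1}{\kappa_R} \;=\; \frac{\int_0^\infty \kappa_\nu^{-1}\,(\partial B_\nu/\partial T)\,d\nu}{\int_0^\infty (\partial B_\nu/\partial T)\,d\nu}

    The OP-specific mixture weighting, frequency mesh and radiative-acceleration expressions are documented with the OPCD_3.0 release and are not reproduced here.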

  7. BioFVM: an efficient, parallelized diffusive transport solver for 3-D biological simulations

    PubMed Central

    Ghaffarizadeh, Ahmadreza; Friedman, Samuel H.; Macklin, Paul

    2016-01-01

    Motivation: Computational models of multicellular systems require solving systems of PDEs for release, uptake, decay and diffusion of multiple substrates in 3D, particularly when incorporating the impact of drugs, growth substrates and signaling factors on cell receptors and subcellular systems biology. Results: We introduce BioFVM, a diffusive transport solver tailored to biological problems. BioFVM can simulate release and uptake of many substrates by cell and bulk sources, and diffusion and decay in large 3D domains. It has been parallelized with OpenMP, allowing efficient simulations on desktop workstations or single supercomputer nodes. The code is stable even for large time steps, with linear computational cost scaling. Solutions are first-order accurate in time and second-order accurate in space. The code can be run by itself or as part of a larger simulator. Availability and implementation: BioFVM is written in C++ with parallelization in OpenMP. It is maintained and available for download at http://BioFVM.MathCancer.org and http://BioFVM.sf.net under the Apache License (v2.0). Contact: paul.macklin@usc.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26656933
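
    To make the governing equation concrete: each substrate density rho obeys a diffusion-decay-source PDE of the form d(rho)/dt = D∇²rho − λ·rho + S. The toy numpy sketch below illustrates that structure with a simple explicit update on a periodic 3-D grid; it is not BioFVM's solver (BioFVM uses an implicit scheme, which is what gives it stability at large time steps), and all parameter values are arbitrary.

      # Toy explicit update of d(rho)/dt = D*laplacian(rho) - decay*rho + S.
      import numpy as np

      def step(rho, D, decay, sources, dx, dt):
          # 7-point Laplacian with periodic boundaries (for brevity only)
          lap = (-6 * rho
                 + np.roll(rho, 1, 0) + np.roll(rho, -1, 0)
                 + np.roll(rho, 1, 1) + np.roll(rho, -1, 1)
                 + np.roll(rho, 1, 2) + np.roll(rho, -1, 2)) / dx**2
          return rho + dt * (D * lap - decay * rho + sources)

      rho = np.zeros((64, 64, 64))
      src = np.zeros_like(rho)
      src[32, 32, 32] = 1.0                  # a single secreting "cell"
      for _ in range(100):
          rho = step(rho, D=1.0e3, decay=0.1, sources=src, dx=20.0, dt=0.01)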

  8. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  9. A Layered Solution for Supercomputing Storage

    ScienceCinema

    Grider, Gary

    2018-06-13

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  10. A Long History of Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  11. Introducing Argonne’s Theta Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Theta, the Argonne Leadership Computing Facility’s (ALCF) new Intel-Cray supercomputer, is officially open to the research community. Theta’s massively parallel, many-core architecture puts the ALCF on the path to Aurora, the facility’s future Intel-Cray system. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.

  12. NASA Advanced Supercomputing Facility Expansion

    NASA Technical Reports Server (NTRS)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  13. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    PubMed

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
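
    The abstract does not spell out paraBTM's actual load-balancing strategy, so the Python sketch below is only a hedged stand-in for the general idea: distribute documents across workers so that the total work per worker is roughly even, here by greedy longest-processing-time assignment with document length as a cost proxy.

      # Greedy longest-processing-time assignment of documents to workers.
      import heapq

      def assign(doc_lengths, n_workers):
          """Return worker -> list of doc indices, balancing total assigned length."""
          heap = [(0, w, []) for w in range(n_workers)]   # (load, worker id, docs)
          heapq.heapify(heap)
          for idx in sorted(range(len(doc_lengths)), key=lambda i: -doc_lengths[i]):
              load, w, docs = heapq.heappop(heap)         # least-loaded worker
              docs.append(idx)
              heapq.heappush(heap, (load + doc_lengths[idx], w, docs))
          return {w: docs for _, w, docs in heap}

      print(assign([900, 40, 300, 700, 120, 650], n_workers=3))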

  14. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer (a PC-486 DX2 with a built-in 10-BaseT Ethernet card), a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room was converted to a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  15. Modelling sodium cobaltate by mapping onto magnetic Ising model

    NASA Astrophysics Data System (ADS)

    Gemperline, Patrick; Morris, David Jonathan Pryce

    Fast ion conductors are a class of crystals that are frequently used as battery materials, especially in smart phones, laptops, and other portable devices. Sodium cobalt oxide, NaxCoO2, falls into this class of crystals, but is unique because it can act as a thermoelectric material and as a superconductor at different concentrations of Na+. The crystal lattice is mapped onto an Ising magnetic spin model, and a Monte Carlo simulation is used to find the most energetically favorable configuration of spins. This spin configuration is mapped back to the crystal lattice, resulting in the most stable crystal structure of sodium cobalt oxide at various concentrations. Knowing the atomic structures of the crystals will aid research into the material's capabilities and its possible commercial uses. Acknowledgments: Ohio Supercomputer Center and the John Hauck Foundation.
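
    A minimal Metropolis Monte Carlo sketch of the kind of Ising-model sampling described above is given below. It uses a plain square lattice and a single nearest-neighbour coupling J; the real NaxCoO2 mapping (triangular Na layers, longer-range interactions, fixed concentration x) is more involved, so treat this only as an illustration of the sampling step.

      # Metropolis Monte Carlo on a 2-D Ising lattice (illustrative parameters).
      import numpy as np

      rng = np.random.default_rng(0)
      L, J, T, steps = 32, 1.0, 0.5, 200_000
      spins = rng.choice([-1, 1], size=(L, L))

      for _ in range(steps):
          i, j = rng.integers(L, size=2)
          nbr = (spins[(i+1) % L, j] + spins[(i-1) % L, j]
                 + spins[i, (j+1) % L] + spins[i, (j-1) % L])
          dE = 2 * J * spins[i, j] * nbr          # energy cost of flipping spin (i, j)
          if dE <= 0 or rng.random() < np.exp(-dE / T):
              spins[i, j] *= -1                   # accept the flip (Metropolis rule)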

  16. Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karbach, Carsten; Frings, Wolfgang

    2013-02-22

    This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of this project is the extension of the Parallel Tools Platform (PTP) for applying it to peta-scale systems. PTP is an integrated development environment for parallel applications. It comprises code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to work under high latency. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer. For example, applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status. A set of statistics, a list of running and queued jobs, and a node display mapping running jobs to their compute resources form the user display of LLview. These monitoring features have to be integrated into the development environment. Besides showing the current status, PTP's monitoring also needs to allow for submitting and canceling user jobs. Monitoring peta-scale systems especially means presenting the large amount of status data in a useful manner. Users require the ability to select arbitrary levels of detail. The monitoring views have to provide a quick overview of the system state, but also need to allow for zooming into specific parts of the system in which the user is interested. At present, the major batch systems running on supercomputers are PBS, TORQUE, ALPS and LoadLeveler, which have to be supported by both the monitoring and the job controlling component. Finally, PTP needs to be designed as generically as possible, so that it can be extended for future batch systems.

  17. Will women outrun men in ultra-marathon road races from 50 km to 1,000 km?

    PubMed

    Zingg, Matthias Alexander; Karner-Rezek, Klaus; Rosemann, Thomas; Knechtle, Beat; Lepers, Romuald; Rüst, Christoph Alexander

    2014-01-01

    It has been assumed that women would be able to outrun men in ultra-marathon running. The present study investigated the sex differences in running speed in ultra-marathons held worldwide from 50 km to 1,000 km. Changes in running speeds and the sex differences in running speeds in the annual fastest finishers in 50 km, 100 km, 200 km and 1,000 km events held worldwide from 1969-2012 were analysed using linear, non-linear and multi-level regression analyses. For the annual fastest and the annual ten fastest finishers, running speeds increased non-linearly in 50 km and 100 km, but not in 200 km and 1,000 km where running speeds remained unchanged for the annual fastest. The sex differences decreased non-linearly in 50 km and 100 km, but not in 200 and 1,000 km where the sex difference remained unchanged for the annual fastest. For the fastest women and men ever, the sex difference in running speed was lowest in 100 km (5.0%) and highest in 50 km (15.4%). For the ten fastest women and men ever, the sex difference was lowest in 100 km (10.0 ± 3.0%) and highest in 200 km (27.3 ± 5.7%). For both the fastest (r(2) = 0.003, p = 0.82) and the ten fastest finishers ever (r(2) = 0.34, p = 0.41) in 50 km, 100 km, 200 km and 1,000 km, we found no correlation between sex difference in performance and running speed. To summarize, the sex differences in running speeds decreased non-linearly in 50 km and 100 km but remained unchanged in 200 km and 1,000 km, and the sex differences in running speeds showed no change with increasing length of the race distance. These findings suggest that it is very unlikely that women will ever outrun men in ultra-marathons held from 50 km to 100 km.

  18. Space Radar Image of Mammoth Mountain, California

    NASA Image and Video Library

    1999-05-01

    This false-color composite radar image of the Mammoth Mountain area in the Sierra Nevada Mountains, California, was acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar aboard the space shuttle Endeavour on its 67th orbit on October 3, 1994. The image is centered at 37.6 degrees north latitude and 119.0 degrees west longitude. The area is about 39 kilometers by 51 kilometers (24 miles by 31 miles). North is toward the bottom, about 45 degrees to the right. In this image, red was created using L-band (horizontally transmitted/vertically received) polarization data; green was created using C-band (horizontally transmitted/vertically received) polarization data; and blue was created using C-band (horizontally transmitted and received) polarization data. Crawley Lake appears dark at the center left of the image, just above or south of Long Valley. The Mammoth Mountain ski area is visible at the top right of the scene. The red areas correspond to forests, the dark blue areas are bare surfaces and the green areas are short vegetation, mainly brush. The purple areas at the higher elevations in the upper part of the scene are discontinuous patches of snow cover from a September 28 storm. New, very thin snow was falling before and during the second space shuttle pass. In parallel with the operational SIR-C data processing, an experimental effort is being conducted to test SAR data processing using the Jet Propulsion Laboratory's massively parallel supercomputing facility, centered around the Cray Research T3D. These experiments will assess the abilities of large supercomputers to produce high throughput Synthetic Aperture Radar processing in preparation for upcoming data-intensive SAR missions. The image released here was produced as part of this experimental effort. http://photojournal.jpl.nasa.gov/catalog/PIA01746

  19. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  20. Green Supercomputing at Argonne

    ScienceCinema

    Beckman, Pete

    2018-02-07

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/

  1. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozacik, Stephen

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayan Ghosh, Jeff Hammond

    OpenSHMEM is a community effort to unify and standardize the SHMEM programming model. MPI (Message Passing Interface) is a well-known community standard for parallel programming using distributed memory. The most recent release of MPI, version 3.0, was designed in part to support programming models like SHMEM. OSHMPI is an implementation of the OpenSHMEM standard using MPI-3 for the Linux operating system. It is the first implementation of SHMEM over MPI one-sided communication and has the potential to be widely adopted due to the portability and wide availability of Linux and MPI-3. OSHMPI has been tested on a variety of systems and implementations of MPI-3, including InfiniBand clusters using MVAPICH2 and SGI shared-memory supercomputers using MPICH. Current support is limited to Linux but may be extended to Apple OSX if there is sufficient interest. The code is open source via https://github.com/jeffhammond/oshmpi
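
    The core idea (a SHMEM-style one-sided put expressed with MPI-3 RMA) can be sketched with mpi4py as below. This is a hedged illustration, not OSHMPI's actual C implementation; buffer sizes and names are arbitrary, and it should be run with at least two MPI ranks (e.g. mpiexec -n 2).

      # SHMEM-style put implemented with an MPI-3 passive-target RMA window.
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      local = np.zeros(4, dtype='i')            # stand-in for a symmetric-heap array
      win = MPI.Win.Create(local, comm=comm)

      if rank == 0:                             # "shmem_put": rank 0 writes into rank 1
          data = np.arange(4, dtype='i')
          win.Lock(1)                           # passive-target access epoch on PE 1
          win.Put(data, 1)
          win.Unlock(1)                         # completes and flushes the put

      comm.Barrier()                            # make sure the put landed before reading
      if rank == 1:
          print("rank 1 received", local)
      win.Free()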

  3. Transitioning NWChem to the Next Generation of Manycore Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bylaska, Eric J.; Apra, Edoardo; Kowalski, Karol

    The NorthWest Chemistry (NWChem) modeling software is a popular molecular chemistry simulation software that was designed from the start to work on massively parallel processing supercomputers [6, 28, 49]. It contains an umbrella of modules that today includes Self Consistent Field (SCF), second-order Møller-Plesset perturbation theory (MP2), Coupled Cluster, multi-configuration self-consistent field (MCSCF), selected configuration interaction (CI), tensor contraction engine (TCE) many-body methods, density functional theory (DFT), time-dependent density functional theory (TDDFT), real-time time-dependent density functional theory, pseudopotential plane-wave density functional theory (PSPW), band structure (BAND), ab initio molecular dynamics, Car-Parrinello molecular dynamics, classical molecular dynamics (MD), QM/MM, AIMD/MM, GIAO NMR, COSMO, COSMO-SMD, and RISM solvation models, free energy simulations, reaction path optimization, and parallel-in-time methods, among other capabilities [22]. Moreover, new capabilities continue to be added with each new release.

  4. DOE unveils climate model in advance of global test

    NASA Astrophysics Data System (ADS)

    Popkin, Gabriel

    2018-05-01

    The world's growing collection of climate models has a high-profile new entry. Last week, after nearly 4 years of work, the U.S. Department of Energy (DOE) released computer code and initial results from an ambitious effort to simulate the Earth system. The new model is tailored to run on future supercomputers and designed to forecast not just how climate will change, but also how those changes might stress energy infrastructure. Results from an upcoming comparison of global models may show how well the new entrant works. But so far it is getting a mixed reception, with some questioning the need for another model and others saying the $80 million effort has yet to improve predictions of the future climate. Even the project's chief scientist, Ruby Leung of the Pacific Northwest National Laboratory in Richland, Washington, acknowledges that the model is not yet a leader.

  5. IonGAP: integrative bacterial genome analysis for Ion Torrent sequence data.

    PubMed

    Baez-Ortega, Adrian; Lorenzo-Diaz, Fabian; Hernandez, Mariano; Gonzalez-Vila, Carlos Ignacio; Roda-Garcia, Jose Luis; Colebrook, Marcos; Flores, Carlos

    2015-09-01

    We introduce IonGAP, a publicly available Web platform designed for the analysis of whole bacterial genomes using Ion Torrent sequence data. Besides assembly, it integrates a variety of comparative genomics, annotation and bacterial classification routines, based on the widely used FASTQ, BAM and SRA file formats. Benchmarking with different datasets evidenced that IonGAP is a fast, powerful and simple-to-use bioinformatics tool. By releasing this platform, we aim to translate low-cost bacterial genome analysis for microbiological prevention and control in healthcare, agroalimentary and pharmaceutical industry applications. IonGAP is hosted by the ITER's Teide-HPC supercomputer and is freely available on the Web for non-commercial use at http://iongap.hpc.iter.es. mcolesan@ull.edu.es or cflores@ull.edu.es Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. DelPhi web server v2: incorporating atomic-style geometrical figures into the computational protocol.

    PubMed

    Smith, Nicholas; Witham, Shawn; Sarkar, Subhra; Zhang, Jie; Li, Lin; Li, Chuan; Alexov, Emil

    2012-06-15

    A new edition of the DelPhi web server, DelPhi web server v2, has been released to include atomic-style presentation of geometrical figures. These geometrical objects can be used to model nano-sized objects together with real biological macromolecules. The position and size of an object can be manipulated by the user in real time until the desired results are achieved. The server fixes structural defects, adds hydrogen atoms and calculates electrostatic energies and the corresponding electrostatic potential and ionic distributions. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhi software. The computation is carried out on a supercomputer cluster and the results are returned to the user via the HTTP protocol, including the ability to visualize the structure and corresponding electrostatic potential via a Jmol implementation. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver.

  7. The whistle and the rattle: the design of sound producing muscles.

    PubMed Central

    Rome, L C; Syme, D A; Hollingworth, S; Lindstedt, S L; Baylor, S M

    1996-01-01

    Vertebrate sound producing muscles often operate at frequencies exceeding 100 Hz, making them the fastest vertebrate muscles. Like other vertebrate muscle, these sonic muscles are "synchronous," necessitating that calcium be released and resequestered by the sarcoplasmic reticulum during each contraction cycle. Thus to operate at such high frequencies, vertebrate sonic muscles require extreme adaptations. We have found that to generate the "boatwhistle" mating call (approximately 200 Hz), the swimbladder muscle fibers of toadfish have evolved (i) a large and very fast calcium transient, (ii) a fast crossbridge detachment rate, and (iii) probably a fast kinetic off-rate of Ca2+ from troponin. The fibers of the shaker muscle of rattlesnakes have independently evolved similar traits, permitting tail rattling at approximately 90 Hz. PMID:8755609

  8. Enalapril maleate orally disintegrating tablets: tableting and in vivo evaluation in hypertensive rats.

    PubMed

    Tawfeek, Hesham M; Faisal, Waleed; Soliman, Ghareb M

    2018-06-01

    The aim of this study was to develop orally disintegrating tablets (ODTs) of enalapril maleate (EnM) to facilitate its administration to the elderly or other patients with dysphagia. Compatibility between EnM and various excipients was studied using differential scanning calorimetry. ODTs of EnM were prepared by direct compression of EnM mixtures with various superdisintegrants. The tablets were evaluated for physical properties including drug content, hardness, friability, disintegration time, wetting time, and drug release. The antihypertensive effect of the optimum EnM ODTs was evaluated in vivo in hypertensive rats and compared with a commercial EnM formulation. EnM ODTs showed satisfactory results in terms of drug content and friability. Tablet wetting and disintegration were fast and depended on the superdisintegrant used, with croscarmellose showing the fastest wetting and disintegration time of ∼7 s. EnM release from the tablets was rapid, with complete release obtained in 10-15 min. Selected EnM ODTs rapidly and efficiently reduced the rats' blood pressure to its normal value within 1 h, compared with 4 h for the commercial EnM formulation. These results confirm that EnM ODTs could find application in the management of hypertension in the elderly or other patients with dysphagia.

  9. En Route Air Traffic Control Input Devices for the Next Generation

    NASA Technical Reports Server (NTRS)

    Mainini, Matthew J.

    2010-01-01

    The purpose of this study was to investigate the usefulness of different input device configurations when trial planning new routes for aircraft in an advanced simulation of the en route workstation. Trial planning is one of the futuristic tools; it is performed by graphically manipulating an aircraft's trajectory to reroute the aircraft without voice communication. In this study, two input devices, the FAA's current trackball and a basic optical computer mouse, were evaluated with the "pick" button in a click-and-hold state and in a click-and-release state while the participant dragged the trial plan line. The trial plan was used for three different conflict types: Aircraft Conflicts, Weather Conflicts, and Aircraft + Weather Conflicts. Speed and accuracy were the primary dependent variables. Results indicate that the mouse conditions were significantly faster than the trackball conditions overall, with no significant loss of accuracy. Several performance ratings and preference ratings were analyzed from post-run and post-simulation questionnaires. The release conditions were rated significantly more useful and likable than the hold conditions. The results suggest that the mouse in the release button state was the fastest and best-liked device configuration for trial planning in the en route workstation. Keywords: input devices, en route, controller, workstation, mouse, trackball, NextGen

  10. Development of seismic tomography software for hybrid supercomputers

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing a velocity model of a geologic structure from first-arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed based on the assumed velocity model of the geologic structure being analyzed. In order to solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on supercomputers using multicore CPUs only, with preliminary performance tests showing good parallel efficiency on large numerical grids. Porting of the algorithms to hybrid supercomputers is currently ongoing.
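
    To make the linearized inversion step above concrete, a minimal sketch follows. It assumes a precomputed tomographic matrix G of ray-path lengths per cell and a vector of travel-time residuals, and solves a Tikhonov-regularized least-squares system for the slowness update; it illustrates the general scheme only and is not the authors' software.

    ```python
    import numpy as np

    def tomographic_update(G, residuals, reg=1e-2):
        """Solve the regularized linearized inverse problem
        min ||G*dm - residuals||^2 + reg*||dm||^2 for the model update dm.

        G         : (n_rays, n_cells) matrix of ray-path lengths per cell
        residuals : (n_rays,) observed minus predicted travel times
        """
        n_cells = G.shape[1]
        # Tikhonov regularization: augment the system with sqrt(reg)*I rows.
        A = np.vstack([G, np.sqrt(reg) * np.eye(n_cells)])
        b = np.concatenate([residuals, np.zeros(n_cells)])
        dm, *_ = np.linalg.lstsq(A, b, rcond=None)
        return dm  # slowness adjustments to add to the current model

    # Toy example: 3 rays crossing 4 cells (illustrative numbers only).
    G = np.array([[1.0, 0.5, 0.0, 0.0],
                  [0.0, 1.0, 1.0, 0.0],
                  [0.5, 0.0, 0.5, 1.0]])
    residuals = np.array([0.02, -0.01, 0.03])
    print(tomographic_update(G, residuals))
    ```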

  11. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with many thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large and non-linear models at higher resolutions within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing a large number of processors effectively for general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million-cell grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water to larger ones at a reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).
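
    The reported near-linear scaling can be checked with the usual speedup and parallel-efficiency arithmetic; the sketch below uses made-up runtimes and processor counts purely for illustration (they are not measurements from this study).

    ```python
    def speedup_and_efficiency(t_ref, p_ref, t_p, p):
        """Speedup and parallel efficiency relative to a reference run.

        t_ref : wall-clock time on p_ref processors
        t_p   : wall-clock time on p processors
        """
        speedup = t_ref / t_p
        ideal = p / p_ref           # ideal (linear) speedup factor
        efficiency = speedup / ideal
        return speedup, efficiency

    # Illustrative numbers only (not from the study): 1 hour on 1,000 cores
    # versus 400 seconds on 10,000 cores.
    s, e = speedup_and_efficiency(t_ref=3600.0, p_ref=1000, t_p=400.0, p=10000)
    print(f"speedup {s:.1f}x, parallel efficiency {e:.0%}")
    ```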

  12. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  13. A Long History of Supercomputing

    ScienceCinema

    Grider, Gary

    2018-06-13

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  14. LightForce Photon-Pressure Collision Avoidance: Updated Efficiency Analysis Utilizing a Highly Parallel Simulation Approach

    DTIC Science & Technology

    2014-09-01

    simulation time frame from 30 days to one year. This was enabled by porting the simulation to the Pleiades supercomputer at NASA Ames Research Center, a...including the motivation for changes to our past approach. We then present the software implementation (3) on the NASA Ames Pleiades supercomputer...significantly updated since last year’s paper [25]. The main incentive for that was the shift to a highly parallel approach in order to utilize the Pleiades

  15. Parallel-Vector Algorithm For Rapid Structural Analysis

    NASA Technical Reports Server (NTRS)

    Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.

    1993-01-01

    New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.

  16. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach are demonstrated on several parallel computers.

  17. Science and Technology Review June 2000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Pruneda, J.H.

    2000-06-01

    This issue contains the following articles: (1) ''Accelerating on the ASCI Challenge''. (2) ''New Day Dawns in Supercomputing'': When the ASCI White supercomputer comes online this summer, DOE's Stockpile Stewardship Program will make another significant advance toward helping to ensure the safety, reliability, and performance of the nation's nuclear weapons. (3) ''Uncovering the Secrets of Actinides'': Researchers are obtaining fundamental information about the actinides, a group of elements with a key role in nuclear weapons and fuels. (4) ''A Predictable Structure for Aerogels''. (5) ''Tibet--Where Continents Collide''.

  18. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing and Modernization Program (HPCMP) and the NASA Advanced Supercomputing Division (NAS), a study is conducted to assess the role of supercomputers in computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web-based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  19. PerSEUS: Ultra-Low-Power High Performance Computing for Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Doxas, I.; Andreou, A.; Lyon, J.; Angelopoulos, V.; Lu, S.; Pritchett, P. L.

    2017-12-01

    Peta-op SupErcomputing Unconventional System (PerSEUS) aims to explore the use for High Performance Scientific Computing (HPC) of ultra-low-power mixed signal unconventional computational elements developed by Johns Hopkins University (JHU), and demonstrate that capability on both fluid and particle Plasma codes. We will describe the JHU Mixed-signal Unconventional Supercomputing Elements (MUSE), and report initial results for the Lyon-Fedder-Mobarry (LFM) global magnetospheric MHD code, and a UCLA general purpose relativistic Particle-In-Cell (PIC) code.

  20. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY T3D. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.
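
    A toy sketch of the splitting scheme described above (local cell kinetics advanced separately from diffusion, both with explicit steps) is given below. It uses simple FitzHugh-Nagumo-style kinetics as a stand-in for the cardiac cell model; it is not the Luo-Rudy model or the original CRAY T3D implementation.

    ```python
    import numpy as np

    def step_split(v, w, dt, D, dx, reaction):
        """One operator-splitting step for a 1-D reaction-diffusion model:
        first advance the local (cell) kinetics, then explicit diffusion."""
        # Reaction sub-step (explicit Euler on the cell model).
        dv, dw = reaction(v, w)
        v = v + dt * dv
        w = w + dt * dw
        # Diffusion sub-step (explicit finite differences; the two boundary
        # cells are updated by the kinetics only in this simplified sketch).
        lap = np.zeros_like(v)
        lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
        v = v + dt * D * lap
        return v, w

    # FitzHugh-Nagumo-style kinetics standing in for the cardiac cell model.
    def fhn(v, w, a=0.1, eps=0.01, b=0.5):
        return v * (1 - v) * (v - a) - w, eps * (v - b * w)

    v = np.zeros(200); v[:10] = 1.0      # stimulate one end of the fiber
    w = np.zeros(200)
    dt, dx, D = 0.05, 0.5, 1.0           # explicit diffusion is stable: D*dt/dx^2 = 0.2
    for _ in range(2000):
        v, w = step_split(v, w, dt, D, dx, fhn)
    print(v.max())
    ```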

  1. AIC Computations Using Navier-Stokes Equations on Single Image Supercomputers For Design Optimization

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru

    2004-01-01

    A procedure to accurately generate AIC using a Navier-Stokes solver including grid deformation is presented. Preliminary results show good comparisons between experimental and computed flutter boundaries for a rectangular wing. A full wing-body configuration of an orbital space plane is selected for demonstration on a large number of processors. In the final paper the AIC of the full wing-body configuration will be computed. The scalability of the procedure on supercomputers will be demonstrated.

  2. Discover Supercomputer 5

    NASA Image and Video Library

    2017-12-08

    Two rows of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS) contain more than 4,000 computer processors. Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  3. Discover Supercomputer 4

    NASA Image and Video Library

    2017-12-08

    This close-up view highlights one row—approximately 2,000 computer processors—of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS). Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  4. Rapid growing clay coatings to reduce the fire threat of furniture.

    PubMed

    Kim, Yeon Seok; Li, Yu-Chin; Pitts, William M; Werrel, Martin; Davis, Rick D

    2014-02-12

    Layer-by-layer (LbL) assembly coatings reduce the flammability of textiles and polyurethane foam but require extensive repetitive processing steps to produce the desired coating thickness and nanoparticle fire retardant content that translates into a fire retardant coating. Reported here is a new hybrid bi-layer (BL) approach to fabricate fire retardant coatings on polyurethane foam. Utilizing hydrogen bonding and electrostatic attraction along with pH adjustment, a fast-growing coating with significant fire retardant clay content was achieved. This hybrid BL coating exhibits significant fire performance improvement in both bench-scale and real-scale tests. Cone calorimetry bench-scale tests show a 42% and 71% reduction in peak and average heat release rates, respectively. Real-scale furniture mockups constructed using the hybrid LbL coating reduced the peak and average heat release rates by 53% and 63%, respectively. This is the first time that fire safety in a real-scale test has been reported for any LbL technology. This hybrid LbL coating is the fastest approach to developing an effective fire retardant coating for polyurethane foam.

  5. Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide

    DOE PAGES

    Tang, William; Wang, Bei; Ethier, Stephane; ...

    2016-11-01

    The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code including (i) implementation of one-sided communication from MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.
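
    As an illustration of what a data-movement-based performance model looks like in its simplest form (not the paper's actual model), the sketch below predicts kernel time from bytes moved and sustained memory bandwidth, falling back to peak floating-point throughput when the kernel is compute-bound; all numbers are assumed.

    ```python
    def predict_kernel_time(bytes_moved, flops, bandwidth_gbs, peak_gflops):
        """Toy data-movement performance model for a single kernel.

        Returns the predicted time (s), assuming the kernel is limited by
        whichever of memory traffic or floating-point work takes longer.
        """
        t_mem = bytes_moved / (bandwidth_gbs * 1e9)   # time to move the data
        t_flop = flops / (peak_gflops * 1e9)          # time to do the arithmetic
        return max(t_mem, t_flop)

    # Assumed, illustrative figures for a PIC-like bandwidth-bound kernel:
    # 8 GB of traffic, 2 Gflop of work, 100 GB/s sustained bandwidth, 1 Tflop/s peak.
    t = predict_kernel_time(bytes_moved=8e9, flops=2e9,
                            bandwidth_gbs=100.0, peak_gflops=1000.0)
    print(f"predicted {t*1e3:.1f} ms (memory-bound)")
    ```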

  6. Multi-petascale highly efficient parallel supercomputer

    DOEpatents

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
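
    The five-dimensional torus interconnect described above gives each node ten nearest neighbors (one hop in either direction along each dimension, with wraparound). A small sketch of that neighbor calculation, using an arbitrary example torus shape rather than any particular machine's dimensions, is shown below.

    ```python
    def torus_neighbors(coord, dims):
        """Return the 2*len(dims) nearest neighbors of a node in a torus.

        coord : tuple of node coordinates, one per dimension
        dims  : tuple of torus extents, one per dimension
        """
        neighbors = []
        for d, (c, n) in enumerate(zip(coord, dims)):
            for step in (-1, +1):
                nb = list(coord)
                nb[d] = (c + step) % n   # wraparound links make it a torus
                neighbors.append(tuple(nb))
        return neighbors

    # A node in a hypothetical 5-D torus of shape 4x4x4x8x2 (example only).
    print(torus_neighbors((0, 3, 2, 7, 1), (4, 4, 4, 8, 2)))
    ```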

  7. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    NASA Astrophysics Data System (ADS)

    Landgrebe, Anton J.

    1987-03-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  8. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    NASA Technical Reports Server (NTRS)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  9. Antenna pattern control using impedance surfaces

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Liu, Kefeng

    1992-01-01

    During this research period, we effectively transferred existing computer codes from the CRAY supercomputer to workstation-based systems. The workstation-based version of our code preserved the accuracy of the numerical computations while giving a much better turn-around time than the CRAY supercomputer. This task relieved us of the heavy dependence on the supercomputer account budget and made the codes developed in this research project more feasible for applications. The analysis of pyramidal horns with impedance surfaces was our major focus during this research period. Three different modeling algorithms for analyzing lossy impedance surfaces were investigated and compared with measured data. Through this investigation, we discovered that a hybrid Fourier transform technique, which uses the eigenmodes in the stepped waveguide section and the Fourier-transformed field distributions across the stepped discontinuities for lossy impedance coatings, gives better accuracy in analyzing lossy coatings. After further refinement of the present technique, we will perform an accurate radiation pattern synthesis in the coming reporting period.

  10. Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Nitzberg, Bill

    1999-01-01

    The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet, utilization was affected little. In particular these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
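
    For readers unfamiliar with backfilling, the sketch below shows a simplified EASY-backfilling selection rule: a queued job may jump ahead if it fits in the currently idle nodes and does not delay the reserved start of the blocked head-of-queue job. It is a generic illustration under those assumptions, not the actual NAS scheduler.

    ```python
    def select_backfill(queue, free_nodes, now, shadow_time, extra_nodes):
        """Simplified EASY-backfilling selection.

        queue       : list of (job_id, nodes, est_runtime) behind the blocked head job
        free_nodes  : nodes idle right now
        shadow_time : earliest time the head-of-queue job is guaranteed to start
        extra_nodes : nodes that will still be free at shadow_time
        A job is backfilled if it fits now and either finishes before
        shadow_time or uses only nodes not reserved for the head job.
        """
        started = []
        for job_id, nodes, est in queue:
            fits_now = nodes <= free_nodes
            no_delay = (now + est <= shadow_time) or (nodes <= extra_nodes)
            if fits_now and no_delay:
                started.append(job_id)
                free_nodes -= nodes
                if not (now + est <= shadow_time):
                    extra_nodes -= nodes
        return started

    # Illustrative queue: (id, nodes requested, estimated runtime in hours).
    queue = [("A", 64, 2.0), ("B", 16, 0.5), ("C", 8, 6.0)]
    print(select_backfill(queue, free_nodes=32, now=0.0,
                          shadow_time=1.0, extra_nodes=8))
    ```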

  11. Data communication requirements for the advanced NAS network

    NASA Technical Reports Server (NTRS)

    Levin, Eugene; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of the Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations, and by remote communications to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. In the 1987/1988 time period it is anticipated that a computer with 4 times the processing speed of a Cray 2 will be obtained and by 1990 an additional supercomputer with 16 times the speed of the Cray 2. The implications of this 20-fold increase in processing power on the data communications requirements are described. The analysis was based on models of the projected workload and system architecture. The results are presented together with the estimates of their sensitivity to assumptions inherent in the models.

  12. Formulation and characterization of liquid crystal systems containing azelaic acid for topical delivery.

    PubMed

    Aytekin, Merve; Gursoy, R Neslihan; Ide, Semra; Soylu, Elif H; Hekimoglu, Sueda

    2013-02-01

    The aim of this study is to prepare and characterize azelaic acid (AzA) containing liquid crystal (LC) drug delivery systems for topical use. Two ternary phase diagrams, containing liquid paraffin as the oil component and a mixture of two nonionic surfactants (Brij 721P and Brij 72), were constructed. Formulations chosen from the phase diagrams were characterized by polarized light microscopy, rheological analyses, differential scanning calorimetry (DSC), and small-angle x-ray scattering spectroscopy. Polarized light microscopy showed that, except for the oil/water emulsion (O/W E), the other formulations exhibited a lamellar LC structure. In vitro release studies indicated that the fastest release was achieved by the lamellar LC (LLC) and O/W E systems, whereas slower release was obtained from the emulsion containing lamellar LC (E-LLC) and distorted lamellar LC (D-LLC) systems. Results of rheological measurements both supported the results of the in vitro release studies and showed that the E-LLC system had the most stable structure. The formulations and their effect on the stratum corneum (SC) were evaluated by DSC studies. The LLC, E-LLC, and O/W E formulations had an effect on both lipid and protein components of the SC, whereas the D-LLC system had an effect on only the lipid components of the SC. LLC systems could be considered promising for the topical delivery of AzA.

  13. The Practical Obstacles of Data Transfer: Why researchers still love scp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T

    The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in practice, researchers often under-utilize the network and resort to painfully slow single-stream transfer methods such as scp to avoid the complexity of using multiple-stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE-supported computing facilities, including both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggested optimizations, and discuss the obstacles the tools must overcome to receive widespread adoption over scp.
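
    The practical cost of staying with single-stream scp can be made concrete with a back-of-the-envelope transfer-time estimate; the throughput figures below are assumptions chosen only to illustrate the order-of-magnitude gap, not rates measured in this study.

    ```python
    def transfer_hours(size_tb, throughput_mbs):
        """Wall-clock hours to move size_tb terabytes at a sustained
        throughput of throughput_mbs megabytes per second."""
        bytes_total = size_tb * 1e12
        return bytes_total / (throughput_mbs * 1e6) / 3600.0

    # Assumed sustained rates: ~50 MB/s for a single scp stream versus
    # ~500 MB/s for a tuned multi-stream transfer (illustrative only).
    for tool, rate in [("scp (1 stream)", 50), ("multi-stream", 500)]:
        print(f"{tool:>15}: {transfer_hours(10, rate):6.1f} h for 10 TB")
    ```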

  14. Development of a High-Resolution Climate Model for Future Climate Change Projection on the Earth Simulator

    NASA Astrophysics Data System (ADS)

    Kanzawa, H.; Emori, S.; Nishimura, T.; Suzuki, T.; Inoue, T.; Hasumi, H.; Saito, F.; Abe-Ouchi, A.; Kimoto, M.; Sumi, A.

    2002-12-01

    The world's fastest supercomputer, the Earth Simulator (total peak performance 40 TFLOPS), has recently become available for climate research in Yokohama, Japan. We are planning to conduct a series of future climate change projection experiments on the Earth Simulator with a high-resolution coupled ocean-atmosphere climate model. The main scientific aims of the experiments are to investigate 1) the change in global ocean circulation with an eddy-permitting ocean model, 2) the regional details of the climate change, including the Asian monsoon rainfall pattern, tropical cyclones and so on, and 3) the change in natural climate variability with a high-resolution model of the coupled ocean-atmosphere system. To meet these aims, an atmospheric GCM, CCSR/NIES AGCM, with T106 (~1.1°) horizontal resolution and 56 vertical layers is to be coupled with an oceanic GCM, COCO, with ~0.28° x 0.19° horizontal resolution and 48 vertical layers. This coupled ocean-atmosphere climate model, named MIROC, also includes a land-surface model, a dynamic-thermodynamic sea-ice model, and a river routing model. The poles of the oceanic model grid system are rotated from the geographic poles so that they are placed in the Greenland and Antarctic land masses to avoid the singularity of the grid system. Each of the atmospheric and oceanic parts of the model is parallelized with the Message Passing Interface (MPI) technique. The coupling of the two is to be done in a Multiple Program Multiple Data (MPMD) fashion. A 100-model-year integration will be possible in one actual month with 720 vector processors (which is only 14% of the full resources of the Earth Simulator).
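
    The component coupling described above (separate MPI-parallelized atmosphere and ocean models exchanging fields) can be sketched with mpi4py by splitting a world communicator into per-component groups and exchanging coupling fields between the component roots. This single-program sketch only emulates an MPMD arrangement and is not the MIROC coupler; the rank split and exchanged field names are purely illustrative.

    ```python
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Illustrative split: the first half of the ranks run the "atmosphere",
    # the rest run the "ocean"; each component gets its own communicator.
    is_atmos = rank < comm.Get_size() // 2
    component = comm.Split(color=0 if is_atmos else 1, key=rank)

    # Component roots exchange coupling fields (e.g., fluxes vs. SST).
    if component.Get_rank() == 0:
        field = {"atmos": "surface fluxes", "ocean": "sea surface temperature"}
        send = field["atmos" if is_atmos else "ocean"]
        other_root = comm.Get_size() // 2 if is_atmos else 0
        recv = comm.sendrecv(send, dest=other_root, source=other_root)
        print(f"rank {rank} ({'atmos' if is_atmos else 'ocean'}) received: {recv}")
    ```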

  15. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission-critical Sandia applications and an emerging class of more data-intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  16. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
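
    A minimal illustration of the sorted k-mer list idea follows: each sequence is decomposed into overlapping k-mers, the lists are sorted, and shared anchors between two genomes are found by a linear merge. This toy version ignores reverse complements and repeated k-mers and is not the Blue Gene/P implementation.

    ```python
    def sorted_kmers(sequence, k):
        """Return a sorted list of (k-mer, position) pairs for one sequence."""
        kmers = [(sequence[i:i + k], i) for i in range(len(sequence) - k + 1)]
        kmers.sort()
        return kmers

    def shared_kmers(seq_a, seq_b, k):
        """Find k-mers occurring in both sequences by merging sorted lists."""
        a, b = sorted_kmers(seq_a, k), sorted_kmers(seq_b, k)
        i = j = 0
        shared = []
        while i < len(a) and j < len(b):
            if a[i][0] == b[j][0]:
                # Record the anchor and its position in each sequence.
                shared.append((a[i][0], a[i][1], b[j][1]))
                i += 1
                j += 1
            elif a[i][0] < b[j][0]:
                i += 1
            else:
                j += 1
        return shared

    # Tiny example with two made-up sequences and k = 4.
    print(shared_kmers("ACGTACGGA", "TTACGTA", 4))
    ```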

  17. Runners in their forties dominate ultra-marathons from 50 to 3,100 miles

    PubMed Central

    Zingg, Matthias Alexander; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald; Knechtle, Beat

    2014-01-01

    OBJECTIVES: This study investigated performance trends and the age of peak running speed in ultra-marathons from 50 to 3,100 miles. METHODS: The running speed and age of the fastest competitors in 50-, 100-, 200-, 1,000- and 3,100-mile events held worldwide from 1971 to 2012 were analyzed using single- and multi-level regression analyses. RESULTS: The number of events and competitors increased exponentially in 50- and 100-mile events. For the annual fastest runners, women improved in 50-mile events, but not men. In 100-mile events, both women and men improved their performance. In 1,000-mile events, men became slower. For the annual top ten runners, women improved in 50- and 100-mile events, whereas the performance of men remained unchanged in 50- and 3,100-mile events but improved in 100-mile events. The age of the annual fastest runners was approximately 35 years for both women and men in 50-mile events and approximately 35 years for women in 100-mile events. For men, the age of the annual fastest runners in 100-mile events was higher at 38 years. For the annual fastest runners of 1,000-mile events, the women were approximately 43 years of age, whereas for men, the age increased to 48 years of age. For the annual fastest runners of 3,100-mile events, the age in women decreased to 35 years and was approximately 39 years in men. CONCLUSION: The running speed of the fastest competitors increased for both women and men in 100-mile events but only for women in 50-mile events. The age of peak running speed increased in men with increasing race distance to approximately 45 years in 1,000-mile events, whereas it decreased to approximately 39 years in 3,100-mile events. In women, the upper age of peak running speed increased to approximately 51 years in 3,100-mile events. PMID:24626948

  18. Leadership Class Configuration Interaction Code - Status and Opportunities

    NASA Astrophysics Data System (ADS)

    Vary, James

    2011-10-01

    With support from SciDAC-UNEDF (www.unedf.org), nuclear theorists have developed and are continuously improving a Leadership Class Configuration Interaction Code (LCCI) for forefront nuclear structure calculations. The aim of this project is to make state-of-the-art nuclear structure tools available to the entire community of researchers, including graduate students. The project includes codes such as NuShellX, MFDn and BIGSTICK that run on a range of computers from laptops to leadership-class supercomputers. Codes, scripts, test cases and documentation have been assembled, are under continuous development and are scheduled for release to the entire research community in November 2011. A covering script that accesses the appropriate code and supporting files is under development. In addition, a Data Base Management System (DBMS) that records key information from large production runs and archived results of those runs has been developed (http://nuclear.physics.iastate.edu/info/) and will be released. Following an outline of the project, the code structure, capabilities, the DBMS and current efforts, I will suggest a path forward that would benefit greatly from a significant partnership between researchers who use the codes, code developers and the National Nuclear Data efforts. This research is supported in part by DOE under grant DE-FG02-87ER40371 and grant DE-FC02-09ER41582 (SciDAC-UNEDF).

  19. Comparison of Origin 2000 and Origin 3000 Using NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Turney, Raymond D.

    2001-01-01

    This report describes results of benchmark tests on the Origin 3000 system currently being installed at the NASA Ames National Advanced Supercomputing facility. This machine will ultimately contain 1024 R14K processors. The first part of the system, installed in November, 2000 and named mendel, is an Origin 3000 with 128 R12K processors. For comparison purposes, the tests were also run on lomax, an Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to determine system performance and measure the impact of changes on the machine as it evolves. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since the NAS runs both message passing (MPI) and shared-memory, compiler directive type codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version that is in the process of being released. NPB 2.3 and PBN 3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  20. Navier-Stokes Simulation of Air-Conditioning Facility of a Large Modern Computer Room

    NASA Technical Reports Server (NTRS)

    2005-01-01

    NASA recently assembled one of the world's fastest operational supercomputers to meet the agency's new high performance computing needs. This large-scale system, named Columbia, consists of 20 interconnected SGI Altix 512-processor systems, for a total of 10,240 Intel Itanium-2 processors. High-fidelity CFD simulations were performed for the NASA Advanced Supercomputing (NAS) computer room at Ames Research Center. The purpose of the simulations was to assess the adequacy of the existing air handling and conditioning system and make recommendations for changes in the design of the system if needed. The simulations were performed with NASA's OVERFLOW-2 CFD code, which utilizes overset structured grids. A new set of boundary conditions was developed and added to the flow solver for modeling the room's air-conditioning and proper cooling of the equipment. Boundary condition parameters for the flow solver are based on cooler CFM (flow rate) ratings and some reasonable assumptions of flow and heat transfer data for the floor and central processing units (CPUs). The geometry modeling from blueprints and grid generation were handled by the NASA Ames software package Chimera Grid Tools (CGT). This geometric model was developed as a CGT-scripted template, which can be easily modified to accommodate any changes in shape and size of the room, locations and dimensions of the CPU racks, disk racks, coolers, power distribution units, and mass-storage system. The compute nodes are grouped in pairs of racks with an aisle in the middle. High-speed connection cables connect the racks with overhead cable trays. The cool air from the cooling units is pumped into the computer room from a sub-floor through perforated floor tiles. The CPU cooling fans draw cool air from the floor tiles, which run along the outside length of each rack, and eject warm air into the center aisle between the racks. This warm air is eventually drawn into the cooling units located near the walls of the room. One major concern is that the hot air ejected into the middle aisle might recirculate back into the cool rack side and cause thermal short-cycling. The simulations analyzed and addressed the following important elements of the computer room: 1) High-temperature build-up in certain regions of the room; 2) Areas of low air circulation in the room; 3) Potential short-cycling of the computer rack cooling system; 4) Effectiveness of the perforated cooling floor tiles; 5) Effect of changes in various aspects of the cooling units. Detailed flow visualization is performed to show temperature distribution, air-flow streamlines and velocities in the computer room.

  1. The series-elastic shock absorber: tendons attenuate muscle power during eccentric actions.

    PubMed

    Roberts, Thomas J; Azizi, Emanuel

    2010-08-01

    Elastic tendons can act as muscle power amplifiers or energy-conserving springs during locomotion. We used an in situ muscle-tendon preparation to examine the mechanical function of tendons during lengthening contractions, when muscles absorb energy. Force, length, and power were measured in the lateral gastrocnemius muscle of wild turkeys. Sonomicrometry was used to measure muscle fascicle length independently from muscle-tendon unit (MTU) length, as measured by a muscle lever system (servomotor). A series of ramp stretches of varying velocities was applied to the MTU in fully activated muscles. Fascicle length changes were decoupled from length changes imposed on the MTU by the servomotor. Under most conditions, muscle fascicles shortened on average, while the MTU lengthened. Energy input to the MTU during the fastest lengthenings was -54.4 J/kg, while estimated work input to the muscle fascicles during this period was only -11.24 J/kg. This discrepancy indicates that energy was first absorbed by elastic elements, then released to do work on muscle fascicles after the lengthening phase of the contraction. The temporary storage of energy by elastic elements also resulted in a significant attenuation of power input to the muscle fascicles. At the fastest lengthening rates, peak instantaneous power input to the MTU reached -2,143.9 W/kg, while peak power input to the fascicles was only -557.6 W/kg. These results demonstrate that tendons may act as mechanical buffers by limiting peak muscle forces, lengthening rates, and power inputs during energy-absorbing contractions.

  2. Repeated Evolution of Power-Amplified Predatory Strikes in Trap-Jaw Spiders.

    PubMed

    Wood, Hannah M; Parkinson, Dilworth Y; Griswold, Charles E; Gillespie, Rosemary G; Elias, Damian O

    2016-04-25

    Small animals possess intriguing morphological and behavioral traits that allow them to capture prey, including innovative structural mechanisms that produce ballistic movements by amplifying power [1-6]. Power amplification occurs when an organism produces a relatively high power output by releasing slowly stored energy almost instantaneously, resulting in movements that surpass the maximal power output of muscles [7]. For example, trap-jaw, power-amplified mechanisms have been described for several ant genera [5, 8], which have evolved some of the fastest known movements in the animal kingdom [6]. However, power-amplified predatory strikes were not previously known in one of the largest animal classes, the arachnids. Mecysmaucheniidae spiders, which occur only in New Zealand and southern South America, are tiny, cryptic, ground-dwelling spiders that rely on hunting rather than web-building to capture prey [9]. Analysis of high-speed video revealed that power-amplified mechanisms occur in some mecysmaucheniid species, with the fastest species being two orders of magnitude faster than the slowest species. Molecular phylogenetic analysis revealed that power-amplified cheliceral strikes have evolved four times independently within the family. Furthermore, we identified morphological innovations that directly relate to cheliceral function: a highly modified carapace in which the cheliceral muscles are oriented horizontally; modification of a cheliceral sclerite to have muscle attachments; and, in the power-amplified species, a thicker clypeus and clypeal apodemes. These structural innovations may have set the stage for the parallel evolution of ballistic predatory strikes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Delivering Science on Day One

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Timothy J.

    2016-03-01

    While benchmarking software is useful for testing the performance limits and stability of Argonne National Laboratory’s new Theta supercomputer, there is no substitute for running real applications to explore the system’s potential. The Argonne Leadership Computing Facility’s Theta Early Science Program, modeled after its highly successful code migration program for the Mira supercomputer, has one primary aim: to deliver science on day one. Here is a closer look at the type of science problems that will be getting early access to Theta, a next-generation machine being rolled out this year.

  4. Supercomputer analysis of sedimentary basins.

    PubMed

    Bethke, C M; Altaner, S P; Harrison, W J; Upson, C

    1988-01-15

    Geological processes of fluid transport and chemical reaction in sedimentary basins have formed many of the earth's energy and mineral resources. These processes can be analyzed on natural time and distance scales with the use of supercomputers. Numerical experiments are presented that give insights to the factors controlling subsurface pressures, temperatures, and reactions; the origin of ores; and the distribution and quality of hydrocarbon reservoirs. The results show that numerical analysis combined with stratigraphic, sea level, and plate tectonic histories provides a powerful tool for studying the evolution of sedimentary basins over geologic time.

  5. Discover Supercomputer 3

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  6. Discover Supercomputer 2

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  7. Discover Supercomputer 1

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  8. Development of the general interpolants method for the CYBER 200 series of supercomputers

    NASA Technical Reports Server (NTRS)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential-equation types, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  9. The Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, D. M.; Littman, M. G.

    1986-01-01

    The Navier-Stokes computer (NSC) has been developed for solving problems in fluid mechanics involving complex flow simulations that require more speed and capacity than provided by current and proposed Class VI supercomputers. The machine is a parallel processing supercomputer with several new architectural elements which can be programmed to address a wide range of problems meeting the following criteria: (1) the problem is numerically intensive, and (2) the code makes use of long vectors. A simulation of two-dimensional nonsteady viscous flows is presented to illustrate the architecture, programming, and some of the capabilities of the NSC.

  10. Merging the Machines of Modern Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Laura; Collins, Jim

    Two recent projects have harnessed supercomputing resources at the US Department of Energy’s Argonne National Laboratory in a novel way to support major fusion science and particle collider experiments. Using leadership computing resources, one team ran fine-grid analysis of real-time data to make near-real-time adjustments to an ongoing experiment, while a second team is working to integrate Argonne’s supercomputers into the Large Hadron Collider/ATLAS workflow. Together these efforts represent a new paradigm of the high-performance computing center as a partner in experimental science.

  11. Release of metronidazole from electrospun poly(L-lactide-co-D/L-lactide) fibers for local periodontitis treatment.

    PubMed

    Reise, Markus; Wyrwa, Ralf; Müller, Ulrike; Zylinski, Matthias; Völpel, Andrea; Schnabelrauch, Matthias; Berg, Albrecht; Jandt, Klaus D; Watts, David C; Sigusch, Bernd W

    2012-02-01

    We aimed to achieve detailed biomaterials characterization of a drug delivery system for local periodontitis treatment based on electrospun metronidazole-loaded resorbable polylactide (PLA) fibers. PLA fibers loaded with 0.1-40% (w/w) MNA were electrospun and were characterized by SEM and DSC. HPLC techniques were used to analyze the release profiles of metronidazole (MNA) from these fibers. The antibacterial efficacy was determined by measuring inhibition zones of drug-containing aliquots from the same electrospun fiber mats in an agar diffusion test. Three pathogenic periodontal bacterial strains: Fusobacterium nucleatum, Aggregatibacter actinomycetemcomitans and Porphyromonas gingivalis were studied. Cytotoxicity testing was performed with human gingival fibroblasts by (i) counting viable cells via live/dead staining methods and (ii) exposing cells directly onto the surface of MNA-loaded fibers. MNA concentration influenced fiber diameters and thus w/w surface areas: diameter being minimal and area maximal at 20% MNA. HPLC showed that these 20% MNA fibers had the fastest initial MNA release. From the third day, MNA release was slower and nearly linear with time. All fiber mats released 32-48% of their total drug content within the first 7 days. Aliquots of media taken from the fiber mats inhibited the growth of all three bacterial strains. MNA released up to the 28th day from fiber mats containing 40% MNA significantly decreased the viability of F. nucleatum and P. gingivalis and up to the 2nd day also for the resistant A. actinomycetemcomitans. All of the investigated fibers and aliquots showed excellent cytocompatibility. This study shows that MNA-loaded electrospun fiber mats represent an interesting class of resorbable drug delivery systems. Sustained drug release properties and cytocompatibility suggest their potential clinical applicability for the treatment of periodontal diseases. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  12. Step-to-step spatiotemporal variables and ground reaction forces of intra-individual fastest sprinting in a single session.

    PubMed

    Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo

    2018-06-01

    We aimed to investigate the step-to-step spatiotemporal variables and ground reaction forces during the acceleration phase for characterising intra-individual fastest sprinting within a single session. Step-to-step spatiotemporal variables and ground reaction forces produced by 15 male athletes were measured over a 50-m distance during repeated (three to five) 60-m sprints using a long force platform system. Differences in measured variables between the fastest and slowest trials were examined at each step until the 22nd step using a magnitude-based inferences approach. There were possibly-most likely higher running speed and step frequency (2nd to 22nd steps) and shorter support time (all steps) in the fastest trial than in the slowest trial. Moreover, for the fastest trial there were likely-very likely greater mean propulsive force during the initial four steps and possibly-very likely larger mean net anterior-posterior force until the 17th step. The current results demonstrate that better sprinting performance within a single session is probably achieved by 1) a high step frequency (except the initial step) with short support time at all steps, 2) exerting a greater mean propulsive force during initial acceleration, and 3) producing a greater mean net anterior-posterior force during initial and middle acceleration.

  13. Nitrate uptake and nitrite release by tomato roots in response to anoxia.

    PubMed

    Morard, Philippe; Silvestre, Jérôme; Lacoste, Ludovic; Caumes, Edith; Lamaze, Thierry

    2004-07-01

    Excised root systems of tomato plants (early fruiting stage, 2nd flush) were subjected to a gradual transition from normoxia to anoxia by sealing the hydroponic root medium while aeration was stopped. The oxygen level in the medium and the respiration rate decreased and reached very low values after 12 h of treatment, indicating that the tissues were anoxic thereafter. Nitrate loss from the nutrient solution was strongly stimulated by anoxia (after 26 h) concomitantly with a release of nitrite starting only after 16 h of treatment. This effect was not observed in the absence of roots or in the presence of tungstate, but occurred with whole plants or with sterile in vitro cultured root tissues. These results indicate that biochemical processes in the root involve nitrate reductase (NR). NR activity assayed in tomato roots increased during anoxia. This phenomenon appeared in intact plants and in root tissues of detopped plants. The stimulating effect of oxygen deprivation on nitrate uptake was specific; anoxia simultaneously entailed a release of orthophosphate, sulfate, and potassium by the roots. Anoxia enhanced nitrate reduction by root tissues, and nitrite ions were released into the xylem sap and into the culture medium. In terms of the overall balance, the amount of nitrite recovered represented only half of the amount of nitrate utilized. Nitrite reduction into nitric oxide and perhaps into nitrogen gas could account for this discrepancy. These results appear to be the first report of an increase in nitrate uptake by plant roots under anoxia in tomato at the early fruiting stage, and the rates of nitrite release into the nutrient medium by the asphyxiated roots are the fastest yet reported.

  14. Sex Difference in Draft-Legal Ultra-Distance Events - A Comparison between Ultra-Swimming and Ultra-Cycling.

    PubMed

    Salihu, Lejla; Rüst, Christoph Alexander; Rosemann, Thomas; Knechtle, Beat

    2016-04-30

    Recent studies have reported that the sex difference in performance in ultra-endurance sports such as swimming and cycling changed over the years. However, the aspect of drafting in draft-legal ultra-endurance races has not yet been investigated. This study investigates the sex difference in draft-legal ultra-swimming and ultra-cycling races, where drafting (swimming or cycling behind other participants to save energy and retain more power to overtake them at the end of the race) is allowed. The changes in performance of the annual best and the annual three best finishers in an ultra-endurance swimming race (the 16-km 'Faros Swim Marathon') over 38 years and in a 24-h ultra-cycling race (the 'World Cycling Race') over 13 years were compared and analysed with respect to sex difference. Furthermore, the performances of the fastest female and male finishers ever were compared. In the swimming event, the sex difference between the annual best male and female decreased non-significantly (P = 0.262) from 5.3% (1976) to 1.0% (2013). The sex gap in speed of the annual three fastest swimmers decreased significantly (P = 0.043) from 5.9 ± 1.6% (1979) to 4.7 ± 3.1% (2013). In the cycling event, the difference in cycling speed between the annual best male and female decreased significantly (P = 0.026) from 33.31% (1999) to 10.89% (2011). The sex gap in speed of the annual three fastest decreased significantly (P = 0.001) from 32.9 ± 0.6% (1999) to 16.4 ± 5.9% (2011). The fastest male swimmer ever (swimming speed 5.3 km/h, race time 03:01:55 h:min:s) was 1.5% faster than the fastest female swimmer (swimming speed 5.2 km/h, race time 03:04:09 h:min:s). The three fastest male swimmers ever (mean 5.27 ± 0.13 km/h) were 4.4% faster than the three fastest female swimmers (mean 5.05 ± 0.20 km/h) (P < 0.05). In the cycling event, the best male ever (cycling speed 45.8 km/h) was 26.4% faster than the best female (cycling speed 36.1 km/h). The three fastest male cyclists ever (mean 45.85 ± 0.05 km/h) were 32.1% faster (P < 0.05) than the three fastest female cyclists ever (mean 34.70 ± 1.87 km/h). In summary, in draft-legal ultra-distance events such as swimming and cycling, the sex difference in the annual top and annual top three swimmers and cyclists decreased (non-linearly in swimmers and linearly in cyclists) over the years. The sex difference between the fastest athletes ever was smaller in swimming (1.5%) than in cycling (26.4%). This finding differs from reports on races where drafting was not possible or even prohibited and where the sex difference remained stable over the years.

  15. NASA's Participation in the National Computational Grid

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)

    1998-01-01

    Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases and online experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several recent events are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI) program, which is built around the idea of distributed high performance computing. The Alliance, led by the National Computational Science Alliance (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputer Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high performance computing research programs to concentrate on distributed high performance computing and has banded together with the PACI centers to address the research agenda in common.

  16. Calculation of Free Energy Landscape in Multi-Dimensions with Hamiltonian-Exchange Umbrella Sampling on Petascale Supercomputer.

    PubMed

    Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît

    2012-11-13

    An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multidimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows distributed to cover the space of order parameters with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternately along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of a parallel/parallel multiple-copy protocol at the MPI level and can therefore fully exploit the computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative to the crystal structure, was calculated using the US/H-REMD method. The results are then used to estimate the absolute binding free energy of a calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
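
    As a rough illustration of the exchange step described above, the sketch below (a hypothetical Python fragment, not the authors' MPI implementation) shows a Metropolis swap test between two neighbouring umbrella windows in a 2D order-parameter space; the harmonic bias form, force constant, temperature, and data layout are all assumptions made for illustration.

```python
import math
import random

# Hypothetical sketch of the neighbour-exchange step in 2D US/H-REMD (not the
# authors' code): each replica carries a harmonic umbrella bias centred on its
# window in the 2D order-parameter space, and neighbouring windows attempt a
# Metropolis swap of configurations.
BETA = 1.0 / 0.596   # 1/kT in (kcal/mol)^-1 at roughly 300 K (assumed)
K_UMB = 10.0         # assumed harmonic force constant

def bias(centre, q):
    """Harmonic umbrella bias for a window centred at `centre`, evaluated at q."""
    return 0.5 * K_UMB * sum((qi - ci) ** 2 for qi, ci in zip(q, centre))

def attempt_swap(rep_i, rep_j):
    """Metropolis exchange between two neighbouring windows.

    Each replica is a dict holding its window 'centre' and current order
    parameters 'q' (e.g. ion-pocket distance and pocket rmsd)."""
    delta = BETA * (bias(rep_i["centre"], rep_j["q"]) + bias(rep_j["centre"], rep_i["q"])
                    - bias(rep_i["centre"], rep_i["q"]) - bias(rep_j["centre"], rep_j["q"]))
    if delta <= 0.0 or random.random() < math.exp(-delta):
        rep_i["q"], rep_j["q"] = rep_j["q"], rep_i["q"]   # accept: swap configurations
        return True
    return False

# Exchanges alternate between the two axes: on even-numbered attempts, pair
# windows that are neighbours along the first order parameter; on odd-numbered
# attempts, pair neighbours along the second.
a = {"centre": (2.0, 1.0), "q": (2.1, 1.2)}
b = {"centre": (2.5, 1.0), "q": (2.4, 0.9)}
print(attempt_swap(a, b))
```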

  17. Trends in the Electron Microscopy Data Bank (EMDB).

    PubMed

    Patwardhan, Ardan

    2017-06-01

    Recent technological advances, such as the introduction of the direct electron detector, have transformed the field of cryo-EM and the landscape of molecular and cellular structural biology. This study analyses these trends from the vantage point of the Electron Microscopy Data Bank (EMDB), the public archive for three-dimensional EM reconstructions. Over 1000 entries were released in 2016, representing almost a quarter of the total number of entries (4431). Structures at better than 6 Å resolution now represent one of the fastest-growing categories, while the share of annually released tomography-related structures is approaching 20%. The use of direct electron detectors is growing very rapidly: they were used for 70% of the structures released in 2016, in contrast to none before 2011. Microscopes from FEI have an overwhelming lead in terms of usage, and the use of the RELION software package continues to grow rapidly after having attained a leading position in the field. China is rapidly emerging as a major player in the field, supplementing the US, Germany and the UK as the big four. Similarly, Tsinghua University ranks only second to the MRC Laboratory of Molecular Biology in terms of involvement in publications associated with cryo-EM structures at better than 4 Å resolution. Overall, the numbers point to a rapid democratization of the field, with more countries and institutes becoming involved.

  18. Chitosan-based hydrogel tissue scaffolds made by 3D plotting promotes osteoblast proliferation and mineralization.

    PubMed

    Liu, I-Hsin; Chang, Shih-Hsin; Lin, Hsin-Yi

    2015-05-13

    A 3D plotting system was used to make chitosan-based tissue scaffolds with interconnected pores using pure chitosan (C) and chitosan cross-linked with pectin (CP) and genipin (CG). A freeze-dried chitosan scaffold (CF/D) was made to compare with C, to observe the effects of structural differences. The fiber size, pore size, porosity, compression strength, swelling ratio, drug release efficacy, and cumulative weight loss of the scaffolds were measured. Osteoblasts were cultured on the scaffolds and their proliferation, type I collagen production, alkaline phosphatase activity, calcium deposition, and morphology were observed. C had a lower swelling ratio, degradation, porosity and drug release efficacy and a higher compressional stiffness and cell proliferation compared to CF/D (p < 0.05). Of the 3D-plotted samples, cells on CP exhibited the highest degree of mineralization after 21 d (p < 0.05). CP also had the highest swelling ratio and fastest drug release, followed by C and CG (p < 0.05). Both CP and CG were stiffer and degraded more slowly in saline solution than C (p < 0.05). In summary, 3D-plotted scaffolds were stronger, less likely to degrade and better promoted osteoblast cell proliferation in vitro compared to the freeze-dried scaffolds. C, CP and CG were structurally similar, and the different crosslinking caused significant changes in their physical and biological performances.

  19. Microsphere based improved sunscreen formulation of ethylhexyl methoxycinnamate.

    PubMed

    Gogna, Deepak; Jain, Sunil K; Yadav, Awesh K; Agrawal, G P

    2007-04-01

    Polymethylmethacrylate (PMMA) microspheres of ethylhexyl methoxycinnamate (EHM) were prepared by an emulsion solvent evaporation method to improve its photostability and effectiveness as a sunscreening agent. Process parameters such as stirring speed and aqueous polyvinyl alcohol (PVA) concentration were analyzed in order to optimize the formulations. The shape and surface morphology of the microspheres were examined using scanning electron microscopy. The particle size of the microspheres was determined using a laser diffraction particle size analyzer. The PMMA microspheres of EHM were incorporated in a water-removable cream base. The in vitro release of EHM at pH 7.4 was studied using a dialysis membrane. Thin layer chromatography was performed to determine the photostability of EHM inside the microspheres. The formulations were evaluated for sun protection factor (SPF) and minimum erythema dose (MED) in albino rats. The cream base formulation containing microspheres prepared with an EHM:PMMA ratio of 1:3 (C3) showed the slowest EHM release, and those prepared with an EHM:PMMA ratio of 1:1 showed the fastest release. The cream base formulations containing EHM-loaded microspheres showed a better SPF (more than 16.0) compared to formulation Cd, which contained 3% free EHM as the sunscreen agent and showed an SPF of 4.66. These studies revealed that the incorporation of EHM-loaded PMMA microspheres into the cream base greatly increased the efficacy of the sunscreen formulation, approximately four-fold. Further, photostability was also improved in the PMMA microspheres.

  20. Trends in the Electron Microscopy Data Bank (EMDB)

    PubMed Central

    Patwardhan, Ardan

    2017-01-01

    Recent technological advances, such as the introduction of the direct electron detector, have transformed the field of cryo-EM and the landscape of molecular and cellular structural biology. This study analyses these trends from the vantage point of the Electron Microscopy Data Bank (EMDB), the public archive for three-dimensional EM reconstructions. Over 1000 entries were released in 2016, representing almost a quarter of the total number of entries (4431). Structures at better than 6 Å resolution now represent one of the fastest-growing categories, while the share of annually released tomography-related structures is approaching 20%. The use of direct electron detectors is growing very rapidly: they were used for 70% of the structures released in 2016, in contrast to none before 2011. Microscopes from FEI have an overwhelming lead in terms of usage, and the use of the RELION software package continues to grow rapidly after having attained a leading position in the field. China is rapidly emerging as a major player in the field, supplementing the US, Germany and the UK as the big four. Similarly, Tsinghua University ranks only second to the MRC Laboratory of Molecular Biology in terms of involvement in publications associated with cryo-EM structures at better than 4 Å resolution. Overall, the numbers point to a rapid democratization of the field, with more countries and institutes becoming involved. PMID:28580912

  1. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers, and their results are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
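
    The surrogate-agent idea can be sketched in a few lines; the following is a toy illustration (not the Autotune code), where the synthetic arrays X and y stand in for sampled EnergyPlus input parameters and simulated outputs, and scikit-learn's RandomForestRegressor stands in for the trained agent.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical sketch of training a fast surrogate "agent" from parametric
# simulation runs. Rows of X are sampled input parameters, rows of y are the
# corresponding simulated energy-use outputs; both are synthetic stand-ins here.
rng = np.random.default_rng(0)
X = rng.uniform(size=(5_000, 25))                     # stand-in for parametric runs
y = X @ rng.uniform(size=25) + 0.1 * rng.standard_normal(5_000)

agent = RandomForestRegressor(n_estimators=100, n_jobs=-1)
agent.fit(X, y)                                       # train once on simulation data

# The trained agent then predicts energy use in milliseconds, so candidate
# parameter sets can be scored against measured data without new simulations.
candidate = rng.uniform(size=(1, 25))
print(agent.predict(candidate))
```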

  2. Challenges in scaling NLO generators to leadership computers

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  3. Sign: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  4. Massively parallel algorithm and implementation of RI-MP2 energy calculation for peta-scale many-core supercomputers.

    PubMed

    Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito

    2016-11-15

    A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing peta-flop-class many-core supercomputers are presented. Some improvements from the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been performed: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.
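
    A dual-level hierarchical MPI decomposition of the kind mentioned above can be sketched with communicator splitting; the fragment below is an illustrative mpi4py sketch (not the published RI-MP2 code), and the number of groups and the work-batch scheme are assumptions.

```python
from mpi4py import MPI

# Illustrative dual-level hierarchical parallelization: the world communicator
# is split into groups; an intra-group communicator handles fine-grained work
# inside a group, and a leaders-only communicator coordinates between groups.
world = MPI.COMM_WORLD
rank = world.Get_rank()

GROUPS = 8                                     # assumed number of upper-level groups
group_id = rank % GROUPS

intra = world.Split(color=group_id, key=rank)              # processes within one group
leader_color = 0 if intra.Get_rank() == 0 else MPI.UNDEFINED
inter = world.Split(color=leader_color, key=rank)          # group leaders only

n_batches = 1000                               # assumed coarse-grained work units
if intra.Get_rank() == 0:
    my_batches = list(range(group_id, n_batches, GROUPS))  # leader picks the group's share
else:
    my_batches = None
my_batches = intra.bcast(my_batches, root=0)   # share the batch list within the group

# Each group then loops over its batches, parallelizing each one over `intra`,
# so collective communication stays local and network overhead is reduced.
```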

  5. Optical clock distribution in supercomputers using polyimide-based waveguides

    NASA Astrophysics Data System (ADS)

    Bihari, Bipin; Gan, Jianhua; Wu, Linghui; Liu, Yujie; Tang, Suning; Chen, Ray T.

    1999-04-01

    Guided-wave optics is a promising way to deliver high-speed clock signals in supercomputers with minimized clock skew. Si-CMOS compatible polymer-based waveguides for optoelectronic interconnects and packaging have been fabricated and characterized. A 1-to-48 fanout optoelectronic interconnection layer (OIL) structure based on Ultradel 9120/9020 for high-speed massive clock signal distribution for a Cray T-90 supercomputer board has been constructed. The OIL employs multimode polymeric channel waveguides in conjunction with surface-normal waveguide output couplers and 1-to-2 splitters. The surface-normal couplers can couple the optical clock signals into and out of the H-tree polyimide waveguides, which facilitates the integration of photodetectors to convert the optical signal to an electrical signal. A 45-degree surface-normal coupler has been integrated at each output end. The measured output coupling efficiency is nearly 100 percent. The output profile from the 45-degree surface-normal coupler was calculated using the Fresnel approximation; the theoretical result is in good agreement with the experimental result. A total insertion loss of 7.98 dB at 850 nm was measured experimentally.

  6. Flow visualization of CFD using graphics workstations

    NASA Technical Reports Server (NTRS)

    Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon

    1987-01-01

    High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.

  7. Two-dimensional nonsteady viscous flow simulation on the Navier-Stokes computer miniNode

    NASA Technical Reports Server (NTRS)

    Nosenchuck, Daniel M.; Littman, Michael G.; Flannery, William

    1986-01-01

    The needs of large-scale scientific computation are outpacing the growth in performance of mainframe supercomputers. In particular, problems in fluid mechanics involving complex flow simulations require far more speed and capacity than that provided by current and proposed Class VI supercomputers. To address this concern, the Navier-Stokes Computer (NSC) was developed. The NSC is a parallel-processing machine, comprised of individual Nodes, each comparable in performance to current supercomputers. The global architecture is that of a hypercube, and a 128-Node NSC has been designed. New architectural features, such as a reconfigurable many-function ALU pipeline and a multifunction memory-ALU switch, have provided the capability to efficiently implement a wide range of algorithms. Efficient algorithms typically involve numerically intensive tasks, which often include conditional operations. These operations may be efficiently implemented on the NSC without, in general, sacrificing vector-processing speed. To illustrate the architecture, programming, and several of the capabilities of the NSC, the simulation of two-dimensional, nonsteady viscous flows on a prototype Node, called the miniNode, is presented.

  8. Long-Term file activity patterns in a UNIX workstation environment

    NASA Technical Reports Server (NTRS)

    Gibson, Timothy J.; Miller, Ethan L.

    1998-01-01

    As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.

  9. Gelatin Methacrylate Microspheres for Growth Factor Controlled Release

    PubMed Central

    Nguyen, Anh H.; McKinney, Jay; Miller, Tobias; Bongiorno, Tom; McDevitt, Todd C.

    2014-01-01

    Gelatin has been commonly used as a delivery vehicle for various biomolecules for tissue engineering and regenerative medicine applications due to its simple fabrication methods, inherent electrostatic binding properties, and proteolytic degradability. Compared to traditional chemical cross-linking methods, such as the use of glutaraldehyde (GA), methacrylate modification of gelatin offers an alternative method to better control the extent of hydrogel cross-linking. Here we examined the physical properties and growth factor delivery of gelatin methacrylate (GMA) microparticles formulated with a wide range of different cross-linking densities (15–90%). Less methacrylated MPs had decreased elastic moduli and larger mesh sizes compared to GA MPs, with increasing methacrylation correlating to greater moduli and smaller mesh sizes. As expected, an inverse correlation between microparticle cross-linking density and degradation was observed, with the lowest cross-linked GMA MPs degrading at the fastest rate, comparable to GA MPs. Interestingly, GMA MPs at lower cross-linking densities could be loaded with up to a 10-fold higher relative amount of growth factor over conventional GA cross-linked MPs, despite an order of magnitude greater gelatin content of GA MPs. Moreover, a reduced GMA cross-linking density resulted in more complete release of bone morphogenic protein 4 (BMP4) and basic fibroblast growth factor (bFGF) and accelerated release rate with collagenase treatment. These studies demonstrate that GMA MPs provide a more flexible platform for growth factor delivery by enhancing the relative binding capacity and permitting proteolytic degradation tunability, thereby offering a more potent controlled release system for growth factor delivery. PMID:25463489

  10. Opportunities for leveraging OS virtualization in high-end supercomputing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke

    2010-11-01

    This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.

  11. Designing a connectionist network supercomputer.

    PubMed

    Asanović, K; Beck, J; Feldman, J; Morgan, N; Wawrzynek, J

    1993-12-01

    This paper describes an effort at UC Berkeley and the International Computer Science Institute to develop a supercomputer for artificial neural network applications. Our perspective has been strongly influenced by earlier experiences with the construction and use of a simpler machine. In particular, we have observed Amdahl's Law in action in our designs and those of others. These observations inspire attention to many factors beyond fast multiply-accumulate arithmetic. We describe a number of these factors along with rough expressions for their influence and then give the application targets, machine goals and the system architecture for the machine we are currently designing.
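
    Since the abstract appeals to Amdahl's Law, a small worked example may help; the parallel fraction and processor counts below are illustrative assumptions, not figures from the paper.

```python
# Amdahl's Law: if a fraction p of the work parallelizes perfectly over n
# processors, the overall speedup is bounded by 1 / ((1 - p) + p / n).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative numbers (not from the paper): even with 95% of the work in fast
# multiply-accumulate arithmetic, the remaining 5% caps the achievable speedup.
for n in (16, 256, 4096):
    print(n, round(amdahl_speedup(0.95, n), 1))
# -> 9.1, 18.6, 19.9 ... the asymptote is 1/0.05 = 20x, however many processors.
```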

  12. Building black holes: supercomputer cinema.

    PubMed

    Shapiro, S L; Teukolsky, S A

    1988-07-22

    A new computer code can solve Einstein's equations of general relativity for the dynamical evolution of a relativistic star cluster. The cluster may contain a large number of stars that move in a strong gravitational field at speeds approaching the speed of light. Unstable star clusters undergo catastrophic collapse to black holes. The collapse of an unstable cluster to a supermassive black hole at the center of a galaxy may explain the origin of quasars and active galactic nuclei. By means of a supercomputer simulation and color graphics, the whole process can be viewed in real time on a movie screen.

  13. Supercomputer analysis of purine and pyrimidine metabolism leading to DNA synthesis.

    PubMed

    Heinmets, F

    1989-06-01

    A model system is established to analyze purine and pyrimidine metabolism leading to DNA synthesis. The principal aim is to explore the flow and regulation of terminal deoxynucleoside triphosphates (dNTPs) under various input and parametric conditions. A series of flow equations is established and subsequently converted to differential equations. These are programmed (in Fortran) and analyzed on a Cray X-MP/48 supercomputer. The pool concentrations are presented as a function of time under conditions in which various pertinent parameters of the system are modified. The system is formulated with 100 differential equations.
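
    The modelling approach (flow equations converted to differential equations and integrated over time) can be illustrated with a miniature example; the three-pool chain, rate constants, and use of SciPy below are assumptions for illustration, not the paper's 100-equation Fortran model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy three-pool chain precursor -> dNDP -> dNTP with a constant synthesis
# input and a consumption term representing DNA synthesis. All rate constants
# are made up for illustration.
k_in, k1, k2, k_dna = 1.0, 0.8, 0.6, 0.4

def flows(t, y):
    precursor, dndp, dntp = y
    return [k_in - k1 * precursor,
            k1 * precursor - k2 * dndp,
            k2 * dndp - k_dna * dntp]

sol = solve_ivp(flows, (0.0, 20.0), [0.0, 0.0, 0.0], t_eval=np.linspace(0.0, 20.0, 11))
print(sol.y[:, -1])   # pool concentrations approaching steady state (1.25, 1.67, 2.5)
```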

  14. Performance of the Widely-Used CFD Code OVERFLOW on the Pleiades Supercomputer

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2017-01-01

    Computational performance studies were made for NASA's widely used Computational Fluid Dynamics code OVERFLOW on the Pleiades Supercomputer. Two test cases were considered: a full launch vehicle with a grid of 286 million points and a full rotorcraft model with a grid of 614 million points. Computations using up to 8000 cores were run on Sandy Bridge and Ivy Bridge nodes. Performance was monitored using times reported in the day files from the Portable Batch System utility. Results for two grid topologies are presented and compared in detail. Observations and suggestions for future work are made.

  15. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  16. When Black Holes Collide

    NASA Technical Reports Server (NTRS)

    Baker, John

    2010-01-01

    Among the fascinating phenomena predicted by General Relativity, Einstein's theory of gravity, black holes and gravitational waves are particularly important in astronomy. Though once viewed as a mathematical oddity, black holes are now recognized as the central engines of many of astronomy's most energetic cataclysms. Gravitational waves, though weakly interacting with ordinary matter, may be observed with new gravitational wave telescopes, opening a new window to the universe. These observations promise a direct view of the strong gravitational dynamics involving dense, often dark objects, such as black holes. The most powerful of these events may be the merger of two colliding black holes. Though dark, these mergers may briefly release more energy, in gravitational waves, than all the stars in the visible universe. General relativity makes precise predictions for the gravitational-wave signatures of these events, predictions which we can now calculate with the aid of supercomputer simulations. These results provide a foundation for interpreting expected observations in the emerging field of gravitational wave astronomy.

  17. Neuropeptide Signaling Networks and Brain Circuit Plasticity.

    PubMed

    McClard, Cynthia K; Arenkiel, Benjamin R

    2018-01-01

    The brain is a remarkable network of circuits dedicated to sensory integration, perception, and response. The computational power of the brain is estimated to dwarf that of most modern supercomputers, but perhaps its most fascinating capability is to structurally refine itself in response to experience. In the language of computers, the brain is loaded with programs that encode when and how to alter its own hardware. This programmed "plasticity" is a critical mechanism by which the brain shapes behavior to adapt to changing environments. The expansive array of molecular commands that help execute this programming is beginning to emerge. Notably, several neuropeptide transmitters, previously best characterized for their roles in hypothalamic endocrine regulation, have increasingly been recognized for mediating activity-dependent refinement of local brain circuits. Here, we discuss recent discoveries that reveal how local signaling by corticotropin-releasing hormone reshapes mouse olfactory bulb circuits in response to activity and further explore how other local neuropeptide networks may function toward similar ends.

  18. Efficient Parallelization of a Dynamic Unstructured Application on the Tera MTA

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak

    1999-01-01

    The success of parallel computing in solving real-life computationally-intensive problems relies on their efficient mapping and execution on large-scale multiprocessor architectures. Many important applications are both unstructured and dynamic in nature, making their efficient parallel implementation a daunting task. This paper presents the parallelization of a dynamic unstructured mesh adaptation algorithm using three popular programming paradigms on three leading supercomputers. We examine an MPI message-passing implementation on the Cray T3E and the SGI Origin2000, a shared-memory implementation using cache coherent nonuniform memory access (CC-NUMA) of the Origin2000, and a multi-threaded version on the newly-released Tera Multi-threaded Architecture (MTA). We compare several critical factors of this parallel code development, including runtime, scalability, programmability, and memory overhead. Our overall results demonstrate that multi-threaded systems offer tremendous potential for quickly and efficiently solving some of the most challenging real-life problems on parallel computers.

  19. YOUR TALENTS--LET'S NOT WASTE THEM.

    ERIC Educational Resources Information Center

    KEYSERLING, MARY DUBLIN

    AMERICAN WOMAN POWER NEEDS TO BE MORE FULLY UTILIZED TO MEET THE NATION'S MANPOWER REQUIREMENTS. PROFESSIONAL AND TECHNICAL OCCUPATIONS ARE THE FASTEST GROWING CAREER FIELDS, AND MEN ALONE CANNOT MEET THEIR MANPOWER DEMANDS. CLERICAL WORK AND SERVICE OCCUPATIONS ARE EXPECTED TO SHOW THE SECOND FASTEST RATE OF GROWTH. SALES OCCUPATIONS ARE ALSO…

  20. Opposed-flow Flame Spread Over Solid Fuels in Microgravity: the Effect of Confined Spaces

    NASA Astrophysics Data System (ADS)

    Wang, Shuangfeng; Hu, Jun; Xiao, Yuan; Ren, Tan; Zhu, Feng

    2015-09-01

    The effects of confined spaces on flame spread over thin solid fuels in a low-speed opposing flow are investigated by the combined use of microgravity experiments and computations. The flame behaviors are observed to depend strongly on the height of the flow tunnel. In particular, a non-monotonic trend of flame spread rate versus tunnel height is found, with the fastest flame occurring in the 3 cm high tunnel. The flame length and the total heat release rate from the flame also change with tunnel height, and a faster flame has a larger length and a higher heat release rate. The computational analyses indicate that a confined space modifies the flow around the spreading flame. The confinement restricts the thermal expansion and accelerates the flow in the streamwise direction. Above the flame, the flow deflects back from the tunnel wall. This inward flow pushes the flame towards the fuel surface and increases oxygen transport into the flame. Such a flow modification explains the variations of flame spread rate and flame length with tunnel height. The present results suggest that the confinement effects on flame behavior in microgravity should be accounted for to accurately assess spacecraft fire hazards.

  1. Spatiotemporal modeling of node temperatures in supercomputers

    DOE PAGES

    Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; ...

    2016-06-10

    Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus a generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers.
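
    The tail-modelling idea (a Normal bulk with a generalized Pareto upper tail) can be sketched briefly; the synthetic temperatures, threshold choice, and SciPy-based fit below are illustrative assumptions, not the paper's full spatio-temporal model.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic node temperatures, a high threshold, and a generalized Pareto fit
# to the exceedances, so extremes are represented better than by a Normal
# distribution alone. All numbers are made up for illustration.
rng = np.random.default_rng(1)
temps = 40.0 + 5.0 * rng.standard_normal(50_000)    # stand-in node temperatures (C)

threshold = np.quantile(temps, 0.95)                # upper-tail threshold (assumed)
exceedances = temps[temps > threshold] - threshold

shape, loc, scale = genpareto.fit(exceedances, floc=0.0)
# Tail probability of exceeding 55 C: P(above threshold) * P(excess > 55 - threshold)
p_hot = 0.05 * genpareto.sf(55.0 - threshold, shape, loc=loc, scale=scale)
print(f"P(node temperature > 55 C) ~ {p_hot:.2e}")
```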

  2. Integration of PanDA workload management system with Titan supercomputer at OLCF

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA a new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
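
    The light-weight MPI wrapper idea can be sketched as follows; this is an illustrative mpi4py fragment (not the PanDA pilot code), and the payload script name and per-rank input files are made up for the example.

```python
import subprocess
import sys
from mpi4py import MPI

# Each MPI rank runs one single-threaded payload on its share of a multicore
# worker node, so a serial workload fills a whole batch allocation in parallel.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

configs = [f"events_{i:04d}.cfg" for i in range(comm.Get_size())]  # assumed inputs
my_cfg = configs[rank]

# Each rank launches its own independent serial job and waits for it to finish.
result = subprocess.run(["./run_payload.sh", my_cfg], capture_output=True, text=True)

# Gather exit codes on rank 0 so the wrapper can report overall success/failure.
codes = comm.gather(result.returncode, root=0)
if rank == 0 and any(codes):
    sys.exit("one or more payloads failed")
```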

  3. The series-elastic shock absorber: tendons attenuate muscle power during eccentric actions

    PubMed Central

    Azizi, Emanuel

    2010-01-01

    Elastic tendons can act as muscle power amplifiers or energy-conserving springs during locomotion. We used an in situ muscle-tendon preparation to examine the mechanical function of tendons during lengthening contractions, when muscles absorb energy. Force, length, and power were measured in the lateral gastrocnemius muscle of wild turkeys. Sonomicrometry was used to measure muscle fascicle length independently from muscle-tendon unit (MTU) length, as measured by a muscle lever system (servomotor). A series of ramp stretches of varying velocities was applied to the MTU in fully activated muscles. Fascicle length changes were decoupled from length changes imposed on the MTU by the servomotor. Under most conditions, muscle fascicles shortened on average, while the MTU lengthened. Energy input to the MTU during the fastest lengthenings was −54.4 J/kg, while estimated work input to the muscle fascicles during this period was only −11.24 J/kg. This discrepancy indicates that energy was first absorbed by elastic elements, then released to do work on muscle fascicles after the lengthening phase of the contraction. The temporary storage of energy by elastic elements also resulted in a significant attenuation of power input to the muscle fascicles. At the fastest lengthening rates, peak instantaneous power input to the MTU reached −2,143.9 W/kg, while peak power input to the fascicles was only −557.6 W/kg. These results demonstrate that tendons may act as mechanical buffers by limiting peak muscle forces, lengthening rates, and power inputs during energy-absorbing contractions. PMID:20507964

  4. MOVANAID: An Interactive Aid for Analysis of Movement Capabilities.

    ERIC Educational Resources Information Center

    Cooper, George E.; And Others

    A computer-driven interactive aid for movement analysis, called MOVANAID, has been developed to assist in the performance of certain Army intelligence processing tasks in a tactical environment. It can compute fastest travel times and paths through road networks for military units of various types, as well as fastest times in which…
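
    Computing fastest travel times through a road network is, at heart, a shortest-path problem; the sketch below (illustrative only, not MOVANAID itself) applies Dijkstra's algorithm to a made-up network whose edge weights are travel times in minutes.

```python
import heapq

# Illustrative road network: travel times in minutes between hypothetical nodes.
roads = {
    "A": {"B": 12, "C": 30},
    "B": {"C": 10, "D": 25},
    "C": {"D": 8},
    "D": {},
}

def fastest_times(graph, start):
    """Dijkstra's algorithm: fastest travel time from `start` to every reachable node."""
    best = {start: 0}
    heap = [(0, start)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > best.get(node, float("inf")):
            continue                      # stale queue entry, already improved
        for nxt, dt in graph[node].items():
            cand = t + dt
            if cand < best.get(nxt, float("inf")):
                best[nxt] = cand
                heapq.heappush(heap, (cand, nxt))
    return best

print(fastest_times(roads, "A"))   # e.g. the fastest time from A to D is 30 minutes
```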

  5. 76 FR 61622 - Potential Closing of Morses Line Border Crossing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-05

    ... travelers would need to travel to an alternate crossing which could cost them both time and money. CBP does... measured the distance and estimated time for each combination assuming they could not travel through Morses Line. By comparing the distance and travel time for the fastest route to those for the fastest route...

  6. Height Growth of Mahogany Seedlings

    Treesearch

    C. B. Briscoe; R. W. Nobles

    1962-01-01

    Since the recognition of natural hybridization of small-leaf (West Indies) mahogany (Swietenia mahagoni Jacq.) with bigleaf (Honduras) mahogany (S. macrophylla King) there has been conjecture about their relative growth rates. One would expect small-leaf to be the fastest growing on dry sites, the hybrids to be fastest on intermediate sites, and bigleaf to excel on wet...

  7. Monitoring Object Library Usage and Changes

    NASA Technical Reports Server (NTRS)

    Owen, R. K.; Craw, James M. (Technical Monitor)

    1995-01-01

    The NASA Ames Numerical Aerodynamic Simulation program Aeronautics Consolidated Supercomputing Facility (NAS/ACSF) supercomputing center services over 1600 users, and has numerous analysts with root access. Several tools have been developed to monitor object library usage and changes. Some of the tools do "noninvasive" monitoring and other tools implement run-time logging even for object-only libraries. The run-time logging identifies who, when, and what is being used. The benefits are that real usage can be measured, unused libraries can be discontinued, training and optimization efforts can be focused at those numerical methods that are actually used. An overview of the tools will be given and the results will be discussed.

  8. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    PubMed

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  9. Particle simulation on heterogeneous distributed supercomputers

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Dagum, Leonardo

    1993-01-01

    We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.

  10. The transition of a real-time single-rotor helicopter simulation program to a supercomputer

    NASA Technical Reports Server (NTRS)

    Martinez, Debbie

    1995-01-01

    This report presents the conversion effort and results of a real-time flight simulation application's transition to a CONVEX supercomputer. Enclosed is a detailed description of the conversion process and a brief description of the Langley Research Center's (LaRC) flight simulation application program structure. Currently, this simulation program may be configured to represent a Sikorsky S-61 helicopter (a five-blade, single-rotor, commercial passenger-type helicopter) or an Army Cobra helicopter (either the AH-1G or AH-1S model). This report refers to the Sikorsky S-61 simulation program since it is the most frequently used configuration.

  11. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    PubMed

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we describe the current state of high-throughput virtual screening. We present a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
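
    A task-parallel screening run of this kind is typically organized as a manager/worker pattern over MPI; the sketch below is a hypothetical mpi4py illustration (not the published MPI Autodock4 code), with a placeholder dock() function standing in for an actual docking calculation.

```python
from mpi4py import MPI

# Manager/worker task distribution: rank 0 hands out ligand identifiers on
# demand; worker ranks "dock" each ligand and return a score. Assumes >= 2 ranks.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TAG_WORK, TAG_DONE = 1, 2

def dock(ligand_id: int) -> float:
    """Placeholder for a real docking calculation; returns a fake score."""
    return float(ligand_id % 97) / 97.0

if rank == 0:
    ligands = list(range(1_000_000))          # e.g. one million compound IDs
    results = {}
    next_idx = 0
    active = size - 1
    for w in range(1, size):                  # seed every worker with one task
        comm.send(ligands[next_idx], dest=w, tag=TAG_WORK)
        next_idx += 1
    while active:                             # keep feeding workers as they finish
        status = MPI.Status()
        lig, score = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        results[lig] = score
        if next_idx < len(ligands):
            comm.send(ligands[next_idx], dest=status.Get_source(), tag=TAG_WORK)
            next_idx += 1
        else:
            comm.send(None, dest=status.Get_source(), tag=TAG_DONE)
            active -= 1
    print("best-scoring ligand:", min(results, key=results.get))
else:
    while True:
        status = MPI.Status()
        lig = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_DONE:
            break
        comm.send((lig, dock(lig)), dest=0)
```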

  12. Sequence search on a supercomputer.

    PubMed

    Gotoh, O; Tagashira, Y

    1986-01-10

    A set of programs was developed for searching nucleic acid and protein sequence data bases for sequences similar to a given sequence. The programs, written in FORTRAN 77, were optimized for vector processing on a Hitachi S810-20 supercomputer. A search of a 500-residue protein sequence against the entire PIR data base Ver. 1.0 (1) (0.5 M residues) is carried out in a CPU time of 45 sec. About 4 min is required for an exhaustive search of a 1500-base nucleotide sequence against all mammalian sequences (1.2M bases) in Genbank Ver. 29.0. The CPU time is reduced to about a quarter with a faster version.

  13. Science & Technology Review November 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radousky, H

    This month's issue has the following articles: (1) Expanded Supercomputing Maximizes Scientific Discovery--Commentary by Dona Crawford. (2) Thunder's Power Delivers Breakthrough Science--Livermore's Thunder supercomputer allows researchers to model systems at scales never before possible. (3) Extracting Key Content from Images--A new system called the Image Content Engine is helping analysts find significant but hard-to-recognize details in overhead images. (4) Got Oxygen?--Oxygen, especially oxygen metabolism, was key to evolution, and a Livermore project helps find out why. (5) A Shocking New Form of Laserlike Light--According to research at Livermore, smashing a crystal with a shock wave can result in coherent light.

  14. A high performance linear equation solver on the VPP500 parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi

    1994-12-31

    This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500--(1) scalability for an arbitrary number of processors up to 222 processors, (2) flexible data transfer among processors provided by a crossbar interconnection network, (3) vector processing capability on each processor, and (4) overlapped computation and transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS performance with 100 processors in the LINPACK Highly Parallel Computing benchmark.
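
    The blocked LU idea named above can be illustrated compactly; the NumPy sketch below (a right-looking blocked factorization without pivoting, with an assumed block size) is for illustration only and is unrelated to the VPP500 implementation.

```python
import numpy as np

# Right-looking blocked LU decomposition without pivoting (illustrative only).
def blocked_lu(A, nb=64):
    """Return the LU factors packed into one matrix (unit lower triangle below the diagonal)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # Unblocked LU of the diagonal block A[k:e, k:e].
        for j in range(k, e):
            A[j + 1:e, j] /= A[j, j]
            A[j + 1:e, j + 1:e] -= np.outer(A[j + 1:e, j], A[j, j + 1:e])
        if e < n:
            L11 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
            U11 = np.triu(A[k:e, k:e])
            A[k:e, e:] = np.linalg.solve(L11, A[k:e, e:])        # U12 = L11^{-1} A12
            A[e:, k:e] = A[e:, k:e] @ np.linalg.inv(U11)         # L21 = A21 U11^{-1}
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]                 # trailing update
    return A

rng = np.random.default_rng(0)
M = rng.standard_normal((256, 256)) + 256 * np.eye(256)  # diagonally dominant: safe without pivoting
LU = blocked_lu(M)
L = np.tril(LU, -1) + np.eye(256)
U = np.triu(LU)
print(np.allclose(L @ U, M))   # True
```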

  15. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for the synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and the corresponding matrices can become large for practical engineering problems. This makes the process impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
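
    Solving a linear matrix inequality with an off-the-shelf convex-optimization package can be sketched as follows; the example uses a generic Lyapunov-type LMI with a made-up system matrix, not the paper's full-information synthesis LMI.

```python
import numpy as np
import cvxpy as cp

# Illustrative LMI feasibility problem: find P > 0 with A'P + PA < 0 for an
# assumed stable plant matrix A (a standard Lyapunov stability certificate).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                   # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]    # Lyapunov inequality
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print(prob.status, np.linalg.eigvalsh(P.value))
```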

  16. SiGN-SSM: open source parallel software for estimating gene networks with state space models.

    PubMed

    Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru

    2011-04-15

    SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), that is a statistical dynamic model suitable for analyzing short time and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective to stabilize the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under GNU Affero General Public Licence (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. The pre-compiled binaries for some architectures are available in addition to the source code. The pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information of SiGN-SSM is available on our web site. tamada@ims.u-tokyo.ac.jp.

  17. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectoring and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  18. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently, the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach used in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
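
    As a point of reference for the fragmentation indicator mentioned above, the sketch below computes a textbook Herfindahl-Hirschman Index from per-venue trading volumes; the volumes are hypothetical, and the paper's "volume HHI" variant may differ in detail.

        def herfindahl(volumes):
            """Herfindahl-Hirschman Index of a set of trading volumes.

            Returns a value in (0, 1]: 1 means all volume is on one venue,
            while values near 1/len(volumes) indicate a fragmented market.
            """
            total = float(sum(volumes))
            shares = [v / total for v in volumes]
            return sum(s * s for s in shares)

        # Hypothetical per-venue volumes for one time bucket.
        print(herfindahl([5_000, 3_000, 1_500, 500]))   # ~0.37, moderately concentrated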

  19. Computing with Beowulf

    NASA Technical Reports Server (NTRS)

    Cohen, Jarrett

    1999-01-01

    Parallel computers built out of mass-market parts are cost-effectively performing data processing and simulation tasks. The Supercomputing (now known as "SC") series of conferences celebrated its 10th anniversary last November. While vendors have come and gone, the dominant paradigm for tackling big problems still is a shared-resource, commercial supercomputer. Growing numbers of users needing a cheaper or dedicated-access alternative are building their own supercomputers out of mass-market parts. Such machines are generally called Beowulf-class systems after the 11th century epic. This modern-day Beowulf story began in 1994 at NASA's Goddard Space Flight Center, a laboratory for the Earth and space sciences, where computing managers threw down a gauntlet to develop a $50,000 gigaFLOPS workstation for processing satellite data sets. Soon, Thomas Sterling and Don Becker were working on the Beowulf concept at the Universities Space Research Association (USRA)-run Center of Excellence in Space Data and Information Sciences (CESDIS). Beowulf clusters mix three primary ingredients: commodity personal computers or workstations, low-cost Ethernet networks, and the open-source Linux operating system. One of the larger Beowulfs is Goddard's Highly-parallel Integrated Virtual Environment, or HIVE for short.

  20. Compute Server Performance Results

    NASA Technical Reports Server (NTRS)

    Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single processor rate of any vendor. However, if the price-performance ratio (PPR) is considered to be most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.
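
    The price-performance ratio used above is simply sustained performance divided by system price. The sketch below reproduces numbers of the same magnitude; the abstract gives neither prices nor the workstation MFLOPS figures, so the values marked as hypothetical are back-derived placeholders, not data from the paper.

        # PPR = sustained FLOPS per dollar.
        systems = {
            #                   sustained MFLOPS, price (USD); both columns are
            #                   hypothetical placeholders except the C90 MFLOPS figure
            "Cray C90 (1 CPU)": (460, 2_900_000),
            "IBM RS/6000 590":  ( 26,   100_000),
            "SGI R8000":        ( 22,   100_000),
        }

        for name, (mflops, price) in systems.items():
            ppr = mflops * 1e6 / price
            print(f"{name:18s} {ppr:6.0f} FLOPS per dollar")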

  1. 1993 Gordon Bell Prize Winners

    NASA Technical Reports Server (NTRS)

    Karp, Alan H.; Simon, Horst; Heller, Don; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The Gordon Bell Prize recognizes significant achievements in the application of supercomputers to scientific and engineering problems. In 1993, finalists were named for work in three categories: (1) Performance, which recognizes those who solved a real problem in the quickest elapsed time. (2) Price/performance, which encourages the development of cost-effective supercomputing. (3) Compiler-generated speedup, which measures how well compiler writers are facilitating the programming of parallel processors. The winners were announced November 17 at the Supercomputing 93 conference in Portland, Oregon. Gordon Bell, an independent consultant in Los Altos, California, is sponsoring $2,000 in prizes each year for 10 years to promote practical parallel processing research. This is the sixth year of the prize, which Computer administers. Something unprecedented in Gordon Bell Prize competition occurred this year: A computer manufacturer was singled out for recognition. Nine entries reporting results obtained on the Cray C90 were received, seven of the submissions orchestrated by Cray Research. Although none of these entries showed sufficiently high performance to win outright, the judges were impressed by the breadth of applications that ran well on this machine, all nine running at more than a third of the peak performance of the machine.

  2. Trinity to Trinity 1945-2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moniz, Ernest; Carr, Alan; Bethe, Hans

    The Trinity Test of July 16, 1945 was the first full-scale, real-world test of a nuclear weapon; with the new Trinity supercomputer Los Alamos National Laboratory's goal is to do this virtually, in 3D. Trinity was the culmination of a fantastic effort of groundbreaking science and engineering by hundreds of men and women at Los Alamos and other Manhattan Project sites. It took them less than two years to change the world. The Laboratory is marking the 70th anniversary of the Trinity Test because it not only ushered in the Nuclear Age, but with it the origin of today’s advanced supercomputing. We live in the Age of Supercomputers due in large part to nuclear weapons science here at Los Alamos. National security science, and nuclear weapons science in particular, at Los Alamos National Laboratory have provided a key motivation for the evolution of large-scale scientific computing. Beginning with the Manhattan Project there has been a constant stream of increasingly significant, complex problems in nuclear weapons science whose timely solutions demand larger and faster computers. The relationship between national security science at Los Alamos and the evolution of computing is one of interdependence.

  3. Improving Memory Error Handling Using Linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

    As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will become too frequent to manage by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process-oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both the hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. Without such memory error handling, it will not be feasible to manually replace the DIMMs that will fail daily on a machine with 32-128 petabytes of memory. Testing reveals that the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
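
    The sketch below illustrates the kind of kernel interface such a process relies on; it assumes the memory-failure sysfs file /sys/devices/system/memory/soft_offline_page (present when the kernel is built with memory-failure support) and uses a made-up physical address. It is not the LANL tooling itself, which the abstract says is being automated inside Zenoss.

        # Illustrative sketch only; requires root and a kernel with memory-failure support.
        SOFT_OFFLINE = "/sys/devices/system/memory/soft_offline_page"

        def soft_offline(phys_addr: int) -> None:
            """Ask the kernel to migrate data off a suspect page and stop using it."""
            with open(SOFT_OFFLINE, "w") as f:
                f.write(f"{phys_addr:#x}\n")

        # Example: retire a (hypothetical) page address reported by an EDAC/MCE log.
        # soft_offline(0x123456000)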

  4. Cots Correlator Platform

    NASA Astrophysics Data System (ADS)

    Schaaf, Kjeld; Overeem, Ruud

    2004-06-01

    Moore’s law is best exploited by using consumer market hardware. In particular, the gaming industry pushes the limit of processor performance, thus reducing the cost per raw flop even faster than Moore’s law predicts. Next to the cost benefits of Commercial Off-The-Shelf (COTS) processing resources, there is a rapidly growing experience pool in cluster based processing. The typical Beowulf cluster of PCs is a well-known example of such a supercomputer. Multiple examples exist of specialised cluster computers based on more advanced server nodes or even gaming stations. All these cluster machines build upon the same knowledge about cluster software management, scheduling, middleware libraries and mathematical libraries. In this study, we have integrated COTS processing resources and cluster nodes into a very high performance processing platform suitable for streaming data applications, in particular to implement a correlator. The required processing power for the correlator in modern radio telescopes is in the range of the larger supercomputers, which motivates the usage of supercomputer technology. Raw processing power is provided by graphical processors and is combined with an Infiniband host bus adapter with integrated data stream handling logic. With this processing platform a scalable correlator can be built with continuously growing processing power at consumer market prices.
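
    To make the correlator workload concrete, the following minimal NumPy sketch performs the core FX-correlator operation for a single pair of antenna streams (channelize with an FFT, cross-multiply, accumulate); the block length, noise model, and delay are illustrative assumptions, and the platform described above performs this on GPUs for many antenna pairs in parallel.

        import numpy as np

        def fx_correlate(x, y, nchan=256):
            """FX correlation of two sampled streams: FFT each block (F),
            then cross-multiply and accumulate the spectra (X)."""
            nblocks = min(len(x), len(y)) // nchan
            acc = np.zeros(nchan // 2 + 1, dtype=complex)
            for b in range(nblocks):
                X = np.fft.rfft(x[b * nchan:(b + 1) * nchan])
                Y = np.fft.rfft(y[b * nchan:(b + 1) * nchan])
                acc += X * np.conj(Y)
            return acc / nblocks

        rng = np.random.default_rng(1)
        sky = rng.normal(size=65536)                               # common sky signal
        ant1 = sky + 0.5 * rng.normal(size=sky.size)               # antenna 1
        ant2 = np.roll(sky, 3) + 0.5 * rng.normal(size=sky.size)   # antenna 2, delayed
        cross_spectrum = fx_correlate(ant1, ant2)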

  5. Trinity to Trinity 1945-2015

    ScienceCinema

    Moniz, Ernest; Carr, Alan; Bethe, Hans; Morrison, Phillip; Ramsay, Norman; Teller, Edward; Brixner, Berlyn; Archer, Bill; Agnew, Harold; Morrison, John

    2018-01-16

    The Trinity Test of July 16, 1945 was the first full-scale, real-world test of a nuclear weapon; with the new Trinity supercomputer Los Alamos National Laboratory's goal is to do this virtually, in 3D. Trinity was the culmination of a fantastic effort of groundbreaking science and engineering by hundreds of men and women at Los Alamos and other Manhattan Project sites. It took them less than two years to change the world. The Laboratory is marking the 70th anniversary of the Trinity Test because it not only ushered in the Nuclear Age, but with it the origin of today’s advanced supercomputing. We live in the Age of Supercomputers due in large part to nuclear weapons science here at Los Alamos. National security science, and nuclear weapons science in particular, at Los Alamos National Laboratory have provided a key motivation for the evolution of large-scale scientific computing. Beginning with the Manhattan Project there has been a constant stream of increasingly significant, complex problems in nuclear weapons science whose timely solutions demand larger and faster computers. The relationship between national security science at Los Alamos and the evolution of computing is one of interdependence.

  6. KNBD: A Remote Kernel Block Server for Linux

    NASA Technical Reports Server (NTRS)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC Clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system is all in user-space. Migrating their I/O services to the kernel could provide a performance boost by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  7. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    PubMed

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are at the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion-a big data interface on the Tianhe-2 supercomputer-to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.

  9. Math and Science Gateways to California's Fastest Growing Careers

    ERIC Educational Resources Information Center

    EdSource, 2008

    2008-01-01

    Some students--and parents--think math and science are not too important for their future. As everyday life becomes more dependent on technology, most people will need a better background in math and science to succeed in today's global economy. To get high-paying jobs in some of California's fastest-growing occupations, a strong background in…

  10. Employment, Salary and Placement Information Related to Career Programs at Johnson County Community College.

    ERIC Educational Resources Information Center

    Conklin, Karen A.

    Johnson County Community College (JCCC), in Kansas, offers formal career programs for 12 of the 20 fastest growing occupations requiring postsecondary training, and for 13 of the 30 occupations projected to be the fastest growing between 1990 and 2005. Following an introduction to general trends and data sources, this guide presents profiles of…

  11. Seismic signal processing on heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in the recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient, however they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is design of a prototype for such library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that require dedicated HPC solutions. The chosen application is using a wide range of common signal processing methods, which include various IIR filter designs, amplitude and phase correlation, computing the analytic signal, and discrete Fourier transforms. Furthermore, various processing methods specific for seismology, like rotation of seismic traces, are used. Efficient implementation of all these methods on the GPU-accelerated systems represents several challenges. In particular, it requires a careful distribution of work between the sequential processors and accelerators. Furthermore, since the application is designed to process very large volumes of data, special attention had to be paid to the efficient use of the available memory and networking hardware resources in order to reduce intensity of data input and output. In our contribution we will explain the software architecture as well as principal engineering decisions used to address these challenges. We will also describe the programming model based on C++ and CUDA that we used to develop the software. Finally, we will demonstrate performance improvements achieved by using the heterogeneous computing architecture. This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID d26.
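
    As a small, CPU-only illustration of the ambient noise correlation step at the heart of this use case, the sketch below applies simple spectral whitening and a frequency-domain cross-correlation to two synthetic noise records; the whitening scheme, record length, and lag are illustrative assumptions and do not reproduce the GPU-accelerated library described above.

        import numpy as np

        def noise_cross_correlation(u, v, eps=1e-10):
            """Frequency-domain cross-correlation of two equally sampled noise
            records, with crude spectral whitening (divide by amplitude)."""
            n = len(u)
            U, V = np.fft.rfft(u), np.fft.rfft(v)
            U /= (np.abs(U) + eps)       # whiten station 1
            V /= (np.abs(V) + eps)       # whiten station 2
            cc = np.fft.irfft(U * np.conj(V), n)
            return np.fft.fftshift(cc)   # zero lag at the centre

        rng = np.random.default_rng(2)
        common = rng.normal(size=4096)                       # shared noise source
        sta1 = common + 0.3 * rng.normal(size=4096)
        sta2 = np.roll(common, 25) + 0.3 * rng.normal(size=4096)
        cc = noise_cross_correlation(sta1, sta2)
        print(int(np.argmax(cc)) - len(cc) // 2)             # recovered lag (sign depends on convention)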

  12. Environmental Influences on Kelp Performance across the Reproductive Period: An Ecological Trade-Off between Gametophyte Survival and Growth?

    PubMed Central

    Mohring, Margaret B.; Kendrick, Gary A.; Wernberg, Thomas; Rule, Michael J.; Vanderklift, Mathew A.

    2013-01-01

    Most kelps (order Laminariales) exhibit distinct temporal patterns in zoospore production, gametogenesis and gametophyte reproduction. Natural fluctuations in ambient environmental conditions influence the intrinsic characteristics of gametes, which define their ability to tolerate varied conditions. The aim of this work was to document seasonal patterns in reproduction and gametophyte growth and survival of Ecklonia radiata (C. Agardh) J. Agardh in south-western Australia. These results were related to patterns in local environmental conditions in an attempt to ascertain which factors explain variation throughout the season. E. radiata was fertile (produced zoospores) for three and a half months over summer and autumn. Every two weeks during this time, gametophytes were grown in a range of temperatures (16–22°C) in the laboratory. Zoospore densities were highly variable among sample periods; however, zoospores released early in the season produced gametophytes which had greater rates of growth and survival, and these rates declined towards the end of the reproductive season. Growth rates of gametophytes were positively related to day length, with the fastest growing recruits released when the days were longest. Gametophytes consistently survived best in the lowest temperature (16°C), yet exhibited optimum growth in higher culture temperatures (20–22°C). These results suggest that E. radiata releases gametes when conditions are favourable for growth, and E. radiata gametophytes are tolerant of the range of temperatures observed at this location. E. radiata releases the healthiest gametophytes when day length and temperature conditions are optimal for better germination, growth, and sporophyte production, perhaps as a mechanism to help compete against other species for space and other resources. PMID:23755217

  13. The Lysis of Pathogenic Escherichia coli by Bacteriophages Releases Less Endotoxin Than by β-Lactams.

    PubMed

    Dufour, Nicolas; Delattre, Raphaëlle; Ricard, Jean-Damien; Debarbieux, Laurent

    2017-06-01

    Although numerous experimental data assess the efficacy of phage therapy, questions regarding the safety of this approach are not sufficiently addressed. In particular, as phages can kill bacterial cells within <10 minutes, the associated endotoxin release (ER) in severe infections caused by gram-negative bacteria could be a matter of concern. Two therapeutic virulent phages and 4 reference antibiotics were studied in vitro for their ability to kill 2 pathogenic strains of Escherichia coli and generate an ER. The early interaction (first 3 hours) between these actors was assessed over time by studying the instantaneous cell viability, the colony-forming unit count, the concentration of free endotoxin released, and the cell morphology under the light microscope. While β-lactams have a relatively slow effect, both tested phages, as well as amikacin, were able to rapidly abolish the bacterial growth. Even when considering the fastest phage (cell lysis in 9 minutes), the concentrations of phage-induced ER never reached the highest values, which were recorded with antibiotic treatments. Cumulative concentrations of endotoxin over time in phage-treated conditions were lower than those observed with β-lactams and close to those observed with amikacin. Whereas β-lactams were responsible for strong cell morphology changes (spheroplast with imipenem, filamentous cells with cefoxitin and ceftriaxone), amikacin and phages did not modify cell shape but produced intracellular inclusion bodies. This work provides important and comforting data regarding the safety of phage therapy. Therapeutically relevant phages, with their low endotoxin release profile and fast bactericidal effect, are not inferior to β-lactams. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.

  14. Assessment of different polymers and drug loads for fused deposition modeling of drug loaded implants.

    PubMed

    Kempin, Wiebke; Franz, Christian; Koster, Lynn-Christine; Schneider, Felix; Bogdahn, Malte; Weitschies, Werner; Seidlitz, Anne

    2017-06-01

    The 3D printing technique of fused deposition modeling® (FDM) has lately come into focus as a potential fabrication technique for pharmaceutical dosage forms and medical devices that allows the preparation of delivery systems with nearly any shape. This is particularly promising for implants administered at application sites with a high anatomical variability where an individual shape adaptation appears reasonable. In this work different polymers (Eudragit®RS, polycaprolactone (PCL), poly(l-lactide) (PLLA) and ethyl cellulose (EC)) were evaluated with respect to their suitability for FDM of drug loaded implants, and their drug release behaviour was assessed. The fluorescent dye quinine was used as a model drug to visualize drug distribution in filaments and implants. Quinine loaded filaments were produced by solvent casting and subsequent hot melt extrusion (HME), and model implants were printed as hollow cylinders using a standard FDM printer. Parameters were found at which model implants (hollow cylinders, outer diameter 4-5 mm, height 3 mm) could be produced from all tested polymers. The drug release, which was examined by incubation of the printed implants in phosphate buffered saline solution (PBS) pH 7.4, was highly dependent on the polymer used. The fastest relative drug release of approximately 76% in 51 days was observed for PCL, and the lowest for Eudragit®RS and EC with less than 5% of quinine release in 78 and 100 days, respectively. For PCL further filaments were prepared with different quinine loads ranging from 2.5% to 25%, and thermal analysis proved the presence of a solid dispersion of quinine in the polymer for all tested concentrations. Increasing the drug load also increased the overall percentage of drug released to the medium since nearly the same absolute amount of quinine remained trapped in PCL at the end of the drug release studies. This knowledge is valuable for future developments of printed implants with a desired drug release profile that might be controlled by the choice of the polymer and the drug load. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Time-dependent Mechanisms in Beta-cell Glucose Sensing

    PubMed Central

    Vagn Korsgaard, Thomas

    2006-01-01

    The relation between plasma glucose and insulin release from pancreatic beta-cells is not stationary in the sense that a given glucose concentration leads to a specific rate of insulin secretion. A number of time-dependent mechanisms appear to exist that modify insulin release both on a short and a longer time scale. Typically, two phases are described. The first phase, lasting up to 10 min, is a pulse of insulin release in response to fast changes in glucose concentration. The second phase is a more steady increase of insulin release over minutes to hours, if the elevated glucose concentration is sustained. The paper describes the glucose sensing mechanism via the complex dynamics of the key enzyme glucokinase, which controls the first step in glucose metabolism: phosphorylation of glucose to glucose-6-phosphate. Three time-dependent phenomena (mechanisms) are described. The fastest, corresponding to the first phase, is a delayed negative feedback regulating the glucokinase activity. Due to the delay, a rapid glucose increase will cause a burst of activity in the glucose sensing system, before the glucokinase is down-regulated. The second mechanism corresponds to the translocation of glucokinase from an inactive to an active form. As the translocation is controlled by the product(s) of the glucokinase reaction rather than by the substrate glucose, this mechanism gives a positive, but saturable, feedback. Finally, the release of the insulin granules is assumed to be enhanced by previous glucose exposure, giving a so-called glucose memory to the beta-cells. The effect depends on the insulin release of the cells, and this mechanism constitutes a second positive, saturable feedback system. Taken together, the three phenomena describe most of the glucose sensing behaviour of the beta-cells. The results indicate that the insulin release is not a precise function of the plasma glucose concentration. It rather looks as if the beta-cells just increase the insulin production, until the plasma glucose has returned to normal. This type of integral control has the advantage that the precise glucose sensitivity of the beta-cells is not important for normal glucose homeostasis. PMID:19669468

  16. Preparation of fast response superabsorbent hydrogels by radiation polymerization and crosslinking of N-isopropylacrylamide in solution

    NASA Astrophysics Data System (ADS)

    Abd El-Mohdy, H. L.; Safrany, Agnes

    2008-03-01

    Macroporous temperature-responsive poly(N-isopropylacrylamide) (PNIPAAm) hydrogels with high equilibrium swelling and fast response rates were obtained by 60Co γ- and electron beam (EB) irradiation of aqueous N-isopropylacrylamide (NIPAAm) monomer solutions. The effects of irradiation temperature, dose, and the addition of a pore-forming agent on the swelling ratio and on the kinetics of swelling and shrinking of the PNIPAAm gels were studied. The gels synthesized above the LCST exhibited the highest equilibrium swelling (300-400) and the fastest response rates, measured in minutes. Scanning electron microscope (SEM) pictures revealed that the gels synthesized above the LCST have larger pores than those prepared at temperatures below the LCST. The gels showed a reversible response to cyclical changes in temperature and might be used in a pulsed drug delivery device. The gels synthesized above the LCST exhibited the highest testosterone propionate release.

  17. MOST: a software environment for constraint-based metabolic modeling and strain design.

    PubMed

    Kelley, James J; Lane, Anatoliy; Li, Xiaowei; Mutthoju, Brahmaji; Maor, Shay; Egen, Dennis; Lun, Desmond S

    2015-02-15

    MOST (metabolic optimization and simulation tool) is a software package that implements GDBB (genetic design through branch and bound) in an intuitive, user-friendly interface with Excel-like editing functionality, as well as implementing FBA (flux balance analysis) and supporting systems biology markup language and comma-separated values files. GDBB is currently the fastest algorithm for finding gene knockouts predicted by FBA to increase production of desired products, but until the release of MOST it had only been available through a command-line interface, which is difficult to use for those without programming knowledge. MOST is distributed for free under the GNU General Public License. The software and full documentation are available at http://most.ccib.rutgers.edu/. dslun@rutgers.edu. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
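
    For context on the FBA component mentioned above, flux balance analysis is a linear program: maximize a target flux subject to steady-state mass balance and flux bounds. The sketch below solves it for a toy three-reaction network with scipy; the network, bounds, and objective are illustrative assumptions and are far smaller than the genome-scale models MOST targets.

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass/export).
        # Rows = internal metabolites (A, B); columns = reactions (R1, R2, R3).
        S = np.array([[1, -1,  0],
                      [0,  1, -1]])

        bounds = [(0, 10), (0, 10), (0, 10)]   # lower/upper flux bounds per reaction
        c = [0, 0, -1]                         # maximize v3  ==  minimize -v3

        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print(res.x)                           # optimal flux distribution, here [10, 10, 10]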

  18. Performance of the dot product function in radiative transfer code SORD

    NASA Astrophysics Data System (ADS)

    Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Aliaksandr; Holben, Brent

    2016-10-01

    Successive orders of scattering radiative transfer (RT) codes frequently call the scalar (dot) product function. In this paper, we study the performance of several implementations of the dot product in the RT code SORD, using 50 scenarios for light scattering in the atmosphere-surface system. In the dot product function, we use the unrolled-loops technique with different unrolling factors. We also considered the intrinsic Fortran functions. We show results for two machines: one with the ifort compiler under Windows, and one with pgf90 under Linux. The intrinsic DOT_PRODUCT function showed the best performance with ifort. For pgf90, the dot product implemented with an unrolling factor of 4 was the fastest. The RT code SORD, together with the interface that runs all the mentioned tests, is publicly available from ftp://maiac.gsfc.nasa.gov/pub/skorkin/SORD_IP_16B (current release) or by email request from the corresponding (first) author.
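
    The structure of an unroll-by-4 dot product with a cleanup loop, as studied above, is sketched below in Python for illustration only; in the Fortran code the benefit comes from exposing independent multiply-adds to the compiler and the hardware pipelines, which a Python loop does not exploit.

        def dot_unrolled4(a, b):
            """Dot product with the loop unrolled by a factor of 4,
            followed by a cleanup loop for the remaining elements."""
            n = min(len(a), len(b))
            s0 = s1 = s2 = s3 = 0.0
            i = 0
            while i + 3 < n:
                s0 += a[i]     * b[i]
                s1 += a[i + 1] * b[i + 1]
                s2 += a[i + 2] * b[i + 2]
                s3 += a[i + 3] * b[i + 3]
                i += 4
            total = s0 + s1 + s2 + s3
            for j in range(i, n):      # remainder when n is not a multiple of 4
                total += a[j] * b[j]
            return total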

  19. Cardioprotective peptides from marine sources.

    PubMed

    Harnedy, Padraigín A; FitzGerald, Richard J

    2013-05-01

    Elevated blood pressure or hypertension is one of the fastest growing health problems worldwide. Although the etiology of essential hypertension has a genetic component, dietary factors play an important role. With the high costs and adverse side-effects associated with synthetic antihypertensive drugs and the awareness of the link between diet and health there has been increased focus on identification of food components that may contribute to cardiovascular health. In recent years special interest has been paid to the cardioprotective activity of peptides derived from food proteins including marine proteins. These peptides are latent within the sequence of the parent protein and only become active when released by proteolytic digestion during gastrointestinal digestion or through food processing. Current data on antihypertensive activity of marine-derived protein hydrolysates/peptides in animal and human studies is reviewed herein. Furthermore, products containing protein hydrolysates/peptides from marine origin with antihypertensive effects are discussed.

  20. Influx of Asian Pacific Americans/Veterans in American Universities

    ERIC Educational Resources Information Center

    Bailey, Steven

    2011-01-01

    Asian Pacific Americans (APAs) are one of the fastest growing racial/ethnic groups within the United States and within the vast college student population (Escueta and O'Brien, 1995). APAs represented 5.8% of all college students in 1996, an 83.8% gain in population since 1986 (Wilds and Wilson, 1998), and the fastest increase amongst all…

  1. The Effects of Targeted, Connectivism-Based Information Literacy Instruction on Latino Students Information Literacy Skills and Library Usage Behavior

    ERIC Educational Resources Information Center

    Walsh, John

    2013-01-01

    The United States is experiencing a socio-demographic shift in population and education. Latinos are the fastest growing segment of the population on the national level and in higher education. The Latino student population growth rate and Latino college completion rate are not reciprocal. While Latino students are the fastest growing demographic…

  2. Hot Jobs for the 21st Century. Facts on Working Women.

    ERIC Educational Resources Information Center

    Women's Bureau (DOL), Washington, DC.

    Between 1998-2008, women's participation in the labor force is expected to increase by 15 percent and men's, by 10 percent. Two views of growth occupations are those with the largest job growth and those with the fastest growth. Employment in professional specialty occupations will increase the fastest and add the most jobs. Much of this growth is…

  3. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation, and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  4. Multi-petascale highly efficient parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated into the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate and at the same time supports DMA functionality, allowing for parallel message passing.
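
    To illustrate the torus interconnect named above, the sketch below lists the direct neighbours of a node in a torus of arbitrary dimension; the 4 x 4 x 4 x 4 x 4 shape is a hypothetical example, not the patent's actual machine dimensions.

        def torus_neighbors(coord, dims):
            """Nearest neighbours of a node in a torus network: one hop in the
            +1 and -1 direction along each dimension, wrapping at the edges."""
            neighbors = []
            for d, size in enumerate(dims):
                for step in (-1, 1):
                    n = list(coord)
                    n[d] = (n[d] + step) % size
                    neighbors.append(tuple(n))
            return neighbors

        # A node in a hypothetical 4 x 4 x 4 x 4 x 4 five-dimensional torus
        # has 10 direct neighbours.
        print(torus_neighbors((0, 0, 0, 0, 0), (4, 4, 4, 4, 4)))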

  5. The TESS science processing operations center

    NASA Astrophysics Data System (ADS)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; Smith, Jeffrey C.; Caldwell, Douglas A.; Chacon, A. D.; Henze, Christopher; Heiges, Cory; Latham, David W.; Morgan, Edward; Swade, Daryl; Rinehart, Stephen; Vanderspek, Roland

    2016-08-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover 1,000 small planets with Rp < 4 R⊕ and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).

  6. CFD code evaluation for internal flow modeling

    NASA Technical Reports Server (NTRS)

    Chung, T. J.

    1990-01-01

    Research on computational fluid dynamics (CFD) code evaluation, with emphasis on supercomputing in reacting flows, is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, applications of supercomputing to the reacting-flow Navier-Stokes equations, including shock waves and turbulence, and to combustion instability problems associated with solid and liquid propellants are included. Evaluations of codes developed by other organizations are not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.

  7. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.

  8. Supercomputer modeling of hydrogen combustion in rocket engines

    NASA Astrophysics Data System (ADS)

    Betelin, V. B.; Nikitin, V. F.; Altukhov, D. I.; Dushin, V. R.; Koo, Jaye

    2013-08-01

    Hydrogen, being an ecological fuel, is now very attractive to rocket engine designers. However, peculiarities of hydrogen combustion kinetics, such as the presence of zones of inverse dependence of the reaction rate on pressure, prevent hydrogen engines from being used in all stages without support from other types of engines, which often reduces the ecological gains of using hydrogen back to zero. Computer-aided design of new, effective and clean hydrogen engines needs mathematical tools for supercomputer modeling of hydrogen-oxygen component mixing and combustion in rocket engines. The paper presents the results of developing, verifying and validating a mathematical model that makes it possible to simulate unsteady processes of ignition and combustion in rocket engines.

  9. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high temperature reactive gasdynamic computations.

  10. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    PubMed

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. akiyama@cs.titech.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  11. Optimal wavelength-space crossbar switches for supercomputer optical interconnects.

    PubMed

    Roudas, Ioannis; Hemenway, B Roe; Grzybowski, Richard R; Karinou, Fotini

    2012-08-27

    We propose a most economical design of the Optical Shared MemOry Supercomputer Interconnect System (OSMOSIS) all-optical, wavelength-space crossbar switch fabric. It is shown, by analysis and simulation, that the total number of on-off gates required for the proposed N × N switch fabric can scale asymptotically as N ln N if the number of input/output ports N can be factored into a product of small primes. This is of the same order of magnitude as Shannon's lower bound for switch complexity, according to which the minimum number of two-state switches required for the construction of a N × N permutation switch is log2 (N!).
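
    For readers comparing the two growth rates quoted above, Stirling's approximation makes the connection explicit:

        \log_2(N!) \;=\; \sum_{k=2}^{N} \log_2 k \;\approx\; N \log_2 N - \frac{N}{\ln 2} + O(\log N),
        \qquad
        N \ln N \;=\; (\ln 2)\, N \log_2 N,

    so an N ln N gate count and the log2(N!) lower bound both grow as N log N and differ only by a constant factor.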

  12. CONVEX mini manual

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.

    1993-01-01

    The use of the CONVEX computers that are an integral part of the Supercomputing Network Subsystems (SNS) of the Central Scientific Computing Complex of LaRC is briefly described. Features of the CONVEX computers that are significantly different from the CRAY supercomputers are covered, including: FORTRAN, C, architecture of the CONVEX computers, the CONVEX environment, batch job submittal, debugging, performance analysis, utilities unique to CONVEX, and documentation. This revision reflects the addition of the Applications Compiler and the X-based debugger, CXdb. The document is intended for all CONVEX users as a ready reference to frequently asked questions and to more detailed information contained within the vendor manuals. It is appropriate for both the novice and the experienced user.

  13. Role of High-End Computing in Meeting NASA's Science and Engineering Challenges

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak

    2006-01-01

    High-End Computing (HEC) has always played a major role in meeting the modeling and simulation needs of various NASA missions. With NASA's newest 62 teraflops Columbia supercomputer, HEC is having an even greater impact within the Agency and beyond. Significant cutting-edge science and engineering simulations in the areas of space exploration, Shuttle operations, Earth sciences, and aeronautics research are already occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. The talk will describe how the integrated supercomputing production environment is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions.

  14. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    NASA Astrophysics Data System (ADS)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  15. An Advanced User Interface Approach for Complex Parameter Study Process Specification in the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob; Yan, Jerry C. (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have now become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are now seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers great resource opportunity but at the expense of great difficulty of use. We present an approach to this problem which stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  16. Beneficial effect of Cu on Ti-Nb-Ta-Zr sputtered uniform/adhesive gum films accelerating bacterial inactivation under indoor visible light.

    PubMed

    Alhussein, Akram; Achache, Sofiane; Deturche, Regis; Sanchette, Frederic; Pulgarin, Cesar; Kiwi, John; Rtimi, Sami

    2017-04-01

    This article presents the evidence for the significant effect of copper in accelerating bacterial inactivation on Ti-Nb-Ta-Zr (TNTZ) sputtered films on glass up to a Cu content of 8.3 at.%. These films were deposited by dc magnetron co-sputtering of an alloy target Ti-23Nb-0.7Ta-2Zr (at.%) and a Cu target. The fastest bacterial inactivation of E. coli on this latter TNTZ-Cu surface proceeded within ∼75 min. The films deposited by magnetron sputtering are chemically homogeneous. The film roughness evaluated by atomic force microscopy (AFM) on the TNTZ-Cu 8.3 at.% Cu sample presented an RMS value of 20.1 nm, the highest RMS of any Cu-sputtered TNTZ sample. The implication of the RMS value found for this sample, leading to the fastest interfacial bacterial inactivation kinetics, is also discussed. Values for the Young's modulus and hardness are reported for the TNTZ films in the presence of various Cu contents. Evaluation of the bacterial inactivation kinetics of E. coli under low-intensity actinic hospital light and in the dark was carried out. The stable, repetitive bacterial inactivation was consistent with the extremely low Cu-ion release from the samples of 0.4 ppb. Evidence for the intervention of Cu as the semiconductor CuO during bacterial inactivation at the TNTZ-Cu interface is presented through the dependence of bacterial inactivation on the applied light intensity. The mechanism of CuO intervention under light is suggested based on the pH and potential changes registered during bacterial disinfection. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. DDBJ read annotation pipeline: a cloud computing-based pipeline for high-throughput analysis of next-generation sequencing data.

    PubMed

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-08-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.

  18. DDBJ Read Annotation Pipeline: A Cloud Computing-Based Pipeline for High-Throughput Analysis of Next-Generation Sequencing Data

    PubMed Central

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-01-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/. PMID:23657089

  19. [Construction and application of bioinformatic analysis platform for aquatic pathogen based on the MilkyWay-2 supercomputer].

    PubMed

    Fang, Xiang; Li, Ning-qiu; Fu, Xiao-zhe; Li, Kai-bin; Lin, Qiang; Liu, Li-hui; Shi, Cun-bin; Wu, Shu-qin

    2015-07-01

    As a key component of life science, bioinformatics has been widely applied in genomics, transcriptomics, and proteomics. However, the requirement of high-performance computers rather than common personal computers for constructing a bioinformatics platform significantly limited the application of bioinformatics in aquatic science. In this study, we constructed a bioinformatic analysis platform for aquatic pathogens based on the MilkyWay-2 supercomputer. The platform consisted of three functional modules, including genomic and transcriptomic sequencing data analysis, protein structure prediction, and molecular dynamics simulations. To validate the practicability of the platform, we performed bioinformatic analysis on aquatic pathogenic organisms. For example, genes of Flavobacterium johnsoniae M168 were identified and annotated via Blast searches, GO and InterPro annotations. Protein structural models for five small segments of grass carp reovirus HZ-08 were constructed by homology modeling. Molecular dynamics simulations were performed on outer membrane protein A of Aeromonas hydrophila, and the changes of system temperature, total energy, root mean square deviation and conformation of the loops during equilibration were also observed. These results showed that the bioinformatic analysis platform for aquatic pathogens has been successfully built on the MilkyWay-2 supercomputer. This study will provide insights into the construction of bioinformatic analysis platforms for other subjects.

  20. A special purpose silicon compiler for designing supercomputing VLSI systems

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems, a special purpose silicon compiler is needed in which: the computational and communication structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should be integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate are reduced compared with silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  1. Integration of Russian Tier-1 Grid Center with High Performance Computers at NRC-KI for LHC experiments and beyond HENP

    NASA Astrophysics Data System (ADS)

    Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.

    2015-12-01

    The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run2). The need for simulation, data processing and analysis would overwhelm the expected capacity of grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at Kurchatov Institute (NRC-KI) in Moscow is a part of WLCG and it will process, simulate and store up to 10% of total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. The delegation of even a fraction of these supercomputing resources to LHC computing will notably increase total capacity. In 2014, the development of a portal combining the Tier-1 and a supercomputer at Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences like biology with genome sequencing analysis and astrophysics with cosmic-ray analysis, antimatter and dark matter searches, etc.

  2. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership-class supercomputer at Oak Ridge National Laboratory.
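
    One of the reordering ideas named above, spectral bisection, orders MPI ranks by the Fiedler vector of the communication graph so that heavily communicating ranks land close together in the allocated node list. The sketch below illustrates that idea only; it is not the authors' mpiP/mpiAproxy tooling, and the random communication matrix is a stand-in for measured data.

    # Hedged sketch of spectral rank ordering from a measured communication matrix.
    import numpy as np

    def spectral_rank_order(comm_matrix):
        """comm_matrix[i, j] = bytes exchanged between ranks i and j."""
        w = (comm_matrix + comm_matrix.T) / 2.0     # symmetrize message volumes
        lap = np.diag(w.sum(axis=1)) - w            # graph Laplacian of the communication graph
        vals, vecs = np.linalg.eigh(lap)            # eigenvalues in ascending order
        fiedler = vecs[:, 1]                        # eigenvector of the 2nd-smallest eigenvalue
        return np.argsort(fiedler)                  # permutation placing heavy communicators nearby

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        c = rng.integers(0, 10, size=(8, 8)).astype(float)   # stand-in for measured traffic
        print(spectral_rank_order(c))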

  3. Use of high performance networks and supercomputers for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  4. The Top 100. The Fastest Growing Careers for the 21st Century. Revised Edition.

    ERIC Educational Resources Information Center

    1998

    This publication presents 100 careers the U.S. Department of Labor and other sources project as the fastest growing through the year 2006. A shaded bar on the bottom of the title page of each article contains a listing of codes for three commonly used government classification systems. Shaded bars at the bottom of other pages provide quick facts.…

  5. KAPS (kinematic assessment of passive stretch): a tool to assess elbow flexor and extensor spasticity after stroke using a robotic exoskeleton.

    PubMed

    Centen, Andrew; Lowrey, Catherine R; Scott, Stephen H; Yeh, Ting-Ting; Mochizuki, George

    2017-06-19

    Spasticity is a common sequela of stroke. Traditional assessment methods include relatively coarse scales that may not capture all characteristics of elevated muscle tone. Thus, the aim of this study was to develop a tool to quantitatively assess post-stroke spasticity in the upper extremity. Ninety-six healthy individuals and 46 individuals with stroke participated in this study. The kinematic assessment of passive stretch (KAPS) protocol consisted of passive elbow stretch in flexion and extension across an 80° range in 5 movement durations. Seven parameters were identified and assessed to characterize spasticity (peak velocity, final angle, creep (or release), between-arm peak velocity difference, between-arm final angle, between-arm creep, and between-arm catch angle). The fastest movement duration (600 ms) was most effective at identifying impairment in each parameter associated with spasticity. A decrease in peak velocity during passive stretch between the affected and unaffected limb was most effective at identifying individuals as impaired. Spasticity was also associated with a decreased passive range (final angle) and a classic 'catch and release' as seen through between-arm catch and creep metrics. The KAPS protocol and robotic technology can provide a sensitive and quantitative assessment of post-stroke elbow spasticity not currently attainable through traditional measures.
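
    Two of the KAPS parameters named above, peak velocity and the between-arm peak-velocity difference, can be illustrated with a short sketch. This is not the authors' analysis code; the 1 kHz sampling rate and the synthetic angle traces are assumptions for illustration only.

    # Hedged sketch: peak angular velocity of a passive stretch and the
    # between-arm difference in peak velocity, from sampled elbow-angle traces.
    import numpy as np

    def peak_velocity(angle_deg, sample_rate_hz):
        velocity = np.gradient(angle_deg) * sample_rate_hz   # numerical derivative, deg/s
        return float(np.max(np.abs(velocity)))

    def between_arm_peak_velocity_difference(affected, unaffected, sample_rate_hz=1000.0):
        return peak_velocity(unaffected, sample_rate_hz) - peak_velocity(affected, sample_rate_hz)

    if __name__ == "__main__":
        t = np.linspace(0.0, 0.6, 600)                              # a 600 ms stretch at 1 kHz
        unaffected = 80.0 * (1.0 - np.cos(np.pi * t / 0.6)) / 2.0   # smooth 80 degree excursion
        affected = 0.8 * unaffected                                 # slower, smaller excursion
        print(between_arm_peak_velocity_difference(affected, unaffected))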

  6. The Relationship Between Pitching Mechanics and Injury: A Review of Current Concepts

    PubMed Central

    Chalmers, Peter N.; Wimmer, Markus A.; Verma, Nikhil N.; Cole, Brian J.; Romeo, Anthony A.; Cvetanovich, Gregory L.; Pearl, Michael L.

    2017-01-01

    Context: The overhand pitch is one of the fastest known human motions and places enormous forces and torques on the upper extremity. Shoulder and elbow pain and injury are common in high-level pitchers. A large body of research has been conducted to understand the pitching motion. Evidence Acquisition: A comprehensive review of the literature was performed to gain a full understanding of all currently available biomechanical and clinical evidence surrounding pitching motion analysis. These motion analysis studies use video motion analysis, electromyography, electromagnetic sensors, and markered motion analysis. This review includes studies performed between 1983 and 2016. Study Design: Clinical review. Level of Evidence: Level 5. Results: The pitching motion is a kinetic chain, in which the force generated by the large muscles of the lower extremity and trunk during the wind-up and stride phases are transferred to the ball through the shoulder and elbow during the cocking and acceleration phases. Numerous kinematic factors have been identified that increase shoulder and elbow torques, which are linked to increased risk for injury. Conclusion: Altered knee flexion at ball release, early trunk rotation, loss of shoulder rotational range of motion, increased elbow flexion at ball release, high pitch velocity, and increased pitcher fatigue may increase shoulder and elbow torques and risk for injury. PMID:28107113

  7. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We show the Cray shared-memory vector-architecture, and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
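
    The kind of change described above, replacing element-by-element loops with whole-array operations that vector hardware (or a vectorizing compiler) can exploit, can be illustrated with a toy example. The growth-rate update below is a stand-in for illustration, not the grassland model itself.

    # Hedged sketch: the same update written as a scalar loop and as a single
    # whole-array (vector) expression; the two produce identical results.
    import numpy as np

    def growth_loop(biomass, rate, dt):
        out = np.empty_like(biomass)
        for i in range(biomass.size):              # element by element, as on a scalar machine
            out[i] = biomass[i] + rate[i] * biomass[i] * dt
        return out

    def growth_vectorized(biomass, rate, dt):
        return biomass + rate * biomass * dt       # one whole-array (vector) expression

    if __name__ == "__main__":
        b = np.linspace(1.0, 2.0, 100_000)
        r = np.full_like(b, 0.01)
        assert np.allclose(growth_loop(b, r, 0.5), growth_vectorized(b, r, 0.5))
        print("loop and vectorized updates agree")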

  8. Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA

    PubMed Central

    Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B.; Day, Philip J.

    2017-01-01

    The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problem in computer science, non-deterministic polynomial complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings) so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in P time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and rules can be implemented non-deterministically. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for to quote Richard Feynman ‘there's plenty of room at the bottom’. This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy. PMID:28250099
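
    The branching computation described above can be mimicked in software on a very small scale: apply every rewrite rule at every matching position and keep all results, so the set of reachable strings grows at each step. The rules and start string below are arbitrary illustrations, not the DNA-encoded rules of the paper, and the rules are applied in one direction only.

    # Hedged sketch: exhaustive one-step application of string rewrite rules,
    # tracking every reachable string (the exponentially branching path set).
    def rewrite_step(strings, rules):
        nxt = set()
        for s in strings:
            for lhs, rhs in rules:
                start = s.find(lhs)
                while start != -1:                 # rewrite at every occurrence of lhs
                    nxt.add(s[:start] + rhs + s[start + len(lhs):])
                    start = s.find(lhs, start + 1)
        return nxt

    if __name__ == "__main__":
        rules = [("ab", "ba"), ("ba", "ab"), ("a", "aa")]   # toy rules, not the paper's
        frontier = {"ab"}
        for step in range(1, 6):
            frontier = rewrite_step(frontier, rules)
            print(f"step {step}: {len(frontier)} distinct strings reachable")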

  9. A performance model for GPUs with caches

    DOE PAGES

    Dao, Thanh Tuan; Kim, Jungwon; Seo, Sangmin; ...

    2014-06-24

    To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution to the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques which improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching as implemented in modern GPUs have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime performance estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, on average, the proposed model estimates the runtime performance with less than 7 percent error for a second-generation GTX 280 with no on-chip caches and less than 5 percent for the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On an AMD GPU architecture, the Radeon HD 6970, the model estimates runtime with an error of about 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.
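
    The sampling-based linear model can be illustrated with a short sketch: time a kernel at a few small launch sizes, fit runtime = a*n + b by least squares, and extrapolate to the full launch. The measured times below are made-up example numbers standing in for actual OpenCL kernel timings; this is an illustration of the idea, not the paper's model.

    # Hedged sketch: least-squares fit of a linear runtime model from a few samples.
    import numpy as np

    def fit_linear_model(sample_sizes, sample_times):
        # runtime = a * n_workgroups + b, fitted by least squares
        a, b = np.polyfit(np.asarray(sample_sizes, float), np.asarray(sample_times, float), 1)
        return a, b

    if __name__ == "__main__":
        sizes = [64, 128, 256, 512]          # sampled work-group counts
        times = [1.1, 2.05, 3.9, 7.8]        # example timings in milliseconds
        a, b = fit_linear_model(sizes, times)
        full_launch = 8192
        print(f"predicted runtime for {full_launch} work-groups: {a * full_launch + b:.1f} ms")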

  10. Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 2. Explicit Solvent Particle Mesh Ewald.

    PubMed

    Salomon-Ferrer, Romelia; Götz, Andreas W; Poole, Duncan; Le Grand, Scott; Walker, Ross C

    2013-09-10

    We present an implementation of explicit solvent all atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled GPUs. First released publicly in April 2010 as part of version 11 of the AMBER MD package and further improved and optimized over the last two years, this implementation supports the three most widely used statistical mechanical ensembles (NVE, NVT, and NPT), uses particle mesh Ewald (PME) for the long-range electrostatics, and runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs), providing results that are statistically indistinguishable from the traditional CPU version of the software and with performance that exceeds that achievable by the CPU version of AMBER software running on all conventional CPU-based clusters and supercomputers. We briefly discuss three different precision models developed specifically for this work (SPDP, SPFP, and DPDP) and highlight the technical details of the approach as it extends beyond previously reported work [Götz et al., J. Chem. Theory Comput. 2012, DOI: 10.1021/ct200909j; Le Grand et al., Comp. Phys. Comm. 2013, DOI: 10.1016/j.cpc.2012.09.022]. We highlight the substantial improvements in performance that are seen over traditional CPU-only machines and provide validation of our implementation and precision models. We also provide evidence supporting our decision to deprecate the previously described fully single precision (SPSP) model from the latest release of the AMBER software package.

  11. Transitioning NWChem to the Next Generation of Manycore Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bylaska, Eric J.; Apra, E; Kowalski, Karol

    The NorthWest chemistry (NWChem) modeling software is a popular molecular chemistry simulation package that was designed from the start to work on massively parallel processing supercomputers [1-3]. It contains an umbrella of modules that today includes self-consistent field (SCF), second order Møller-Plesset perturbation theory (MP2), coupled cluster (CC), multiconfiguration self-consistent field (MCSCF), selected configuration interaction (CI), tensor contraction engine (TCE) many body methods, density functional theory (DFT), time-dependent density functional theory (TDDFT), real-time time-dependent density functional theory, pseudopotential plane-wave density functional theory (PSPW), band structure (BAND), ab initio molecular dynamics (AIMD), Car-Parrinello molecular dynamics (MD), classical MD, hybrid quantum mechanics/molecular mechanics (QM/MM), hybrid ab initio molecular dynamics/molecular mechanics (AIMD/MM), gauge independent atomic orbital nuclear magnetic resonance (GIAO NMR), conductor-like screening solvation model (COSMO), conductor-like screening solvation model based on density (COSMO-SMD), and reference interaction site model (RISM) solvation models, free energy simulations, reaction path optimization, parallel in time, among other capabilities [4]. Moreover, new capabilities continue to be added with each new release.

  12. THE FASTEST OODA LOOP: THE IMPLICATIONS OF BIG DATA FOR AIR POWER

    DTIC Science & Technology

    2016-06-01

    Air Command and Staff College, Air University. The Fastest OODA Loop: The Implications of Big Data for Air Power, by Aaron J. Dove, Maj, USAF.

  13. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.

    1985-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
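
    A Monte Carlo simulation in the spirit of the one described above can be sketched in a few lines: map random accesses to banks and charge a stall whenever the target bank is still reserved by an earlier access. The bank count, reservation time, and access stream below are illustrative assumptions, not the paper's parameters or program.

    # Hedged sketch: average stall per access under random bank accesses with a
    # fixed bank reservation (busy) time, for several memory bank counts.
    import random

    def simulate(n_banks=64, reservation=8, n_accesses=100_000, seed=1):
        random.seed(seed)
        free_at = [0] * n_banks            # cycle at which each bank becomes free again
        cycle = 0
        stalls = 0
        for _ in range(n_accesses):
            bank = random.randrange(n_banks)
            if free_at[bank] > cycle:      # bank still reserved by an earlier access
                stalls += free_at[bank] - cycle
                cycle = free_at[bank]
            free_at[bank] = cycle + reservation
            cycle += 1                     # otherwise one new access issues per cycle
        return stalls / n_accesses

    if __name__ == "__main__":
        for banks in (16, 64, 256):
            print(banks, "banks -> average stall cycles per access:", round(simulate(n_banks=banks), 3))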

  14. Will Your Next Supercomputer Come from Costco?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farber, Rob

    2007-04-15

    A fun topic for April, one that is not an April fool’s joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronic vendor. Yes, it’s true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.

  15. Interactive 3D visualization speeds well, reservoir planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petzet, G.A.

    1997-11-24

    Texaco Exploration and Production has begun making expeditious analyses and drilling decisions that result from interactive, large screen visualization of seismic and other three dimensional data. A pumpkin shaped room or pod inside a 3,500 sq ft, state-of-the-art facility in Southwest Houston houses a supercomputer and projection equipment Texaco said will help its people sharply reduce 3D seismic project cycle time, boost production from existing fields, and find more reserves. Oil and gas related applications of the visualization center include reservoir engineering, plant walkthrough simulation for facilities/piping design, and new field exploration. The center houses a Silicon Graphics Onyx2 InfiniteReality supercomputer configured with 8 processors, 3 graphics pipelines, and 6 gigabytes of main memory.

  16. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    NASA Astrophysics Data System (ADS)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  17. Ab initio molecular dynamics simulations for the role of hydrogen in catalytic reactions of furfural on Pd(111)

    NASA Astrophysics Data System (ADS)

    Xue, Wenhua; Dang, Hongli; Liu, Yingdi; Jentoft, Friederike; Resasco, Daniel; Wang, Sanwu

    2014-03-01

    In the study of catalytic reactions of biomass, furfural conversion over metal catalysts with the presence of hydrogen has attracted wide attention. We report ab initio molecular dynamics simulations for furfural and hydrogen on the Pd(111) surface at finite temperatures. The simulations demonstrate that the presence of hydrogen is important in promoting furfural conversion. In particular, hydrogen molecules dissociate rapidly on the Pd(111) surface. As a result of such dissociation, atomic hydrogen participates in the reactions with furfural. The simulations also provide detailed information about the possible reactions of hydrogen with furfural. Supported by DOE (DE-SC0004600). This research used the supercomputer resources of the XSEDE, the NERSC Center, and the Tandy Supercomputing Center.

  18. First-principles quantum-mechanical investigations of biomass conversion at the liquid-solid interfaces

    NASA Astrophysics Data System (ADS)

    Dang, Hongli; Xue, Wenhua; Liu, Yingdi; Jentoft, Friederike; Resasco, Daniel; Wang, Sanwu

    2014-03-01

    We report first-principles density-functional calculations and ab initio molecular dynamics (MD) simulations for the reactions involving furfural, which is an important intermediate in biomass conversion, at the catalytic liquid-solid interfaces. The different dynamic processes of furfural at the water-Cu(111) and water-Pd(111) interfaces suggest different catalytic reaction mechanisms for the conversion of furfural. Simulations for the dynamic processes with and without hydrogen demonstrate the importance of the liquid-solid interface as well as the presence of hydrogen in possible catalytic reactions including hydrogenation and decarbonylation of furfural. Supported by DOE (DE-SC0004600). This research used the supercomputer resources of the XSEDE, the NERSC Center, and the Tandy Supercomputing Center.

  19. Towards future high performance computing: What will change? How can we be efficient?

    NASA Astrophysics Data System (ADS)

    Düben, Peter

    2017-04-01

    How can we make the most out of "exascale" supercomputers that will be available soon and enable us to perform an astonishing 1,000,000,000,000,000,000 (10^18) floating-point operations within a single second? How do we need to design applications to use these machines efficiently? What are the limits? We will discuss opportunities and limits of the use of future high performance computers from the perspective of Earth System Modelling. We will provide an overview of future challenges and outline how numerical applications will need to change to run efficiently on supercomputers in the future. We will also discuss how different disciplines can support each other and talk about data handling and numerical precision of data.

  20. The TESS Science Processing Operations Center

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with radii (Rp) less than 4 Earth radii and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).

  1. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1987-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.

  2. An analysis of file migration in a UNIX supercomputing environment

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1992-01-01

    The supercomputer center at the National Center for Atmospheric Research (NCAR) migrates large numbers of files to and from its mass storage system (MSS) because there is insufficient space to store them on the Cray supercomputer's local disks. This paper presents an analysis of file migration data collected over two years. The analysis shows that requests to the MSS are periodic, with one day and one week periods. Read requests to the MSS account for the majority of the periodicity, whereas write requests are relatively constant over the course of a week. Additionally, reads show a far greater fluctuation than writes over a day and week since reads are driven by human users while writes are machine-driven.

  3. Effect of binary organic solvents together with emulsifier on particle size and in vitro behavior of paclitaxel-encapsulated polymeric lipid nanoparticles.

    PubMed

    Qin, Shuzhi; Sun, Xiangshi; Li, Feng; Yu, Kongtong; Zhou, Yulin; Liu, Na; Zhao, Chengguo; Teng, Lesheng; Li, Youxin

    2017-12-21

    Biodegradable nanoparticles with diameters between 100 nm and 500 nm are of great interest in the context of targeted delivery. The present work examines the effect of binary organic solvents together with emulsifier on particle size as well as the influence of particle size on in vitro drug release and uptake behavior. Polymeric lipid nanoparticles (PLNs) with different particle sizes were prepared using a binary solvent dispersion method. Various formulation parameters such as binary organic solvent composition and emulsifier types were evaluated on the basis of their effects on particle size and size distribution. PLNs had a strong dependency on the surface tension, intrinsic viscosity and volatilization rate of the binary organic solvents and the hydrophilicity/hydrophobicity of the emulsifiers. An acetone-methanol system together with Pluronic F68 as emulsifier was found to give the smallest particle size. The PLNs with different particle sizes were then used to investigate how particle size at the nanoscale affects interaction with tumor cells. As particle size got smaller, cellular uptake increased in tumor cells, and PLNs with a particle size of ~120 nm had the highest cellular uptake and fastest release rate. The paclitaxel (PTX)-loaded PLNs showed a size-dependent inhibition of tumor cell growth, which was influenced by both cellular uptake and PTX release. The PLNs would provide a useful means to further elucidate the role of particle size in delivery systems for hydrophobic drugs.

  4. 2014 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  5. 2015 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  6. Computation Directorate 2008 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, D L

    2009-03-25

    Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

  7. Towards Scalable Deep Learning via I/O Analysis and Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pumma, Sarunya; Si, Min; Feng, Wu-Chun

    Deep learning systems have been growing in prominence as a way to automatically characterize objects, trends, and anomalies. Given the importance of deep learning systems, researchers have been investigating techniques to optimize such systems. An area of particular interest has been using large supercomputing systems to quickly generate effective deep learning networks: a phase often referred to as “training” of the deep learning neural network. As we scale existing deep learning frameworks—such as Caffe—on these large supercomputing systems, we notice that the parallelism can help improve the computation tremendously, leaving data I/O as the major bottleneck limiting the overall system scalability. In this paper, we first present a detailed analysis of the performance bottlenecks of Caffe on large supercomputing systems. Our analysis shows that the I/O subsystem of Caffe—LMDB—relies on memory-mapped I/O to access its database, which can be highly inefficient on large-scale systems because of its interaction with the process scheduling system and the network-based parallel filesystem. Based on this analysis, we then present LMDBIO, our optimized I/O plugin for Caffe that takes into account the data access pattern of Caffe in order to vastly improve I/O performance. Our experimental results show that LMDBIO can improve the overall execution time of Caffe by nearly 20-fold in some cases.

  8. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; van Rosendale, John; Southard, Dale

    2010-12-01

    Supercomputing Centers (SC's) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys do not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron?" and "What sort of capabilities does it need to have?" Related questions concern the size of visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"

  9. Understanding Lustre Internals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Feiyi; Oral, H Sarp; Shipman, Galen M

    2009-04-01

    Lustre was initiated and funded, almost a decade ago, by the U.S. Department of Energy (DoE) Office of Science and National Nuclear Security Administration laboratories to address the need for an open source, highly-scalable, high-performance parallel filesystem on then-present and future supercomputing platforms. Throughout the last decade, it was deployed over numerous medium-to-large-scale supercomputing platforms and clusters, and it performed and met the expectations of the Lustre user community. As it stands at the time of writing this document, according to the Top500 list, 15 of the top 30 supercomputers in the world use the Lustre filesystem. This report aims to present a streamlined overview of how Lustre works internally, at a reasonable level of detail, including relevant data structures, APIs, protocols and algorithms for the Lustre version 1.6 source code base. More importantly, it tries to explain how various components interconnect with each other and function as a system. Portions of this report are based on discussions with Oak Ridge National Laboratory Lustre Center of Excellence team members and portions of it are based on our own understanding of how the code works. We, as the author team, bear all responsibility for errors and omissions in this document. We can only hope it helps current and future Lustre users and Lustre code developers as much as it helped us understand the Lustre source code and its internal workings.

  10. PREFACE: HITES 2012: 'Horizons of Innovative Theories, Experiments, and Supercomputing in Nuclear Physics'

    NASA Astrophysics Data System (ADS)

    Hecht, K. T.

    2012-12-01

    This volume contains the contributions of the speakers of an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at departmental, university, national and international level. Signed: Ted Hecht, University of Michigan. Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame), Baha Balantekin (University of Wisconsin), Bruce Barrett (University of Arizona), Umit Catalyurek (Ohio State University), David Dean (Oak Ridge National Laboratory), Jutta Escher, Chair (Lawrence Livermore National Laboratory), Jorge Hirsch (UNAM, Mexico), David Rowe (University of Toronto), Brad Sherrill (Michigan State University), Joel Tohline (Louisiana State University), Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University), Mark Caprio (University of Notre Dame), Tomas Dytrych (Louisiana State University), Ana Georgieva (INRNE, Bulgaria), Kristina Launey, Co-chair (Louisiana State University), Gabriella Popa (Ohio University Zanesville), James Vary, Co-chair (Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University), Charlie Rasco (Louisiana State University), Karen Richard, Coordinator (Louisiana State University).

  11. The use of supercomputer modelling of high-temperature failure in pipe weldments to optimize weld and heat affected zone materials property selection

    NASA Astrophysics Data System (ADS)

    Wang, Z. P.; Hayhurst, D. R.

    1994-07-01

    The creep deformation and damage evolution in a pipe weldment have been modeled using the finite-element continuum damage mechanics (CDM) method. The finite-element CDM computer program DAMAGE XX has been adapted to run with increased speed on a Cray XMP/416 supercomputer. Run times are sufficiently short (20 min) to permit many parametric studies to be carried out on vessel lifetimes for different weld and heat affected zone (HAZ) materials. Finite-element mesh sensitivity was studied first in order to select a mesh capable of correctly predicting experimentally observed results using the least possible computer time. A study was then made of the effect on the lifetime of a butt welded vessel of each of the commonly measured material parameters for the weld and HAZ materials. Forty different ferritic steel welded vessels were analyzed for a constant internal pressure of 45.5 MPa at a temperature of 565 °C, each vessel having the same parent pipe material but different weld and HAZ materials. A lifetime improvement of 30% has been demonstrated over that obtained with the initial materials property data. A methodology for weldment design has been established which uses supercomputer-based CDM analysis techniques; it is quick to use, provides accurate results, and is a viable design tool.

  12. Three Smoking Guns Prove Falsity of Green house Warming

    NASA Astrophysics Data System (ADS)

    Fong, P.

    2001-12-01

    Three observed facts: 1. the cloud coverage increased 4.1% in 50 years; 2. the precipitation increased 7.8% in 100 years; 3. the two rates are the same. Interpretation: 1. By the increased albedo of the clouds, heat dissipation is increased 3.98 W/m^2 by 2xCO2 time, canceling out greenhouse warming of 4 W/m^2. Thus no global warming. 2. The precipitation increase shows the increased release of latent heat of vaporization, which turns out to be equal to that absorbed by the ocean due to increased evaporation by the greenhouse forcing. Thus all greenhouse heat is used up in evaporation and the warming of the earth is zero. 3. The identity of the two rates double-checked the two independent proofs. Therefore experimentally no greenhouse warming is triply proved. A new branch of science, Pleistocene Climatology, is developed to study the theoretical origin of no greenhouse warming. Climatology, like mechanics of a large number of particles, is of course complex and unwieldy. If totally order-less then there is no hope. However, if some regularity appears, then a systematic treatment can be done to simplify the complexity. Rigid bodies are subjected to a special simplifying condition (the distances between all particles are constant) and only 6 degrees of freedom are significant, all others are sidetracked. To study the spinning top there is no need to study the dynamics of every particle of the top by Newton's laws through a supercomputer. It only needs to solve the Euler equations without a computer. In climate study the use of a supercomputer to study all degrees of freedom of the climate is as untenable as the study of the spinning top by supercomputer. Yet in spite of the complexity there is strict regularity as seen in the ice ages, which works as the simplifying condition to establish a new science, Pleistocene climatology. See my book Greenhouse Warming and Nuclear Hazards just published (www.PeterFongBook.com). This time the special condition is the presence of a permanent body of ice (thus Pleistocene), and the existence of two thermostats, the polar ice and the clouds, with the specific simplifying condition being the neutral equilibrium condition of phase transition of ice and water. As Boltzmann has done, the equilibrium condition staves off all trivial degrees of freedom and simplifies the problem. Indeed it is the equilibrium condition that determines no greenhouse warming. The very fact that in the past century no decent theory of ice ages has been developed means that climate study has missed the essential point (like the Euler equations for the spinning top). The greenhouse warming theory is now worked out as a special case (pp. 145-179) of the ice age theory (pp. 113-144) in a canonical formulation that distinguishes itself from all makeshift theories. On neutral equilibrium of phase transition: 1. No restoring force, so that a small forcing can drive a large change, such as the ice age. 2. The temperature is always constant, the origin of the thermostat, the basis of no global warming. Then why is the earth not at 100 °C? New idea: cloud is the fourth phase of water, lowering the 'boiling point' to the dew point of the cloud (pp. 145-179). What if the cloud covers the whole sky, so that the dreaded global warming commences in earnest? But this will happen 2000 years later, yet the fossil fuels will be gone in 300 years. Phase transition is a chemical equilibrium, not in the general circulation model, which cannot solve climate problems with a supercomputer.

  13. BigData and computing challenges in high energy and nuclear physics

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-06-01

    In this contribution we discuss the various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. This will evolve in the future when moving from LHC to HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success, and the inclusion of new super-computing facilities, cloud computing and volunteer computing for the future is a big challenge, which we are successfully mastering with a considerable contribution from many super-computing centres around the world and from academic and commercial cloud providers. We also discuss R&D computing projects started recently in the National Research Center "Kurchatov Institute".

  14. Synthesis, hydrolysis rates, supercomputer modeling, and antibacterial activity of bicyclic tetrahydropyridazinones.

    PubMed

    Jungheim, L N; Boyd, D B; Indelicato, J M; Pasini, C E; Preston, D A; Alborn, W E

    1991-05-01

    Bicyclic tetrahydropyridazinones, such as 13, where X are strongly electron-withdrawing groups, were synthesized to investigate their antibacterial activity. These delta-lactams are homologues of bicyclic pyrazolidinones 15, which were the first non-beta-lactam containing compounds reported to bind to penicillin-binding proteins (PBPs). The delta-lactam compounds exhibit poor antibacterial activity despite having reactivity comparable to the gamma-lactams. Molecular modeling based on semiempirical molecular orbital calculations on a Cray X-MP supercomputer, predicted that the reason for the inactivity is steric bulk hindering high affinity of the compounds to PBPs, as well as high conformational flexibility of the tetrahydropyridazinone ring hampering effective alignment of the molecule in the active site. Subsequent PBP binding experiments confirmed that this class of compound does not bind to PBPs.

  15. First-principles quantum-mechanical investigations: The role of water in catalytic conversion of furfural on Pd(111)

    NASA Astrophysics Data System (ADS)

    Xue, Wenhua; Borja, Miguel Gonzalez; Resasco, Daniel E.; Wang, Sanwu

    2015-03-01

    In the study of catalytic reactions of biomass, furfural conversion over metal catalysts in the presence of water has attracted wide attention. Recent experiments showed that the proportion of alcohol product from catalytic reactions of furfural conversion with palladium in the presence of water is significantly increased when compared with other solvents, including dioxane, decalin, and ethanol. We investigated the microscopic mechanism of the reactions based on first-principles quantum-mechanical calculations. We particularly identified the important role of water and the liquid/solid interface in furfural conversion. Our results provide atomic-scale details for the catalytic reactions. Supported by DOE (DE-SC0004600). This research used the supercomputer resources of NERSC, XSEDE, TACC, and the Tandy Supercomputing Center.

  16. Survey of new vector computers: The CRAY 1S from CRAY research; the CYBER 205 from CDC and the parallel computer from ICL - architecture and programming

    NASA Technical Reports Server (NTRS)

    Gentzsch, W.

    1982-01-01

    Problems which can arise with vector and parallel computers are discussed in a user-oriented context. Emphasis is placed on the algorithms used and the programming techniques adopted. Three recently developed supercomputers are examined and typical application examples are given in CRAY FORTRAN, CYBER 205 FORTRAN and DAP (distributed array processor) FORTRAN. The systems' performance is compared. The addition of parts of two N x N arrays is considered. The influence of the architecture on the algorithms and programming language is demonstrated. Numerical analysis of magnetohydrodynamic differential equations by an explicit difference method is illustrated, showing very good results for all three systems. The prognosis for supercomputer development is assessed.

  17. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT]; Chen, Dong [Croton On Hudson, NY]; Chiu, George [Cross River, NY]; Cipolla, Thomas M [Katonah, NY]; Coteus, Paul W [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Hall, Shawn [Pleasantville, NY]; Haring, Rudolf A [Cortlandt Manor, NY]; Heidelberger, Philip [Cortlandt Manor, NY]; Kopcsay, Gerard V [Yorktown Heights, NY]; Ohmacht, Martin [Yorktown Heights, NY]; Salapura, Valentina [Chappaqua, NY]; Sugavanam, Krishnan [Mahopac, NY]; Takken, Todd [Brewster, NY]

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  18. FAST: A multi-processed environment for visualization of computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon V.; Merritt, Fergus J.; Plessel, Todd C.; Kelaita, Paul G.; Mccabe, R. Kevin

    1991-01-01

    Three-dimensional, unsteady, multi-zoned fluid dynamics simulations over full scale aircraft are typical of the problems being investigated at NASA Ames' Numerical Aerodynamic Simulation (NAS) facility on CRAY2 and CRAY-YMP supercomputers. With multiple processor workstations available in the 10-30 Mflop range, we feel that these new developments in scientific computing warrant a new approach to the design and implementation of analysis tools. These larger, more complex problems create a need for new visualization techniques not possible with the existing software or systems available as of this writing. The visualization techniques will change as the supercomputing environment, and hence the scientific methods employed, evolves even further. The Flow Analysis Software Toolkit (FAST), an implementation of a software system for fluid mechanics analysis, is discussed.

  19. Vectorized program architectures for supercomputer-aided circuit design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzoli, V.; Ferlito, M.; Neri, A.

    1986-01-01

    Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the 'semantic' vectorization of microwave circuit design software. Speed-up factors of the order of 50 can be obtained on a typical vector processor (Cray X-MP), with respect to the most powerful scalar computers (CDC 7600), with cost reductions of more than one order of magnitude. This could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.

  20. The BlueGene/L supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanota, Gyan; Chen, Dong; Gara, Alan; Vranas, Pavlos

    2003-05-01

    The architecture of the BlueGene/L massively parallel supercomputer is described. Each computing node consists of a single compute ASIC plus 256 MB of external memory. The compute ASIC integrates two 700 MHz PowerPC 440 integer CPU cores, two 2.8 Gflops floating point units, 4 MB of embedded DRAM as cache, a memory controller for external memory, six 1.4 Gbit/s bi-directional ports for a 3-dimensional torus network connection, three 2.8 Gbit/s bi-directional ports for connecting to a global tree network and a Gigabit Ethernet for I/O. 65,536 of such nodes are connected into a 3-d torus with a geometry of 32×32×64. The total peak performance of the system is 360 Teraflops and the total amount of memory is 16 TeraBytes.
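
    The 3-dimensional torus wiring described above links each node to six neighbours, one in each direction along each axis, with wrap-around at the edges of the 32 x 32 x 64 grid. A small sketch of that neighbour calculation follows; it is an illustration only, not IBM's routing code.

    # Hedged sketch: the six torus neighbours of a node at coordinates (x, y, z).
    DIMS = (32, 32, 64)   # the BlueGene/L torus geometry quoted above

    def torus_neighbors(x, y, z, dims=DIMS):
        nx, ny, nz = dims
        return [
            ((x + 1) % nx, y, z), ((x - 1) % nx, y, z),   # +/- along x, with wrap-around
            (x, (y + 1) % ny, z), (x, (y - 1) % ny, z),   # +/- along y
            (x, y, (z + 1) % nz), (x, y, (z - 1) % nz),   # +/- along z
        ]

    if __name__ == "__main__":
        print(torus_neighbors(0, 0, 63))   # shows the wrap-around on the z axis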

  1. CFD Research, Parallel Computation and Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Ryan, James S.

    1995-01-01

    During the last five years, CFD has matured substantially. Pure CFD research remains to be done, but much of the focus has shifted to integration of CFD into the design process. The work under these cooperative agreements reflects this trend. The recent work, and work which is planned, is designed to enhance the competitiveness of the US aerospace industry. CFD and optimization approaches are being developed and tested, so that the industry can better choose which methods to adopt in their design processes. The range of computer architectures has been dramatically broadened, as the assumption that only huge vector supercomputers could be useful has faded. Today, researchers and industry can trade off time, cost, and availability, choosing vector supercomputers, scalable parallel architectures, networked workstations, or heterogeneous combinations of these to complete required computations efficiently.

  2. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    NASA Technical Reports Server (NTRS)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the Customized Reduction of Augmented Triangles (CRAT) algorithm. CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
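
    The following is a minimal serial sketch of conventional point cyclic reduction for a tridiagonal system (the simplest banded case), the algorithm the VCR adapts; it is not the VPS 32 implementation, and the restriction to n = 2**m - 1 unknowns and the test problem are assumptions made for brevity.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
    by point cyclic reduction.  Requires n = 2**m - 1 unknowns; a[0] and c[-1]
    are ignored.  (Toy serial sketch of the conventional algorithm, not VCR.)"""
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    m = int(round(np.log2(n + 1)))
    if 2 ** m - 1 != n:
        raise ValueError("this sketch requires n = 2**m - 1")
    a[0] = c[-1] = 0.0
    for level in range(m - 1):                   # forward reduction
        s = 2 ** level
        for i in range(2 * s - 1, n, 2 * s):     # eliminate neighbours at distance s
            al, ga = a[i] / b[i - s], c[i] / b[i + s]
            b[i] -= al * c[i - s] + ga * a[i + s]
            d[i] -= al * d[i - s] + ga * d[i + s]
            a[i] = -al * a[i - s]
            c[i] = -ga * c[i + s]
    x = np.zeros(n)
    for level in range(m - 1, -1, -1):           # back substitution
        s = 2 ** level
        for i in range(s - 1, n, 2 * s):
            left = x[i - s] if i - s >= 0 else 0.0
            right = x[i + s] if i + s < n else 0.0
            x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x

# quick check against a dense solve on a diagonally dominant test system
n = 15
rng = np.random.default_rng(0)
b = 4.0 + rng.random(n); a = rng.random(n); c = rng.random(n); d = rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d))
```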

  3. A Look at the Impact of High-End Computing Technologies on NASA Missions

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart

    2012-01-01

    From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state-of-the-art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulation and analyses critical to the design of safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission's discovery of new planets now capturing the world's imagination.

  4. The Relative Importance of the Vadose Zone in Multimedia Risk Assessment Modeling Applied at a National Scale: An Analysis of Benzene Using 3MRA

    NASA Astrophysics Data System (ADS)

    Babendreier, J. E.

    2002-05-01

    Evaluating uncertainty and parameter sensitivity in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The challenge of examining ever more complex, integrated, higher-order models is a formidable one, particularly in regulatory settings applied on a national scale. Quantitative assessment of uncertainty and sensitivity within integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a systematic, comparative approach coupled with sufficient computational power. The Multimedia, Multipathway, and Multireceptor Risk Assessment Model (3MRA) is an important code being developed by the United States Environmental Protection Agency for use in site-scale risk assessment (e.g. hazardous waste management facilities). The model currently entails over 700 variables, 185 of which are explicitly stochastic. The 3MRA can start with a chemical concentration in a waste management unit (WMU). It estimates the release and transport of the chemical throughout the environment, and predicts associated exposure and risk. The 3MRA simulates multimedia (air, water, soil, sediments), pollutant fate and transport, multipathway exposure routes (food ingestion, water ingestion, soil ingestion, air inhalation, etc.), multireceptor exposures (resident, gardener, farmer, fisher, ecological habitats and populations), and resulting risk (human cancer and non-cancer effects, ecological population and community effects). The 3MRA collates the output for an overall national risk assessment, offering a probabilistic strategy as a basis for regulatory decisions. To facilitate model execution of 3MRA for purposes of conducting uncertainty and sensitivity analysis, a PC-based supercomputer cluster was constructed. Design of SuperMUSE, a 125 GHz Windows-based Supercomputer for Model Uncertainty and Sensitivity Evaluation, is described, along with the conceptual layout of an accompanying Java-based parallelization software toolset. Preliminary work is also reported for a scenario involving benzene disposal that describes the relative importance of the vadose zone in driving risk levels for ecological receptors and human health. Incorporating landfills, waste piles, aerated tanks, surface impoundments, and land application units, the site-based data used in the analysis included 201 national facilities representing 419 site-WMU combinations.

  5. A report documenting the completion of the Los Alamos National Laboratory portion of the ASC level II milestone "Visualization on the supercomputing platform"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, James P; Patchett, John M; Lo, Li - Ta

    2011-01-24

    This report provides documentation for the completion of the Los Alamos portion of the ASC Level II 'Visualization on the Supercomputing Platform' milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. The milestone text is shown in Figure 1 with the Los Alamos portions highlighted in boldfaced text. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. In conclusion, we improved CPU-based rendering performance by a factor of 2-10 times in our tests. In addition, we evaluated CPU- and GPU-based rendering performance. We encourage production visualization experts to consider using CPU-based rendering solutions where appropriate. For example, on remote supercomputers CPU-based rendering can offer a means of viewing data without having to offload the data or geometry onto a GPU-based visualization system. In terms of the comparative performance of the CPU and GPU, we believe that further optimizations of both CPU- and GPU-based rendering are possible. The simulation community is currently confronting this reality as they work to port their simulations to different hardware architectures. What is interesting about CPU rendering of massive datasets is that for the past two decades GPU-based systems have significantly outperformed CPU-based rendering. Based on our advancements, evaluations and explorations, we believe that CPU-based rendering has returned as one viable option for the visualization of massive datasets.

  6. Observational Properties of Coronal Mass Ejections

    DTIC Science & Technology

    2006-01-01

    Fragmentary excerpt on CME speeds, masses, and energies: CME speeds have exceeded 2000 km s-1, and the fastest CME speed measured thus far was 2657 km s-1, on 4 November 2000. The average deceleration of the fastest (>900 km s-1) CME group is about -16 m s-2. CME kinetic energies can also be calculated; a representative kinetic energy is 2.4 x 10^30 ergs (5.0 x 10^29 ergs) [Vourlidas, 2004].

  7. Kinetic analysis of an anion exchange absorbent for CO2 capture from ambient air.

    PubMed

    Shi, Xiaoyang; Li, Qibin; Wang, Tao; Lackner, Klaus S

    2017-01-01

    This study reports a preparation method for a new moisture swing sorbent for CO2 capture from air. The new sorbent's components include an ion exchange resin (IER) and polyvinyl chloride (PVC) as a binder. The IER can absorb CO2 when the surroundings are dry and release CO2 when the surroundings are wet. The manuscript presents studies of the membrane structure, a kinetic model of the absorption process, the performance of the desorption process, and the diffusivity of water molecules in the CO2 absorbent. It is shown that the kinetic performance of CO2 absorption/desorption can be improved by using a thin binder and hot water treatment. The fast kinetics of the P-100-90C absorbent is due to the thin PVC binder and the high diffusion rate of H2O molecules in the sample. Notably, this new CO2 absorbent has the fastest CO2 absorption rate among all absorbents reported in the literature to date.

  8. Kinetic analysis of an anion exchange absorbent for CO2 capture from ambient air

    PubMed Central

    Shi, Xiaoyang; Li, Qibin; Lackner, Klaus S.

    2017-01-01

    This study reports a preparation method for a new moisture swing sorbent for CO2 capture from air. The new sorbent's components include an ion exchange resin (IER) and polyvinyl chloride (PVC) as a binder. The IER can absorb CO2 when the surroundings are dry and release CO2 when the surroundings are wet. The manuscript presents studies of the membrane structure, a kinetic model of the absorption process, the performance of the desorption process, and the diffusivity of water molecules in the CO2 absorbent. It is shown that the kinetic performance of CO2 absorption/desorption can be improved by using a thin binder and hot water treatment. The fast kinetics of the P-100-90C absorbent is due to the thin PVC binder and the high diffusion rate of H2O molecules in the sample. Notably, this new CO2 absorbent has the fastest CO2 absorption rate among all absorbents reported in the literature to date. PMID:28640914

  9. Cell studies of hybridized carbon nanofibers containing bioactive glass nanoparticles using bone mesenchymal stromal cells

    NASA Astrophysics Data System (ADS)

    Zhang, Xiu-Rui; Hu, Xiao-Qing; Jia, Xiao-Long; Yang, Li-Ka; Meng, Qing-Yang; Shi, Yuan-Yuan; Zhang, Zheng-Zheng; Cai, Qing; Ao, Yin-Fang; Yang, Xiao-Ping

    2016-12-01

    Bone regeneration requires suitable scaffolding materials to support the proliferation and osteogenic differentiation of bone-related cells. In this study, a hybridized nanofibrous scaffold material (CNF/BG) was prepared by incorporating bioactive glass (BG) nanoparticles into carbon nanofibers (CNF) via the combination of BG sol-gel processing and polyacrylonitrile (PAN) electrospinning, followed by carbonization. Three types (49 s, 68 s and 86 s) of BG nanoparticles were incorporated. To understand the mechanism by which CNF/BG hybrids exert osteogenic effects, bone marrow mesenchymal stromal cells (BMSCs) were cultured directly on these hybrids (contact culture) or cultured in transwell chambers in the presence of these materials (non-contact culture). The contributions of ion release and contact effects to cell proliferation and osteogenic differentiation could thereby be correlated. It was found that the ionic dissolution products had a limited effect on cell proliferation, while they were able to enhance osteogenic differentiation of BMSCs in comparison with pure CNF. In contrast, proliferation and osteogenic differentiation were both significantly promoted in the contact culture. In both cases, CNF/BG(68 s) showed the strongest ability to influence cell behavior due to its fastest release rate of soluble silicon-related ions. The synergistic effect of CNF and BG makes CNF/BG hybrids promising substrates for bone repair.

  10. Seed dormancy and germination of Halophila ovalis mediated by simulated seasonal temperature changes

    NASA Astrophysics Data System (ADS)

    Statton, John; Sellers, Robert; Dixon, Kingsley W.; Kilminster, Kieryn; Merritt, David J.; Kendrick, Gary A.

    2017-11-01

    The seagrass Halophila ovalis plays an important ecological and sediment-stabilizing role in estuarine systems in Australia, but the species is in decline at many sites. Halophila ovalis is a facultative annual, relying mainly on recruitment from the sediment seed bank for the annual regeneration of meadows. Despite this, there is little understanding of its seed dormancy releasing mechanisms and germination cues. Using H. ovalis seed from the warm temperate Swan River Estuary in Western Australia, the germination ecology of H. ovalis was investigated by simulating the natural seasonal variation in water temperatures. The proportion of germinating seeds was found to be significantly different among temperature treatments (p < 0.001). The treatment with the longest period of cold exposure at 15 °C followed by an increase in temperature to 20-25 °C (i.e. cold stratification) had the highest final mean germination of 32% and the fastest germination rate. Seeds exposed to constant mean winter temperatures of 15 °C had the slowest germination rate, with fewer than two seeds germinating over 118 days. Thus temperature is a key germination cue for H. ovalis seeds, and these data suggest that cold stratification is an important dormancy releasing mechanism. This finding has implications for recruitment in facultative annual species like H. ovalis under global warming, since the trend of increasing water temperatures in the region may limit seed-based recruitment in the future.

  11. Paclitaxel Drug-eluting Tracheal Stent Could Reduce Granulation Tissue Formation in a Canine Model

    PubMed Central

    Wang, Ting; Zhang, Jie; Wang, Juan; Pei, Ying-Hua; Qiu, Xiao-Jian; Wang, Yu-Ling

    2016-01-01

    Background: Currently available silicone and metallic stents for tracheal stenosis are associated with many problems. Granulation proliferation is one of the main complications. The present study aimed to evaluate the efficacy of a paclitaxel drug-eluting tracheal stent in reducing granulation tissue formation in a canine model, as well as the pharmacokinetic features and safety profiles of the coated drug. Methods: Eight beagles were randomly divided into a control group (bare-metal stent group, n = 4) and an experimental group (paclitaxel-eluting stent group, n = 4). The observation period was 5 months. One beagle in both groups was sacrificed at the end of the 1st and 3rd months, respectively. The last two beagles in both groups were sacrificed at the end of the 5th month. The proliferation of granulation tissue and changes in tracheal mucosa were compared between the two groups. Routine blood counts and liver and kidney function were monitored to evaluate the safety of the paclitaxel-eluting stent. The elution method and high-performance liquid chromatography were used to characterize the rate of in vivo release of paclitaxel from the stent. Results: Compared with the control group, the proliferation of granulation tissue in the experimental group was significantly reduced. The drug release of the paclitaxel-eluting stent was fastest in the 1st month after implantation (up to 70.9%). Then, the release slowed down gradually. By the 5th month, the release reached up to 98.5%. During the observation period, a high concentration of the drug in the trachea (in the stented and adjacent unstented areas) and lung tissue was not noted, and the blood tests showed no side effects. Conclusions: The paclitaxel-eluting stent could safely reduce granulation tissue formation after stent implantation in vivo, suggesting that the paclitaxel-eluting tracheal stent might be considered for potential use in humans in the future. PMID:27824004

  12. Maximal anaerobic power in Indian national hockey players.

    PubMed Central

    Bhanot, J. L.; Sidhu, L. S.

    1983-01-01

    Anaerobic power in relation to field position of 90 Indian hockey players has been studied. These players included 10 goalkeepers, 16 backs, 20 half-backs and 44 forwards. The goalkeepers possess maximum and forwards possess minimum anaerobic power while in vertical velocity, the former are the fastest and the latter are the slowest. In body weight the backs are heaviest followed by half-backs, goalkeepers and forwards. Among backs, the lefts are heavier, faster and have more anaerobic power than rights. In half-line players, the centre-half-backs are followed by left-half-backs and right-half-backs both in body weight and anaerobic power, while in vertical velocity, the left-half-backs are the fastest and centre-half-backs are the slowest. Among forwards, the centre-forwards are heaviest with maximum anaerobic power and are followed by inside-forwards and outside-forwards, whereas, in vertical velocity the inside-forwards are fastest followed by centre-forwards and outside-forwards. PMID:6850203

  13. What problem are you working on?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-11-21

    Superconductors, supercomputers, new materials, clean energy, big science - ORNL researchers' work is multidisciplinary and world-leading. Hear them explain it in their own words in this video first shown at UT-Battelle's 2013 Awards Night.

  14. ARC-2009-ACD09-0208-029

    NASA Image and Video Library

    2009-09-15

    Obama Administration launches Cloud Computing Initiative at Ames Research Center. Vivek Kundra, White House Chief Federal Information Officer (right), and Lori Garver, NASA Deputy Administrator (left), get a tour and demo of NASA's Supercomputing Center Hyperwall.

  15. What problem are you working on?

    ScienceCinema

    None

    2018-05-07

    Superconductors, supercomputers, new materials, clean energy, big science - ORNL researchers' work is multidisciplinary and world-leading. Hear them explain it in their own words in this video first shown at UT-Battelle's 2013 Awards Night.

  16. Analytical Applications of Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
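
    A minimal sketch of the "integration" application mentioned above, assuming nothing beyond standard Monte Carlo estimation: the integral of a test function is approximated by the sample mean of the integrand at uniformly drawn points, with the usual 1/sqrt(N) standard error.

```python
import numpy as np

# Minimal Monte Carlo integration sketch: estimate the integral of f(x) = exp(-x**2)
# over [0, 2] by averaging the integrand at uniformly sampled points.  The 1/sqrt(N)
# decay of the standard error is the hallmark of the technique.
rng = np.random.default_rng(42)
n = 100_000
x = rng.uniform(0.0, 2.0, n)
fx = np.exp(-x ** 2)
estimate = 2.0 * fx.mean()                       # (b - a) * mean of f
std_error = 2.0 * fx.std(ddof=1) / np.sqrt(n)
print(f"integral ~ {estimate:.4f} +/- {std_error:.4f}")
```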

  17. Accessing and visualizing scientific spatiotemporal data

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Bergou, Attila; Berriman, G. Bruce; Block, Gary L.; Collier, Jim; Curkendall, David W.; Good, John; Husman, Laura; Jacob, Joseph C.; Laity, Anastasia

    2004-01-01

    This paper discusses work done by JPL's Parallel Applications Technologies Group in helping scientists access and visualize very large data sets through the use of multiple computing resources, such as parallel supercomputers, clusters, and grids.

  18. The QCDOC Project

    NASA Astrophysics Data System (ADS)

    Boyle, P.; Chen, D.; Christ, N.; Clark, M.; Cohen, S.; Cristian, C.; Dong, Z.; Gara, A.; Joo, B.; Jung, C.; Kim, C.; Levkova, L.; Liao, X.; Liu, G.; Li, S.; Lin, H.; Mawhinney, R.; Ohta, S.; Petrov, K.; Wettig, T.; Yamaguchi, A.

    2005-03-01

    The QCDOC project has developed a supercomputer optimised for the needs of Lattice QCD simulations. It provides a very competitive price to sustained performance ratio of around $1 USD per sustained Megaflop/s in combination with outstanding scalability. Thus very large systems delivering over 5 TFlop/s of performance on the evolution of a single lattice are possible. Large prototypes have been built and are functioning correctly. The software environment raises the state of the art in such custom supercomputers. It is based on a lean custom node operating system that eliminates many unnecessary overheads that plague other systems. Despite the custom nature, the operating system implements a standards compliant UNIX-like programming environment easing the porting of software from other systems. The SciDAC QMP interface adds internode communication in a fashion that provides a uniform cross-platform programming environment.

  19. NASA Exhibits

    NASA Technical Reports Server (NTRS)

    Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick

    2001-01-01

    A series of NASA presentations for the Supercomputing 2001 conference is summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) DeBakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.

  20. Supercomputing resources empowering superstack with interactive and integrated systems

    NASA Astrophysics Data System (ADS)

    Rückemann, Claus-Peter

    2012-09-01

    This paper presents results from the development and implementation of Superstack algorithms for dynamic use with integrated systems and supercomputing resources. Processing of geophysical data, here termed geoprocessing, is an essential part of the analysis of geoscientific data. The theory of Superstack algorithms and their practical application on modern computing architectures were inspired by developments in the processing of seismic data, first on mainframes and, in recent years, in high-end scientific computing applications. Several stacking algorithms are known, but given the low signal-to-noise ratio of seismic data, iterative algorithms like the Superstack can support analysis and interpretation. The new Superstack algorithms are in use with wave theory and optical phenomena on highly performant computing resources, for huge data sets as well as for sophisticated application scenarios in geosciences and archaeology.
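
    The abstract does not define the Superstack algorithms themselves, so the sketch below only illustrates the general idea of iterative stacking for low signal-to-noise data: traces are repeatedly re-weighted by their correlation with the current stack. The trace counts, noise level, and weighting rule are assumptions, not the authors' method.

```python
import numpy as np

# Generic iterative stacking sketch (an assumption; the abstract does not specify the
# Superstack algorithm itself).  Traces are repeatedly re-weighted by their correlation
# with the current stack, so coherent signal is emphasised over noisy traces.
rng = np.random.default_rng(3)
n_traces, n_samples = 32, 500
signal = np.sin(np.linspace(0, 8 * np.pi, n_samples))
traces = signal + rng.normal(0.0, 2.0, size=(n_traces, n_samples))   # low S/N data

stack = traces.mean(axis=0)                       # plain stack as the starting point
for _ in range(10):                               # iterative refinement
    weights = np.array([np.corrcoef(t, stack)[0, 1] for t in traces])
    weights = np.clip(weights, 0.0, None)         # ignore anti-correlated traces
    stack = (weights[:, None] * traces).sum(axis=0) / weights.sum()
```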

  1. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used, PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis software Toolkit). Using post-processing methods a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.

  2. Partial Overhaul and Initial Parallel Optimization of KINETICS, a Coupled Dynamics and Chemistry Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Nguyen, Howard; Willacy, Karen; Allen, Mark

    2012-01-01

    KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.

  3. A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.

    PubMed

    Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming

    2017-06-16

    Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot work on large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborated parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, with which the performance of GROMACS is improved significantly, especially on the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on both the CPU and MIC at the same time. In addition, we accelerate multi-node GROMACS so that it can be used in practice. Benchmarking on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.

  4. Application-level regression testing framework using Jenkins

    DOE PAGES

    Budiardja, Reuben; Bouvet, Timothy; Arnold, Galen

    2017-09-26

    Monitoring and testing for regression of large-scale systems such as NCSA's Blue Waters supercomputer are challenging tasks. In this paper, we describe the solution we developed to perform those tasks. The goal was to find an automated solution for running user-level regression tests to evaluate system usability and performance. Jenkins, an automation server, was chosen for its versatility, large user base, and multitude of plugins, including plugins for collecting data and plotting test results over time. We also describe our Jenkins deployment, which launches and monitors jobs on remote HPC systems, performs authentication with one-time passwords, and integrates with our LDAP server for authorization. We show some use cases and describe our best practices for successfully using Jenkins as a user-level, system-wide regression testing and monitoring framework for large supercomputer systems.
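
    A sketch of the kind of user-level check such a Jenkins job might invoke is shown below: run a benchmark, compare its runtime against a stored baseline, and exit non-zero on regression so the build is marked failed and the value can be plotted over time. The file names, commands, and tolerance are hypothetical; the actual Blue Waters tests are not described in this excerpt.

```python
#!/usr/bin/env python3
"""Sketch of a user-level performance regression check that a Jenkins job might run.
All file names, commands, and thresholds are hypothetical."""
import json
import subprocess
import sys
import time

BASELINE_FILE = "baseline.json"     # hypothetical: stores expected runtime in seconds
TOLERANCE = 1.10                    # fail if more than 10% slower than baseline

def run_benchmark() -> float:
    start = time.time()
    # hypothetical benchmark command; a real test might launch a batch job instead
    subprocess.run(["./benchmark.sh"], check=True)
    return time.time() - start

def main() -> int:
    elapsed = run_benchmark()
    with open(BASELINE_FILE) as fh:
        baseline = json.load(fh)["runtime_seconds"]
    print(f"runtime={elapsed:.1f}s baseline={baseline:.1f}s")  # plotted over time by Jenkins
    if elapsed > TOLERANCE * baseline:
        print("REGRESSION: runtime exceeded tolerance", file=sys.stderr)
        return 1                     # non-zero exit marks the Jenkins build as failed
    return 0

if __name__ == "__main__":
    sys.exit(main())
```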

  5. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation, conducted on a production supercomputer, of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D to within 5%, and of Overflow to within 10% accuracy.

  6. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.

  7. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques on appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix by vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and the disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
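
    A minimal Lanczos sketch in the spirit of the methods compared above, assuming only that the symmetric operator is available as a matrix-vector product; it omits reorthogonalization and the implementation concerns the paper addresses, so it is illustrative only. The test operator (a sparse 1-D Laplacian) and iteration count are assumptions.

```python
import numpy as np
import scipy.sparse as sp

def lanczos_ritz_values(matvec, n, k, seed=0):
    """Minimal Lanczos sketch: build a k-step tridiagonal projection of a symmetric
    operator given only as a matrix-vector product, and return its Ritz values.
    (No reorthogonalization, so it is only illustrative for modest k.)"""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(k):
        w = matvec(q) - beta * q_prev       # only operator access is the matvec
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:
            break
        q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)

# example: extreme Ritz values of a sparse 1-D Laplacian approach its spectral ends
n = 2000
L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
ritz = lanczos_ritz_values(L.dot, n, k=80)
print(ritz[0], ritz[-1])   # roughly 0 and 4, the ends of the Laplacian's spectrum
```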

  8. Using a multifrontal sparse solver in a high performance, finite element code

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Lucas, Robert; Raefsky, Arthur

    1990-01-01

    We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix, both of which are vectorized in high performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the Multiple-Minimum Degree reordering heuristic to reduce the number of operations required to factor a sparse matrix and full matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.
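
    SciPy's SuperLU interface is not a multifrontal solver, but it exposes the same fill-reducing reordering idea (a minimum-degree style column permutation), so the sketch below only illustrates why reordering shrinks the factors and thus the operation count; the test matrix is an assumed 2-D Laplacian, not the paper's finite element system.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Effect of a fill-reducing ordering on sparse LU factor size.  SciPy wraps SuperLU,
# not a multifrontal code, so this only illustrates the reordering idea (fewer
# fill-ins means fewer factorization operations), not the paper's solver.
n = 60
lap_1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), lap_1d) + sp.kron(lap_1d, sp.eye(n))).tocsc()   # 2-D Laplacian

for ordering in ("NATURAL", "MMD_AT_PLUS_A"):      # no reordering vs minimum degree
    lu = splu(A, permc_spec=ordering)
    print(f"{ordering:15s} L+U nonzeros: {lu.L.nnz + lu.U.nnz}")
```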

  9. Role of High-End Computing in Meeting NASA's Science and Engineering Challenges

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Tu, Eugene L.; Van Dalsem, William R.

    2006-01-01

    Two years ago, NASA was on the verge of dramatically increasing its HEC capability and capacity. With the 10,240-processor supercomputer, Columbia, now in production for 18 months, HEC has an even greater impact within the Agency and extending to partner institutions. Advanced science and engineering simulations in space exploration, shuttle operations, Earth sciences, and fundamental aeronautics research are occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. This talk describes how the integrated production environment fostered at the NASA Advanced Supercomputing (NAS) facility at Ames Research Center is accelerating scientific discovery, achieving parametric analyses of multiple scenarios, and enhancing safety for NASA missions. We focus on Columbia's impact on two key engineering and science disciplines: Aerospace, and Climate. We also discuss future mission challenges and plans for NASA's next-generation HEC environment.

  10. A Computational framework for telemedicine.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.

    1998-07-01

    Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze requirements necessary for a telemedical computing infrastructure and compare them with requirements found in a typical metacomputing environment. We will show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low-level mechanisms to enable a large-scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements of telemedicine.

  11. An efficient parallel algorithm for matrix-vector multiplication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
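
    A plain serial conjugate gradient sketch is given below to show why the sparse matrix-vector product is the kernel worth parallelizing: it is the only operation touching the matrix and it dominates each iteration. The test matrix and tolerance are assumptions; this is not the hypercube algorithm or the NAS benchmark code.

```python
import numpy as np
import scipy.sparse as sp

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Plain (serial) conjugate gradient: each iteration is dominated by one
    sparse matrix-vector product, the kernel the paper parallelizes."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                        # the matrix-vector multiplication kernel
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 1000
A = sp.diags([-1.0, 3.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # SPD test matrix
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))          # residual norm, small at convergence
```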

  12. Application-level regression testing framework using Jenkins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budiardja, Reuben; Bouvet, Timothy; Arnold, Galen

    Monitoring and testing for regression of large-scale systems such as NCSA's Blue Waters supercomputer are challenging tasks. In this paper, we describe the solution we developed to perform those tasks. The goal was to find an automated solution for running user-level regression tests to evaluate system usability and performance. Jenkins, an automation server, was chosen for its versatility, large user base, and multitude of plugins, including plugins for collecting data and plotting test results over time. We also describe our Jenkins deployment, which launches and monitors jobs on remote HPC systems, performs authentication with one-time passwords, and integrates with our LDAP server for authorization. We show some use cases and describe our best practices for successfully using Jenkins as a user-level, system-wide regression testing and monitoring framework for large supercomputer systems.

  13. Supercomputing in the Age of Discovering Superearths, Earths and Exoplanet Systems

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.

    2015-01-01

    NASA's Kepler Mission was launched in March 2009 as NASA's first mission capable of finding Earth-size planets orbiting in the habitable zone of Sun-like stars, that range of distances for which liquid water would pool on the surface of a rocky planet. Kepler has discovered over 1000 planets and over 4600 candidates, many of them as small as the Earth. Today, Kepler's amazing success seems to be a fait accompli to those unfamiliar with her history. But twenty years ago, there were no planets known outside our solar system, and few people believed it was possible to detect tiny Earth-size planets orbiting other stars. Motivating NASA to select Kepler for launch required a confluence of the right detector technology, advances in signal processing and algorithms, and the power of supercomputing.

  14. Multi-GPU accelerated three-dimensional FDTD method for electromagnetic simulation.

    PubMed

    Nagaoka, Tomoaki; Watanabe, Soichi

    2011-01-01

    Numerical simulation with a numerical human model using the finite-difference time domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the numerical human model, we adapt three-dimensional FDTD code to a multi-GPU environment using Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C2070 as the GPGPU board. The performance of multiple GPUs is evaluated in comparison with that of a single GPU and a vector supercomputer. The calculation speed with four GPUs was approximately 3.5 times faster than with a single GPU, and was slightly (approx. 1.3 times) slower than with the supercomputer. The calculation speed of the three-dimensional FDTD method using GPUs can be improved significantly as the number of GPUs is increased.
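
    A plain NumPy sketch of the 1-D FDTD (Yee) update loop is shown below to make the kernel being accelerated concrete; the grid size, time step, and source are arbitrary normalized choices, and no GPU, CUDA, or numerical human model is involved.

```python
import numpy as np

# 1-D FDTD (Yee scheme) update loop in normalized units: the leapfrog update of E and H
# is the kernel that GPU implementations accelerate; here it is a plain NumPy sketch.
nx, nsteps = 400, 800
ez = np.zeros(nx)          # electric field on integer grid points
hy = np.zeros(nx - 1)      # magnetic field on the staggered half-grid
courant = 0.5              # normalized time step (<= 1 for stability in 1-D)

for n in range(nsteps):
    hy += courant * np.diff(ez)                        # update H from the curl of E
    ez[1:-1] += courant * np.diff(hy)                  # update E from the curl of H
    ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)     # soft Gaussian source
```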

  15. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); it compares these with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement from parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithm modifications, and integrates it with FACET so that the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field, can be used with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations compare their computational efficiencies and consider the potential applications of the optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
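
    As a toy illustration of time-optimal routing (not FACET's algorithm), the sketch below runs Dijkstra's algorithm over a small hypothetical airport graph whose leg times are assumed to already include a wind component, returning the fastest route.

```python
import heapq

# Toy time-optimal routing sketch (not FACET's algorithm): Dijkstra over a small
# hypothetical airport graph in which each edge's flight time already reflects an
# assumed wind component (tailwinds shorten a leg, headwinds lengthen it).
flight_hours = {                      # hypothetical wind-adjusted leg times
    "SFO": {"ORD": 4.1, "DEN": 2.4},
    "DEN": {"ORD": 2.0, "JFK": 3.8},
    "ORD": {"JFK": 1.9},
    "JFK": {},
}

def fastest_route(graph, origin, destination):
    queue = [(0.0, origin, [origin])]
    best = {}
    while queue:
        t, node, path = heapq.heappop(queue)
        if node == destination:
            return t, path
        if node in best and best[node] <= t:
            continue
        best[node] = t
        for nxt, dt in graph[node].items():
            heapq.heappush(queue, (t + dt, nxt, path + [nxt]))
    return float("inf"), []

print(fastest_route(flight_hours, "SFO", "JFK"))   # (6.0, ['SFO', 'ORD', 'JFK'])
```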

  16. Group-based variant calling leveraging next-generation supercomputing for large-scale whole-genome sequencing studies.

    PubMed

    Standish, Kristopher A; Carland, Tristan M; Lockwood, Glenn K; Pfeiffer, Wayne; Tatineni, Mahidhar; Huang, C Chris; Lamberth, Sarah; Cherkas, Yauheniya; Brodmerkel, Carrie; Jaeger, Ed; Smith, Lance; Rajagopal, Gunaretnam; Curran, Mark E; Schork, Nicholas J

    2015-09-22

    Next-generation sequencing (NGS) technologies have become much more efficient, allowing whole human genomes to be sequenced faster and cheaper than ever before. However, processing the raw sequence reads associated with NGS technologies requires care and sophistication in order to draw compelling inferences about phenotypic consequences of variation in human genomes. It has been shown that different approaches to variant calling from NGS data can lead to different conclusions. Ensuring appropriate accuracy and quality in variant calling can come at a computational cost. We describe our experience implementing and evaluating a group-based approach to calling variants on large numbers of whole human genomes. We explore the influence of many factors that may impact the accuracy and efficiency of group-based variant calling, including group size, the biogeographical backgrounds of the individuals who have been sequenced, and the computing environment used. We make efficient use of the Gordon supercomputer cluster at the San Diego Supercomputer Center by incorporating job-packing and parallelization considerations into our workflow while calling variants on 437 whole human genomes generated as part of a large association study. We ultimately find that our workflow resulted in high-quality variant calls in a computationally efficient manner. We argue that studies like ours should motivate further investigations combining hardware-oriented advances in computing systems with algorithmic developments to tackle emerging 'big data' problems in biomedical research brought on by the expansion of NGS technologies.
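
    A toy sketch of the job-packing idea mentioned above: per-task core requirements are greedily packed into node-sized jobs so that cores are not left idle. The core counts and node size are hypothetical, and this is not the actual Gordon workflow.

```python
# Toy greedy job-packing sketch (hypothetical, not the actual Gordon workflow):
# group per-genome tasks so that each packed job fills a node's cores as fully as
# possible before a new job is opened.
CORES_PER_NODE = 16

def pack_jobs(task_core_counts, cores_per_node=CORES_PER_NODE):
    """First-fit-decreasing packing of tasks (given as core counts) into node-sized jobs."""
    jobs = []                                   # each job is [used_cores, [task indices]]
    order = sorted(range(len(task_core_counts)),
                   key=lambda i: task_core_counts[i], reverse=True)
    for i in order:
        need = task_core_counts[i]
        for job in jobs:
            if job[0] + need <= cores_per_node:
                job[0] += need
                job[1].append(i)
                break
        else:
            jobs.append([need, [i]])
        # tasks needing more than a full node would require a different strategy
    return jobs

tasks = [4, 8, 2, 2, 6, 4, 4, 1, 3]             # hypothetical per-task core counts
for used, members in pack_jobs(tasks):
    print(f"job uses {used}/{CORES_PER_NODE} cores, tasks {members}")
```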

  17. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this later option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed-up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes the benchmarking for hardware selection, the software architecture and the communications aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real-time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and consumed watts.
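
    A minimal CPU-side sketch of the tile-parallel strategy described above: a frame is split into tiles, the tiles are processed in parallel, and only tiles flagged as interesting are reported, standing in for downlinking high-level products instead of raw data. The tile size, threshold, and detection rule are hypothetical, and Python multiprocessing stands in for the many-core hardware.

```python
import numpy as np
from multiprocessing import Pool

# Split a frame into tiles, process tiles in parallel, and keep only tiles whose mean
# intensity exceeds a (hypothetical) hot-spot threshold, so that only high-level
# detections need to be downlinked rather than the raw frame.
TILE = 64
THRESHOLD = 200.0

def tile_stats(args):
    (row, col), tile = args
    return (row, col, float(tile.mean()))

def detect_hot_tiles(frame):
    tiles = [((r, c), frame[r:r + TILE, c:c + TILE])
             for r in range(0, frame.shape[0], TILE)
             for c in range(0, frame.shape[1], TILE)]
    with Pool() as pool:
        stats = pool.map(tile_stats, tiles)
    return [(r, c, m) for r, c, m in stats if m > THRESHOLD]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.uniform(0, 180, size=(1024, 1024))
    frame[256:320, 512:576] += 120.0            # synthetic hot spot
    print(detect_hot_tiles(frame))
```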

  18. Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers

    PubMed Central

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-01-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementations of multiple time-stepping (MTS) algorithm improved the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach performing complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of testing the performance characteristics of supercomputers with advanced computational algorithms that offer optimal trade-off to achieve enhanced computational performance serves to demonstrate that such simulations are feasible with currently available HPC resources. PMID:27570250
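
    A generic multiple time-stepping (MTS) sketch in the r-RESPA style is given below, assuming a cheap fast force and an expensive slow force acting on a single 1-D particle; it only illustrates the idea of applying the slow force once per outer step while sub-stepping the fast force, and is not the fluid-platelet model's integrator.

```python
# Generic multiple time-stepping (MTS) sketch in the r-RESPA style: the expensive
# "slow" force is applied as a half-kick at the ends of each outer step, while the
# cheap "fast" force is integrated with a smaller inner step.  Forces and time steps
# are arbitrary illustrative choices.
def fast_force(x):              # stiff harmonic bond (cheap, evaluated often)
    return -100.0 * x

def slow_force(x):              # soft background force (expensive, evaluated rarely)
    return -0.5 * x

def mts_step(x, v, dt_outer, n_inner, mass=1.0):
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * slow_force(x) / mass          # slow-force half kick
    for _ in range(n_inner):                            # inner velocity-Verlet loop
        v += 0.5 * dt_inner * fast_force(x) / mass
        x += dt_inner * v
        v += 0.5 * dt_inner * fast_force(x) / mass
    v += 0.5 * dt_outer * slow_force(x) / mass          # slow-force half kick
    return x, v

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = mts_step(x, v, dt_outer=0.02, n_inner=10)
```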

  19. Emulsions and rectal formulations containing myrrh essential oil for better patient compliance.

    PubMed

    Etman, M; Amin, M; Nada, A H; Shams-Eldin, M; Salama, O

    2011-06-01

    Myrrh has long been used for its circulatory, disinfectant, analgesic, antirheumatic, antidiabetic, and schistosomicidal properties. Myrrh essential oil (MEO) was extracted from the oleo-gum resin of Commiphora molmol and formulated into emulsions and suppositories to mask/avoid its bitter taste. Three oil-in-water emulsions (E1-E3) were formulated and taste was evaluated by 10 volunteers. Particle size distribution was measured and correlated with excipients and the method of preparation. Physical and chemical stability testing was carried out for the optimum formulation (E2). Seven suppository formulations were investigated (F1-F7). Suppocire AML (F1) and Suppocire CM (F2) were chosen as fatty bases, and polyethylene glycol (PEG) 1500 (F3), PEG 4000 (F4), and a PEG blend (50% PEG 6000 + 30% PEG 1500 + 20% PEG 400) (F5) were chosen as water-soluble bases. A blend of PEG 1500 and Suppocire CM was also used (F7). Camphor (5%) was added to PEG 1500 (F6). Disintegration time, release rate, DSC, fracture points, and weight uniformity were evaluated. The overall average bitterness for formulations E1, E2, and E3 was 6.44, 4.15, and 3.45, respectively. Suppositories containing Suppocire AML had the fastest disintegration time (1.5 min) with a dissolution efficiency (DE) of 56.8%. F3 containing PEG 1500 had a fast disintegration time of 2.5 min and a maximum DE of 93.5%. The PEG blend had satisfactory release (DE = 90.9%). A mixed fatty and water-soluble base (F7) had a disintegration time of 5 min and low DE (33.4%). A stable MEO emulsion with acceptable taste was formulated to improve patient acceptance and compliance. F3 suppositories yielded satisfactory results, while formulations containing fat-soluble bases exhibited poor release.

  20. ARC-2009-ACD09-0208-023

    NASA Image and Video Library

    2009-09-15

    Obama Administration launches Cloud Computing Initiative at Ames Research Center. Vivek Kundra, White House Chief Federal Information Officer (right), and Lori Garver, NASA Deputy Administrator (left), get a tour and demo of NASA's Supercomputing Center Hyperwall from Chris Kemp.

  1. San Diego Supercomputer Center

    Science.gov Websites

    Website highlights include research on the Nile and Zika viruses, and the finding that variants in non-coding DNA contribute to inherited autism risk, with gene mutations appearing for the first time contributing to approximately one-third of cases of autism spectrum disorder.

  2. Impacts | Computational Science | NREL

    Science.gov Websites

    Read about the impacts of NREL's innovations in computational science, including the Peregrine supercomputer and recognition with a 2014 R&D 100 Award and an R&D Magazine Editor's Choice award.

  3. LANL Studies Earth's Magnetosphere

    ScienceCinema

    Daughton, Bill

    2018-02-13

    A new 3-D supercomputer model presents a new theory of how magnetic reconnection works in high-temperature plasmas. This Los Alamos National Laboratory research supports an upcoming NASA mission to study Earth's magnetosphere in greater detail than ever.

  4. Banging Galaxy Clusters: High Fidelity X-ray Temperature and Radio Maps to Probe the Physics of Merging Clusters

    NASA Astrophysics Data System (ADS)

    Burns, Jack O.; Hallman, Eric J.; Alden, Brian; Datta, Abhirup; Rapetti, David

    2017-06-01

    We present early results from an X-ray/Radio study of a sample of merging galaxy clusters. Using a novel X-ray pipeline, we have generated high-fidelity temperature maps from existing long-integration Chandra data for a set of clusters including Abell 115, A520, and MACSJ0717.5+3745. Our pipeline, written in python and operating on the NASA ARC high performance supercomputer Pleiades, generates temperature maps with minimal user interaction. This code will be released, with full documentation, on GitHub in beta to the community later this year. We have identified a population of observable shocks in the X-ray data that allow us to characterize the merging activity. In addition, we have compared the X-ray emission and properties to the radio data from observations with the JVLA and GMRT. These merging clusters contain radio relics and/or radio halos in each case. These data products illuminate the merger process, and how the energy of the merger is dissipated into thermal and non-thermal forms. This research was supported by NASA ADAP grant NNX15AE17G.

  5. Montage Version 3.0

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Katz, Daniel; Prince, Thomas; Berriman, Graham; Good, John; Laity, Anastasia

    2006-01-01

    The final version (3.0) of the Montage software has been released. To recapitulate from previous NASA Tech Briefs articles about Montage: This software generates custom, science-grade mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. This software can be executed on single-processor computers, multi-processor computers, and such networks of geographically dispersed computers as the National Science Foundation's TeraGrid or NASA's Information Power Grid. The primary advantage of running Montage in a grid environment is that computations can be done on a remote supercomputer for efficiency. Multiple computers at different sites can be used for different parts of a computation, a significant advantage in cases of computations for large mosaics that demand more processor time than is available at any one site. Version 3.0 incorporates several improvements over prior versions. The most significant improvement is that this version is accessible to scientists located anywhere, through operational Web services that provide access to data from several large astronomical surveys and construct mosaics on either local workstations or remote computational grids as needed.

  6. Recent advances and future prospects for Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B

    2010-01-01

    The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

  7. DelPhiPKa web server: predicting pKa of proteins, RNAs and DNAs.

    PubMed

    Wang, Lin; Zhang, Min; Alexov, Emil

    2016-02-15

    A new pKa prediction web server is released, which implements DelPhi Gaussian dielectric function to calculate electrostatic potentials generated by charges of biomolecules. Topology parameters are extended to include atomic information of nucleotides of RNA and DNA, which extends the capability of pKa calculations beyond proteins. The web server allows the end-user to protonate the biomolecule at particular pH based on calculated pKa values and provides the downloadable file in PQR format. Several tests are performed to benchmark the accuracy and speed of the protocol. The web server follows a client-server architecture built on PHP and HTML and utilizes DelPhiPKa program. The computation is performed on the Palmetto supercomputer cluster and results/download links are given back to the end-user via http protocol. The web server takes advantage of MPI parallel implementation in DelPhiPKa and can run a single job on up to 24 CPUs. The DelPhiPKa web server is available at http://compbio.clemson.edu/pka_webserver.
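
    The per-site decision implied by "protonate the biomolecule at a particular pH based on calculated pKa values" can be illustrated with the Henderson-Hasselbalch relation, as in the simplified sketch below; the example residue and pKa value are assumptions, and this is not DelPhiPKa's actual procedure.

```python
def protonated_fraction(pka, ph):
    """Henderson-Hasselbalch: fraction of a protonatable site that carries the proton
    at a given pH.  A site is commonly treated as protonated when this exceeds 0.5,
    i.e. when pH < pKa.  (Simplified illustration, not DelPhiPKa's actual procedure.)"""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# example: a histidine side chain with an assumed computed pKa of 6.5 at pH 7.4
print(protonated_fraction(6.5, 7.4))   # ~0.11, so it would be left unprotonated
```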

  8. Berkeley Lab - Science Video Glossary

    Science.gov Websites

    Glossary topics include neutrino astronomy, protein crystallography, quantum dots, supercomputing, supernovae, synchrotrons, the universe, Earth science, atmospheric aerosols, bioremediation, the carbon cycle, nanotechnology, neutrinos, petabytes, petaflop computing, photons, plasma, plasmons, and proteins.

  9. NETL Research Technology

    ScienceCinema

    None

    2018-01-16

    NETL is committed to providing its researchers with the latest scientific equipment. This video highlights three technologies: the Beowulf Cluster supercomputer, the OASIS Surface Analytical and Imaging System, and the gas chromatograph-inductively coupled plasma-mass spectrometer, or GC-ICP-MS.

  10. Real World Uses For Nagios APIs

    NASA Technical Reports Server (NTRS)

    Singh, Janice

    2014-01-01

    This presentation describes the Nagios 4 APIs and how the NASA Advanced Supercomputing at Ames Research Center is employing them to upgrade its graphical status display (the HUD) and explain why it's worth trying to use them yourselves.

  11. Performance Evaluation of Parallel Branch and Bound Search with the Intel iPSC (Intel Personal SuperComputer) Hypercube Computer.

    DTIC Science & Technology

    1986-12-01

    Excerpt from the report's table of contents: III. Analysis of Parallel Design; Parallel Abstract Data Types; Abstract Data Type; Parallel ADT; Data-Structure Design; Object-Oriented Design.

  12. Science and Technology at Oak Ridge National Laboratory

    ScienceCinema

    Mason, Thomas

    2017-12-22

    ORNL Director Thom Mason explains the groundbreaking work in neutron sciences, supercomputing, clean energy, advanced materials, nuclear research, and global security taking place at the Department of Energy's Office of Science laboratory in Oak Ridge, TN.

  13. Time Parallel Solution of Linear Partial Differential Equations on the Intel Touchstone Delta Supercomputer

    NASA Technical Reports Server (NTRS)

    Toomarian, N.; Fijany, A.; Barhen, J.

    1993-01-01

    Evolutionary partial differential equations are usually solved by discretization in time and space, and by applying a marching-in-time procedure to data and algorithms potentially parallelized in the spatial domain.
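
    As a concrete illustration of the conventional approach the paper contrasts with (discretize in space, then march the solution forward in time), here is a minimal explicit finite-difference marcher for the 1D heat equation. It is a generic sketch, not the time-parallel algorithm developed by the authors:

```python
import numpy as np

def march_heat_1d(u0, alpha, dx, dt, n_steps):
    """March u_t = alpha * u_xx forward with an explicit Euler scheme.

    Stable only if alpha*dt/dx**2 <= 0.5; boundary values are held fixed.
    """
    u = u0.copy()
    r = alpha * dt / dx**2
    for _ in range(n_steps):                       # marching in time
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])   # spatial stencil
    return u

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 101)
    u0 = np.sin(np.pi * x)                         # initial temperature profile
    u = march_heat_1d(u0, alpha=1.0, dx=x[1] - x[0], dt=4e-5, n_steps=2500)
    print("peak after marching:", u.max())         # decays toward exp(-pi^2 * t)
```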

  14. ARC-2012-ACD12-0022-003

    NASA Image and Video Library

    2012-02-02

    Kepler Program VIPs, from left, Jon Jenkins, Natalie Batalha, and Bill Borucki, pointing at the NASA Ames hyperwall in the NAS (NASA Advanced Supercomputing) facility, filled with exoplanets discovered during the Kepler Mission. Moffett Field, CA (for Aviation Week)

  15. Science & Technology Review June 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poyneer, L A

    2012-04-20

    This month's issue has the following articles: (1) A New Era in Climate System Analysis - Commentary by William H. Goldstein; (2) Seeking Clues to Climate Change - By comparing past climate records with results from computer simulations, Livermore scientists can better understand why Earth's climate has changed and how it might change in the future; (3) Finding and Fixing a Supercomputer's Faults - Livermore experts have developed innovative methods to detect hardware faults in supercomputers and help applications recover from errors that do occur; (4) Targeting Ignition - Enhancements to the cryogenic targets for National Ignition Facility experiments are furthering work to achieve fusion ignition with energy gain; (5) Neural Implants Come of Age - A new generation of fully implantable, biocompatible neural prosthetics offers hope to patients with neurological impairment; and (6) Incubator Busy Growing Energy Technologies - Six collaborations with industrial partners are using the Laboratory's high-performance computing resources to find solutions to urgent energy-related problems.

  16. History of the numerical aerodynamic simulation program

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Ballhaus, William F., Jr.

    1987-01-01

    The Numerical Aerodynamic Simulation (NAS) program has reached a milestone with the completion of the initial operating configuration of the NAS Processing System Network. This achievement is the first major milestone in the continuing effort to provide a state-of-the-art supercomputer facility for the national aerospace community and to serve as a pathfinder for the development and use of future supercomputer systems. The underlying factors that motivated the initiation of the program are first identified and then discussed. These include the emergence and evolution of computational aerodynamics as a powerful new capability in aerodynamics research and development, the computer power required for advances in the discipline, the complementary nature of computation and wind tunnel testing, and the need for the government to play a pathfinding role in the development and use of large-scale scientific computing systems. Finally, the history of the NAS program is traced from its inception in 1975 to the present time.

  17. The PMS project: Poor man's supercomputer

    NASA Astrophysics Data System (ADS)

    Csikor, F.; Fodor, Z.; Hegedüs, P.; Horváth, V. K.; Katz, S. D.; Piróth, A.

    2001-02-01

    We briefly describe the Poor Man's Supercomputer (PMS) project carried out at Eötvös University, Budapest. The goal was to construct a cost-effective, scalable, fast parallel computer to perform numerical calculations of physical problems that can be implemented on a lattice with nearest neighbour interactions. To this end we developed the PMS architecture using PC components and designed a special, low cost communication hardware and the driver software for Linux OS. Our first implementation of PMS includes 32 nodes (PMS1). The performance of PMS1 was tested by Lattice Gauge Theory simulations. Using pure SU(3) gauge theory or the bosonic part of the minimal supersymmetric extension of the standard model (MSSM) on PMS1 we obtained price-to-sustained-performance ratios of 3 $/Mflops and 0.60 $/Mflops for double and single precision operations, respectively. The design of the special hardware and the communication driver are freely available upon request for non-profit organizations.

  18. Very large scale wavefunction orthogonalization in Density Functional Theory electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Bekas, C.; Curioni, A.

    2010-06-01

    Enforcing the orthogonality of approximate wavefunctions becomes one of the dominant computational kernels in planewave-based Density Functional Theory electronic structure calculations that involve thousands of atoms. In this context, algorithms that enjoy both excellent scalability and excellent single-processor performance are much needed. In this paper we present block versions of the Gram-Schmidt method and show that they are excellent candidates for our purposes. We compare the new approach with the state-of-the-art practice in planewave-based calculations and find that it has much to offer, especially when applied on massively parallel supercomputers such as the IBM Blue Gene/P. The new method achieves excellent sustained performance that surpasses 73 TFLOPS (67% of peak) on 8 Blue Gene/P racks (32,768 compute cores), while enabling a more than twofold decrease in run time compared with the best competing methodology.
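
    The block Gram-Schmidt idea can be sketched in a few lines: orthogonalize a panel of vectors against all previously processed blocks with matrix-matrix products (which is where the high sustained flop rate comes from), then orthonormalize the panel internally. The NumPy sketch below is illustrative only and is not the authors' Blue Gene/P implementation:

```python
import numpy as np

def block_gram_schmidt(A, block_size):
    """Return Q with orthonormal columns spanning range(A), built block by block."""
    m, n = A.shape
    Q = np.zeros((m, 0))
    for start in range(0, n, block_size):
        panel = A[:, start:start + block_size].copy()
        if Q.shape[1] > 0:
            # Project the panel against everything already orthonormalized:
            # dense matrix-matrix products, the performance-critical kernel.
            panel -= Q @ (Q.T @ panel)
            panel -= Q @ (Q.T @ panel)      # cheap re-orthogonalization pass
        q_panel, _ = np.linalg.qr(panel)    # orthonormalize the panel internally
        Q = np.hstack([Q, q_panel])
    return Q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 64))
    Q = block_gram_schmidt(A, block_size=16)
    print("orthogonality error:", np.abs(Q.T @ Q - np.eye(64)).max())
```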

  19. Supercomputer requirements for selected disciplines important to aerospace

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1989-01-01

    Speed and memory requirements placed on supercomputers by five different disciplines important to aerospace are discussed and compared with the capabilities of various existing computers and those projected to be available before the end of this century. The disciplines chosen for consideration are turbulence physics, aerodynamics, aerothermodynamics, chemistry, and human vision modeling. Example results for problems illustrative of those currently being solved in each of the disciplines are presented and discussed. Limitations imposed on physical modeling and geometrical complexity by the need to obtain solutions in practical amounts of time are identified. Computational challenges for the future, for which either some or all of the current limitations are removed, are described. Meeting some of the challenges will require computer speeds in excess of exaflop/s (10^18 flop/s) and memories in excess of petawords (10^15 words).

  20. The computational future for climate and Earth system models: on the path to petaflop and beyond.

    PubMed

    Washington, Warren M; Buja, Lawrence; Craig, Anthony

    2009-03-13

    The development of the climate and Earth system models has had a long history, starting with the building of individual atmospheric, ocean, sea ice, land vegetation, biogeochemical, glacial and ecological model components. The early researchers were much aware of the long-term goal of building the Earth system models that would go beyond what is usually included in the climate models by adding interactive biogeochemical interactions. In the early days, the progress was limited by computer capability, as well as by our knowledge of the physical and chemical processes. Over the last few decades, there has been much improved knowledge, better observations for validation and more powerful supercomputer systems that are increasingly meeting the new challenges of comprehensive models. Some of the climate model history will be presented, along with some of the successes and difficulties encountered with present-day supercomputer systems.

  1. Impact of the Columbia Supercomputer on NASA Space and Exploration Mission

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kwak, Dochan; Kiris, Cetin; Lawrence, Scott

    2006-01-01

    NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold, and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions, including space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprised of massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing HEC challenges that must be overcome to solve space-related science problems in the future.

  2. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  3. A secure file manager for UNIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVries, R.G.

    1990-12-31

    The development of a secure file management system for a UNIX-based computer facility with supercomputers and workstations is described. Specifically, UNIX in its usual form does not address: (1) Operation which would satisfy rigorous security requirements. (2) Online space management in an environment where total data demands would be many times the actual online capacity. (3) Making the file management system part of a computer network in which users of any computer in the local network could retrieve data generated on any other computer in the network. The characteristics of UNIX can be exploited to develop a portable, secure file manager which would operate on computer systems ranging from workstations to supercomputers. Implementation considerations making unusual use of UNIX features, rather than requiring extensive internal system changes, are described, and implementation using the Cray Research Inc. UNICOS operating system is outlined.

  4. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGES

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  5. Using a Cray Y-MP as an array processor for a RISC Workstation

    NASA Technical Reports Server (NTRS)

    Lamaster, Hugh; Rogallo, Sarah J.

    1992-01-01

    As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980's, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate for a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment is described which demonstrates that matrix multiplication can be executed remotely on a large system faster than it executes on a workstation.
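
    The amortization argument can be sketched with Python's standard xmlrpc modules: the per-call overhead is fixed, while the matrix multiply does O(n^3) work, so large enough matrices repay the round trip. This is only an illustrative stand-in for the Cray/workstation RPC setup described above; the host, port, and function name are placeholders:

```python
import threading
import time
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def matmul(a, b):
    """Multiply two matrices given as nested lists (the 'remote' kernel)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def serve():
    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(matmul, "matmul")
    server.serve_forever()

if __name__ == "__main__":
    threading.Thread(target=serve, daemon=True).start()  # stands in for the remote host
    time.sleep(0.5)                                      # give the server time to start
    remote = ServerProxy("http://localhost:8000")
    a = [[1.0, 2.0], [3.0, 4.0]]
    b = [[5.0, 6.0], [7.0, 8.0]]
    print(remote.matmul(a, b))   # one RPC call carries the whole O(n^3) computation
```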

  6. Program optimizations: The interplay between power, performance, and energy

    DOE PAGES

    Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...

    2016-05-16

    Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per-component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.
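
    Of the optimizations listed, loop fusion is the simplest to illustrate: two passes over the same data become one, roughly halving the memory traffic for this kernel. The toy sketch below is generic and is not taken from LULESH:

```python
def unfused(a):
    """Two separate passes over the data: scale, then shift."""
    b = [2.0 * x for x in a]        # pass 1 reads a, writes b
    c = [x + 1.0 for x in b]        # pass 2 reads b, writes c
    return c

def fused(a):
    """One fused pass: each element is loaded and stored only once."""
    return [2.0 * x + 1.0 for x in a]

if __name__ == "__main__":
    data = [float(i) for i in range(5)]
    assert unfused(data) == fused(data)   # same result, less memory traffic
    print(fused(data))
```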

  7. Supercomputing 2002: NAS Demo Abstracts

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor)

    2002-01-01

    The hyperwall is a new concept in visual supercomputing, conceived and developed by the NAS Exploratory Computing Group. The hyperwall will allow simultaneous and coordinated visualization and interaction of an array of processes, such as the computations of a parameter study or the parallel evolutions of a genetic algorithm population. Making over 65 million pixels available to the user, the hyperwall will enable and elicit qualitatively new ways of leveraging computers to accomplish science. It is currently still unclear whether we will be able to transport the hyperwall to SC02. The crucial display frame still has not been completed by the metal fabrication shop, although they promised an August delivery. Also, we are still working the fragile node issue, which may require transplantation of the compute nodes from the present 2U cases into 3U cases. This modification will increase the present 3-rack configuration to 5 racks.

  8. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPUs. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  9. Hybrid petacomputing meets cosmology: The Roadrunner Universe project

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Lukić, Zarija; Daniel, David; Fasel, Patricia; Desai, Nehal; Heitmann, Katrin; Hsu, Chung-Hsing; Ankeny, Lee; Mark, Graham; Bhattacharya, Suman; Ahrens, James

    2009-07-01

    The target of the Roadrunner Universe project at Los Alamos National Laboratory is a set of very large cosmological N-body simulation runs on the hybrid supercomputer Roadrunner, the world's first petaflop platform. Roadrunner's architecture presents opportunities and difficulties characteristic of next-generation supercomputing. We describe a new code designed to optimize performance and scalability by explicitly matching the underlying algorithms to the machine architecture, and by using the physics of the problem as an essential aid in this process. While applications will differ in specific exploits, we believe that such a design process will become increasingly important in the future. The Roadrunner Universe project code, MC3 (Mesh-based Cosmology Code on the Cell), uses grid and direct particle methods to balance the capabilities of Roadrunner's conventional (Opteron) and accelerator (Cell BE) layers. Mirrored particle caches and spectral techniques are used to overcome communication bandwidth limitations and possible difficulties with complicated particle-grid interaction templates.

  10. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
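
    For reference, the unpreconditioned conjugate gradient iteration that this survey takes as its starting point fits in a few lines; the preconditioned and polynomial-preconditioned variants discussed above wrap extra solves around the same skeleton. A minimal NumPy sketch, not tied to any particular vector or parallel implementation:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A by conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # update search direction
        rs = rs_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((50, 50))
    A = M @ M.T + 50 * np.eye(50)  # symmetric positive definite test matrix
    b = rng.standard_normal(50)
    x = conjugate_gradient(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))
```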

  11. Parallel computation in a three-dimensional elastic-plastic finite-element analysis

    NASA Technical Reports Server (NTRS)

    Shivakumar, K. N.; Bigelow, C. A.; Newman, J. C., Jr.

    1992-01-01

    A CRAY parallel processing technique called autotasking was implemented in a three-dimensional elasto-plastic finite-element code. The technique was evaluated on two CRAY supercomputers, a CRAY 2 and a CRAY Y-MP. Autotasking was implemented in all major portions of the code, except the matrix equations solver. Compiler directives alone were not able to properly multitask the code; user-inserted directives were required to achieve better performance. It was noted that the connect time, rather than wall-clock time, was more appropriate to determine speedup in multiuser environments. For a typical example problem, a speedup of 2.1 (1.8 when the solution time was included) was achieved in a dedicated environment and 1.7 (1.6 with solution time) in a multiuser environment on a four-processor CRAY 2 supercomputer. The speedup on a three-processor CRAY Y-MP was about 2.4 (2.0 with solution time) in a multiuser environment.

  12. Direct numerical simulation of the laminar-turbulent transition at hypersonic flow speeds on a supercomputer

    NASA Astrophysics Data System (ADS)

    Egorov, I. V.; Novikov, A. V.; Fedorov, A. V.

    2017-08-01

    A method for direct numerical simulation of three-dimensional unsteady disturbances leading to a laminar-turbulent transition at hypersonic flow speeds is proposed. The simulation relies on solving the full three-dimensional unsteady Navier-Stokes equations. The computational technique is intended for multiprocessor supercomputers and is based on a fully implicit monotone approximation scheme and the Newton-Raphson method for solving systems of nonlinear difference equations. This approach is used to study the development of three-dimensional unstable disturbances in a flat-plate and compression-corner boundary layers in early laminar-turbulent transition stages at the free-stream Mach number M = 5.37. The three-dimensional disturbance field is visualized in order to reveal and discuss features of the instability development at the linear and nonlinear stages. The distribution of the skin friction coefficient is used to detect laminar and transient flow regimes and determine the onset of the laminar-turbulent transition.
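
    The Newton-Raphson step at the heart of such an implicit scheme (linearize the nonlinear difference equations, solve the linear system, update, repeat) can be shown on a toy 2x2 system. The sketch below is generic and is not the authors' Navier-Stokes solver:

```python
import numpy as np

def newton_raphson(F, J, x0, tol=1e-12, max_iter=50):
    """Solve F(x) = 0 by Newton's method with an analytic Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))   # linearized correction
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

if __name__ == "__main__":
    # Toy nonlinear system: x^2 + y^2 = 4, x*y = 1
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
    J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
    print(newton_raphson(F, J, [2.0, 0.5]))
```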

  13. Sex differences in pacing during ‘Ultraman Hawaii’

    PubMed Central

    Nikolaidis, Pantelis T.

    2016-01-01

    Background: To date, little is known about pacing in ultra-endurance athletes competing in a non-stop event or in a multi-stage event, and especially about pacing in a multi-stage event with different disciplines during the stages. Therefore, the aim of the present study was to examine the effect of age, sex and calendar year on triathlon performance and the variation of performance by event (i.e., swimming, cycling 1, cycling 2 and running) in ‘Ultraman Hawaii' held between 1983 and 2015. Methods: Within each sex, participants were grouped in quartiles (i.e., Q1, Q2, Q3 and Q4), with Q1 being the fastest (i.e., lowest overall time) and Q4 the slowest (i.e., highest overall time). To compare performance among events (i.e., swimming, cycling 1, cycling 2 and running), race time in each event was converted into a z-score and this value was used for further analysis. Results: A between-within subjects ANOVA showed a large sex × event (p = 0.015, η2 = 0.014) and a medium performance group × event interaction (p = 0.001, η2 = 0.012). No main effect of event on performance was observed (p = 0.174, η2 = 0.007). With regard to the sex × event interaction, three female performance groups (i.e., Q2, Q3 and Q4) increased race time from swimming to cycling 1, whereas only one male performance group (Q4) revealed a similar trend. From cycling 1 to cycling 2, the two slower female groups (Q3 and Q4) and the slowest male group (Q4) increased race time. In women, the fastest group decreased (i.e., improved) race time from swimming to cycling 1 and thereafter maintained performance, whereas in men, the fastest group decreased race time until cycling 2 and increased it in the running. Conclusion: In summary, women pace differently than men during ‘Ultraman Hawaii', where the fastest women decreased performance on day 1 and could then maintain it on days 2 and 3, whereas the fastest men worsened performance on days 1 and 2 but improved on day 3. PMID:27703854
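
    The z-score conversion used to put the four events on a common scale is simply each athlete's event time minus the event mean, divided by the event standard deviation. A one-function sketch with made-up example times:

```python
import numpy as np

def z_scores(times):
    """Standardize one event's race times so different events can be compared."""
    times = np.asarray(times, dtype=float)
    return (times - times.mean()) / times.std(ddof=1)

if __name__ == "__main__":
    swim_minutes = [55.0, 61.0, 64.0, 70.0, 82.0]   # illustrative values only
    print(z_scores(swim_minutes))
```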

  14. Analysis of performance and age of the fastest 100-mile ultra-marathoners worldwide.

    PubMed

    Rüst, Christoph Alexander; Knechtle, Beat; Rosemann, Thomas; Lepers, Romuald

    2013-05-01

    The performance and age of peak ultra-endurance performance have been investigated in single races and single race series but not using worldwide participation data. The purpose of this study was to examine the changes in running performance and the age of peak running performance of the best 100-mile ultra-marathoners worldwide. The race times and ages of the annual ten fastest women and men were analyzed among a total of 35,956 finishes (6,862 for women and 29,094 for men) competing between 1998 and 2011 in 100-mile ultra-marathons. The annual top ten performances improved by 13.7% from 1,132±61.8 min in 1998 to 977.6±77.1 min in 2011 for women and by 14.5% from 959.2±36.4 min in 1998 to 820.6±25.7 min in 2011 for men. The mean ages of the annual top ten fastest runners were 39.2±6.2 years for women and 37.2±6.1 years for men. The age of peak running performance was not different between women and men (p>0.05) and showed no changes across the years. These findings indicated that the fastest female and male 100-mile ultra-marathoners improved their race time by ∼14% across the 1998-2011 period at an age when they had to be classified as master athletes. Future studies should analyze longer running distances (>200 km) to investigate whether the age of peak performance increases with increased distance in ultra-marathon running.
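
    The quoted improvements follow directly from the annual top-ten means; a quick arithmetic check is below (the result differs in the last decimal from the published 13.7%/14.5% because the means quoted above are rounded):

```python
def percent_improvement(t_start, t_end):
    """Relative drop in mean race time between two years, as a percentage."""
    return 100.0 * (t_start - t_end) / t_start

if __name__ == "__main__":
    print(round(percent_improvement(1132.0, 977.6), 1))   # women, 1998 -> 2011
    print(round(percent_improvement(959.2, 820.6), 1))    # men, 1998 -> 2011
```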

  15. On the Implications of a Sex Difference in the Reaction Times of Sprinters at the Beijing Olympics

    PubMed Central

    Lipps, David B.; Galecki, Andrzej T.; Ashton-Miller, James A.

    2011-01-01

    Elite sprinters offer insights into the fastest whole body auditory reaction times. When, however, is a reaction so fast that it represents a false start? Currently, a false start is awarded if an athlete increases the force on their starting block above a given threshold before 100 ms has elapsed after the starting gun. To test the hypothesis that the fastest valid reaction time of sprinters really is 100 ms and that no sex difference exists in that time, we analyzed the fastest reaction times achieved by each of the 425 male and female sprinters who competed at the 2008 Beijing Olympics. After power transformation of the skewed data, a fixed effects ANOVA was used to analyze the effects of sex, race, round and lane position. The lower bounds of the 95, 99 and 99.9% confidence intervals were then calculated and back transformed. The mean fastest reaction time recorded by men was significantly faster than women (p<0.001). At the 99.9% confidence level, neither men nor women can react in 100 ms, but they can react in as little as 109 ms and 121 ms, respectively. However, that sex difference in reaction time is likely an artifact caused by using the same force threshold in women as men, and it permits a woman to false start by up to 21 ms without penalty. We estimate that female sprinters would have similar reaction times to male sprinters if the force threshold used at Beijing was lowered by 22% in order to account for their lesser muscle strength. PMID:22039438

  16. Computational Nanotechnology at NASA Ames Research Center, 1996

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) Simulate a hypothetical programmable molecular machine replicating itself and building other products. (2) Develop molecular manufacturing CAD (computer aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components. (3) Characterize nanotechnologically accessible materials of aerospace interest. Such materials may have excellent strength and thermal properties. (4) Collaborate with experimentalists. Current in-house activities include: (1) Development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes. Early work focuses on gears. (2) A design for high density atomically precise memory. (3) Design of nanotechnology systems based on biology. (4) Characterization of diamondoid mechanosynthetic pathways. (5) Studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity. (6) Studies of entropic effects during self-assembly. (7) Characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) supercomputer division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996 held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at Caltech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.

  17. Topical perspective on massive threading and parallelism.

    PubMed

    Farber, Robert M

    2011-09-01

    Unquestionably computer architectures have undergone a recent and noteworthy paradigm shift that now delivers multi- and many-core systems with tens to many thousands of concurrent hardware processing elements per workstation or supercomputer node. GPGPU (General Purpose Graphics Processor Unit) technology in particular has attracted significant attention as new software development capabilities, namely CUDA (Compute Unified Device Architecture) and OpenCL™, have made it possible for students as well as small and large research organizations to achieve excellent speedup for many applications over more conventional computing architectures. The current scientific literature reflects this shift with numerous examples of GPGPU applications that have achieved one, two, and in some special cases, three orders of magnitude increased computational performance through the use of massive threading to exploit parallelism. Multi-core architectures are also evolving quickly to exploit both massive threading and massive parallelism, as in the 1.3-million-thread Blue Waters supercomputer. The challenge confronting scientists in planning future experimental and theoretical research efforts--be they individual efforts with one computer or collaborative efforts proposing to use the largest supercomputers in the world--is how to capitalize on these new massively threaded computational architectures, especially as not all computational problems will scale to massive parallelism. In particular, the costs associated with restructuring software (and potentially redesigning algorithms) to exploit the parallelism of these multi- and many-threaded machines must be considered along with application scalability and lifespan. This perspective is an overview of the current state of threading and parallelism, with some insight into the future.

  18. The feasibility of an efficient drug design method with high-performance computers.

    PubMed

    Yamashita, Takefumi; Ueda, Akihiko; Mitsui, Takashi; Tomonaga, Atsushi; Matsumoto, Shunji; Kodama, Tatsuhiko; Fujitani, Hideaki

    2015-01-01

    In this study, we propose a supercomputer-assisted drug design approach involving all-atom molecular dynamics (MD)-based binding free energy prediction after the traditional design/selection step. Because this prediction is more accurate than the empirical binding affinity scoring of the traditional approach, the compounds selected by the MD-based prediction should be better drug candidates. In this study, we discuss the applicability of the new approach using two examples. Although the MD-based binding free energy prediction has a huge computational cost, it is feasible with the latest 10 petaflop-scale computer. The supercomputer-assisted drug design approach also involves two important feedback procedures: The first feedback is generated from the MD-based binding free energy prediction step to the drug design step. While the experimental feedback usually provides binding affinities of tens of compounds at one time, the supercomputer allows us to simultaneously obtain the binding free energies of hundreds of compounds. Because the number of calculated binding free energies is sufficiently large, the compounds can be classified into different categories whose properties will aid in the design of the next generation of drug candidates. The second feedback, which occurs from the experiments to the MD simulations, is important to validate the simulation parameters. To demonstrate this, we compare the binding free energies calculated with various force fields to the experimental ones. The results indicate that the prediction will not be very successful, if we use an inaccurate force field. By improving/validating such simulation parameters, the next prediction can be made more accurate.

  19. ICON-MIC: Implementing a CPU/MIC Collaboration Parallel Framework for ICON on Tianhe-2 Supercomputer.

    PubMed

    Wang, Zihao; Chen, Yu; Zhang, Jingrong; Li, Lun; Wan, Xiaohua; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2018-03-01

    Electron tomography (ET) is an important technique for studying the three-dimensional structures of the biological ultrastructure. Recently, ET has reached sub-nanometer resolution for investigating the native and conformational dynamics of macromolecular complexes by combining with the sub-tomogram averaging approach. Due to the limited sampling angles, ET reconstruction typically suffers from the "missing wedge" problem. Using a validation procedure, iterative compressed-sensing optimized nonuniform fast Fourier transform (NUFFT) reconstruction (ICON) demonstrates its power in restoring validated missing information for a low-signal-to-noise ratio biological ET dataset. However, the huge computational demand has become a bottleneck for the application of ICON. In this work, we implemented a parallel acceleration technology, ICON-MIC (many integrated core), on Xeon Phi cards to address the huge computational demand of ICON. During this step, we parallelize the element-wise matrix operations and use the efficient summation of a matrix to reduce the cost of matrix computation. We also developed parallel versions of NUFFT on MIC to achieve a high acceleration of ICON by using more efficient fast Fourier transform (FFT) calculation. We then proposed a hybrid task allocation strategy (two-level load balancing) to improve the overall performance of ICON-MIC by making full use of the idle resources on Tianhe-2 supercomputer. Experimental results using two different datasets show that ICON-MIC has high accuracy in biological specimens under different noise levels and a significant acceleration, up to 13.3×, compared with the CPU version. Further, ICON-MIC has good scalability efficiency and overall performance on Tianhe-2 supercomputer.
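
    The two-level idea (first split work between host CPUs and accelerator cards in proportion to their relative throughput, then balance each device's share over its own workers) can be sketched generically. The weights, device names, and worker counts below are illustrative assumptions, not ICON-MIC's actual scheduler:

```python
def two_level_allocation(n_tasks, device_weights, workers_per_device):
    """Split task indices across devices by weight, then round-robin per device."""
    total_weight = sum(device_weights.values())
    devices = list(device_weights)
    allocation, start = {}, 0
    for i, device in enumerate(devices):
        if i == len(devices) - 1:
            share = n_tasks - start      # last device absorbs any rounding remainder
        else:
            share = int(n_tasks * device_weights[device] / total_weight)
        tasks = list(range(start, start + share))
        start += share
        n_workers = workers_per_device[device]
        allocation[device] = [tasks[w::n_workers] for w in range(n_workers)]
    return allocation

if __name__ == "__main__":
    # Illustrative throughput ratio between one CPU socket and one accelerator card.
    alloc = two_level_allocation(
        n_tasks=100,
        device_weights={"cpu": 1.0, "mic": 3.0},
        workers_per_device={"cpu": 4, "mic": 8},
    )
    for device, workers in alloc.items():
        print(device, [len(w) for w in workers])
```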

  20. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically for large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
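
    At its core the MSTC approach builds on k-means, which alternates an assignment step and a centroid update until the clusters stop changing. The small serial NumPy sketch below shows only that kernel; the paper's contribution is the hybrid MPI/CUDA/OpenACC parallelization around it, which is not shown here:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means: assign points to the nearest centroid, then recompute centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centroid by squared Euclidean distance.
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: mean of the points assigned to each cluster.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(5, 1, (100, 3))])
    labels, centroids = kmeans(X, k=2)
    print("cluster sizes:", np.bincount(labels))
```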
