Science.gov

Sample records for advanced supercomputing division

  1. NASA Advanced Supercomputing (NAS) User Services Group

    NASA Technical Reports Server (NTRS)

    Pandori, John; Hamilton, Chris; Niggley, C. E.; Parks, John W. (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides an overview of NAS (NASA Advanced Supercomputing), its goals, and its mainframe computer assets. Also covered are its functions, including systems monitoring and technical support.

  2. Desktop supercomputers. Advance medical imaging.

    PubMed

    Frisiello, R S

    1991-02-01

    Medical imaging tools that radiologists as well as a wide range of clinicians and healthcare professionals have come to depend upon are emerging into the next phase of functionality. The strides being made in supercomputing technologies--including reduction of size and price--are pushing medical imaging to a new level of accuracy and functionality.

  3. Advanced Architectures for Astrophysical Supercomputing

    NASA Astrophysics Data System (ADS)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  4. Scaling of data communications for an advanced supercomputer network

    NASA Technical Reports Server (NTRS)

    Levin, E.; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of NASA's Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations and by remote communication to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. The implications of a projected 20-fold increase in processing power on the data communications requirements are described.

  5. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    ScienceCinema

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2016-07-12

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  6. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    SciTech Connect

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2014-02-06

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  7. Advanced concepts and missions division publications, 1971

    NASA Technical Reports Server (NTRS)

    1971-01-01

    This report is part of a series of annual papers on Advanced Concepts and Missions Division (ACMD) publications. It contains a bibliography and corresponding abstracts of all papers presented or published by personnel of ACMD during the calendar year 1971. Also included are abstracts of final reports of ACMD-contracted studies performed during this time period.

  8. NASA's supercomputing experience

    NASA Technical Reports Server (NTRS)

    Bailey, F. Ron

    1990-01-01

    A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamic Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.

  9. Advances and issues from the simulation of planetary magnetospheres with recent supercomputer systems

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2016-12-01

    Planetary magnetospheres are very large, while phenomena within them occur on meso- and micro-scales ranging from tens of planetary radii down to kilometers. To understand dynamics in these multi-scale systems, numerical simulations have been performed using supercomputer systems. We have long studied the magnetospheres of Earth, Jupiter, and Saturn with 3-dimensional magnetohydrodynamic (MHD) simulations; however, we have not captured phenomena near the limits of the MHD approximation. In particular, we have not studied meso-scale phenomena that can be addressed by using MHD. Recently we performed an MHD simulation of Earth's magnetosphere on the K computer, the first 10-PFlops supercomputer, and obtained multi-scale flow vorticity for both northward and southward IMF. Furthermore, we have access to supercomputer systems with Xeon, SPARC64, and vector-type CPUs and can compare simulation results between the different systems. Finally, we have compared the results of our parameter survey of the magnetosphere with observations from the HISAKI spacecraft. We have encountered a number of difficulties in using the latest supercomputer systems effectively. First, the size of the simulation output increases greatly; a simulation group now produces over 1 PB of output, and storing and analyzing this much data is difficult. The traditional way to analyze simulation results is to move them to the investigator's home computer, which takes over three months using an end-to-end 10 Gbps network. In reality, bottlenecks at some nodes, such as firewalls, can increase the transfer time to over one year. Another issue is post-processing: it is hard to handle even a few TB of simulation output due to the memory limitations of a post-processing computer. To overcome these issues, we have developed and introduced parallel network storage, a highly efficient network protocol, and CUI-based visualization tools. In this study, we
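
    The quoted transfer time can be sanity-checked with simple arithmetic. The sketch below assumes an illustrative 1 PB data set (the output volume mentioned above) and treats the 10 Gbps figure as a raw line rate; the effective-throughput values are assumptions, not numbers from the abstract:

    ```python
    # Back-of-envelope transfer-time estimate for moving simulation output.
    # 1 PB of output and a 10 Gbps link come from the abstract; the
    # effective-throughput fractions below are illustrative assumptions.

    DATA_BYTES = 1e15            # 1 PB of simulation output
    LINE_RATE_BPS = 10e9         # 10 Gbps end-to-end link

    def transfer_days(data_bytes, rate_bps, efficiency=1.0):
        """Days needed to move data_bytes at rate_bps * efficiency."""
        seconds = data_bytes * 8 / (rate_bps * efficiency)
        return seconds / 86400

    print(f"at full line rate : {transfer_days(DATA_BYTES, LINE_RATE_BPS):.1f} days")
    print(f"at 10% efficiency : {transfer_days(DATA_BYTES, LINE_RATE_BPS, 0.10):.0f} days")
    ```

    At the full line rate, 1 PB moves in roughly nine days; the three months quoted above therefore corresponds to an effective end-to-end throughput of about 1 Gbps.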

  10. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.
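
    The sparse iterative solvers with preconditioners mentioned above are exemplified by the preconditioned conjugate gradient method. Below is a minimal Jacobi-preconditioned CG sketch; it is a generic textbook formulation, not the authors' software, and the 1-D Poisson test matrix is an illustrative assumption:

    ```python
    import numpy as np

    def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=500):
        """Jacobi-preconditioned conjugate gradient for symmetric positive-definite A.

        M_inv_diag holds 1/diag(A); the Jacobi preconditioner is applied elementwise.
        """
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv_diag * r
        p = z.copy()
        rz = r @ z
        for k in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                return x, k + 1
            z = M_inv_diag * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, max_iter

    # Illustrative test problem: 1-D Poisson matrix (tridiagonal, SPD).
    n = 100
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x, iters = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
    print(f"converged in {iters} iterations, residual {np.linalg.norm(b - A @ x):.2e}")
    ```

    In a parallel setting the matrix-vector product and the inner-product reductions are the pieces distributed across processors, which is where partitioning schemes such as domain decomposition and element-by-element strategies enter.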

  11. An assessment of worldwide supercomputer usage

    SciTech Connect

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  12. Supercomputers and atomic physics data

    SciTech Connect

    Abdallah, J. Jr.; Clark, R.E.H.

    1988-01-01

    The advent of the supercomputer has dramatically increased the possibilities for generating and using massive amounts of detailed fine-structure atomic physics data. Size, speed, and software have made calculations which were impossible just a few years ago into a reality. Further technological advances make future possibilities seem endless. The cornerstone atomic structure codes of R.D. Cowan have been adapted into a single code, CATS, for use on Los Alamos supercomputers. We provide a brief overview of the problem and report a sample CATS calculation using configuration interaction to calculate collision and oscillator strengths for over 300,000 transitions in neutral nitrogen. We also discuss future supercomputer needs. 2 refs.

  13. Advances in the NASA Earth Science Division Applied Science Program

    NASA Astrophysics Data System (ADS)

    Friedl, L.; Bonniksen, C. K.; Escobar, V. M.

    2016-12-01

    The NASA Earth Science Division's Applied Science Program advances the understanding of, and the ability to use, remote sensing data in support of socio-economic needs. The integration of socio-economic considerations into NASA Earth Science projects has advanced significantly. The large variety of acquisition methods used has required innovative implementation options. The integration of application themes and the implementation of application science activities in flight projects are continuing to evolve. The creation of the recently released Earth Science Division Directive on Project Applications Program and the addition of an application science requirement in the recent EVM-2 solicitation document NASA's current intent. Continuing improvements in the Earth Science Applications Science Program are expected in the areas of thematic integration, Project Applications Program tailoring for Class D missions, and transfer of knowledge between scientists and projects.

  14. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
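
    The throughput figures quoted above imply a simple core-hour budget, sketched below; the total of ~200,000 Kepler target stars is an assumption, while the other numbers come from the abstract:

    ```python
    # Rough core-hour budget for the "shallow" FLTI experiment, using the
    # per-core injection rate and per-star injection count quoted above.
    # The total number of Kepler target stars (~200,000) is an assumption.

    INJECTIONS_PER_CORE_HOUR = 16
    INJECTIONS_PER_STAR = 2000
    TARGET_STARS = 200_000        # assumed total Kepler target-star count
    FRACTION_COVERED = 0.16
    WALL_CLOCK_HOURS = 200

    core_hours_per_star = INJECTIONS_PER_STAR / INJECTIONS_PER_CORE_HOUR   # 125
    stars = TARGET_STARS * FRACTION_COVERED                                # 32,000
    total_core_hours = core_hours_per_star * stars                         # ~4e6
    cores_needed = total_core_hours / WALL_CLOCK_HOURS                     # ~20,000

    print(f"{core_hours_per_star:.0f} core-hours per star")
    print(f"{total_core_hours:.2e} core-hours total")
    print(f"~{cores_needed:,.0f} cores to finish in {WALL_CLOCK_HOURS} h")
    ```

    An allocation of this size is modest for a machine of Pleiades' scale, consistent with the point that the stripped-down transit search makes the experiment affordable.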

  15. Technology transfer in the NASA Ames Advanced Life Support Division

    NASA Technical Reports Server (NTRS)

    Connell, Kathleen; Schlater, Nelson; Bilardo, Vincent; Masson, Paul

    1992-01-01

    This paper summarizes a representative set of technology transfer activities which are currently underway in the Advanced Life Support Division of the Ames Research Center. Five specific NASA-funded research or technology development projects are synopsized that are resulting in transfer of technology in one or more of four main 'arenas:' (1) intra-NASA, (2) intra-Federal, (3) NASA - aerospace industry, and (4) aerospace industry - broader economy. Each project is summarized as a case history, specific issues are identified, and recommendations are formulated based on the lessons learned as a result of each project.

  16. Computational chemistry on Cray supercomputers

    SciTech Connect

    Freeman, A.J.

    1988-09-01

    The unique and significant scientific results possible from the (happy) union of advanced computational methods and algorithms (software) on (CRAY) supercomputers (hardware) are described using, as illustrative examples, the new high-temperature superconducting oxides and the high-temperature intermetallic alloys of importance for potential aerospace applications.

  17. Supercomputing in Aerospace

    NASA Technical Reports Server (NTRS)

    Kutler, Paul; Yee, Helen

    1987-01-01

    Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics.

  18. Emerging supercomputer architectures

    SciTech Connect

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near future trends for commercially available high-performance computers with architectures that differ from the mainstream "supercomputer" systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  19. The use of supercomputers in stellar dynamics; Proceedings of the Workshop, Institute for Advanced Study, Princeton, NJ, June 2-4, 1986

    NASA Astrophysics Data System (ADS)

    Hut, Piet; McMillan, Stephen L. W.

    Various papers on the use of supercomputers in stellar dynamics are presented. Individual topics addressed include: dynamical evolution of globular clusters, disk galaxy dynamics on the computer, mathematical models of star cluster dynamics, models of hot stellar systems, supercomputers and large cosmological N-body simulations, the architecture of a homogeneous vector supercomputer, the BBN multiprocessors Butterfly and Monarch, the Connection Machine, a digital Orrery, and the outer solar system for 200 million years. Also considered are: application of smooth particle hydrodynamics theory to lunar origin, multiple mesh techniques for modeling interacting galaxies, numerical experiments on galactic halo formation, numerical integration using explicit Taylor series, multiple-mesh-particle scheme for N-body simulation, direct N-body simulation on supercomputers, vectorization of small-N integrators, N-body integrations using supercomputers, a gridless Fourier method, techniques and tricks for N-body computation.

  20. Floating point arithmetic in future supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
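
    The recommended format matches what became standard IEEE 754 double precision. The sketch below unpacks the sign bit, 11 exponent bits, and 52 mantissa bits of a value; it is an illustration using Python's struct module, not code from the report:

    ```python
    import struct

    def decompose_double(x: float):
        """Split a 64-bit IEEE 754 double into its sign, exponent, and mantissa fields."""
        (bits,) = struct.unpack(">Q", struct.pack(">d", x))
        sign     = bits >> 63                 # 1 sign bit
        exponent = (bits >> 52) & 0x7FF       # 11 exponent bits, biased by 1023
        mantissa = bits & ((1 << 52) - 1)     # 52 mantissa (fraction) bits
        return sign, exponent, mantissa

    sign, exp, mant = decompose_double(-6.5)
    # -6.5 = -1.625 * 2**2  ->  sign 1, unbiased exponent 2, fraction bits 101000...
    print(sign, exp - 1023, hex(mant))
    ```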

  2. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.
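
    The core of parameter-study creation is expanding a parameter space into individual runs, i.e. taking a Cartesian product. The sketch below illustrates that idea only; the parameter names are made up and this is not ILab's actual interface:

    ```python
    import itertools

    # Illustrative parameter space; the names and values are assumptions.
    param_space = {
        "mach":            [0.6, 0.8, 0.95],
        "angle_of_attack": [0.0, 2.0, 4.0],
        "grid_level":      ["coarse", "fine"],
    }

    def expand(space):
        """Yield one dict per point in the Cartesian product of the parameter lists."""
        keys = list(space)
        for values in itertools.product(*(space[k] for k in keys)):
            yield dict(zip(keys, values))

    cases = list(expand(param_space))
    print(f"{len(cases)} runs")                      # 3 * 3 * 2 = 18
    for i, case in enumerate(cases[:2]):
        # In a real suite each case would be rendered into a job script and
        # submitted to the grid; here we just print the substitutions.
        print(f"run {i:03d}: {case}")
    ```

    A multi-tiered study of the kind described above would feed the outputs of one such expansion into the parameterization of the next stage.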

  3. NSF Commits to Supercomputers.

    ERIC Educational Resources Information Center

    Waldrop, M. Mitchell

    1985-01-01

    The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)

  4. Reversible logic for supercomputing.

    SciTech Connect

    DeBenedictis, Erik P.

    2005-05-01

    This paper is about making reversible logic a reality for supercomputing. Reversible logic offers a way to exceed certain basic limits on the performance of computers, yet a powerful case will have to be made to justify its substantial development expense. This paper explores the limits of current, irreversible logic for supercomputers, thus forming a threshold above which reversible logic is the only solution. Problems above this threshold are discussed, with the science and mitigation of global warming being discussed in detail. To further develop the idea of using reversible logic in supercomputing, a design for a 1 Zettaflops supercomputer as required for addressing global climate warming is presented. However, to create such a design requires deviations from the mainstream of both the software for climate simulation and research directions of reversible logic. These deviations provide direction on how to make reversible logic practical.
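
    The "basic limits" of irreversible logic referred to here are commonly framed in terms of Landauer's bound, a minimum of kT ln 2 of heat per irreversibly erased bit. The sketch below works through that arithmetic for a 1 Zettaflops machine; the erasures-per-flop values are illustrative assumptions, not figures from the paper:

    ```python
    import math

    K_B = 1.380649e-23          # Boltzmann constant, J/K
    T = 300.0                   # room temperature, K
    FLOPS = 1e21                # 1 Zettaflops

    e_bit = K_B * T * math.log(2)        # Landauer bound: ~2.9e-21 J per erased bit
    print(f"Landauer bound per erased bit: {e_bit:.2e} J")

    # Minimum dissipation at 1 Zettaflops for a range of assumed (illustrative)
    # irreversible bit erasures per floating-point operation.
    for bits_per_flop in (1, 1e3, 1e6):
        watts = FLOPS * bits_per_flop * e_bit
        print(f"{bits_per_flop:>9.0e} bit erasures/flop -> {watts:,.0f} W minimum")
    ```

    Reversible logic aims to avoid most of those erasures, which is why it becomes attractive once machine requirements approach such thresholds.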

  5. Energy Efficient Supercomputing

    SciTech Connect

    Antypas, Katie

    2014-10-17

    Katie Antypas, Head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy-efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  6. Supercomputing the Climate

    NASA Image and Video Library

    Goddard Space Flight Center is the home of a state-of-the-art supercomputing facility called the NASA Center for Climate Simulation (NCCS) that is capable of running highly complex models to help s...

  7. Energy Efficient Supercomputing

    ScienceCinema

    Antypas, Katie

    2016-07-12

    Katie Antypas, Head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy-efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  8. Advances in nickel hydrogen technology at Yardney Battery Division

    NASA Technical Reports Server (NTRS)

    Bentley, J. G.; Hall, A. M.

    1987-01-01

    The current major activites in nickel hydrogen technology being addressed at Yardney Battery Division are outlined. Five basic topics are covered: an update on life cycle testing of ManTech 50 AH NiH2 cells in the LEO regime; an overview of the Air Force/industry briefing; nickel electrode process upgrading; 4.5 inch cell development; and bipolar NiH2 battery development.

  9. Parallel supercomputing: Advanced methods, algorithms and software for large-scale problems. Final report, August 1, 1987--July 31, 1994

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1994-12-31

    The focus of the subject DOE sponsored research concerns parallel methods, algorithms, and software for complex applications such as those in coupled fluid flow and heat transfer. The research has been directed principally toward the solution of large-scale PDE problems using iterative solvers for finite differences and finite elements on advanced computer architectures. This work embraces parallel domain decomposition, element-by-element, spectral, and multilevel schemes with adaptive parameter determination, rational iteration and related issues. In addition to the fundamental questions related to developing new methods and mapping these to parallel computers, there are important software issues. The group has played a significant role in the development of software both for iterative solvers and also for finite element codes. The research in computational fluid dynamics (CFD) led to sustained multi-Gigaflop performance rates for parallel-vector computations of realistic large scale applications (not computational kernels alone). The main application areas for these performance studies have been two-dimensional problems in CFD. Over the course of this DOE sponsored research significant progress has been made. A report of the progression of the research is given and at the end of the report is a list of related publications and presentations over the entire grant period.

  10. A training program for scientific supercomputing users

    SciTech Connect

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X/MP48 at the National Center for Supercomputing Applications at University of Illinois at Urbana-Champaign, IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  11. Energy sciences supercomputing 1990

    SciTech Connect

    Mirin, A.A.; Kaiper, G.V.

    1990-01-01

    This report contains papers on the following topics: meeting the computational challenge; lattice gauge theory: probing the standard model; supercomputing for the superconducting super collider; an overview of ongoing studies in climate model diagnosis and intercomparison; MHD simulation of the fueling of a tokamak fusion reactor through the injection of compact toroids; gyrokinetic particle simulation of tokamak plasmas; analyzing chaos: a visual essay in nonlinear dynamics; supercomputing and research in theoretical chemistry; Monte Carlo simulations of light nuclei; parallel processing; and scientists of the future: learning by doing.

  12. Super problems for supercomputers

    SciTech Connect

    Peterson, I.

    1984-01-01

    This article discusses the ways in which simulations performed on high-speed computers combined with graphics are replacing experiments. Supercomputers, ranging from the large, general-purpose Cray-1 and the CYBER 205 to machines designed for a specific type of calculation, are becoming essential research tools in many fields of science and engineering. Topics considered include crystal growth, aerodynamic design, molecular seismology, computer graphics, membrane design, quantum mechanical calculations, Soviet "nuclear winter" maps (modeling climate in a post-nuclear-war environment), and estimating nuclear forest fires. It is pointed out that the $15 million required to buy and support one supercomputer has limited its use in industry and universities.

  13. Automated Help System For A Supercomputer

    NASA Technical Reports Server (NTRS)

    Callas, George P.; Schulbach, Catherine H.; Younkin, Michael

    1994-01-01

    Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.

  14. The Titan graphics supercomputer architecture

    SciTech Connect

    Diede, T.; Hagenmaier, C.F.; Miranker, G.S.; Rubinstein, J.J.; Worley, W.S. Jr.

    1988-09-01

    Leading-edge hardware and software technologies now make possible a new class of system - the graphics supercomputer. Titan architecture provides a substantial fraction of supercomputer performance plus integrated high-quality graphics.

  15. Effective Use of Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Kramer, William T. C.; Craw, James M.

    1989-01-01

    The effective use of a supercomputer depends on many aspects, including the ability of users to write efficient programs to use the resources of the system in an optimal manner. However, it is the responsibility of the system managers of these systems to ensure that the maximum effectiveness of the overall system is achieved. Many varying techniques have been developed at the Numerical Aerodynamic Simulation (NAS) Program to advance the management of these critical systems. Many of the issues and techniques used for managing supercomputers are common to multi-user UNIX systems, regardless of the version of UNIX or the power of the hardware. However, a UNICOS supercomputer presents some special challenges and requires additional features and tools to be developed to effectively manage the system. Only part of the challenge is related to performance monitoring and improvement. Much of the responsibility of the system manager is to provide fair and consistent access to the system resources, which is at times a difficult problem. After an introduction to the environment at the Numerical Aerodynamic Simulation Project is given as background, this paper first discusses the areas which are common to UNIX system management. It then discusses the specific areas of UNICOS which must be used to operate the system efficiently. The paper goes on to discuss the methods of supporting individual users in order to increase their effectiveness and the efficiency of their programs. This is accomplished through a professional support staff who interact on a daily basis to support the NAS scientific client community.

  17. Supercomputers: Super-polluters?

    SciTech Connect

    Mills, Evan; Mills, Evan; Tschudi, William; Shalf, John; Simon, Horst

    2008-04-08

    Thanks to imperatives for limiting waste heat, maximizing performance, and controlling operating cost, energy efficiency has been a driving force in the evolution of supercomputers. The challenge going forward will be to extend these gains to offset the steeply rising demands for computing services and performance.

  18. Supercomputers Of The Future

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.

  19. Green Supercomputing at Argonne

    ScienceCinema

    Beckman, Pete

    2016-07-12

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently.

  20. Computer Electromagnetics and Supercomputer Architecture

    NASA Technical Reports Server (NTRS)

    Cwik, Tom

    1993-01-01

    The dramatic increase in performance over the last decade for microprocessor computations is compared with that for supercomputer computations. This performance, the projected performance, and a number of other issues such as cost and the inherent physical limitations of current supercomputer technology have naturally led to parallel supercomputers and ensembles of interconnected microprocessors.

  1. Comparison of stepwise vs single-step advancement with the Functional Mandibular Advancer in Class II division 1 treatment.

    PubMed

    Aras, Isil; Pasaoglu, Aylin; Olmez, Sultan; Unal, Idil; Tuncer, Ali Vehbi; Aras, Aynur

    2017-01-01

    To compare two groups of subjects at the peak of the pubertal growth period treated with the Functional Mandibular Advancer (FMA; Forestadent, Pforzheim, Germany) appliance using either single-step or stepwise mandibular advancement. This study was conducted on 34 Class II division 1 malocclusion subjects at or just before the peak phase of pubertal growth as assessed by hand-wrist radiographs. Subjects were assigned to two groups of mandibular advancement, using matched randomization. Both groups were treated with the FMA. While the mandible was advanced to a super Class I molar relation in the single-step advancement group (SSG), patients in the stepwise mandibular advancement group (SWG) had a 4-mm initial bite advancement and subsequent 2-mm advancements at bimonthly intervals. The material consisted of lateral cephalograms taken before treatment and after 10 months of FMA treatment. Data were analyzed by means of paired t-tests and an independent t-test. There were statistically significant changes in SNB, Pg horizontal, ANB, Co-Gn, and Co-Go measurements in both groups (P < .001); these changes were greater in the SWG with the exception of Co-Go (P < .05). While significant differences were found in U1-SN, IMPA, L6 horizontal, overjet, and overbite appraisals in each group (P < .001), these changes were comparable (P > .05). Because of the higher rates of sagittal mandibular skeletal changes, FMA using stepwise advancement of the mandible might be the appliance of choice for treating Class II division 1 malocclusions.

  2. The airborne supercomputer

    NASA Astrophysics Data System (ADS)

    Rhea, John

    1990-05-01

    A new class of airborne supercomputer designated RH-32 is being developed at USAF research facilities, capable of performing the critical battle management function for any future antiballistic missile system that emerges from the SDI. This research is also aimed at applications for future tactical aircraft and retrofit into the supercomputers of the ATF. The computers are based on a system architecture known as multi-interlock pipe stages, developed by the DARPA. Fiber-optic data buses appear to be the only communications media that are likely to match the speed of the processors and they have the added advantage of being inherently radiation resistant. The RH-32 itself, being the product of a basic research effort, may never see operational use. However, the technologies that emerge from this major R&D program will set the standards for airborne computers well into the next century.

  3. Ice Storm Supercomputer

    ScienceCinema

    None

    2016-07-12

    "A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed 'Ice Storm,' this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen." For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.

  4. Ice Storm Supercomputer

    SciTech Connect

    2009-01-01

    "A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed 'Ice Storm,' this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen." For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.

  5. Predicting Hurricanes with Supercomputers

    SciTech Connect

    2010-01-01

    Hurricane Emily, formed in the Atlantic Ocean on July 10, 2005, was the strongest hurricane ever to form before August. By checking computer models against the actual path of the storm, researchers can improve hurricane prediction. In 2010, NOAA researchers were awarded 25 million processor-hours on Argonne's BlueGene/P supercomputer for the project. Read more at http://go.usa.gov/OLh

  6. What Is the Relationship between Emotional Intelligence and Administrative Advancement in an Urban School Division?

    ERIC Educational Resources Information Center

    Roberson, Elizabeth W.

    2010-01-01

    The purpose of this research was to study the relationship between emotional intelligence and administrative advancement in one urban school division; however, data acquired in the course of study may have revealed areas that could be further developed in future studies to increase the efficacy of principals and, perhaps, to inform the selection…

  7. Annotated Bibliography of the Advanced Systems Division Reports (1950-1972).

    ERIC Educational Resources Information Center

    Valverde, Horace H.; And Others

    The Advanced Systems Division of the Air Force Human Resources Laboratory, Air Force Systems Command conducts research and development in the areas of training techniques, psychological and engineering aspects of training equipment, and personnel and training factors in the design of new systems and equipment. This unclassified, unlimited…

  8. Advanced Reactor Safety Research Division. Quarterly progress report, January 1-March 31, 1980

    SciTech Connect

    Agrawal, A.K.; Cerbone, R.J.; Sastre, C.

    1980-06-01

    The Advanced Reactor Safety Research Programs quarterly progress report describes current activities and technical progress in the programs at Brookhaven National Laboratory sponsored by the USNRC Division of Reactor Safety Research. The projects reported each quarter are the following: HTGR Safety Evaluation, SSC Code Development, LMFBR Safety Experiments, and Fast Reactor Safety Code Validation.

  10. Advanced Reactor Safety Research Division. Quarterly progress report, April 1-June 30, 1980

    SciTech Connect

    Romano, A.J.

    1980-01-01

    The Advanced Reactor Safety Research Programs Quarterly Progress Report describes current activities and technical progress in the programs at Brookhaven National Laboratory sponsored by the USNRC Division of Reactor Safety Research. The projects reported each quarter are the following: HTGR safety evaluation, SSC Code Development, LMFBR Safety Experiments, and Fast Reactor Safety Code Validation.

  11. File servers, networking, and supercomputers

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.

    1992-01-01

    One of the major tasks of a supercomputer center is managing the massive amount of data generated by application codes. A data flow analysis of the San Diego Supercomputer Center is presented that illustrates the hierarchical data buffering/caching capacity requirements and the associated I/O throughput requirements needed to sustain file service and archival storage. Usage paradigms are examined for both tightly-coupled and loosely-coupled file servers linked to the supercomputer by high-speed networks.

  12. File servers, networking, and supercomputers

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.

    1991-01-01

    One of the major tasks of a supercomputer center is managing the massive amount of data generated by application codes. A data flow analysis of the San Diego Supercomputer Center is presented that illustrates the hierarchical data buffering/caching capacity requirements and the associated I/O throughput requirements needed to sustain file service and archival storage. Usage paradigms are examined for both tightly-coupled and loosely-coupled file servers linked to the supercomputer by high-speed networks.

  13. National Test Facility civilian agency use of supercomputers not feasible

    SciTech Connect

    1994-12-01

    Based on interviews with civilian agencies cited in the House report (DOE, DoEd, HHS, FEMA, NOAA), none would be able to make effective use of NTF's excess supercomputing capabilities. These agencies stated they could not use the resources primarily because (1) NTF's supercomputers are older machines whose performance and costs cannot match those of more advanced computers available from other sources and (2) some agencies have not yet developed applications requiring supercomputer capabilities or do not have funding to support such activities. In addition, future support for the hardware and software at NTF is uncertain, making any investment by an outside user risky.

  14. Beowulf Supercomputers: Scope and Trends

    NASA Astrophysics Data System (ADS)

    Ahmed, Maqsood; Saeed, M. Alam; Ahmed, Rashid; Fazal-e-Aleem

    2005-03-01

    As we have entered the twenty-first century, a century of information technology, the need for supercomputing is expanding in many fields of science and technology. With the availability of low-cost commodity hardware and free software, Beowulf-style supercomputers have solved this problem for the scientific community. A supercomputer helps to solve complex problems and to store, process, and manage the huge amounts of scientific data available all over the globe. In this paper we discuss the functioning of a Beowulf-style supercomputer, its scope, and future trends.

  15. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
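
    Bellman's curse of dimensionality, mentioned above, refers to the exponential growth of the dynamic-programming state grid with the number of state variables. A small illustrative calculation follows; the grid resolution and per-value storage figure are assumptions:

    ```python
    # Growth of a dynamic-programming state grid with problem dimension.
    # N points per state variable and d state variables -> N**d grid points.
    # The 64-point resolution and 8 bytes per stored value are illustrative.

    POINTS_PER_DIM = 64
    BYTES_PER_VALUE = 8

    for d in range(1, 7):
        states = POINTS_PER_DIM ** d
        bytes_needed = states * BYTES_PER_VALUE
        print(f"d={d}: {states:.3e} states, "
              f"{bytes_needed / 2**30:.2e} GiB for one value table")
    ```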

  16. Green Supercomputing at Argonne

    ScienceCinema

    Beckman, Pete

    2016-07-12

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/

  17. Overview of the I-way : wide area visual supercomputing.

    SciTech Connect

    DeFanti, T. A.; Foster, I.; Papka, M. E.; Stevens, R.; Kuhfuss, T.; Univ. of Illinois at Chicago

    1996-01-01

    This paper discusses the I-WAY project and provides an overview of the papers in this issue of IJSA. The I-WAY is an experimental environment for building distributed virtual reality applications and for exploring issues of distributed wide area resource management and scheduling. The goal of the I-WAY project is to enable researchers to use multiple internetworked supercomputers and advanced visualization systems to conduct very large-scale computations. By connecting a dozen ATM testbeds, seventeen supercomputer centers, five virtual reality research sites, and over sixty applications groups, the I-WAY project has created an extremely diverse wide area environment for exploring advanced applications. This environment has provided a glimpse of the future for advanced scientific and engineering computing. The I-WAY, or Information Wide Area Year, was a year-long effort to link existing national testbeds based on ATM (asynchronous transfer mode) to interconnect supercomputer centers, virtual reality (VR) research locations, and applications development sites. The I-WAY was successfully demonstrated at Supercomputing '95 and included over sixty distributed supercomputing applications that used a variety of supercomputing resources and VR displays.

  18. Distributed user services for supercomputers

    NASA Technical Reports Server (NTRS)

    Sowizral, Henry A.

    1989-01-01

    User-service operations at supercomputer facilities are examined. The question is whether a single, possibly distributed, user-services organization could be shared by NASA's supercomputer sites in support of a diverse, geographically dispersed, user community. A possible structure for such an organization is identified as well as some of the technologies needed in operating such an organization.

  19. Enabling department-scale supercomputing

    SciTech Connect

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  20. Naval Surface Warfare Center Dahlgren Division Technical Digest: Advanced materials technology

    NASA Astrophysics Data System (ADS)

    1993-09-01

    The Dahlgren Division conducts full-spectrum research, development, test and evaluation (RDT and E), and fleet support on advanced materials and materials processes for application to ordnance and weapon systems. We emphasize various core technologies such as advanced ceramics, warhead materials, electrochemistry, polymer science, acoustic materials, composites, magnetostrictive materials, semiconductor materials, thermal management materials, radiation sensor materials, energetic materials, biotechnology, surface science, and nondestructive evaluation. Spin-off technologies for dual use are also actively pursued. This issue of the Digest includes articles on engineered materials, energetic materials, and the characterization of materials.

  1. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
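
    As an illustration of the torus network described in the patent, the sketch below computes the six wrap-around nearest neighbors of a node in a 3-D torus; the dimensions are illustrative and not those of any particular machine:

    ```python
    from itertools import product

    DIMS = (8, 8, 8)   # illustrative torus dimensions, not a real machine's

    def torus_neighbors(coord, dims=DIMS):
        """Return the six nearest neighbors of `coord` in a 3-D torus (wrap-around links)."""
        neighbors = []
        for axis in range(3):
            for step in (-1, +1):
                n = list(coord)
                n[axis] = (n[axis] + step) % dims[axis]   # wrap at the torus edge
                neighbors.append(tuple(n))
        return neighbors

    print(torus_neighbors((0, 0, 7)))
    # Every node has exactly 6 links, so the total link count is 3 * Nx * Ny * Nz.
    total_links = 3 * DIMS[0] * DIMS[1] * DIMS[2]
    print(f"{len(list(product(*map(range, DIMS))))} nodes, {total_links} bidirectional links")
    ```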

  2. Supercomputing Sheds Light on the Dark Universe

    SciTech Connect

    Habib, Salman

    2012-11-15

    At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named a finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.

  3. Sandia's network for Supercomputer '96: Linking supercomputers in a wide area Asynchronous Transfer Mode (ATM) network

    SciTech Connect

    Pratt, T.J.; Martinez, L.G.; Vahle, M.O.

    1997-04-01

    The advanced networking department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past several years as a forum to demonstrate and focus communication and networking developments. At Supercomputing 96, for the first time, Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory combined their Supercomputing 96 activities within a single research booth under the ASCI banner. Sandia provided the network design and coordinated the networking activities within the booth. At Supercomputing 96, Sandia elected to demonstrate wide area network connected Massively Parallel Processors, to demonstrate the functionality and capability of Sandia's new edge architecture, to demonstrate inter-continental collaboration tools, and to demonstrate ATM video capabilities. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  4. Supercomputer debugging workshop 1991 proceedings

    SciTech Connect

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: Distributed debugging; use interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  5. Supercomputer debugging workshop 1991 proceedings

    SciTech Connect

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: Distributed debugging; use interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  6. Design applications for supercomputers

    NASA Technical Reports Server (NTRS)

    Studerus, C. J.

    1987-01-01

    The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed, and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to the truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes the solution of these complex flows more practical, permits the introduction of the codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming part of it. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.

  7. Advancing research and practice: the revised APA Division 30 definition of hypnosis.

    PubMed

    Elkins, Gary R; Barabasz, Arreed F; Council, James R; Spiegel, David

    2015-01-01

    This article describes the history, rationale, and guidelines for developing a new definition of hypnosis by the Society of Psychological Hypnosis, Division 30 of the American Psychological Association. The definition was developed with the aim of being concise, heuristic, and allowing for alternative theories of the mechanisms (to be determined in empirical scientific study). The definition of hypnosis is presented as well as definitions of the following related terms: hypnotic induction, hypnotizability, and hypnotherapy. The implications for advancing research and practice are discussed. The definitions are presented within the article.

  8. Advancing Research and Practice: The Revised APA Division 30 Definition of Hypnosis.

    PubMed

    Elkins, Gary R; Barabasz, Arreed F; Council, James R; Spiegel, David

    2015-04-01

    This article describes the history, rationale, and guidelines for developing a new definition of hypnosis by the Society of Psychological Hypnosis, Division 30 of the American Psychological Association. The definition was developed with the aim of being concise, being heuristic, and allowing for alternative theories of the mechanisms (to be determined in empirical scientific study). The definition of hypnosis is presented as well as definitions of the following related terms: hypnotic induction, hypnotizability, and hypnotherapy. The implications for advancing research and practice are discussed. The definitions are presented within the article.

  9. Storage needs in future supercomputer environments

    NASA Technical Reports Server (NTRS)

    Coleman, Sam

    1992-01-01

    The Lawrence Livermore National Laboratory (LLNL) is a Department of Energy contractor, managed by the University of California since 1952. Major projects at the Laboratory include the Strategic Defense Initiative, nuclear weapon design, magnetic and laser fusion, laser isotope separation, and weather modeling. The Laboratory employs about 8,000 people. There are two major computer centers: The Livermore Computer Center and the National Energy Research Supercomputer Center. As we increase the computing capacity of LLNL systems and develop new applications, the need for archival capacity will increase. Rather than quantify that increase, I will discuss the hardware and software architectures that we will need to support advanced applications.

  10. Storage needs in future supercomputer environments

    NASA Technical Reports Server (NTRS)

    Coleman, Sam

    1992-01-01

    The Lawrence Livermore National Laboratory (LLNL) is a Department of Energy contractor, managed by the University of California since 1952. Major projects at the Laboratory include the Strategic Defense Initiative, nuclear weapon design, magnetic and laser fusion, laser isotope separation, and weather modeling. The Laboratory employs about 8,000 people. There are two major computer centers: The Livermore Computer Center and the National Energy Research Supercomputer Center. As we increase the computing capacity of LLNL systems and develop new applications, the need for archival capacity will increase. Rather than quantify that increase, I will discuss the hardware and software architectures that we will need to support advanced applications.

  11. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing Modernization Program (HPCMP) and the NASA Advanced Supercomputing (NAS) Division, a study is conducted to assess the role of supercomputers in computational aeroelasticity of aerospace vehicles. The study is based mostly on responses to a web-based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  12. Microprocessors: from desktops to supercomputers.

    PubMed

    Baskett, F; Hennessy, J L

    1993-08-13

    Continuing improvements in integrated circuit technology and computer architecture have driven microprocessors to performance levels that rival those of supercomputers, at a fraction of the price. The use of sophisticated memory hierarchies enables microprocessor-based machines to have very large memories built from commodity dynamic random access memory while retaining the high bandwidth and low access time needed in a high-performance machine. Parallel processors composed of these high-performance microprocessors are becoming the supercomputing technology of choice for scientific and engineering applications. The challenges for these new supercomputers have been in developing multiprocessor architectures that are easy to program and that deliver high performance without extraordinary programming efforts by users. Recent progress in multiprocessor architecture has led to ways to meet these challenges.

  13. Mira: Argonne's 10-petaflops supercomputer

    SciTech Connect

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2013-07-03

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  14. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2016-07-12

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  15. AICD: Advanced Industrial Concepts Division Biological and Chemical Technologies Research Program

    NASA Astrophysics Data System (ADS)

    Petersen, G.; Bair, K.; Ross, J.

    1994-03-01

    The annual summary report presents the fiscal year (FY) 1993 research activities and accomplishments for the United States Department of Energy (DOE) Biological and Chemical Technologies Research (BCTR) Program of the Advanced Industrial Concepts Division (AICD). This AICD program resides within the Office of Industrial Technologies (OIT) of the Office of Energy Efficiency and Renewable Energy (EE). The annual summary report for 1993 (ASR 93) contains the following: A program description (including BCTR program mission statement, historical background, relevance, goals and objectives), program structure and organization, selected technical and programmatic highlights for 1993, detailed descriptions of individual projects, and a listing of program output including a bibliography of published work, patents, and awards arising from work supported by BCTR.

  16. AICD -- Advanced Industrial Concepts Division Biological and Chemical Technologies Research Program. 1993 Annual summary report

    SciTech Connect

    Petersen, G.; Bair, K.; Ross, J.

    1994-03-01

    The annual summary report presents the fiscal year (FY) 1993 research activities and accomplishments for the United States Department of Energy (DOE) Biological and Chemical Technologies Research (BCTR) Program of the Advanced Industrial Concepts Division (AICD). This AICD program resides within the Office of Industrial Technologies (OIT) of the Office of Energy Efficiency and Renewable Energy (EE). The annual summary report for 1993 (ASR 93) contains the following: A program description (including BCTR program mission statement, historical background, relevance, goals and objectives), program structure and organization, selected technical and programmatic highlights for 1993, detailed descriptions of individual projects, a listing of program output, including a bibliography of published work, patents, and awards arising from work supported by BCTR.

  17. Role of supercomputers in magnetic fusion and energy research programs

    SciTech Connect

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained.

  18. Improved Access to Supercomputers Boosts Chemical Applications.

    ERIC Educational Resources Information Center

    Borman, Stu

    1989-01-01

    Supercomputing is described in terms of computing power and abilities. The increase in availability of supercomputers for use in chemical calculations and modeling is reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)

  19. Tutorial: Parallel Simulation on Supercomputers

    SciTech Connect

    Perumalla, Kalyan S

    2012-01-01

    This tutorial introduces typical hardware and software characteristics of extant and emerging supercomputing platforms, and presents issues and solutions in executing large-scale parallel discrete event simulation scenarios on such high performance computing systems. Covered topics include synchronization, model organization, example applications, and observed performance from illustrative large-scale runs.

  20. Computational plasma physics and supercomputers

    SciTech Connect

    Killeen, J.; McNamara, B.

    1984-09-01

    The supercomputers of the 1980s are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architectures will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics.

  1. Super technology for tomorrow's supercomputers

    SciTech Connect

    Steiner, L.K.; Tate, D.P.

    1982-01-01

    In the past, it has been possible to achieve significant performance improvements in large computers simply by using newer, faster, or higher density components. However, as the rate of component improvement has slowed, we are being forced to rely on system architectural change to gain performance improvement. The authors examine the technologies required to design more parallel processing features into future supercomputers. 3 references.

  2. Recent advances in the discovery and development of antibacterial agents targeting the cell-division protein FtsZ.

    PubMed

    Haranahalli, Krupanandan; Tong, Simon; Ojima, Iwao

    2016-12-15

    With the emergence of multidrug-resistant bacterial strains, there is a dire need for new drug targets for antibacterial drug discovery and development. Filamentous temperature sensitive protein Z (FtsZ) is a GTP-dependent prokaryotic cell division protein, sharing less than 10% sequence identity with the eukaryotic cell division protein, tubulin. FtsZ forms a dynamic Z-ring in the middle of the cell, leading to septation and subsequent cell division. Inhibition of the Z-ring blocks cell division, thus making FtsZ a highly attractive target. Various groups have been working on natural products and synthetic small molecules as inhibitors of FtsZ. This review summarizes recent advances in the development of FtsZ inhibitors, focusing on those from the last 5 years but also including significant findings from earlier years. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Multi-petascale highly efficient parallel supercomputer

    DOEpatents

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power, and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources. The design enables adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
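
    To make the interconnect description above concrete, the following minimal Python sketch (not part of the patent text) computes the neighbors of a node in a five-dimensional torus with wrap-around links; the dimension extents used here are illustrative placeholders, not the machine's actual partition geometry.

        # Minimal sketch: neighbors of a node in a 5-D torus with wrap-around links.
        # The dimension extents below are hypothetical, for illustration only.
        DIMS = (4, 4, 4, 4, 2)

        def torus_neighbors(coord, dims=DIMS):
            """Return the 2 * len(dims) neighbors of `coord`, wrapping at the edges."""
            neighbors = []
            for axis in range(len(dims)):
                for step in (-1, +1):
                    nbr = list(coord)
                    nbr[axis] = (nbr[axis] + step) % dims[axis]  # wrap-around (torus) link
                    neighbors.append(tuple(nbr))
            return neighbors

        # Even a corner node such as (0, 0, 0, 0, 0) has ten neighbors because of wrap-around.
        print(torus_neighbors((0, 0, 0, 0, 0)))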

  4. Putting the super in supercomputers

    SciTech Connect

    Schulbach, C.

    1985-08-01

    Computers used for numerical simulations of physical phenomena, e.g., flowfields, meteorology, structural analysis, etc., replace physical experiments that are too expensive or impossible to perform. The problems considered continually become increasingly complex and thus demand faster processing times to do all necessary computations. The gains in computer speed from component technologies are leveling off, leaving new architectures and programming as the only currently viable means to increase speed. Parallel computations, whether in the form of array processors, assembly-line processing, or multiprocessors, are being explored using existing microprocessor technologies. Slower hardware configurations can also be made equivalent to faster supercomputers by economical programming. The availability of rudimentary parallel-architecture supercomputers for general industrial use is increasing. Scientific applications continue to drive the development of more sophisticated parallel machines.

  5. A Long History of Supercomputing

    SciTech Connect

    Grider, Gary

    2016-11-16

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  6. Seismic signal processing on heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in the recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient, however they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is design of a prototype for such library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that

  7. TOP500 Supercomputers for June 2004

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  8. TOP500 Supercomputers for June 2005

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  9. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY T3D. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.

  10. Building black holes: supercomputer cinema.

    PubMed

    Shapiro, S L; Teukolsky, S A

    1988-07-22

    A new computer code can solve Einstein's equations of general relativity for the dynamical evolution of a relativistic star cluster. The cluster may contain a large number of stars that move in a strong gravitational field at speeds approaching the speed of light. Unstable star clusters undergo catastrophic collapse to black holes. The collapse of an unstable cluster to a supermassive black hole at the center of a galaxy may explain the origin of quasars and active galactic nuclei. By means of a supercomputer simulation and color graphics, the whole process can be viewed in real time on a movie screen.

  11. A Long History of Supercomputing

    ScienceCinema

    Grider, Gary

    2016-11-30

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  12. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY T3D. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.

  13. NASA's Pleiades Supercomputer Crunches Data For Groundbreaking Analysis and Visualizations

    NASA Image and Video Library

    2016-11-23

    The Pleiades supercomputer at NASA's Ames Research Center, recently named the 13th fastest computer in the world, provides scientists and researchers high-fidelity numerical modeling of complex systems and processes. By using detailed analyses and visualizations of large-scale data, Pleiades is helping to advance human knowledge and technology, from designing the next generation of aircraft and spacecraft to understanding the Earth's climate and the mysteries of our galaxy.

  14. Misleading performance in the supercomputing field

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1992-01-01

    The problems of misleading performance reporting and the evident lack of careful refereeing in the supercomputing field are discussed in detail. Included are some examples that have appeared in recently published scientific papers. Some guidelines for reporting performance are presented, the adoption of which would raise the level of professionalism and reduce the level of confusion in the field of supercomputing.

  15. Supercomputer networking for space science applications

    NASA Technical Reports Server (NTRS)

    Edelson, B. I.

    1992-01-01

    The initial design of a supercomputer network topology including the design of the communications nodes along with the communications interface hardware and software is covered. Several space science applications that are proposed experiments by GSFC and JPL for a supercomputer network using the NASA ACTS satellite are also reported.

  16. 61 FR 41181 - Vector Supercomputers From Japan

    Federal Register 2010, 2011, 2012, 2013, 2014

    1996-08-07

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Vector Supercomputers From Japan AGENCY: United States International Trade Commission. ACTION..., by reason of imports from Japan of vector supercomputers that are alleged to be sold in the...

  17. Low Cost Supercomputer for Applications in Physics

    NASA Astrophysics Data System (ADS)

    Ahmed, Maqsood; Ahmed, Rashid; Saeed, M. Alam; Rashid, Haris; Fazal-e-Aleem

    2007-02-01

    Using parallel processing techniques and commodity hardware, Beowulf supercomputers can be built at a much lower cost. Research organizations and educational institutions are using this technique to build their own high-performance clusters. In this paper we discuss the architecture and design of a Beowulf supercomputer and our own experience of building the BURRAQ cluster.
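
    To illustrate the message-passing style of programming used on Beowulf-class clusters, here is a minimal Python sketch using the mpi4py package; it is not taken from the BURRAQ work, and the process count and launcher invocation are assumptions.

        # Minimal message-passing sketch for a commodity cluster.
        # Run with an MPI launcher, e.g.: mpiexec -n 4 python hello_mpi.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()   # this process's id within the communicator
        size = comm.Get_size()   # total number of processes in the job

        # Each rank computes a partial sum; rank 0 gathers and reduces the results.
        local = sum(range(rank, 1000, size))
        total = comm.reduce(local, op=MPI.SUM, root=0)

        if rank == 0:
            print(f"{size} processes computed sum(range(1000)) = {total}")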

  18. Supercomputer chemistry at the University of Minnesota

    SciTech Connect

    Almolof, J.; Truhlar, D.G.; Davis, H.T.; Jensen, K.F.; Tirrell, M.; Lybrand, T.

    1988-01-01

    The Minnesota Supercomputer Institute (MSI) is a multidisciplinary research program of the University of Minnesota. Supercomputer research at MSI is carried out using the three supercomputers at the Minnesota Supercomputer Center, which is a short walk or campus-bus ride from the heart of the Minneapolis campus. The supercomputers include a two-pipe, 8-megaword Control Data Corporation CYBER 205 with a VSOS virtual memory operating system, a one-processor, 16-megaword CRAY-2 with UNICOS UNIX operating system, and a four-processor, 256-megaword CRAY-2 also running UNICOS. The authors describe here a selection of the research projects carried out using these machines by researchers from the Departments of Chemistry, Chemical Engineering and Materials Science, and Medicinal Chemistry.

  19. Radioactive waste shipments to Hanford retrievable storage from Westinghouse Advanced Reactors and Nuclear Fuels Divisions, Cheswick, Pennsylvania

    SciTech Connect

    Duncan, D.; Pottmeyer, J.A.; Weyns, M.I.; Dicenso, K.D.; DeLorenzo, D.S.

    1994-04-01

    During the next two decades the transuranic (TRU) waste now stored in the burial trenches and storage facilities at the Hanford Site in southeastern Washington State is to be retrieved, processed at the Waste Receiving and Processing Facility, and shipped to the Waste Isolation Pilot Plant (WIPP), near Carlsbad, New Mexico, for final disposal. Approximately 5.7 percent of the TRU waste to be retrieved for shipment to WIPP was generated by the decontamination and decommissioning (D&D) of the Westinghouse Advanced Reactors Division (WARD) and the Westinghouse Nuclear Fuels Division (WNFD) in Cheswick, Pennsylvania, and shipped to the Hanford Site for storage. This report characterizes these radioactive solid wastes using process knowledge, existing records, and oral history interviews.

  20. What does Titan tell us about preparing for exascale supercomputers?

    NASA Astrophysics Data System (ADS)

    Wells, Jack

    2014-04-01

    Significant advances in computational astrophysics have occurred over the past half-decade with the appearance of supercomputers with petascale performance capabilities, and beyond. Significant technology developments are also occurring beyond traditional CPU-based architectures in response to their growing energy requirements, including graphics processing units (GPUs), Cell processors, and other highly parallel, many-core processor technologies. There have been significant efforts to exploit these resources in the computational astrophysics research community. This talk will focus on recent results from breakthrough astrophysics simulations made possible by modern application software and leadership-class compute and data resources, give prospects and opportunities for today's petascale problems, and highlight the computational needs of astrophysics problems requiring an order of magnitude greater compute and data capability. We will focus on the early outcomes from the Department of Energy's Titan supercomputer managed by the Oak Ridge Leadership Computing Facility. Titan's Cray XK7 architecture has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. With its hybrid, accelerated architecture, Titan allows advanced scientific applications to reach speeds exceeding 20 petaflops with a marginal increase in electrical power demand over the previous generation leadership-class supercomputer.

  1. Spack: the Supercomputing Package Manager

    SciTech Connect

    Gamblin, T.

    2013-11-09

    The HPC software ecosystem is growing larger and more complex, but software distribution mechanisms have not kept up with this trend. Tools, libraries, and applications need to run on multiple platforms and build with multiple compilers. Increasingly, packages leverage common software components, and building any one component requires building all of its dependencies. In HPC environments, ABI-incompatible interfaces (like MPI), binary-incompatible compilers, and cross-compiled environments converge to make the build process a combinatoric nightmare. This obstacle deters many users from adopting useful tools, and others waste countless hours building and rebuilding tools. Many package managers exist to solve these problems for typical desktop environments, but none suits the unique needs of supercomputing facilities or users. To address these problems, we have built Spack, a package manager that eases the task of managing software for end users, across multiple platforms, package versions, compilers, and ABI incompatibilities.

  2. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
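
    For readers unfamiliar with the methods surveyed, the following NumPy sketch (not from the report) shows the basic unpreconditioned conjugate gradient iteration whose vector and parallel implementations the survey discusses; the tridiagonal test matrix is an arbitrary symmetric positive definite example.

        # Minimal unpreconditioned conjugate gradient sketch for A x = b,
        # with A symmetric positive definite. Illustrative only.
        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            x = np.zeros_like(b)
            r = b - A @ x              # residual
            p = r.copy()               # search direction
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p   # new conjugate search direction
                rs_old = rs_new
            return x

        # Small SPD test problem: a 1-D Laplacian-like tridiagonal matrix.
        n = 100
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = conjugate_gradient(A, b)
        print("residual norm:", np.linalg.norm(b - A @ x))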

  3. Supercomputing 2002: NAS Demo Abstracts

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor)

    2002-01-01

    The hyperwall is a new concept in visual supercomputing, conceived and developed by the NAS Exploratory Computing Group. The hyperwall will allow simultaneous and coordinated visualization and interaction of an array of processes, such as the computations of a parameter study or the parallel evolutions of a genetic algorithm population. Making over 65 million pixels available to the user, the hyperwall will enable and elicit qualitatively new ways of leveraging computers to accomplish science. It is currently still unclear whether we will be able to transport the hyperwall to SC02. The crucial display frame still has not been completed by the metal fabrication shop, although they promised an August delivery. Also, we are still working on the fragile node issue, which may require transplantation of the compute nodes from the present 2U cases into 3U cases. This modification will increase the present 3-rack configuration to 5 racks.

  4. TOP500 Supercomputers for November 2003

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  5. Computational models and resource allocation for supercomputers

    NASA Technical Reports Server (NTRS)

    Mauney, Jon; Agrawal, Dharma P.; Harcourt, Edwin A.; Choe, Young K.; Kim, Sukil

    1989-01-01

    There are several different architectures used in supercomputers, with differing computational models. These different models present a variety of resource allocation problems that must be solved. The computational needs of a program must be cast in terms of the computational model supported by the supercomputer, and this must be done in a way that makes effective use of the machine's resources. This is the resource allocation problem. The computational models of available supercomputers and the associated resource allocation techniques are surveyed. It is shown that many problems and solutions appear repeatedly in very different computing environments. Some case studies are presented, showing concrete computational models and the allocation strategies used.

  6. A guidebook to Fortran on supercomputers

    SciTech Connect

    Levesque, J.M.; Williamson, J.W.

    1988-01-01

    This book explains in detail both the underlying architecture of today's supercomputers and the manner by which a compiler maps Fortran code onto that architecture. Most important, the constructs preventing full optimization are outlined, and specific strategies for restructuring a program are provided. Based on the authors' actual experience restructuring existing programs for particular supercomputers, the book generally follows the format of a series of supercomputer seminars that they regularly present on a worldwide basis. All examples are explained with actual Fortran code; no mathematical abstractions such as dataflow graphs are used.
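
    The flavor of the restructuring the book teaches for Fortran can be shown by analogy in Python/NumPy (this example is not from the book): a loop whose iterations are independent is rewritten as a single whole-array operation, the same transformation a vectorizing compiler needs to be able to make.

        # Analogy only (Python/NumPy rather than Fortran): restructuring an
        # element-by-element loop into a single vectorized array operation.
        import numpy as np

        a = np.random.rand(200_000)
        b = np.random.rand(200_000)

        # Loop form: iterations are independent but expressed one element at a time.
        c_loop = np.empty_like(a)
        for i in range(a.size):
            c_loop[i] = 2.0 * a[i] + b[i]

        # Restructured form: the operation is expressed on whole arrays at once,
        # exposing the data parallelism directly.
        c_vec = 2.0 * a + b

        assert np.allclose(c_loop, c_vec)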

  7. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    SciTech Connect

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications on leadership-class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.

  8. Proceedings of the first energy research power supercomputer users symposium

    SciTech Connect

    Not Available

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers, and now high-performance parallel computers, over the last year; this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, goes beyond merely speeding up computations. Today the work often directly contributes to the advancement of conceptual developments in their fields, and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  9. The Pawsey Supercomputer geothermal cooling project

    NASA Astrophysics Data System (ADS)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  10. Supercomputers open window of opportunity for nursing.

    PubMed

    Meintz, S L

    1993-01-01

    A window of opportunity was opened for nurse researchers with the High Performance Computing and Communications (HPCC) initiative in President Bush's 1992 fiscal-year budget. Nursing research moved into the high-performance computing environment through the University of Nevada Las Vegas/Cray Project for Nursing and Health Data Research (PNHDR). Using the CRAY YMP 2/216 supercomputer, the PNHDR established the validity of a supercomputer platform for nursing research. In addition, the research has identified a paradigm shift in statistical analysis, delineated actual and potential barriers to nursing research in a supercomputing environment, conceptualized a new branch of nursing science called Nurmetrics, and discovered new avenues for nursing research utilizing supercomputing tools.

  11. Supercomputing activities at the SSC Laboratory

    SciTech Connect

    Yan, Y.; Bourianoff, G.

    1991-09-01

    Supercomputers are used to simulate and track particle motion around the collider rings and the associated energy boosters of the Superconducting Super Collider (SSC). These numerical studies will aid in determining the best design for the SSC.

  12. Graphics Flip Cube for Supercomputing 1998

    NASA Technical Reports Server (NTRS)

    Gong, Chris; Reid, Lisa (Technical Monitor)

    1998-01-01

    Flip cube (constructed of heavy plastic) displays 11 graphics representing current projects or demos from 5 NASA centers participating in Supercomputing '98 (SC98). Included with the images are the URLs and names of the NASA centers.

  13. Funding boost for Santos Dumont supercomputer

    NASA Astrophysics Data System (ADS)

    Leite Vieira, Cássio

    2016-09-01

    The fastest supercomputer in Latin America returned to full usage last month following two months of minimal operations after Gilberto Kassab, Brazil's new science minister, agreed to plug a R4.6m (1.5m) funding gap.

  14. Multiple crossbar network: Integrated supercomputing framework

    SciTech Connect

    Hoebelheinrich, R. )

    1989-01-01

    At Los Alamos National Laboratory, site of one of the world's most powerful scientific supercomputing facilities, a prototype network for an environment that links supercomputers and workstations is being developed. Driven by a need to provide graphics data at movie rates across a network from a Cray supercomputer to a Sun scientific workstation, the network is called the Multiple Crossbar Network (MCN). It is intended to be a coarsely grained, loosely coupled, general-purpose interconnection network that will vastly increase the speed at which supercomputers communicate with each other in large networks. The components of the network are described, as well as work done in collaboration with vendors who are interested in providing commercial products. 9 refs.

  15. Computational models and resource allocation for supercomputers

    SciTech Connect

    Mauney, J.; Harcourt, E.A.; Agrawal, D.P. (Dept. of Electrical and Computer Engineering); Choe, Y.K.; Kim, S.; Staats, W.J.

    1989-12-01

    Supercomputers are capable of providing tremendous computational power, but must be carefully programmed to take advantage of that power. There are several different architectures used in supercomputers, with differing computational models. These different models present a variety of resource allocation problems that must be solved. The computational needs of a program must be cast in terms of the computational model supported by the supercomputer, and this must be done in a way that makes effective use of the machine's resources. This is the resource allocation problem. The computational models of available supercomputers and the associated resource allocation techniques are surveyed. It is shown that many problems and solutions appear repeatedly in very different computing environments.

  16. Introducing Mira, Argonne's Next-Generation Supercomputer

    SciTech Connect

    2013-03-19

    Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.

  17. Simulating performance sensitivity of supercomputer job parameters.

    SciTech Connect

    Clearwater, Scott Harvey; Kleban, Stephen David

    2003-03-01

    We report on the use of a supercomputer simulation to study the performance sensitivity to systematic changes in the job parameters of run time, number of CPUs, and interarrival time. We also examine the effect of changes in share allocation and service ratio for job prioritization under a fair-share queuing algorithm to see the effect on facility figures of merit. We used log data from the ASCI supercomputer Blue Mountain and the ASCI simulator BIRMinator to perform this study. The key finding is that the performance of the supercomputer is quite sensitive to all of the job parameters: the interarrival rate of jobs has the greatest effect, particularly at the highest rates, while increasing run time is the least sensitive parameter with respect to utilization and rapid turnaround. We also find that this facility is running near its maximum practical utilization. Finally, we show the importance of the use of simulation in understanding the performance sensitivity of a supercomputer.
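
    As a rough illustration of the kind of study described (this sketch is not BIRMinator, and the machine size and workload distributions below are invented), a small first-come-first-served event-driven model shows how utilization and mean wait respond to the job interarrival time.

        # Minimal FCFS job-scheduling simulation sketch; parameters are invented.
        import heapq, random

        def simulate(total_cpus=1000, n_jobs=2000, mean_interarrival=600.0,
                     mean_runtime=3600.0, max_job_cpus=256, seed=1):
            random.seed(seed)
            t, jobs = 0.0, []
            for _ in range(n_jobs):
                t += random.expovariate(1.0 / mean_interarrival)
                jobs.append((t, random.randint(1, max_job_cpus),
                             random.expovariate(1.0 / mean_runtime)))
            free, running, queue = total_cpus, [], []   # running: heap of (finish, cpus)
            waits, busy_cpu_time, clock, i = [], 0.0, 0.0, 0
            while i < len(jobs) or queue or running:
                next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
                next_finish = running[0][0] if running else float("inf")
                if next_arrival <= next_finish:        # next event is an arrival
                    clock = next_arrival
                    queue.append(jobs[i])
                    i += 1
                else:                                  # next event is a completion
                    clock, cpus = heapq.heappop(running)
                    free += cpus
                while queue and queue[0][1] <= free:   # start jobs in strict FCFS order
                    arrival, cpus, runtime = queue.pop(0)
                    waits.append(clock - arrival)
                    free -= cpus
                    busy_cpu_time += cpus * runtime
                    heapq.heappush(running, (clock + runtime, cpus))
            return busy_cpu_time / (total_cpus * clock), sum(waits) / len(waits)

        for interarrival in (300.0, 600.0, 1200.0):
            util, wait = simulate(mean_interarrival=interarrival)
            print(f"interarrival {interarrival:6.0f} s  utilization {util:.2f}  mean wait {wait / 3600:.2f} h")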

  18. Misleading Performance Reporting in the Supercomputing Field

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Kutler, Paul (Technical Monitor)

    1992-01-01

    In a previous humorous note, I outlined twelve ways in which performance figures for scientific supercomputers can be distorted. In this paper, the problem of potentially misleading performance reporting is discussed in detail. Included are some examples that have appeared in recent published scientific papers. This paper also includes some proposed guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion in the field of supercomputing.

  19. TOP500 Supercomputers for November 2004

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  20. Simulating Galactic Winds on Supercomputers

    NASA Astrophysics Data System (ADS)

    Schneider, Evan

    2017-01-01

    Galactic winds are a ubiquitous feature of rapidly star-forming galaxies. Observations of nearby galaxies have shown that winds are complex, multiphase phenomena, comprised of outflowing gas at a large range of densities, temperatures, and velocities. Describing how starburst-driven outflows originate, evolve, and affect the circumgalactic medium and gas supply of galaxies is an important challenge for theories of galaxy evolution. In this talk, I will discuss how we are using a new hydrodynamics code, Cholla, to improve our understanding of galactic winds. Cholla is a massively parallel, GPU-based code that takes advantage of specialized hardware on the newest generation of supercomputers. With Cholla, we can perform large, three-dimensional simulations of multiphase outflows, allowing us to track the coupling of mass and momentum between gas phases across hundreds of parsecs at sub-parsec resolution. The results of our recent simulations demonstrate that the evolution of cool gas in galactic winds is highly dependent on the initial structure of embedded clouds. In particular, we find that turbulent density structures lead to more efficient mass transfer from cool to hot phases of the wind. I will discuss the implications of our results both for the incorporation of winds into cosmological simulations, and for interpretations of observed multiphase winds and the circumgalactic medium of nearby galaxies.

  1. Taking ASCI supercomputing to the end game.

    SciTech Connect

    DeBenedictis, Erik P.

    2004-03-01

    The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the exaflops to zettaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancement to microprocessor functionality and the power-efficiency of both the processor and memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate space computing; reversible logic, analog computers, and other approaches to stockpile stewardship are outside the scope of this report.

  2. Academic and Career Advancement for Black Male Athletes at NCAA Division I Institutions

    ERIC Educational Resources Information Center

    Baker, Ashley R.; Hawkins, Billy J.

    2016-01-01

    This chapter examines the structural arrangements and challenges many Black male athletes encounter as a result of their use of sport for upward social mobility. Recommendations to enhance their preparation and advancement are provided.

  3. Academic and Career Advancement for Black Male Athletes at NCAA Division I Institutions

    ERIC Educational Resources Information Center

    Baker, Ashley R.; Hawkins, Billy J.

    2016-01-01

    This chapter examines the structural arrangements and challenges many Black male athletes encounter as a result of their use of sport for upward social mobility. Recommendations to enhance their preparation and advancement are provided.

  4. A modified orthodontic protocol for advanced periodontal disease in Class II division 1 malocclusion.

    PubMed

    Janson, Marcos; Janson, Guilherme; Murillo-Goizueta, Oscar Edwin Francisco

    2011-04-01

    An interdisciplinary approach is often the best option for achieving a predictable outcome for an adult patient with complex clinical problems. This case report demonstrates the combined periodontal/orthodontic treatment for a 49-year-old woman presenting with a Class II Division 1 malocclusion with moderate maxillary anterior crowding, a 9-mm overjet, and moderate to severe bone loss as the main characteristics of the periodontal disease. The orthodontic treatment included 2 maxillary first premolar extractions through forced extrusion. Active orthodontic treatment was completed in 30 months. The treatment outcomes, including the periodontal condition, were stable 17 months after active orthodontic treatment. The advantages of this interdisciplinary approach are discussed. Periodontally compromised orthodontic patients can be satisfactorily treated, achieving most of the conventional orthodontic goals, if a combined orthodontic/periodontic approach is used.

  5. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    SciTech Connect

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
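
    The performance-power claim is easy to check with a short calculation; the 14 Gflops and 185 W figures are quoted from the abstract, while the reference SMP figure below is a hypothetical placeholder used only to illustrate how the percentage comparison works.

        # Performance-per-watt arithmetic for the quoted figures.
        box_gflops = 14.0            # Linpack performance of the 12-node desktop box (from the abstract)
        box_watts = 185.0            # power draw at load (from the abstract)
        box_mflops_per_watt = box_gflops * 1000.0 / box_watts
        print(f"desktop box: {box_mflops_per_watt:.1f} Mflops/W")        # roughly 76 Mflops/W

        ref_mflops_per_watt = 18.0   # hypothetical reference SMP value, NOT from the abstract
        improvement = (box_mflops_per_watt / ref_mflops_per_watt - 1.0) * 100.0
        print(f"improvement over the assumed reference: {improvement:.0f}%")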

  6. Use of Convex supercomputers for flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1992-01-01

    The use of the Convex Computer Corporation supercomputers for flight simulation is discussed, focusing on a real-time input/output system for supporting the flight simulation. The flight simulation computing system is based on two single-processor Control Data Corporation CYBER 175 computers, coupled through extended memory. The Advanced Real-Time Simulation System for digital data distribution and signal conversion is a state-of-the-art, high-speed, fiber-optic-based ring network system based on computer automated measurement and control technology.

  7. Using Supercomputers to Model and Design Novel Materials

    NASA Astrophysics Data System (ADS)

    Richardson, Steven L.

    1997-03-01

    Recent advances in computational materials science and theoretical condensed matter physics, coupled with the power and speed of modern supercomputers, have enabled scientists and engineers to design and study novel materials from a first-principles viewpoint. Such calculations have become increasingly useful in their ability to explain experimental properties of materials, as well as to predict novel behavior. As an example of such methods, we will discuss some of our recent calculations on the structural and electronic properties of the energetic material, solid cubane.(S. L. Richardson and J. L. Martins, submitted for publication.)

  8. 11th Annual NIH Pain Consortium Symposium on Advances in Pain Research | Division of Cancer Prevention

    Cancer.gov

    The NIH Pain Consortium will convene the 11th Annual NIH Pain Consortium Symposium on Advances in Pain Research, featuring keynote speakers and expert panel sessions on Innovative Models and Methods. The first keynote address will be delivered by David J. Clark, MD, PhD, Stanford University entitled “Challenges of Translational Pain Research: What Makes a Good Model?”

  9. Palliative Care Improves Survival, Quality of Life in Advanced Lung Cancer | Division of Cancer Prevention

    Cancer.gov

    Results from the first randomized clinical trial of its kind have revealed a surprising and welcome benefit of early palliative care for patients with advanced lung cancer—longer median survival. Although several researchers said that the finding needs to be confirmed in other trials of patients with other cancer types, they were cautiously optimistic that the trial results could influence oncologists’ perceptions and use of palliative care.

  10. Input/output behavior of supercomputing applications

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
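
    A stripped-down version of the kind of buffering simulation described (not the study's actual simulator; the trace statistics and buffer size are invented) can be sketched as follows: a mostly sequential stream of read requests is served from a fixed-size read-ahead window, and only buffer misses touch the storage device.

        # Minimal read-ahead buffering sketch driven by a synthetic read trace.
        import random

        def simulate_read_ahead(n_requests=100_000, read_ahead_blocks=64,
                                p_sequential=0.95, seed=0):
            random.seed(seed)
            buffered = set()                    # blocks currently held in the read-ahead buffer
            next_block, hits, device_reads = 0, 0, 0
            for _ in range(n_requests):
                if random.random() < p_sequential:
                    block = next_block          # continue the sequential stream
                else:
                    block = random.randrange(10_000_000)   # occasional seek to a new location
                next_block = block + 1
                if block in buffered:
                    hits += 1
                else:
                    # miss: fetch this block plus the following blocks in one device access
                    device_reads += 1
                    buffered = set(range(block, block + read_ahead_blocks))
            return hits / n_requests, device_reads

        hit_rate, device_reads = simulate_read_ahead()
        print(f"buffer hit rate: {hit_rate:.3f}, device accesses: {device_reads}")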

  11. TOP500 Supercomputers for November 2002

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-11-15

    20th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; & BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer installed earlier this year at the Earth Simulator Center in Yokohama, Japan, retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second). The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems were built by Hewlett-Packard and are based on the AlphaServer SC computer system.

  12. TOP500 Supercomputers for June 2003

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; & BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  13. Characterizing output bottlenecks in a supercomputer

    SciTech Connect

    Xie, Bing; Chase, Jeffrey; Dillow, David A; Drokin, Oleg; Klasky, Scott A; Oral, H Sarp; Podhorszki, Norbert

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
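
    The sketch below illustrates, with synthetic placeholder numbers rather than Jaguar measurements, the kind of sampling the paper describes: draw many bandwidth samples across node/target/time combinations and look at the resulting distribution, since a low tail (stragglers) is what limits coupled, striped output.

```python
# Estimate the distribution of delivered write bandwidth from repeated
# (compute node, storage target, time interval) samples on a shared system.
# All numbers here are synthetic placeholders, not measurements.
import random
import statistics

random.seed(0)

def sample_bandwidth():
    """One synthetic bandwidth sample (MB/s) for a node/target/interval triple."""
    base = random.gauss(300, 60)          # nominal per-target bandwidth
    contention = random.random() < 0.2    # occasional competing traffic
    return max(10.0, base * (0.3 if contention else 1.0))

samples = sorted(sample_bandwidth() for _ in range(10_000))
pct = lambda p: samples[int(p / 100 * (len(samples) - 1))]

print(f"median  : {statistics.median(samples):7.1f} MB/s")
print(f"10th pct: {pct(10):7.1f} MB/s  (stragglers limit coupled, striped output)")
print(f"90th pct: {pct(90):7.1f} MB/s")
```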

  14. TOP500 Supercomputers for June 2002

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; & BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  15. Advanced Spatial-Division Multiplexed Measurement Systems Propositions—From Telecommunication to Sensing Applications: A Review

    PubMed Central

    Weng, Yi; Ip, Ezra; Pan, Zhongqi; Wang, Ting

    2016-01-01

    The concepts of spatial-division multiplexing (SDM) technology were first proposed in the telecommunications industry as an indispensable solution to reduce the cost-per-bit of optical fiber transmission. Recently, such spatial channels and modes have been applied in optical sensing applications where the returned echo is analyzed for the collection of essential environmental information. The key advantages of implementing SDM techniques in optical measurement systems include the multi-parameter discriminative capability and accuracy improvement. In this paper, to help readers without a telecommunication background better understand how the SDM-based sensing systems can be incorporated, the crucial components of SDM techniques, such as laser beam shaping, mode generation and conversion, multimode or multicore elements using special fibers and multiplexers are introduced, along with the recent developments in SDM amplifiers, opto-electronic sources and detection units of sensing systems. The examples of SDM-based sensing systems not only include Brillouin optical time-domain reflectometry or Brillouin optical time-domain analysis (BOTDR/BOTDA) using few-mode fibers (FMF) and the multicore fiber (MCF) based integrated fiber Bragg grating (FBG) sensors, but also involve the widely used components with their whole information used in the full multimode constructions, such as the whispering gallery modes for fiber profiling and chemical species measurements, the screw/twisted modes for examining water quality, as well as the optical beam shaping to improve cantilever deflection measurements. Besides, the various applications of SDM sensors, the cost efficiency issue, as well as how these complex mode multiplexing techniques might improve the standard fiber-optic sensor approaches using single-mode fibers (SMF) and photonic crystal fibers (PCF) have also been summarized. Finally, we conclude with a prospective outlook for the opportunities and challenges of SDM

  16. Advanced Spatial-Division Multiplexed Measurement Systems Propositions-From Telecommunication to Sensing Applications: A Review.

    PubMed

    Weng, Yi; Ip, Ezra; Pan, Zhongqi; Wang, Ting

    2016-08-30

    The concepts of spatial-division multiplexing (SDM) technology were first proposed in the telecommunications industry as an indispensable solution to reduce the cost-per-bit of optical fiber transmission. Recently, such spatial channels and modes have been applied in optical sensing applications where the returned echo is analyzed for the collection of essential environmental information. The key advantages of implementing SDM techniques in optical measurement systems include the multi-parameter discriminative capability and accuracy improvement. In this paper, to help readers without a telecommunication background better understand how the SDM-based sensing systems can be incorporated, the crucial components of SDM techniques, such as laser beam shaping, mode generation and conversion, multimode or multicore elements using special fibers and multiplexers are introduced, along with the recent developments in SDM amplifiers, opto-electronic sources and detection units of sensing systems. The examples of SDM-based sensing systems not only include Brillouin optical time-domain reflectometry or Brillouin optical time-domain analysis (BOTDR/BOTDA) using few-mode fibers (FMF) and the multicore fiber (MCF) based integrated fiber Bragg grating (FBG) sensors, but also involve the widely used components with their whole information used in the full multimode constructions, such as the whispering gallery modes for fiber profiling and chemical species measurements, the screw/twisted modes for examining water quality, as well as the optical beam shaping to improve cantilever deflection measurements. Besides, the various applications of SDM sensors, the cost efficiency issue, as well as how these complex mode multiplexing techniques might improve the standard fiber-optic sensor approaches using single-mode fibers (SMF) and photonic crystal fibers (PCF) have also been summarized. Finally, we conclude with a prospective outlook for the opportunities and challenges of SDM

  17. Advanced time and wavelength division multiplexing for metropolitan area optical data communication networks

    NASA Astrophysics Data System (ADS)

    Watford, M.; DeCusatis, C.

    2005-09-01

    With the advent of new regulations governing the protection and recovery of sensitive business data, including the Sarbanes-Oxley Act, there has been a renewed interest in business continuity and disaster recovery applications for metropolitan area networks. Specifically, there has been a need for more efficient bandwidth utilization and lower cost per channel to facilitate mirroring of multi-terabit data bases. These applications have further blurred the boundary between metropolitan and wide area networks, with synchronous disaster recovery applications running up to 100 km and asynchronous solutions extending to 300 km or more. In this paper, we discuss recent enhancements in the Nortel Optical Metro 5200 Dense Wavelength Division Multiplexing (DWDM) platform, including features recently qualified for data communication applications such as Metro Mirror, Global Mirror, and Geographically Distributed Parallel Sysplex (GDPS). Using a 10 Gigabit/second (Gbit/s) backbone, this solution transports significantly more Fibre Channel protocol traffic with up to five times greater hardware density in the same physical package. This is also among the first platforms to utilize forward error correction (FEC) on the aggregate signals to improve bit error rate (BER) performance beyond industry standards. When combined with encapsulation into wide area network protocols, the use of FEC can compensate for impairments in BER across a service provider infrastructure without impacting application level performance. Design and implementation of these features will be discussed, including results from experimental test beds which validate these solutions for a number of applications. Future extensions of this environment will also be considered, including ways to provide configurable bandwidth on demand, mitigate Fibre Channel buffer credit management issues, and support for other GDPS protocols.

  18. Intelligent supercomputers: the Japanese computer sputnik

    SciTech Connect

    Walter, G.

    1983-11-01

    Japan's government-supported fifth-generation computer project has had a pronounced effect on the American computer and information systems industry. The US firms are intensifying their research on and production of intelligent supercomputers, a combination of computer architecture and artificial intelligence software programs. While the present generation of computers is built for the processing of numbers, the new supercomputers will be designed specifically for the solution of symbolic problems and the use of artificial intelligence software. This article discusses new and exciting developments that will increase computer capabilities in the 1990s. 4 references.

  19. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7

  20. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  1. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  2. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  3. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  4. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  5. Optimization of Supercomputer Use on EADS II System

    NASA Technical Reports Server (NTRS)

    Ahmed, Ardsher

    1998-01-01

    The main objective of this research was to optimize supercomputer use to achieve better throughput and utilization of supercomputers and to help facilitate the movement of non-supercomputing (inappropriate for supercomputer) codes to mid-range systems for better use of Government resources at Marshall Space Flight Center (MSFC). This work involved the survey of architectures available on EADS II and monitoring customer (user) applications running on a CRAY T90 system.

  6. Supercomputer Issues from a University Perspective.

    ERIC Educational Resources Information Center

    Beering, Steven C.

    1984-01-01

    Discusses issues related to the access of and training of university researchers in using supercomputers, considering National Science Foundation's (NSF) role in this area, microcomputers on campuses, and the limited use of existing telecommunication networks. Includes examples of potential scientific projects (by subject area) utilizing…

  7. Library Services in a Supercomputer Center.

    ERIC Educational Resources Information Center

    Layman, Mary

    1991-01-01

    Describes library services that are offered at the San Diego Supercomputer Center (SDSC), which is located at the University of California at San Diego. Topics discussed include the user population; online searching; microcomputer use; electronic networks; current awareness programs; library catalogs; and the slide collection. A sidebar outlines…

  8. NSF Establishes First Four National Supercomputer Centers.

    ERIC Educational Resources Information Center

    Lepkowski, Wil

    1985-01-01

    The National Science Foundation (NSF) has awarded support for supercomputer centers at Cornell University, Princeton University, University of California (San Diego), and University of Illinois. These centers are to be the nucleus of a national academic network for use by scientists and engineers throughout the United States. (DH)

  9. Very Large Least Squares Problems and Supercomputers,

    DTIC Science & Technology

    1984-12-31

    [Fragmentary record text consisting of extracted contributor and reference entries, including: Purdue University; Ahmed Sameh (supercomputers), University of Illinois; Abad-Zapatero, C., Abdel-Meguid, S.S., Johnson, J.E., ..., pp. 784-811; Sameh, A., "Solving the linear least squares problem on a linear array of processors," in: Purdue Workshop on Algorithmically...]

  10. Virtual supercomputing on Macintosh desktop computers

    SciTech Connect

    Krovchuck, K.

    1996-05-01

    Many computing problems of today require supercomputer performance, but do not justify the costs needed to run such applications on supercomputers. In order to fill this need, networks of high-end workstations are often linked together to act as a single virtual parallel supercomputer. This project attempts to develop software that will allow less expensive 'desktop' computers to emulate a parallel supercomputer. To demonstrate the viability of the software, it is being integrated with POV, a ray-tracing package that is both computationally expensive and easily modified for parallel systems. The software was developed using the Metrowerks CodeWarrior Version 6.0 compiler on a Power Macintosh 7500 computer. The software is designed to run on a cluster of Power Macs running System 7.1 or greater on an Ethernet network. Currently, because of limitations of both the operating system and the Metrowerks compiler, the software is forced to make use of slower, high-level communication interfaces. Both the operating system and the compiler software are under revision, however, and these revisions will increase the performance of the system as a whole.
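
    Because each pixel of a ray-traced image can be computed independently, the work splits naturally across machines, which is why POV-style rendering is a common target for such clusters. The sketch below is not the Macintosh software described above; it only illustrates the row-parallel pattern with a process pool, using a trivially shaded sphere as a stand-in for real ray tracing.

```python
# Row-parallel rendering sketch: every scanline is independent, so rows can be
# farmed out to separate workers. The "shading" here is a trivial stand-in for
# real ray tracing; sizes and the scene are placeholders.
from concurrent.futures import ProcessPoolExecutor
import math

WIDTH, HEIGHT = 320, 240

def shade_row(y):
    """Compute one scanline of brightness values for a unit sphere at the origin."""
    row = []
    for x in range(WIDTH):
        u = 2 * x / WIDTH - 1          # map pixel to [-1, 1] x [-1, 1]
        v = 2 * y / HEIGHT - 1
        d2 = u * u + v * v
        row.append(math.sqrt(1 - d2) if d2 < 1 else 0.0)   # facing-ratio shading
    return y, row

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        rows = dict(pool.map(shade_row, range(HEIGHT)))    # rows rendered in parallel
    image = [rows[y] for y in range(HEIGHT)]
    print(f"rendered {len(image)}x{len(image[0])} image")
```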

  11. Roadrunner Supercomputer Breaks the Petaflop Barrier

    ScienceCinema

    Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin

    2016-07-12

    At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. The sustained performance makes Roadrunner more than twice as fast as the current number 1

  12. Computational plasma physics and supercomputers. Revision 1

    SciTech Connect

    Killeen, J.; McNamara, B.

    1985-01-01

    The supercomputers of the 1980s are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models.

  13. Roadrunner Supercomputer Breaks the Petaflop Barrier

    SciTech Connect

    Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin

    2008-06-09

    At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. The sustained performance makes Roadrunner more than twice as fast as the current number 1

  14. Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide

    DOE PAGES

    Tang, William; Wang, Bei; Ethier, Stephane; ...

    2016-11-01

    The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code including (i) implementation of one-sided communication from MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.

  15. Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide

    SciTech Connect

    Tang, William; Wang, Bei; Ethier, Stephane; Kwasniewski, Grzegorz; Hoefler, Torsten; Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; Oliker, Leonid; Rosales-Fernandez, Carlos; Williams, Tim

    2016-11-01

    The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code including (i) implementation of one-sided communication from MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.

  16. Advances in astronomy (Scientific session of the Physical Sciences Division of the Russian Academy of Sciences, 27 February 2013)

    NASA Astrophysics Data System (ADS)

    2013-07-01

    A scientific session of the Division of Physical Sciences of the Russian Academy of Sciences (RAS), entitled "Advances in Astronomy" was held on 27 February 2013 at the conference hall of the Lebedev Physical Institute, RAS. The following reports were put on the session agenda posted on the website http://www.gpad.ac.ru of the RAS Physical Sciences Division: (1) Chernin A D (Sternberg Astronomical Institute, Moscow State University, Moscow) "Dark energy in the local Universe: HST data, nonlinear theory, and computer simulations"; (2) Gnedin Yu N (Main (Pulkovo) Astronomical Observatory, RAS, St. Petersburg) "A new method of supermassive black hole studies based on polarimetric observations of active galactic nuclei"; (3) Efremov Yu N (Sternberg Astronomical Institute, Moscow State University, Moscow) "Our Galaxy: grand design and moderately active nucleus"; (4) Gilfanov M R (Space Research Institute, RAS, Moscow) "X-ray binaries, star formation, and type-Ia supernova progenitors"; (5) Balega Yu Yu (Special Astrophysical Observatory, RAS, Nizhnii Arkhyz, Karachaevo-Cherkessia Republic) "The nearest 'star factory' in the Orion Nebula"; (6) Bisikalo D V (Institute of Astronomy, RAS, Moscow) "Atmospheres of giant exoplanets"; (7) Korablev O I (Space Research Institute, RAS, Moscow) "Spectroscopy of the atmospheres of Venus and Mars: new methods and new results"; (8) Ipatov A V (Institute of Applied Astronomy, RAS, St. Petersburg) "A new-generation radio interferometer for fundamental and applied research". Summaries of the papers based on reports 1, 2, 4, 7, 8 are given below. • Dark energy in the nearby Universe: HST data, nonlinear theory, and computer simulations, A D Chernin Physics-Uspekhi, 2013, Volume 56, Number 7, Pages 704-709 • Investigating supermassive black holes: a new method based on the polarimetric observations of active galactic nuclei, Yu N Gnedin Physics-Uspekhi, 2013, Volume 56, Number 7, Pages 709-714 • X-ray binaries and star formation, M R

  17. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  18. Adventures in Supercomputing: An innovative program

    SciTech Connect

    Summers, B.G.; Hicks, H.R.; Oliver, C.E.

    1995-06-01

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology and serve as a spur to systemic reform. The Adventures in Supercomputing (AiS) program, sponsored by the Department of Energy, is such a program. Adventures in Supercomputing is a program for high school and middle school teachers. It has helped to change the teaching paradigm of many of the teachers involved in the program from a teacher-centered classroom to a student-centered classroom. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode." Not only is the process of teaching changed, but evidence of systemic reform is beginning to surface. After describing the program, the authors discuss the teaching strategies being used and the evidence of systemic change in many of the AiS schools in Tennessee.

  19. Visualization of supercomputer simulations in physics

    NASA Technical Reports Server (NTRS)

    Watson, Val; Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Walatka, Pamela P.

    1989-01-01

    A description is given of the hardware and software tools and techniques in use at NASA's Numerical Aerodynamic Simulation Facility for visualization of computational fluid dynamics. The hardware consists of high-performance graphics workstations connected to the supercomputer with high-bandwidth lines, a frame buffer connected to the supercomputer with UltraNet, a digital video recording system, and film recorders. The software permits the scientist to view the three-dimensional scenes dynamically, to zoom into a region of interest, and to rotate his viewing position to study any region of interest in more detail. The software also provides automated animation and video recording of the scenes. The digital effects unit on the video system facilitates comparison of computer simulations with flight or wind tunnel experiments.

  20. Travel from a supercomputer to killer micros

    SciTech Connect

    Werner, N.E.

    1991-03-01

    I describe my effort to convert a Fortran application that runs on a parallel supercomputer (Cray Y-MP) to run on a set of BBN TC2000 killer micros. I used both shared-memory parallel processing options available at MPCI for the BBN TC2000: the Parallel Fortran Preprocessor (PFP) and the Uniform System extended Fortran compiler (US). I describe how I used the BBN Xtra programming tools for analysis and debugging during this conversion process. My ultimate goal for this hands-on experiment was to gain insight into the type of tools that might be helpful for porting existing programs from a supercomputer environment to a killer micro environment. 5 refs., 9 figs.

  1. A Layered Solution for Supercomputing Storage

    ScienceCinema

    Grider, Gary

    2016-11-30

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  2. A Layered Solution for Supercomputing Storage

    SciTech Connect

    Grider, Gary

    2016-11-16

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  3. Mantle convection on modern supercomputers

    NASA Astrophysics Data System (ADS)

    Weismüller, Jens; Gmeiner, Björn; Mohr, Marcus; Waluga, Christian; Wohlmuth, Barbara; Rüde, Ulrich; Bunge, Hans-Peter

    2015-04-01

    Mantle convection is the cause for plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic to mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next generation large-scale architectures demand an interdisciplinary co-design. Here we report about recent advances of the TERRA-NEO project, which is part of the high visibility SPPEXA program, and a joint effort of four research groups in computer sciences, mathematics and geophysical application under the leadership of FAU Erlangen. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection assessing the impact of small scale processes on global mantle flow.
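
    For readers who want concrete equations behind the phrase "conservation of mass, momentum and energy", the block below sketches one common Boussinesq-type formulation of mantle convection (infinite-Prandtl-number Stokes flow); the actual TERRA-NEO model may differ in its rheology, sign conventions, and additional terms.

```latex
% One common Boussinesq-type formulation (illustrative; not necessarily the
% exact TERRA-NEO model). u: velocity, p: dynamic pressure, T: temperature,
% \eta: viscosity, \kappa: thermal diffusivity, H: internal heating,
% \hat{e}_g: unit vector in the direction of gravity.
\begin{align}
  \nabla \cdot \mathbf{u} &= 0
    && \text{(mass)}\\
  -\nabla p + \nabla \cdot \bigl[\eta\,(\nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}})\bigr]
    - \rho_0\,\alpha\,(T - T_0)\,g\,\hat{\mathbf{e}}_g &= \mathbf{0}
    && \text{(momentum, Stokes)}\\
  \frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T
    &= \kappa\,\nabla^{2} T + H
    && \text{(energy)}
\end{align}
```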

  4. Molecular simulation of rheological properties using massively parallel supercomputers

    SciTech Connect

    Bhupathiraju, R.K.; Cui, S.T.; Gupta, S.A.; Cummings, P.T.; Cochran, H.D.

    1996-11-01

    Advances in parallel supercomputing now make possible molecular-based engineering and science calculations that will soon revolutionize many technologies, such as those involving polymers and those involving aqueous electrolytes. We have developed a suite of message-passing codes for classical molecular simulation of such complex fluids and amorphous materials and have completed a number of demonstration calculations of problems of scientific and technological importance with each. In this paper, we will focus on the molecular simulation of rheological properties, particularly viscosity, of simple and complex fluids using parallel implementations of non-equilibrium molecular dynamics. Such calculations represent significant challenges computationally because, in order to reduce the thermal noise in the calculated properties within acceptable limits, large systems and/or long simulated times are required.

  5. Demonstration of NICT Space Weather Cloud --Integration of Supercomputer into Analysis and Visualization Environment--

    NASA Astrophysics Data System (ADS)

    Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.

    2010-12-01

    In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations is growing ever higher because of the tremendous advancement of supercomputers. A more advanced technology is Grid computing, which integrates distributed computational resources to provide scalable computing power. In simulation research, it is most effective when researchers can design their own physical models, perform calculations on a supercomputer, and analyze and visualize the results with familiar methods. A supercomputer, however, is usually far removed from the analysis and visualization environment. In general, researchers analyze and visualize on workstations (WSs) managed at hand, because installing and operating software on a WS is easy. It is therefore necessary to copy data from the supercomputer to the WS manually, and the time needed for this data transfer over a long-delay network is, in practice, a real obstacle to high-accuracy simulations. In terms of usefulness, it is important to integrate a supercomputer and an analysis and visualization environment seamlessly with the researcher's familiar methods. NICT has been developing a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs used for data analysis and visualization. They are connected to JGN2plus, a high-speed network for research and development. Distributed virtual high-capacity storage is also constructed with Grid Datafarm (Gfarm v2). Large data sets output from the supercomputer are transferred to the virtual storage through JGN2plus. A researcher can thus concentrate on the research, using familiar methods, without regard to the distance between the supercomputer and the analysis and visualization environment. At present, a total of 16 disk servers are set up at NICT headquarters (at Koganei, Tokyo), the JGN2plus NOC (at Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected on

  6. Data-intensive computing on numerically-insensitive supercomputers

    SciTech Connect

    Ahrens, James P; Fasel, Patricia K; Habib, Salman; Heitmann, Katrin; Lo, Li - Ta; Patchett, John M; Williams, Sean J; Woodring, Jonathan L; Wu, Joshua; Hsu, Chung - Hsing

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  7. Spatiotemporal modeling of node temperatures in supercomputers

    DOE PAGES

    Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; ...

    2016-06-10

    Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good-practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers, as well.

  8. Spatiotemporal modeling of node temperatures in supercomputers

    SciTech Connect

    Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; Ticknor, Lawrence O.; Bonnie, Amanda Marie; Montoya, Andrew J.; Michalak, Sarah E.

    2016-06-10

    Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good-practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers, as well.
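
    A minimal sketch of the marginal model described above, assuming synthetic stand-in data rather than real node temperatures: a Normal bulk with a generalized Pareto distribution (GPD) fitted to exceedances over a high threshold, which is then used to estimate tail probabilities.

```python
# Normal bulk + GPD upper tail, illustrated on synthetic "node temperature" data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bulk = rng.normal(loc=60.0, scale=5.0, size=20_000)     # typical node temps (C)
hot = 75.0 + rng.exponential(scale=4.0, size=400)       # occasional hot nodes
temps = np.concatenate([bulk, hot])

threshold = np.quantile(temps, 0.95)                    # high threshold for the tail
exceedances = temps[temps > threshold] - threshold

# Fit the GPD to the threshold exceedances (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)

# Estimated probability that a node exceeds, e.g., 80 C:
p_tail = np.mean(temps > threshold)
p_80 = p_tail * stats.genpareto.sf(80.0 - threshold, shape, loc=0.0, scale=scale)
print(f"GPD shape={shape:.3f}, scale={scale:.2f}, P(T > 80 C) ~ {p_80:.4f}")
```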

  10. Towards 21st century stellar models: Star clusters, supercomputing and asteroseismology

    NASA Astrophysics Data System (ADS)

    Campbell, S. W.; Constantino, T. N.; D'Orazi, V.; Meakin, C.; Stello, D.; Christensen-Dalsgaard, J.; Kuehn, C.; De Silva, G. M.; Arnett, W. D.; Lattanzio, J. C.; MacLean, B. T.

    2016-09-01

    Stellar models provide a vital basis for many aspects of astronomy and astrophysics. Recent advances in observational astronomy - through asteroseismology, precision photometry, high-resolution spectroscopy, and large-scale surveys - are placing stellar models under greater quantitative scrutiny than ever. The model limitations are being exposed and the next generation of stellar models is needed as soon as possible. The current uncertainties in the models propagate to the later phases of stellar evolution, hindering our understanding of stellar populations and chemical evolution. Here we give a brief overview of the evolution, importance, and substantial uncertainties of core helium burning stars in particular and then briefly discuss a range of methods, both theoretical and observational, that we are using to advance the modelling. This study uses observational data from HST, VLT, AAT, and Kepler, and supercomputing resources in Australia provided by the National Computational Infrastructure (NCI) and the Pawsey Supercomputing Centre.

  11. Particle simulation on heterogeneous distributed supercomputers

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Dagum, Leonardo

    1993-01-01

    We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.

  12. Parallel supercomputers for lattice gauge theory.

    PubMed

    Brown, F R; Christ, N H

    1988-03-18

    During the past 10 years, particle physicists have increasingly employed numerical simulation to answer fundamental theoretical questions about the properties of quarks and gluons. The enormous computer resources required by quantum chromodynamic calculations have inspired the design and construction of very powerful, highly parallel, dedicated computers optimized for this work. This article gives a brief description of the numerical structure and current status of these large-scale lattice gauge theory calculations, with emphasis on the computational demands they make. The architecture, present state, and potential of these special-purpose supercomputers is described. It is argued that a numerical solution of low energy quantum chromodynamics may well be achieved by these machines.

  13. Compositional reservoir simulation in parallel supercomputing environments

    SciTech Connect

    Briens, F.J.L.; Wu, C.H.; Gazdag, J.; Wang, H.H.

    1991-09-01

    A large-scale compositional reservoir simulation (>1,000 cells) is not often run on a conventional mainframe computer owing to excessive turnaround times. This paper presents programming and computational techniques that fully exploit the capabilities of parallel supercomputers for a large-scale compositional simulation. A novel algorithm called sequential staging of tasks (SST) that can take full advantage of parallel-vector processing to speed up the solution of a large linear system is introduced. The effectiveness of SST is illustrated with results from computer experiments conducted on an IBM 3090-600E.

  14. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    SciTech Connect

    Not Available

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  15. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectoring and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  16. A network architecture for Petaflops supercomputers.

    SciTech Connect

    DeBenedictis, Erik P.

    2003-09-01

    If we are to build a supercomputer with a speed of 10^15 floating-point operations per second (1 PetaFLOPS), interconnect technology will need to be improved considerably over what it is today. In this report, we explore one possible interconnect design for such a network. The guiding principle in this design is the optimization of all components for the finiteness of the speed of light. To achieve a linear speedup in time over well-tested supercomputers of today's designs will require scaling up of processor power and bandwidth and scaling down of latency. Latency scaling is the most challenging: it requires a 100 ns user-to-user latency for messages traveling the full diameter of the machine. To meet this constraint requires simultaneously minimizing wire length through 3D packaging, new low-latency electrical signaling mechanisms, extremely fast routers, and new network interfaces. In this report, we outline approaches and implementations that will meet the requirements when implemented as a system. No technology breakthroughs are required.
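
    As a back-of-the-envelope check (not taken from the report) of why a 100 ns end-to-end budget forces aggressive 3D packaging, consider how far a signal could possibly travel in that time:

```latex
% Distance covered at the vacuum speed of light within the 100 ns latency budget;
% signals in copper or fiber propagate at roughly 0.6-0.7 c, so the usable
% distance (and hence the machine diameter) is even smaller.
d = c\,t \approx \left(3\times 10^{8}\ \tfrac{\mathrm{m}}{\mathrm{s}}\right)
    \times \left(100\times 10^{-9}\ \mathrm{s}\right) = 30\ \mathrm{m}
```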

  17. Concurrent visualization in a production supercomputing environment.

    PubMed

    Ellsworth, David; Green, Bryan; Henze, Chris; Moran, Patrick; Sandstrom, Timothy

    2006-01-01

    We describe a concurrent visualization pipeline designed for operation in a production supercomputing environment. The facility was initially developed on the NASA Ames "Columbia" supercomputer for a massively parallel forecast model (GEOS4). During the 2005 Atlantic hurricane season, GEOS4 was run 4 times a day under tight time constraints so that its output could be included in an ensemble prediction that was made available to forecasters at the National Hurricane Center. Given this time-critical context, we designed a configurable concurrent pipeline to visualize multiple global fields without significantly affecting the runtime model performance or reliability. We use MPEG compression of the accruing images to facilitate live low-bandwidth distribution of multiple visualization streams to remote sites. We also describe the use of our concurrent visualization framework with a global ocean circulation model, which provides an 864-fold increase in the temporal resolution of practically achievable animations. In both the atmospheric and oceanic circulation models, the application scientists gained new insights into their model dynamics, due to the high temporal resolution animations attainable.

  18. Most Social Scientists Shun Free Use of Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1998-01-01

    Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…

  19. Supercomputer Provides Molecular Insight into Cellulose (Fact Sheet)

    SciTech Connect

    Not Available

    2011-02-01

    Groundbreaking research at the National Renewable Energy Laboratory (NREL) has used supercomputing simulations to calculate the work that enzymes must do to deconstruct cellulose, which is a fundamental step in biomass conversion technologies for biofuels production. NREL used the new high-performance supercomputer Red Mesa to conduct several million central processing unit (CPU) hours of simulation.

  20. Stochastic simulation of electron avalanches on supercomputers

    SciTech Connect

    Rogasinsky, S. V.; Marchenko, M. A.

    2014-12-09

    In the paper, we present a three-dimensional parallel Monte Carlo algorithm named ELSHOW which is developed for simulation of electron avalanches in gases. Parallel implementation of the ELSHOW was made on supercomputers with different architectures (massive parallel and hybrid ones). Using the ELSHOW, calculations of such integral characteristics as the number of particles in an avalanche, the coefficient of impact ionization, the drift velocity, and the others were made. Also, special precise computations were made to select an appropriate size of the time step using the technique of dependent statistical tests. Particularly, the algorithm consists of special methods of distribution modeling, a lexicographic implementation scheme for “branching” of trajectories, justified estimation of functionals. A comparison of the obtained results for nitrogen with previously published theoretical and experimental data was made.
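
    The toy sketch below is not the ELSHOW algorithm itself; it only illustrates the exponential avalanche growth that such Monte Carlo codes track, using a single made-up per-step ionization probability in place of a real gas model and cross sections.

```python
# Toy Monte Carlo avalanche: each electron independently causes an impact
# ionization with a fixed probability per time step, so the population grows
# roughly like (1 + p)^n_steps. All parameters are illustrative placeholders.
import random

random.seed(42)

def avalanche_size(n_steps, p_ionize, n0=1):
    """Return the number of free electrons after n_steps time steps."""
    n = n0
    for _ in range(n_steps):
        # Each electron may ionize a neutral, freeing one extra electron.
        n += sum(1 for _ in range(n) if random.random() < p_ionize)
    return n

sizes = [avalanche_size(n_steps=40, p_ionize=0.08) for _ in range(200)]
print("mean avalanche size:", sum(sizes) / len(sizes))   # ~ (1.08)**40 on average
```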

  1. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques on appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix by vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and the disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
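
    As a small illustration of the projection idea, and of the fact that these methods touch the matrix only through matrix-by-vector products, the sketch below wraps a sparse matrix in a LinearOperator and asks SciPy's eigsh (an ARPACK Lanczos-type solver) for a few extreme eigenvalues; the test matrix and sizes are arbitrary placeholders, not from the paper.

```python
# A few extreme eigenvalues of a large sparse symmetric matrix, computed from
# matrix-vector products only. The random symmetric test matrix is a placeholder.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, eigsh

n = 20_000
A = sp.random(n, n, density=5e-4, format="csr", random_state=0)
A = 0.5 * (A + A.T)                 # symmetrize so a Lanczos-type method applies

# Expose the matrix purely as a matvec, as projection methods require.
op = LinearOperator(shape=A.shape, matvec=lambda v: A @ v, dtype=A.dtype)

# Six eigenvalues of largest magnitude via ARPACK's Lanczos-type iteration.
vals = eigsh(op, k=6, which="LM", return_eigenvectors=False)
print(np.sort(vals))
```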

  2. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.
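
    The sketch below is only an illustration of the general MIMD idea of carving one data set into pieces that several processors work on concurrently; it uses Python's multiprocessing rather than the paper's C-Fortran-Unix multitasking interface, the smoothing kernel and sizes are placeholders, and boundary coupling between slabs is ignored for brevity.

```python
# Illustrative MIMD-style partitioning: split one array into slabs, let each
# process apply the same kernel to its slab, then reassemble the result.
# (Placeholder kernel; halo exchange between slabs is deliberately omitted.)
from multiprocessing import Pool

def smooth_slab(slab):
    """One weighted-average smoothing pass over a slab of grid values."""
    n = len(slab)
    return [0.25 * slab[max(i - 1, 0)] + 0.5 * slab[i] + 0.25 * slab[min(i + 1, n - 1)]
            for i in range(n)]

if __name__ == "__main__":
    grid = [float(i % 17) for i in range(400_000)]
    n_procs = 4
    step = len(grid) // n_procs
    slabs = [grid[p * step:(p + 1) * step] for p in range(n_procs)]
    with Pool(processes=n_procs) as pool:
        smoothed_slabs = pool.map(smooth_slab, slabs)   # slabs processed concurrently
    smoothed = [x for slab in smoothed_slabs for x in slab]
    print(len(smoothed), smoothed[:4])
```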

  3. Will Your Next Supercomputer Come from Costco?

    SciTech Connect

    Farber, Rob

    2007-04-15

    A fun topic for April, one that is not an April fool’s joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronic vendor. Yes, it’s true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.

  4. Sandia's network for Supercomputing '95: Validating the progress of Asynchronous Transfer Mode (ATM) switching

    SciTech Connect

    Pratt, T.J.; Vahle, O.; Gossage, S.A.

    1996-04-01

    The Advanced Networking Integration Department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past three years as a forum to demonstrate and focus communication and networking developments. For Supercomputing '95, Sandia elected to demonstrate the functionality and capability of an AT&T GlobeView 20 Gbps Asynchronous Transfer Mode (ATM) switch, which represents the core of Sandia's corporate network; to build and utilize a three-node 622 megabit per second Paragon network; and to extend the DOD's ACTS ATM Internet from Sandia, New Mexico to the conference's show floor in San Diego, California, for video demonstrations. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  5. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    SciTech Connect

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  6. OpenMP Performance on the Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPS as of 10/20/04. It was conceived, designed, built, and deployed in just 120 days as a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors; it provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2,048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.

  7. Post-remedial-action radiological survey of the Westinghouse Advanced Reactors Division Plutonium Fuel Laboratories, Cheswick, Pennsylvania, October 1-8, 1981

    SciTech Connect

    Flynn, K.F.; Justus, A.L.; Sholeen, C.M.; Smith, W.H.; Wynveen, R.A.

    1984-01-01

    The post-remedial-action radiological assessment conducted by the ANL Radiological Survey Group in October 1981, following decommissioning and decontamination efforts by Westinghouse personnel, indicated that except for the Advanced Fuels Laboratory exhaust ductwork and north wall, the interior surfaces of the Plutonium Laboratory and associated areas within Building 7 and the Advanced Fuels Laboratory within Building 8 were below both the ANSI Draft Standard N13.12 and NRC Guideline criteria for acceptable surface contamination levels. Hence, with the exceptions noted above, the interior surfaces of those areas within Buildings 7 and 8 that were included in the assessment are suitable for unrestricted use. Air samples collected at the involved areas within Buildings 7 and 8 indicated that the radon, thoron, and progeny concentrations within the air were well below the limits prescribed by the US Surgeon General, the Environmental Protection Agency, and the Department of Energy. The Building 7 drain lines are contaminated with uranium, plutonium, and americium. Radiochemical analysis of water and dirt/sludge samples collected from accessible Low-Bay, High-Bay, Shower Room, and Sodium laboratory drains revealed uranium, plutonium, and americium contaminants. The Building 7 drain lines hence are unsuitable for release for unrestricted use in their present condition. Low levels of enriched uranium, plutonium, and americium were detected in an environmental soil coring near Building 8, indicating release or spillage due to Advanced Reactors Division activities or Nuclear Fuel Division activities under NRC licensure. ⁶⁰Co contamination was detected within the Building 7 Shower Room and in soil corings from the environs of Building 7. All other radionuclide concentrations measured in soil corings and the storm sewer outfall sample collected from the environs about Buildings 7 and 8 were within the range of normally expected background concentrations.

  8. Developing Fortran Code for Kriging on the Stampede Supercomputer

    NASA Astrophysics Data System (ADS)

    Hodgess, Erin

    2016-04-01

    Kriging is easily accessible in the open source statistical language R (R Core Team, 2015) in the gstat (Pebesma, 2004) package. It works very well, but can be slow on large data sets, particularly if the prediction space is large as well. We are working on the Stampede supercomputer at the Texas Advanced Computing Center to develop code using a combination of R and the Message Passing Interface (MPI) bindings to Fortran. We have a function similar to the autofitVariogram found in the automap (Hiemstra et al., 2008) package and it is very effective. We are comparing R with MPI/Fortran, MPI/Fortran alone, and R with the Rmpi package, which uses bindings to C. We will present results from simulation studies and real-world examples. References: Hiemstra, P.H., Pebesma, E.J., Twenhofel, C.J.W. and G.B.M. Heuvelink, 2008. Real-time automatic interpolation of ambient gamma dose rates from the Dutch Radioactivity Monitoring Network. Computers and Geosciences, accepted for publication. Pebesma, E.J., 2004. Multivariable geostatistics in S: the gstat package. Computers and Geosciences, 30: 683-691. R Core Team, 2015. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
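    As a point of reference for what the kriging kernel computes, here is a generic ordinary-kriging sketch in Python/NumPy. It is not the authors' R/MPI/Fortran code: the exponential variogram, its parameters and the toy data are assumptions made only to show where the dense linear algebra, and hence the need for parallelization, comes from.

```python
import numpy as np

def gamma_exp(h, nugget=0.0, sill=1.0, corr_range=10.0):
    """Exponential variogram model; the parameter values are illustrative."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / corr_range))

def ordinary_kriging(X, z, X0, variogram=gamma_exp):
    """Ordinary kriging predictions at points X0 from samples (X, z).

    One dense (m+1)x(m+1) kriging system is built from the samples and
    reused for every prediction point; for large m and large prediction
    grids this linear algebra is the cost that motivates parallel codes.
    """
    m = len(z)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = np.empty((m + 1, m + 1))
    A[:m, :m] = variogram(D)
    A[m, :m] = A[:m, m] = 1.0          # unbiasedness (Lagrange) row/column
    A[m, m] = 0.0
    Ainv = np.linalg.inv(A)            # fine for a toy problem; factorize in practice
    preds = np.empty(len(X0))
    for i, x0 in enumerate(X0):
        b = np.append(variogram(np.linalg.norm(X - x0, axis=1)), 1.0)
        w = Ainv @ b
        preds[i] = w[:m] @ z
    return preds

# Toy usage: 200 scattered samples of a smooth field, predicted on a coarse grid.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 100.0, size=(200, 2))
z = np.sin(X[:, 0] / 20.0) + np.cos(X[:, 1] / 20.0)
grid = np.array([(x, y) for x in range(0, 101, 25) for y in range(0, 101, 25)], float)
print(np.round(ordinary_kriging(X, z, grid)[:5], 3))
```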

  9. Towards Efficient Supercomputing: Searching for the Right Efficiency Metric

    SciTech Connect

    Hsu, Chung-Hsing; Kuehn, Jeffery A; Poole, Stephen W

    2012-01-01

    The efficiency of supercomputing has traditionally been measured by execution time. In the early 2000s, the concept of total cost of ownership was re-introduced, with efficiency measures extended to include aspects such as energy and space. Yet the supercomputing community has never agreed upon a metric that can cover these aspects altogether and also provide a fair basis for comparison. This paper examines the metrics that have been proposed in the past decade, and proposes a vector-valued metric for efficient supercomputing. Using this metric, the paper presents a study of where the supercomputing industry has been and how it stands today with respect to efficient supercomputing.

  10. Advances in the discovery of novel antimicrobials targeting the assembly of bacterial cell division protein FtsZ.

    PubMed

    Li, Xin; Ma, Shutao

    2015-05-05

    Currently, widespread antimicrobial resistance among bacterial pathogens is a dramatically increasing and serious threat to public health, and thus there is a pressing need to develop new antimicrobials to keep pace with bacterial resistance. Filamentous temperature-sensitive protein Z (FtsZ), a prokaryotic cytoskeleton protein, plays an important role in bacterial cell division. As a new and promising target, it has garnered special attention in antibacterial research in recent years. This review describes not only the function and dynamic behaviors of FtsZ, but also the known natural and synthetic inhibitors of FtsZ. In particular, the small molecules recently developed and the future directions for ideal candidates are highlighted.

  11. UNIX security in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1989-01-01

    The author critiques some security mechanisms in most versions of the Unix operating system and suggests more effective tools that either have working prototypes or have been implemented, for example in secure Unix systems. Although no computer (not even a secure one) is impenetrable, breaking into systems with these alternate mechanisms will cost more, require more skill, and be more easily detected than penetrations of systems without these mechanisms. The mechanisms described fall into four classes (with considerable overlap). User authentication at the local host affirms the identity of the person using the computer. The principle of least privilege dictates that properly authenticated users should have rights precisely sufficient to perform their tasks, and system administration functions should be compartmentalized; to this end, access control lists or capabilities should either replace or augment the default Unix protection system, and mandatory access controls implementing multilevel security models and integrity mechanisms should be available. Since most users access supercomputing environments using networks, the third class of mechanisms augments authentication (where feasible). As no security is perfect, the fourth class of mechanism logs events that may indicate possible security violations; this will allow the reconstruction of a successful penetration (if discovered), or possibly the detection of an attempted penetration.

  13. An orthogonal wavelet division multiple-access processor architecture for LTE-advanced wireless/radio-over-fiber systems over heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Mahapatra, Chinmaya; Leung, Victor CM; Stouraitis, Thanos

    2014-12-01

    The increase in internet traffic, number of users, and availability of mobile devices poses a challenge to wireless technologies. In the long-term evolution (LTE) advanced system, heterogeneous networks (HetNets) using centralized coordinated multipoint (CoMP) transmitting radio over optical fibers (LTE A-ROF) have provided a feasible way of satisfying user demands. In this paper, an orthogonal wavelet division multiple-access (OWDMA) processor architecture is proposed, which is shown to be better suited to LTE advanced systems as compared to orthogonal frequency division multiple access (OFDMA) as in LTE systems 3GPP rel.8 (3GPP, http://www.3gpp.org/DynaReport/36300.htm). ROF systems are a viable alternative to satisfy large data demands; hence, the performance in ROF systems is also evaluated. To validate the architecture, the circuit is designed and synthesized on a Xilinx Virtex-6 field-programmable gate array (FPGA). The synthesis results show that the circuit performs with a clock period as short as 7.036 ns (i.e., a maximum clock frequency of 142.13 MHz) for a transform size of 512. A pipelined version of the architecture reduces the power consumption by approximately 89%. We compare our architecture with similar available architectures for resource utilization and timing, and provide a performance comparison with OFDMA systems for various quality metrics of communication systems. The OWDMA architecture is found to perform better than OFDMA for bit error rate (BER) performance versus signal-to-noise ratio (SNR) in the wireless channel as well as ROF media. It also gives higher throughput and mitigates the adverse effect of the peak-to-average power ratio (PAPR).

  14. Simulating functional magnetic materials on supercomputers.

    PubMed

    Gruner, Markus Ernst; Entel, Peter

    2009-07-22

    The recent passing of the petaflop per second landmark by the Roadrunner project at the Los Alamos National Laboratory marks a preliminary peak of an impressive world-wide development in the high-performance scientific computing sector. Also, purely academic state-of-the-art supercomputers such as the IBM Blue Gene/P at Forschungszentrum Jülich allow us nowadays to investigate large systems of the order of 10³ spin polarized transition metal atoms by means of density functional theory. Three applications will be presented where large-scale ab initio calculations contribute to the understanding of key properties emerging from a close interrelation between structure and magnetism. The first two examples discuss the size dependent evolution of equilibrium structural motifs in elementary iron and binary Fe-Pt and Co-Pt transition metal nanoparticles, which are currently discussed as promising candidates for ultra-high-density magnetic data storage media. However, the preference for multiply twinned morphologies at smaller cluster sizes counteracts the formation of a single-crystalline L1₀ phase, which alone provides the required hard magnetic properties. The third application is concerned with the magnetic shape memory effect in the Ni-Mn-Ga Heusler alloy, which is a technologically relevant candidate for magnetomechanical actuators and sensors. In this material strains of up to 10% can be induced by external magnetic fields due to the field induced shifting of martensitic twin boundaries, requiring an extremely high mobility of the martensitic twin boundaries, but also the selection of the appropriate martensitic structure from the rich phase diagram.

  15. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPs or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.
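    The record's implementation relies on Cray multitasking through the C-Fortran-Unix interface. As a loose, modern analogue only, the sketch below uses Python's multiprocessing module to run the same unchanged single-partition kernel on several grid partitions at once; the toy Jacobi kernel, the partition sizes and the absence of inter-partition boundary exchange are all simplifications that are not part of the original MPMG code.

```python
import numpy as np
from multiprocessing import Pool

def relax_partition(args):
    """Run the unchanged single-partition kernel: here, a toy Jacobi relaxation.

    In the record, the reused kernel is the existing single-processor
    Navier-Stokes solver applied to one grid of the multiple-grid problem.
    """
    grid, sweeps = args
    for _ in range(sweeps):
        grid[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                   grid[1:-1, :-2] + grid[1:-1, 2:])
    return grid

if __name__ == "__main__":
    # Four grid partitions, one per processor; the boundary exchange that a
    # real overlapped-grid solver needs between sweeps is omitted here.
    partitions = [np.random.rand(130, 514) for _ in range(4)]
    with Pool(processes=4) as pool:
        results = pool.map(relax_partition, [(g, 50) for g in partitions])
    print([round(float(r.mean()), 4) for r in results])
```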

  16. Computational fluid dynamics and supercomputers, chapter 6

    NASA Astrophysics Data System (ADS)

    Gentzsch, W.

    1988-03-01

    It is important to optimally adapt codes and algorithms to the vector or parallel computer in use. In addition to having faster and larger supercomputers, users must be much better trained than for (scalar) general purpose computers. Details are given on restructuring typical numerical algorithms to achieve superior performance on vector computers. The focus, of course, is on Computational Fluid Dynamics. During the last two decades CFD gained an important position together with experiments in wind tunnels and analytical methods. The main objective of CFD is to simulate dynamic flow fields through the numerical solution of the governing equations, e.g., the Navier-Stokes equations, using high-speed computers. The simulation of 2-D inviscid and viscous flows on vector computers does not present any difficulties with respect to memory requirements or computation time. In 3-D, however, one has to compute some 20 to 30 variables per mesh point of a 3-D field per time-step or iteration, such as the velocity components, density, pressure, enthalpy, temperature, concentrations, dissipative fluxes, local time steps, geometry coefficients, dummy arrays, etc. Computations in 3-D are therefore restricted to fairly coarse meshes as well as to solutions which are often not fully converged. The large amount of CPU time involved and the fact that the data cannot be contained in central memory are the main reasons for the long elapsed times of CFD applications. In these cases, the mapping of the problem onto the architecture of the machine, and in particular onto special organizations of the memory, must be fully considered to take full advantage of the vector computer.

  17. Dense LU Factorization on Multicore Supercomputer Nodes

    SciTech Connect

    Lifflander, Jonathan; Miller, Phil; Venkataraman, Ramprasad; Arya, Anshu; Jones, Terry R; Kale, Laxmikant V

    2012-01-01

    Dense LU factorization is a prominent benchmark used to rank the performance of supercomputers. Many implementations, including the reference code HPL, use block-cyclic distributions of matrix blocks onto a two-dimensional process grid. The process grid dimensions drive a trade-off between communication and computation and are architecture- and implementation-sensitive. We show how the critical panel factorization steps can be made less communication-bound by overlapping asynchronous collectives for pivot identification and exchange with the computation of rank-k updates. By shifting this trade-off, a modified block-cyclic distribution can beneficially exploit more available parallelism on the critical path, and reduce panel factorization's memory hierarchy contention on now-ubiquitous multi-core architectures. The missed parallelism in traditional block-cyclic distributions arises because active panel factorization, triangular solves, and subsequent broadcasts are spread over single process columns or rows (respectively) of the process grid. Increasing one dimension of the process grid decreases the number of distinct processes in the other dimension. To increase parallelism in both dimensions, periodic 'rotation' is applied to the process grid to recover the row-parallelism lost by a tall process grid. During active panel factorization, rank-1 updates stream through memory with minimal reuse. In a column-major process grid, the performance of this access pattern degrades as too many streaming processors contend for access to memory. A block-cyclic mapping in the more popular row-major order does not encounter this problem, but consequently sacrifices node and network locality in the critical pivoting steps. We introduce 'striding' to vary between the two extremes of row- and column-major process grids. As a test-bed for further mapping experiments, we describe a dense LU implementation that allows a block distribution to be defined as a general function of block
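    To make the distribution concrete, the sketch below computes which process owns each matrix block under a plain two-dimensional block-cyclic mapping, with a toy 'rotation' switch. It is only an illustration in Python; the paper's exact rotation and striding formulas are not reproduced here.

```python
def block_owner(i, j, Pr, Pc, rotate=False):
    """Owner (process row, process column) of matrix block (i, j).

    rotate=False is the classical two-dimensional block-cyclic mapping used
    by HPL; rotate=True applies a simple per-column rotation of the row
    index, a toy stand-in for the grid 'rotation' described above (the
    paper's actual rotation and striding schemes are not reproduced).
    """
    row = (i + (j if rotate else 0)) % Pr
    col = j % Pc
    return row, col

# Owners of the blocks of an 8x8 block matrix on a 2x4 process grid.
Pr, Pc = 2, 4
for i in range(8):
    print([block_owner(i, j, Pr, Pc, rotate=True) for j in range(8)])
```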

  18. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    SciTech Connect

    Geveci, Berk; Fabian, Nathan; Marion, Patrick; Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data are limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  19. Division Vi: Interstellar Matter

    NASA Astrophysics Data System (ADS)

    Millar, Tom; Chu, You-Hua; Dyson, John; Breitschwerdt, Dieter; Burton, Mike; Cabrit, Sylvie; Caselli, Paola; de Gouveia Dal Pino, Elisabete; Ferland, Gary; Juvela, Mika; Koo, Bon-Chul; Kwok, Sun; Lizano, Susana; Rozyczka, Michal; Tóth, Viktor; Tsuboi, Masato; Yang, Ji

    2010-05-01

    The business meeting of Division VI was held on Monday 10 October 2009. Apologies had been received in advance from D Breitschwerdt, P Caselli, G Ferland, M Juvela, S Lizano, M Rozyczka, V Tóth, M Tsuboi, J Yang and B-C Koo.

  20. The future of finite element applications on massively parallel supercomputers

    SciTech Connect

    Christon, M.

    1994-07-05

    The current focus in large scale scientific computing is upon parallel supercomputers. While still relatively unproven, these machines are being slated for production-oriented, general purpose supercomputing applications. The promise, of course, is to use massively parallel computers to venture further into scientific realms by performing computations with anywhere from 10⁶ to 10⁹ grid points, thereby, in principle, obtaining a deeper understanding of physical processes. In approaching this brave new world of computing with finite element applications, many technical issues become apparent. This paper attempts to reveal some of the applications-oriented issues which are facing code developers and ultimately the users of engineering and scientific applications on parallel supercomputers, but which seem to remain unanswered by vendors, researchers and centralized computing facilities. At risk is the fundamental way in which analysis is performed in a production sense, and the insight into physical problems which results. While at first this treatise may seem to advocate traditional register-to-register vector supercomputers, the goal of this paper is simply an attempt to point out what is missing from the massively parallel computing picture, not only for production finite element applications but also for grand challenge problems. The limiting issues for the use of FEM applications on parallel supercomputers are centered about the need for adequate disk space, archival storage, high bandwidth networks, and continued software development for mesh generation, scientific visualization, linear equation solvers and parallel input/output.

  1. An integrated distributed processing interface for supercomputers and workstations

    SciTech Connect

    Campbell, J.; McGavran, L.

    1989-01-01

    Access to documentation, communication between multiple processes running on heterogeneous computers, and animation of simulations of engineering problems are typically weak in most supercomputer environments. This presentation will describe how we are improving this situation in the Computer Research and Applications group at Los Alamos National Laboratory. We have developed a tool using UNIX filters and a SunView interface that allows users simple access to documentation via mouse-driven menus. We have also developed a distributed application that integrates a two-point boundary value problem on one of our Cray supercomputers. It is controlled and displayed graphically by a window interface running on a workstation screen. Our motivation for this research has been to improve the usual typewriter/static interface, using language-independent controls to show the capabilities of the workstation/supercomputer combination. 8 refs.

  2. Large-Scale Graph Processing Analysis using Supercomputer Cluster

    NASA Astrophysics Data System (ADS)

    Vildario, Alfrido; Fitriyani; Nugraha Nurkahfi, Galih

    2017-01-01

    Graph processing is widely used in various sectors such as automotive, traffic, image processing and many more. These applications produce graphs of large dimension, so that processing requires long computation times and high-specification resources. This research addresses the analysis of large-scale graph processing implemented on a supercomputer cluster. We implemented graph processing using the Breadth-First Search (BFS) algorithm for the single-destination shortest path problem. The parallel BFS implementation with the Message Passing Interface (MPI) used the supercomputer cluster at the High Performance Computing Laboratory, Computational Science, Telkom University, and the Stanford Large Network Dataset Collection. The results showed that the implementation gives an average speedup of more than 30 times and an efficiency of almost 90%.
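    For reference, a serial version of the kernel being parallelized can be written in a few lines. The sketch below is not the authors' MPI code; it is a plain Python breadth-first search for an unweighted shortest path, with the graph given as an adjacency dictionary. The MPI implementation distributes the vertex set and expands each frontier level collectively across ranks.

```python
from collections import deque

def bfs_shortest_path(adj, source, target):
    """Unweighted shortest path from source to target by breadth-first search.

    adj: dict mapping each vertex to an iterable of neighbours.
    Returns the path as a list of vertices, or None if target is unreachable.
    """
    parent = {source: None}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        if u == target:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                frontier.append(v)
    return None

# Tiny example graph.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_shortest_path(adj, 0, 4))   # e.g. [0, 1, 3, 4]
```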

  3. Graph visualization for the analysis of the structure and dynamics of extreme-scale supercomputers

    SciTech Connect

    Berkbigler, K. P.; Bush, B. W.; Davis, Kei,; Hoisie, A.; Smith, S. A.

    2002-01-01

    We are exploring the development and application of information visualization techniques for the analysis of new extreme-scale supercomputer architectures. Modern supercomputers typically comprise very large clusters of commodity SMPs interconnected by possibly dense and often nonstandard networks. The scale, complexity, and inherent nonlocality of the structure and dynamics of this hardware, and the systems and applications distributed over it, challenge traditional analysis methods. As part of the a la carte team at Los Alamos National Laboratory, which is simulating these advanced architectures, we are exploring advanced visualization techniques and creating tools to provide intuitive exploration, discovery, and analysis of these simulations. This work complements existing and emerging algorithmic analysis tools. Here we give background on the problem domain, a description of a prototypical computer architecture of interest (on the order of 10,000 processors connected by a quaternary fat-tree network), and presentations of several visualizations of the simulation data that make clear the flow of data in the interconnection network.

  4. Two wavelength division multiplexing WAN trials

    SciTech Connect

    Lennon, W.J.; Thombley, R.L.

    1995-01-20

    Lawrence Livermore National Laboratory, as a super-user, supercomputer, and super-application site, is anticipating the future bandwidth and protocol requirements necessary to connect to other such sites as well as to connect to remote-sited control centers and experiments. In this paper the authors discuss their vision of the future of Wide Area Networking, describe the plans for a wavelength division multiplexed link connecting Livermore with the University of California at Berkeley and describe plans for a transparent, approximately 10 Gb/s ring around San Francisco Bay.

  5. The Sky's the Limit When Super Students Meet Supercomputers.

    ERIC Educational Resources Information Center

    Trotter, Andrew

    1991-01-01

    In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…

  6. Access to Supercomputers. Higher Education Panel Report 69.

    ERIC Educational Resources Information Center

    Holmstrom, Engin Inel

    This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…

  7. When supercomputers go over to the dark side

    NASA Astrophysics Data System (ADS)

    White, Martin; Scott, Pat

    2017-03-01

    Despite oodles of data and plenty of theories, we still don't know what dark matter is. Martin White and Pat Scott describe how a new software tool called GAMBIT – run on supercomputers such as Prometheus – will test how novel theories stack up when confronted with real data.

  8. Supercomputer Simulations Help Develop New Approach to Fight Antibiotic Resistance

    ScienceCinema

    Zgurskaya, Helen; Smith, Jeremy

    2016-11-23

    ORNL leveraged powerful supercomputing to support research led by University of Oklahoma scientists to identify chemicals that seek out and disrupt bacterial proteins called efflux pumps, known to be a major cause of antibiotic resistance. By running simulations on Titan, the team selected molecules most likely to target and potentially disable the assembly of efflux pumps found in E. coli bacteria cells.

  9. QCD on the BlueGene/L Supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  10. Recent results from the Swinburne supercomputer software correlator

    NASA Astrophysics Data System (ADS)

    Tingay, Steven; et al.

    I will describe the development of software correlators on the Swinburne Beowulf supercomputer and recent work using the Cray XD-1 machine. I will also describe recent Australian and global VLBI experiments that have been processed on the Swinburne software correlator, along with imaging results from these data. The role of the software correlator in Australia's eVLBI project will be discussed.

  11. Supercomputer Simulations Help Develop New Approach to Fight Antibiotic Resistance

    SciTech Connect

    Zgurskaya, Helen; Smith, Jeremy

    2016-11-17

    ORNL leveraged powerful supercomputing to support research led by University of Oklahoma scientists to identify chemicals that seek out and disrupt bacterial proteins called efflux pumps, known to be a major cause of antibiotic resistance. By running simulations on Titan, the team selected molecules most likely to target and potentially disable the assembly of efflux pumps found in E. coli bacteria cells.

  12. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  13. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  14. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  15. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  16. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  17. Storage needs in future supercomputer environments

    NASA Technical Reports Server (NTRS)

    Coleman, Sam

    1991-01-01

    As the computing capacity of the Lawrence Livermore National Laboratory (LLNL) systems is increased and new applications are developed, the need for archival capacity will increase. The hardware and software architectures that will be needed to support advanced applications are discussed. Viewgraphs are included.

  18. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    SciTech Connect

    De, K; Jha, S; Maeno, T; Mashinistov, R.; Nilsson, P; Novikov, A.; Oleynik, D; Panitkin, S; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tsulaia, V.; Velikhov, V.; Wen, G.; Wells, Jack C; Wenaus, T

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of

  19. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run singlethreaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
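    The "light-weight MPI wrapper" idea lends itself to a very small sketch. The script below is not the PanDA pilot code: the task list and the payload executable are hypothetical placeholders, and mpi4py stands in for whatever MPI binding the pilot actually uses. Each rank simply runs its share of independent single-threaded payloads, so that one batch allocation fills every core of the multi-core worker nodes.

```python
# Requires mpi4py; launch with e.g. "mpirun -n 16 python wrapper.py".
import subprocess
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

tasks = [f"event_batch_{i}.cfg" for i in range(1000)]     # hypothetical inputs

# Round-robin assignment: rank r processes tasks r, r+size, r+2*size, ...
for cfg in tasks[rank::size]:
    subprocess.run(["./run_payload", cfg], check=False)   # hypothetical payload binary

comm.Barrier()    # hold the batch allocation until every rank has finished
if rank == 0:
    print(f"processed {len(tasks)} payloads on {size} ranks")
```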

  20. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures are becoming more common, numerical developments in earthquake system research are particularly challenged by the dependence on accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are the two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is based on Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in the optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
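    The latency-hiding communication pattern mentioned above can be sketched compactly. The script below is not AWP-ODC: a 7-point Laplacian stands in for the real 13-point stencil, the decomposition is a simplified 1-D periodic one, mpi4py replaces the production Fortran/CUDA code, and the grid sizes are invented. It only illustrates posting non-blocking halo exchanges, updating the interior while messages are in flight, and finishing the boundary planes afterwards.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
lo_nbr, hi_nbr = (rank - 1) % size, (rank + 1) % size   # 1-D z decomposition, periodic for brevity

nz, ny, nx = 32, 64, 64
u = np.random.rand(nz + 2, ny, nx)                      # one ghost plane on each side
recv_lo, recv_hi = np.empty((ny, nx)), np.empty((ny, nx))

def update(u, zs):
    """7-point Laplacian over the z-planes given by slice zs (interior in y and x)."""
    return (u[zs.start - 1:zs.stop - 1, 1:-1, 1:-1] + u[zs.start + 1:zs.stop + 1, 1:-1, 1:-1] +
            u[zs, :-2, 1:-1] + u[zs, 2:, 1:-1] +
            u[zs, 1:-1, :-2] + u[zs, 1:-1, 2:] - 6.0 * u[zs, 1:-1, 1:-1])

# 1) Post non-blocking halo exchange for the two boundary planes.
reqs = [comm.Irecv(recv_lo, source=lo_nbr, tag=0),
        comm.Irecv(recv_hi, source=hi_nbr, tag=1),
        comm.Isend(u[nz], dest=hi_nbr, tag=0),   # my top plane becomes hi_nbr's lower ghost
        comm.Isend(u[1], dest=lo_nbr, tag=1)]    # my bottom plane becomes lo_nbr's upper ghost

# 2) While messages are in flight, update the planes that need no remote data.
interior = update(u, slice(2, nz))

# 3) Complete the exchange, fill the ghost planes, update the boundary planes.
MPI.Request.Waitall(reqs)
u[0], u[nz + 1] = recv_lo, recv_hi
boundary = (update(u, slice(1, 2)), update(u, slice(nz, nz + 1)))
if rank == 0:
    print("interior:", interior.shape, "boundary:", boundary[0].shape, boundary[1].shape)
```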

  1. Division Overview

    NASA Technical Reports Server (NTRS)

    Emerson, Dawn

    2016-01-01

    This presentation provides an overview of the research and engineering in the competency fields of advanced communications and intelligent systems with emphasis on advanced technologies, architecture definition and system development for application in current and future aeronautics and space systems.

  2. Cyberdyn supercomputer - a tool for imaging geodinamic processes

    NASA Astrophysics Data System (ADS)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes developed within the deep interior of our planet, but with significant impact on the Earth's shape and structure, become subject to numerical modelling by using high performance computing facilities. Nowadays, an increasing number of research centers worldwide decide to make use of such powerful and fast computers for simulating complex phenomena involving fluid dynamics and to get deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible has been jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted for about three years, ending October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a Qlogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, by employing only a fraction of the computing power (20%). After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the East Carpathians tectonic and geodynamic evolution, including the Neogene magmatic activity, and the intriguing

  3. The role of graphics super-workstations in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Levin, E.

    1989-01-01

    A new class of very powerful workstations has recently become available which integrates near-supercomputer computational performance with very powerful and high quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding of results, communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).

  4. The Application of Ground-Penetrating Radar to Transportation Engineering: Recent Advances and New Perspectives (GI Division Outstanding ECS Award Lecture)

    NASA Astrophysics Data System (ADS)

    Tosti, Fabio; Benedetto, Andrea; Pajewski, Lara; Alani, Amir M.

    2017-04-01

    This lecture aims at presenting the recent advances and the new perspectives in the application of GPR to transportation engineering. This study reports on new experimental-based and theoretical models for the assessment of the physical (i.e., clay and water content in subgrade soils, railway ballast fouling) and the mechanical (i.e., the Young's modulus of elasticity) properties that are critical in maintaining the structural stability and the bearing capacity of the major transport infrastructures, such as highways, railways and airfields. With regard to the physical parameters, the electromagnetic behaviour related to the clay content in the load-bearing layers of flexible pavements as well as in subgrade soils has been analysed and modelled in both dry and wet conditions. Furthermore, a new simulation-based methodology for the detection of the fouling content in railway ballast is discussed. Concerning the mechanical parameters, experimental-based methods are presented for the assessment of the strength and deformation properties of the soils and the top-bounded layers of flexible pavements. Furthermore, unique case studies in terms of the proposed methodology, the survey planning and the site procedures in rather complex operations are discussed for bridge and tunnel inspections. Acknowledgements: The Authors are grateful to the GI Division President Dr. Francesco Soldovieri and the relevant Award Committee in the context of the "GI Division Outstanding Early Career Scientists Award" of the European Geosciences Union. We also acknowledge the COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" for providing networking and discussion opportunities throughout its activity and operation, as well as facilitating prospects for publishing research outputs.

  5. Internal fluid mechanics research on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.

    1988-01-01

    The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.

  6. New Mexico High School supercomputer challenge

    SciTech Connect

    Cohen, M.; Foster, M.; Kratzer, D.; Malone, P.; Solem, A.

    1991-01-01

    The national need for well trained scientists and engineers is more urgent today than ever before. Scientists who are trained in advanced computational techniques and have experience with multidisciplinary scientific collaboration are needed for both research and commercial applications if the United States is to maintain its productivity and technical edge in the world market. Many capable high school students, however, lose interest in pursuing scientific academic subjects or in considering science or engineering as a possible career. An academic contest that progresses from a state-sponsored program to a national competition is a way of developing science and computing knowledge among high school students and teachers as well as instilling enthusiasm for science. This paper describes an academic-year long program for high school students in New Mexico. The unique features, method, and evaluation of the program are discussed.

  7. Application of supercomputers to computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Peterson, V. L.

    1984-01-01

    Computers are playing an increasingly important role in the field of aerodynamics such that they now serve as a major complement to wind tunnels in aerospace research and development. Factors pacing advances in computational aerodynamics are identified, including the amount of computational power required to take the next major step in the discipline. Example results obtained from the successively refined forms of the governing equations are discussed, both in the context of levels of computer power required and the degree to which they either further the frontiers of research or apply to problems of practical importance. Finally, the Numerical Aerodynamic Simulation (NAS) Program - with its 1988 target of achieving a sustained computational rate of 1 billion floating point operations per second and operating with a memory of 240 million words - is discussed in terms of its goals and its projected effect on the future of computational aerodynamics.

  8. Finite element methods on supercomputers - The scatter-problem

    NASA Technical Reports Server (NTRS)

    Loehner, R.; Morgan, K.

    1985-01-01

    Certain problems arise in connection with the use of supercomputers for the implementation of finite-element methods. These problems are related to the desirability of utilizing the power of the supercomputer as fully as possible for the rapid execution of the required computations, taking into account the gain in speed made possible by pipelined operations. For the finite-element method, the time-consuming operations may be divided into three categories. The first two present no problems, while the third type of operation can be a cause of inefficient performance in finite-element programs. Two possibilities for overcoming these difficulties are proposed, with attention given to the scatter process.
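    A small NumPy example makes the scatter problem concrete: finite-element assembly adds each element's contribution into global locations given by an index list, and repeated indices create a dependence that defeats naive vectorization. The remedies actually proposed in the paper are not reproduced here; the unbuffered scatter-add shown is just one generic way to express the operation.

```python
import numpy as np

# Toy connectivity: 8 two-node elements assembled into a 10-entry global vector.
n_global, n_elem = 10, 8
conn = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7], [7, 8]])
elem_contrib = np.ones((n_elem, 2))

# Scalar scatter loop: correct, but serial because of index collisions.
f_loop = np.zeros(n_global)
for e in range(n_elem):
    for a in range(2):
        f_loop[conn[e, a]] += elem_contrib[e, a]

# Vector-friendly form: an unbuffered scatter-add that handles repeated indices.
f_vec = np.zeros(n_global)
np.add.at(f_vec, conn.ravel(), elem_contrib.ravel())

print(np.allclose(f_loop, f_vec))   # True
```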

  9. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  10. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  11. Study of ATLAS TRT performance with GRID and supercomputers

    NASA Astrophysics Data System (ADS)

    Krasnopevtsev, D. V.; Klimentov, A. A.; Mashinistov, R. Yu.; Belyaev, N. L.; Ryabinkin, E. A.

    2016-09-01

    One of the most important problems to be solved for ATLAS physics analysis is the reconstruction of proton-proton events with a large number of interactions in the Transition Radiation Tracker. The paper includes Transition Radiation Tracker performance results obtained using the ATLAS GRID and the Kurchatov Institute's Data Processing Center, including its Tier-1 grid site and supercomputer, as well as an analysis of CPU efficiency during these studies.

  12. Development of seismic tomography software for hybrid supercomputers

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
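    The linearized update at the heart of this scheme can be illustrated with a small damped least-squares solve. The sketch below is generic Python/NumPy, not the project's parallel code: the tomographic matrix, damping value and toy data are assumptions, and a production code would use sparse, parallel solvers on the CPU/accelerator hardware discussed above.

```python
import numpy as np

def damped_ls_update(G, residuals, damping=1.0):
    """One linearized tomography update: solve (G^T G + damping*I) dm = G^T dt.

    G        : tomographic matrix of ray-path sensitivities (n_rays x n_cells)
    residuals: travel-time residuals dt (observed minus predicted)
    damping  : Tikhonov regularization weight
    """
    n = G.shape[1]
    A = G.T @ G + damping * np.eye(n)
    return np.linalg.solve(A, G.T @ residuals)

# Toy example: 200 rays crossing a 10x10-cell slowness perturbation model.
rng = np.random.default_rng(0)
G = rng.random((200, 100)) * (rng.random((200, 100)) < 0.1)   # sparse-ish path lengths
true_dm = rng.standard_normal(100) * 0.01
dt = G @ true_dm + 1e-4 * rng.standard_normal(200)
dm = damped_ls_update(G, dt, damping=0.1)
print(float(np.corrcoef(dm, true_dm)[0, 1]))   # recovered model correlates with the truth
```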

  13. A color graphics environment in support of supercomputer systems

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, R.

    1985-01-01

    An initial step in the integration of an upgrade of a VPS-32 supercomputer to 16 million 64-bit words, to be closely followed by a further upgrade to 32 million words, was to develop a graphics language commonality with other computers at the Langley Center. The power of the upgraded supercomputer is made available to users at individual workstations, who will aid in defining the direction for future expansions in both graphics software and workstation requirements for the supercomputers. The LAN used is an ETHERNET configuration featuring both CYBER mainframe and PDP 11/34 image generator computers. The system includes a film recorder for image production in slide, CRT, 16 mm film, 35 mm film or Polaroid film images. The workstations have screen resolutions of 1024 x 1024 with each pixel being one of 256 colors selected from a palette of 16 million colors. Each screen can have up to 8 windows open at a time, and is driven by an MC68000 microprocessor drawing on 4.5 Mb RAM, a 40 Mb hard disk and two floppy drives. Input is from a keyboard, digitizer pad, joystick or light pen. The system now allows researchers to view computed results in video time before printing out selected data.

  14. Extracting the Textual and Temporal Structure of Supercomputing Logs

    SciTech Connect

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
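
    The flavor of the syntactic-grouping step can be sketched as follows (a deliberately simplified stand-in in Python, not the paper's online clustering algorithm): variable fields in each message are masked so that messages sharing a template fall into the same group.

    ```python
    # Hedged sketch of the general idea: group log messages into syntactic
    # templates by masking variable fields, then count each template.
    import re
    from collections import Counter

    def template(message: str) -> str:
        """Replace hex IDs and numbers with placeholders to expose the syntax."""
        msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)
        msg = re.sub(r"\d+", "<NUM>", msg)
        return msg

    logs = [
        "node 17 ECC error at 0x1a2b",
        "node 42 ECC error at 0xffe0",
        "link 3 retransmit count 250",
        "link 9 retransmit count 7",
    ]
    groups = Counter(template(line) for line in logs)
    for tpl, count in groups.items():
        print(count, tpl)
    ```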

  15. QMachine: commodity supercomputing in web browsers

    PubMed Central

    2014-01-01

    Background Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics’ “Big Data” from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. Results QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running “download and install” software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. Conclusions QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments. PMID:24913605

  16. Impact of the Columbia Supercomputer on NASA Space and Exploration Mission

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kwak, Dochan; Kiris, Cetin; Lawrence, Scott

    2006-01-01

    NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold, and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions, including space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprised of massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing HEC challenges that must be overcome to solve space-related science problems in the future.

  17. Large-scale integrated super-computing platform for next generation virtual drug discovery.

    PubMed

    Mitchell, Wayne; Matsumoto, Shunji

    2011-08-01

    Traditional drug discovery starts by experimentally screening chemical libraries to find hit compounds that bind to protein targets, modulating their activity. Subsequent rounds of iterative chemical derivatization and rescreening are conducted to enhance the potency, selectivity, and pharmacological properties of hit compounds. Although computational docking of ligands to targets has been used to augment the empirical discovery process, its historical effectiveness has been limited because of the poor correlation of ligand dock scores and experimentally determined binding constants. Recent progress in super-computing, coupled to theoretical insights, allows the calculation of the Gibbs free energy, and therefore accurate binding constants, for unusually large ligand-receptor systems. This advance extends the potential of virtual drug discovery. A specific embodiment of the technology, integrating de novo, abstract fragment based drug design, sophisticated molecular simulation, and the ability to calculate thermodynamic binding constants with unprecedented accuracy, is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Positive Factors Influencing the Advancement of Women to the Role of Head Athletic Trainer in the National Collegiate Athletic Association Divisions II and III

    PubMed Central

    Mazerolle, Stephanie M.; Eason, Christianne M.

    2016-01-01

    Context:  Research suggests that women do not pursue leadership positions in athletic training due to a variety of reasons, including family challenges, organizational constraints, and reluctance to hold the position. The literature has been focused on the National Collegiate Athletic Association Division I setting, limiting our full understanding. Objective:  To examine factors that help women as they worked toward the position of head athletic trainer. Design:  Qualitative study. Setting:  Divisions II and III. Patients or Other Participants:  Seventy-seven women who were employed as head athletic trainers at the Division II or III level participated in our study. Participants were 38 ± 9 (range = 24−57) years old and had an average of 14 ± 8 (range = 1−33) years of athletic training experience. Data Collection and Analysis:  We conducted online interviews. Participants journaled their reflections to a series of open-ended questions pertaining to their experiences as head athletic trainers. Data were analyzed using a general inductive approach. Credibility was secured by peer review and researcher triangulation. Results:  Three organizational facilitators emerged from the data, workplace atmosphere, mentors, and past work experiences. These organizational factors were directly tied to aspects within the athletic trainer's employment setting that allowed her to enter the role. One individual-level facilitator was found: personal attributes that were described as helpful for women in transitioning to the role of the head athletic trainer. Participants discussed being leaders and persisting toward their career goals. Conclusions:  Women working in Divisions II and III experience similar facilitators to assuming the role of head athletic trainer as those working in the Division I setting. Divisions II and III were viewed as more favorable for women seeking the role of head athletic trainer, but like those in the role in the Division I setting

  19. Positive Factors Influencing the Advancement of Women to the Role of Head Athletic Trainer in the National Collegiate Athletic Association Divisions II and III.

    PubMed

    Mazerolle, Stephanie M; Eason, Christianne M

    2016-07-01

    Research suggests that women do not pursue leadership positions in athletic training due to a variety of reasons, including family challenges, organizational constraints, and reluctance to hold the position. The literature has been focused on the National Collegiate Athletic Association Division I setting, limiting our full understanding. To examine factors that help women as they worked toward the position of head athletic trainer. Qualitative study. Divisions II and III. Seventy-seven women who were employed as head athletic trainers at the Division II or III level participated in our study. Participants were 38 ± 9 (range = 24-57) years old and had an average of 14 ± 8 (range = 1-33) years of athletic training experience. We conducted online interviews. Participants journaled their reflections to a series of open-ended questions pertaining to their experiences as head athletic trainers. Data were analyzed using a general inductive approach. Credibility was secured by peer review and researcher triangulation. Three organizational facilitators emerged from the data, workplace atmosphere, mentors, and past work experiences. These organizational factors were directly tied to aspects within the athletic trainer's employment setting that allowed her to enter the role. One individual-level facilitator was found: personal attributes that were described as helpful for women in transitioning to the role of the head athletic trainer. Participants discussed being leaders and persisting toward their career goals. Women working in Divisions II and III experience similar facilitators to assuming the role of head athletic trainer as those working in the Division I setting. Divisions II and III were viewed as more favorable for women seeking the role of head athletic trainer, but like those in the role in the Division I setting, women must have leadership skills.

  20. Supercomputing Aspects for Simulating Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin C.

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on the explicit message-passing interface across processors and is primarily suited for distributed memory systems. INS3D-MLP is based on a multi-level parallel method and is suitable for distributed-shared memory systems. For the entire turbo-pump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data will be obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other and provides great flexibility when the boundary movement creates large displacements. Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations.
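
    The artificial compressibility coupling mentioned in the last sentence can be illustrated with a small Python toy (a schematic periodic 2-D sketch with a viscous term added for stability; this is not INS3D itself): pressure is advanced in pseudo-time from the velocity divergence while the velocity responds to the pressure gradient, so the incompressibility error decays.

    ```python
    # Schematic sketch of the artificial-compressibility idea (toy illustration).
    import numpy as np

    n = 32
    h, beta, nu, dtau = 1.0 / n, 5.0, 0.05, 0.002
    rng = np.random.default_rng(1)
    u = 0.1 * rng.normal(size=(n, n))          # toy velocity components
    v = 0.1 * rng.normal(size=(n, n))
    p = np.zeros((n, n))

    def ddx(f):   # centered difference, periodic in x
        return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * h)

    def ddy(f):   # centered difference, periodic in y
        return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * h)

    def lap(f):   # 5-point Laplacian, periodic
        return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
                np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / h**2

    print("initial max |div u|:", np.abs(ddx(u) + ddy(v)).max())
    for _ in range(2000):
        div = ddx(u) + ddy(v)
        p -= dtau * beta * div                  # dp/dtau = -beta * div(u)
        u += dtau * (-ddx(p) + nu * lap(u))     # du/dtau = -dp/dx + nu * lap(u)
        v += dtau * (-ddy(p) + nu * lap(v))
    print("final   max |div u|:", np.abs(ddx(u) + ddy(v)).max())
    ```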

  1. Structures Division

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The NASA Lewis Research Center Structures Division is an international leader and pioneer in developing new structural analysis, life prediction, and failure analysis related to rotating machinery and more specifically to hot section components in air-breathing aircraft engines and spacecraft propulsion systems. The research consists of both deterministic and probabilistic methodology. Studies include, but are not limited to, high-cycle and low-cycle fatigue as well as material creep. Studies of structural failure are at both the micro- and macrolevels. Nondestructive evaluation methods related to structural reliability are developed, applied, and evaluated. Materials from which structural components are made, studied, and tested are monolithics and metal-matrix, polymer-matrix, and ceramic-matrix composites. Aeroelastic models are developed and used to determine the cyclic loading and life of fan and turbine blades. Life models are developed and tested for bearings, seals, and other mechanical components, such as magnetic suspensions. Results of these studies are published in NASA technical papers and reference publication as well as in technical society journal articles. The results of the work of the Structures Division and the bibliography of its publications for calendar year 1995 are presented.

  2. Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers.

    PubMed

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-07-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementations of multiple time-stepping (MTS) algorithm improved the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach performing complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of testing the performance characteristics of supercomputers with advanced computational algorithms that offer optimal trade-off to achieve enhanced computational performance serves to demonstrate that such simulations are feasible with currently available HPC resources.
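
    The multiple time-stepping (MTS) idea behind the reported speedups can be sketched generically (an r-RESPA-style splitting on a toy oscillator in Python, not the authors' platelet code): cheap fast forces are sub-cycled while the expensive slow force is evaluated only once per outer step.

    ```python
    # Minimal sketch of multiple time-stepping (MTS) versus sub-cycling cost.
    import numpy as np

    def fast_force(x):   # stiff, cheap force (e.g. bonded interactions)
        return -100.0 * x

    def slow_force(x):   # soft, expensive force (e.g. long-range hydrodynamics)
        return -1.0 * x

    def mts_step(x, v, dt_outer, n_inner, mass=1.0):
        """One outer step: kick with the slow force, then sub-cycle the fast force."""
        dt_inner = dt_outer / n_inner
        v += 0.5 * dt_outer * slow_force(x) / mass          # slow half-kick
        for _ in range(n_inner):                            # velocity-Verlet inner loop
            v += 0.5 * dt_inner * fast_force(x) / mass
            x += dt_inner * v
            v += 0.5 * dt_inner * fast_force(x) / mass
        v += 0.5 * dt_outer * slow_force(x) / mass          # slow half-kick
        return x, v

    x, v = 1.0, 0.0
    for _ in range(1000):
        x, v = mts_step(x, v, dt_outer=0.01, n_inner=10)
    print("position, velocity after 1000 outer steps:", x, v)
    ```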

  3. Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers

    PubMed Central

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-01-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementations of multiple time-stepping (MTS) algorithm improved the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach performing complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of testing the performance characteristics of supercomputers with advanced computational algorithms that offer optimal trade-off to achieve enhanced computational performance serves to demonstrate that such simulations are feasible with currently available HPC resources. PMID:27570250

  4. Scalability Test of multiscale fluid-platelet model for three top supercomputers

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-07-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementations of multiple time-stepping (MTS) algorithm improved the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach performing complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of testing the performance characteristics of supercomputers with advanced computational algorithms that offer optimal trade-off to achieve enhanced computational performance serves to demonstrate that such simulations are feasible with currently available HPC resources.

  5. Interfaces for Advanced Computing.

    ERIC Educational Resources Information Center

    Foley, James D.

    1987-01-01

    Discusses the coming generation of supercomputers that will have the power to make elaborate "artificial realities" that facilitate user-computer communication. Illustrates these technological advancements with examples of the use of head-mounted monitors which are connected to position and orientation sensors, and gloves that track finger and…

  6. Opportunities for leveraging OS virtualization in high-end supercomputing.

    SciTech Connect

    Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke

    2010-11-01

    This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.

  7. New Mexico Supercomputing Challenge 1993 evaluation report. Progress report

    SciTech Connect

    Trainor, M.; Eker, P.; Kratzer, D.; Foster, M.; Anderson, M.

    1993-11-01

    This report provides the evaluation of the third year (1993) of the New Mexico High School Supercomputing Challenge. It includes data to determine whether we met the program objectives, measures participation, and compares progress from the first to the third years. This year's report is a more complete assessment than last year's, providing both formative and summative evaluation data. Data indicates that the 1993 Challenge significantly changed many students' career plans and attitudes toward science, provided professional development for teachers, and caused some changes in computer offerings in several participating schools.

  8. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  9. Supercomputer predictive modeling for ensuring space flight safety

    NASA Astrophysics Data System (ADS)

    Betelin, V. B.; Smirnov, N. N.; Nikitin, V. F.

    2015-04-01

    The development of new types of rocket engines, as well as the upgrading of existing engines, requires computer-aided design and mathematical tools for supercomputer modeling of all the basic processes of mixing, ignition, combustion and outflow through the nozzle. Even small upgrades and changes introduced in existing rocket engines without proper simulation have caused the severe accidents at launch sites witnessed recently. The paper presents the results of computer code development, verification and validation, making it possible to simulate unsteady processes of ignition and combustion in rocket engines.

  10. Chemical Technology Division annual technical report 1997

    SciTech Connect

    1998-06-01

    The Chemical Technology (CMT) Division is a diverse technical organization with principal emphases in environmental management and development of advanced energy sources. The Division conducts research and development in three general areas: (1) development of advanced power sources for stationary and transportation applications and for consumer electronics, (2) management of high-level and low-level nuclear wastes and hazardous wastes, and (3) electrometallurgical treatment of spent nuclear fuel. The Division also performs basic research in catalytic chemistry involving molecular energy resources, mechanisms of ion transport in lithium battery electrolytes, and the chemistry of technology-relevant materials and electrified interfaces. In addition, the Division operates the Analytical Chemistry Laboratory, which conducts research in analytical chemistry and provides analytical services for programs at Argonne National Laboratory (ANL) and other organizations. Technical highlights of the Division's activities during 1997 are presented.

  11. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    NASA Astrophysics Data System (ADS)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences today. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might strongly vary in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers, and therefore permit very high-resolution simulations. We propose an efficient approach to solving memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
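
    The iterative scheme described above can be sketched generically in Python (a damped pseudo-transient relaxation of a Poisson residual on a regular Cartesian grid; the parameters are illustrative and this is not the authors' GPU solver):

    ```python
    # Hedged sketch: residual of a Poisson problem relaxed with inertia-like
    # damping ("preconditioning of the residuals") until it drops below a tolerance.
    import numpy as np

    n = 64
    h = 1.0 / (n - 1)
    rhs = np.ones((n, n))                 # source term
    u = np.zeros((n, n))                  # unknown field, zero Dirichlet boundary
    du = np.zeros((n, n))                 # update "velocity" carrying damping memory
    damp = 0.9
    dtau = h**2 / 4.1                     # pseudo-time step below the explicit limit

    for it in range(20000):
        res = np.zeros_like(u)
        res[1:-1, 1:-1] = ((u[2:, 1:-1] + u[:-2, 1:-1] +
                            u[1:-1, 2:] + u[1:-1, :-2] -
                            4 * u[1:-1, 1:-1]) / h**2 + rhs[1:-1, 1:-1])
        if np.abs(res).max() < 1e-6:
            break
        du = damp * du + dtau * res       # damped residual update
        u += du
    print("converged after", it + 1, "iterations; max residual:", np.abs(res).max())
    ```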

  12. Long Divisions

    NASA Image and Video Library

    2016-08-08

    The shadow of Saturn on the rings, which stretched across all of the rings earlier in Cassini's mission (see PIA08362), now barely makes it past the Cassini division. The changing length of the shadow marks the passing of the seasons on Saturn. As the planet nears its northern-hemisphere solstice in May 2017, the shadow will get even shorter. At solstice, the shadow's edge will be about 28,000 miles (45,000 kilometers) from the planet's surface, barely making it past the middle of the B ring. The moon Mimas is a few pixels wide, near the lower left in this image. This view looks toward the sunlit side of the rings from about 35 degrees above the ring plane. The image was taken in visible light with the Cassini spacecraft wide-angle camera on May 21, 2016. The view was obtained at a distance of approximately 2.0 million miles (3.2 million kilometers) from Saturn. Image scale is 120 miles (190 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20494

  13. A special purpose silicon compiler for designing supercomputing VLSI systems

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over the silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  14. Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Nitzberg, Bill

    1999-01-01

    The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet, utilization was affected little. In particular these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
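
    The gap between a strict FIFO policy and backfilling can be illustrated with a toy Python simulation (a simplified model with exact runtimes and a crude shadow-time estimate, not the NAS schedulers themselves):

    ```python
    # Toy illustration of why backfilling raises utilization over strict FIFO.
    # Jobs are (nodes, runtime); runtimes are assumed exact.
    import heapq

    def simulate(jobs, total_nodes, backfill):
        free, t, queue = total_nodes, 0.0, list(jobs)
        running = []                       # heap of (end_time, nodes)
        busy_node_time = 0.0
        while queue or running:
            # start the queue head whenever it fits
            while queue and queue[0][0] <= free:
                nodes, run = queue.pop(0)
                heapq.heappush(running, (t + run, nodes)); free -= nodes
            if backfill and queue and running:
                shadow = running[0][0]     # crude estimate of the head's start time
                for i, (nodes, run) in enumerate(list(queue[1:]), start=1):
                    if nodes <= free and t + run <= shadow:
                        queue.pop(i)       # backfill a small job that will not delay the head
                        heapq.heappush(running, (t + run, nodes)); free -= nodes
                        break
            # advance to the next job completion and account for busy node-hours
            end, nodes = heapq.heappop(running)
            busy_node_time += (total_nodes - free) * (end - t)
            t, free = end, free + nodes
        return busy_node_time / (total_nodes * t)

    workload = [(64, 4.0), (128, 2.0), (32, 1.0), (16, 3.0), (96, 2.0), (8, 0.5)] * 5
    for policy in (False, True):
        print("backfill" if policy else "FIFO    ",
              f"utilization = {simulate(workload, total_nodes=128, backfill=policy):.2f}")
    ```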

  15. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard; Parker, Lynne Edwards

    2014-01-01

    Building Energy Modeling (BEM) is an approach to modeling the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, thereby making building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research effort, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
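
    The surrogate-agent idea can be sketched with a generic Python stand-in (synthetic data and a scikit-learn regressor; the function fake_energyplus below is purely hypothetical and is not EnergyPlus): simulator runs become training pairs, and the learned agent then answers queries in a fraction of the simulator's cost.

    ```python
    # Hedged sketch of training a surrogate "agent" on simulator outputs.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def fake_energyplus(params):
        """Stand-in for an expensive EnergyPlus run (purely synthetic)."""
        insulation, setpoint, infiltration = params.T
        return 100.0 / insulation + 2.0 * np.abs(setpoint - 21.0) + 30.0 * infiltration

    X = np.column_stack([rng.uniform(1, 10, 5000),     # insulation R-value
                         rng.uniform(18, 26, 5000),    # thermostat setpoint (C)
                         rng.uniform(0.1, 1.0, 5000)]) # infiltration rate
    y = fake_energyplus(X)

    agent = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    query = np.array([[5.0, 22.0, 0.3]])
    print("surrogate prediction:", agent.predict(query)[0],
          "| simulator value:", fake_energyplus(query)[0])
    ```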

  16. A fault tolerant spacecraft supercomputer to enable a new class of scientific discovery

    NASA Technical Reports Server (NTRS)

    Katz, D. S.; McVittie, T. I.; Silliman, A. G., Jr.

    2000-01-01

    The goal of the Remote Exploration and Experimentation (REE) Project is to move supercomputing into space in a cost-effective manner and to allow the use of inexpensive, state-of-the-art, commercial-off-the-shelf components and subsystems in these space-based supercomputers.

  17. Automatic discovery of the communication network topology for building a supercomputer model

    NASA Astrophysics Data System (ADS)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.

  18. Refinement of herpesvirus B-capsid structure on parallel supercomputers.

    PubMed

    Zhou, Z H; Chiu, W; Haskell, K; Spears, H; Jakana, J; Rixon, F J; Scott, L R

    1998-01-01

    Electron cryomicroscopy and icosahedral reconstruction are used to obtain the three-dimensional structure of the 1250-A-diameter herpesvirus B-capsid. The centers and orientations of particles in focal pairs of 400-kV, spot-scan micrographs are determined and iteratively refined by common-lines-based local and global refinement procedures. We describe the rationale behind choosing shared-memory multiprocessor computers for executing the global refinement, which is the most computationally intensive step in the reconstruction procedure. This refinement has been implemented on three different shared-memory supercomputers. The speedup and efficiency are evaluated by using test data sets with different numbers of particles and processors. Using this parallel refinement program, we refine the herpesvirus B-capsid from 355-particle images to 13-A resolution. The map shows new structural features and interactions of the protein subunits in the three distinct morphological units: penton, hexon, and triplex of this T = 16 icosahedral particle.

  19. Rekindle the Fire: Building Supercomputers to Solve Dynamic Problems

    SciTech Connect

    Studham, Scott S.

    2004-02-16

    Seymour Cray had a "Let's go to the moon" attitude when it came to building high-performance computers. His drive was to create architectures designed to solve the most challenging problems. Modern high-performance computer architects, however, seem to be focusing on building the largest floating-point-generation machines by using truckloads of commodity parts. Don't get me wrong; current clusters can solve a class of problems that are untouchable by any other system in the world, including the supercomputers of yesteryear. Many of the world's fastest clusters provide new insights into weather forecasting and our understanding of fundamental sciences, and provide the ability to model our nuclear stockpiles. Let's call this class of problem a first-principles simulation because the simulations are based on a fundamental physical understanding or model.

  20. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.

  1. Modeling the weather with a data flow supercomputer

    NASA Technical Reports Server (NTRS)

    Dennis, J. B.; Gao, G.-R.; Todd, K. W.

    1984-01-01

    A static concept of data flow architecture is considered for a supercomputer for weather modeling. The machine level instructions are loaded into specific memory locations before computation is initiated, with only one instruction active at a time. The machine would have processing element, functional unit, array memory, memory routing and distribution routing network elements all contained on microprocessors. A value-oriented algorithmic language (VAL) would be employed and would have, as basic operations, simple functions deriving results from operand values. Details of the machine language format, computations with an array and file processing procedures are outlined. A global weather model is discussed in terms of a static architecture and the potential computation rate is analyzed. The results indicate that detailed design studies are warranted to quantify costs and parts fabrication requirements.

  2. Direct numerical simulation of turbulence using GPU accelerated supercomputers

    NASA Astrophysics Data System (ADS)

    Khajeh-Saeed, Ali; Blair Perot, J.

    2013-02-01

    Direct numerical simulations of turbulence are optimized for up to 192 graphics processors. The results from two large GPU clusters are compared to the performance of corresponding CPU clusters. A number of important algorithm changes are necessary to access the full computational power of graphics processors and these adaptations are discussed. It is shown that the handling of subdomain communication becomes even more critical when using GPU-based supercomputers. The potential for overlap of MPI communication with GPU computation is analyzed and then optimized. Detailed timings reveal that the internal calculations are now so efficient that the operations related to MPI communication are the primary scaling bottleneck at all but the very largest problem sizes that can fit on the hardware. This work gives a glimpse of the CFD performance issues that will dominate many hardware platforms in the near future.

  3. An analysis of file migration in a UNIX supercomputing environment

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1992-01-01

    The supercomputer center at the National Center for Atmospheric Research (NCAR) migrates large numbers of files to and from its mass storage system (MSS) because there is insufficient space to store them on the Cray supercomputer's local disks. This paper presents an analysis of file migration data collected over two years. The analysis shows that requests to the MSS are periodic, with one-day and one-week periods. Read requests to the MSS account for the majority of the periodicity, as write requests are relatively constant over the course of a week. Additionally, reads show a far greater fluctuation than writes over a day and week, since reads are driven by human users while writes are machine-driven.
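
    The daily and weekly periodicity described above can be exposed with standard spectral analysis; the Python sketch below uses synthetic hourly request counts, not the NCAR traces.

    ```python
    # Small sketch: day and week cycles show up as peaks at periods of 24 and
    # 168 hours in the Fourier spectrum of the request counts.
    import numpy as np

    hours = np.arange(24 * 7 * 8)                        # eight weeks of hourly bins
    rng = np.random.default_rng(3)
    reads = (50 + 30 * np.sin(2 * np.pi * hours / 24)    # daily cycle
                + 15 * np.sin(2 * np.pi * hours / 168)   # weekly cycle
                + rng.normal(0, 5, hours.size))          # noise

    spectrum = np.abs(np.fft.rfft(reads - reads.mean()))
    freqs = np.fft.rfftfreq(hours.size, d=1.0)           # cycles per hour
    top = np.argsort(spectrum)[-2:]                      # two strongest components
    print("dominant periods (hours):", np.sort(1.0 / freqs[top]))
    ```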

  4. Vectorized program architectures for supercomputer-aided circuit design

    SciTech Connect

    Rizzoli, V.; Ferlito, M.; Neri, A.

    1986-01-01

    Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the "semantic" vectorization of microwave circuit design software. Speed-up factors of the order of 50 can be obtained on a typical vector processor (Cray X-MP), with respect to the most powerful scalar computers (CDC 7600), with cost reductions of more than one order of magnitude. This could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.

  5. Supercomputer requirements for selected disciplines important to aerospace

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1989-01-01

    Speed and memory requirements placed on supercomputers by five different disciplines important to aerospace are discussed and compared with the capabilities of various existing computers and those projected to be available before the end of this century. The disciplines chosen for consideration are turbulence physics, aerodynamics, aerothermodynamics, chemistry, and human vision modeling. Example results for problems illustrative of those currently being solved in each of the disciplines are presented and discussed. Limitations imposed on physical modeling and geometrical complexity by the need to obtain solutions in practical amounts of time are identified. Computational challenges for the future, for which either some or all of the current limitations are removed, are described. Meeting some of the challenges will require computer speeds in excess of exaflop/s (10 to the 18th flop/s) and memories in excess of petawords (10 to the 15th words).

  6. Supercomputer requirements for selected disciplines important to aerospace

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1989-01-01

    Speed and memory requirements placed on supercomputers by five different disciplines important to aerospace are discussed and compared with the capabilities of various existing computers and those projected to be available before the end of this century. The disciplines chosen for consideration are turbulence physics, aerodynamics, aerothermodynamics, chemistry, and human vision modeling. Example results for problems illustrative of those currently being solved in each of the disciplines are presented and discussed. Limitations imposed on physical modeling and geometrical complexity by the need to obtain solutions in practical amounts of time are identified. Computational challenges for the future, for which either some or all of the current limitations are removed, are described. Meeting some of the challenges will require computer speeds in excess of exaflop/s (10 to the 18th flop/s) and memories in excess of petawords (10 to the 15th words).

  7. Solving global shallow water equations on heterogeneous supercomputers

    PubMed Central

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for the model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Arrays) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement on performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation on the programming paradigm of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture of both the potential performance benefits and the programming efforts involved. PMID:28282428
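
    The load-balancing intent of such a partition scheme can be sketched in a few lines of Python (a generic proportional split based on measured throughputs; the numbers are illustrative, not the paper's measurements):

    ```python
    # Hedged sketch: split domain columns between a CPU and an accelerator in
    # proportion to their throughputs so both finish a time step together.
    def split_columns(n_columns, cpu_rate, acc_rate):
        """Return (cpu_share, accelerator_share) of grid columns."""
        acc_cols = round(n_columns * acc_rate / (cpu_rate + acc_rate))
        return n_columns - acc_cols, acc_cols

    # Example: accelerator measured ~15x faster than the 12-core CPU node
    cpu_cols, acc_cols = split_columns(n_columns=2048, cpu_rate=1.0, acc_rate=15.0)
    step_time = max(cpu_cols / 1.0, acc_cols / 15.0)   # relative time units
    print(f"CPU columns: {cpu_cols}, accelerator columns: {acc_cols}, "
          f"balanced step time: {step_time:.0f}")
    ```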

  8. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes the benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and consumed watts.

  9. Solving global shallow water equations on heterogeneous supercomputers.

    PubMed

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for the model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Arrays) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement on performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation on the programming paradigm of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture of both the potential performance benefits and the programming efforts involved.

  10. 1998 Chemical Technology Division Annual Technical Report.

    SciTech Connect

    Ackerman, J.P.; Einziger, R.E.; Gay, E.C.; Green, D.W.; Miller, J.F.

    1999-08-06

    The Chemical Technology (CMT) Division is a diverse technical organization with principal emphases in environmental management and development of advanced energy sources. The Division conducts research and development in three general areas: (1) development of advanced power sources for stationary and transportation applications and for consumer electronics, (2) management of high-level and low-level nuclear wastes and hazardous wastes, and (3) electrometallurgical treatment of spent nuclear fuel. The Division also performs basic research in catalytic chemistry involving molecular energy resources, mechanisms of ion transport in lithium battery electrolytes, and the chemistry of technology-relevant materials. In addition, the Division operates the Analytical Chemistry Laboratory, which conducts research in analytical chemistry and provides analytical services for programs at Argonne National Laboratory (ANL) and other organizations. Technical highlights of the Division's activities during 1998 are presented.

  11. Accelerator Technology Division

    NASA Astrophysics Data System (ADS)

    1992-04-01

    In fiscal year (FY) 1991, the Accelerator Technology (AT) division continued fulfilling its mission to pursue accelerator science and technology and to develop new accelerator concepts for application to research, defense, energy, industry, and other areas of national interest. This report discusses the following programs: The Ground Test Accelerator Program; APLE Free-Electron Laser Program; Accelerator Transmutation of Waste; JAERI, OMEGA Project, and Intense Neutron Source for Materials Testing; Advanced Free-Electron Laser Initiative; Superconducting Super Collider; The High-Power Microwave Program; (Phi) Factory Collaboration; Neutral Particle Beam Power System Highlights; Accelerator Physics and Special Projects; Magnetic Optics and Beam Diagnostics; Accelerator Design and Engineering; Radio-Frequency Technology; Free-Electron Laser Technology; Accelerator Controls and Automation; Very High-Power Microwave Sources and Effects; and GTA Installation, Commissioning, and Operations.

  12. Group-based variant calling leveraging next-generation supercomputing for large-scale whole-genome sequencing studies.

    PubMed

    Standish, Kristopher A; Carland, Tristan M; Lockwood, Glenn K; Pfeiffer, Wayne; Tatineni, Mahidhar; Huang, C Chris; Lamberth, Sarah; Cherkas, Yauheniya; Brodmerkel, Carrie; Jaeger, Ed; Smith, Lance; Rajagopal, Gunaretnam; Curran, Mark E; Schork, Nicholas J

    2015-09-22

    Next-generation sequencing (NGS) technologies have become much more efficient, allowing whole human genomes to be sequenced faster and cheaper than ever before. However, processing the raw sequence reads associated with NGS technologies requires care and sophistication in order to draw compelling inferences about phenotypic consequences of variation in human genomes. It has been shown that different approaches to variant calling from NGS data can lead to different conclusions. Ensuring appropriate accuracy and quality in variant calling can come at a computational cost. We describe our experience implementing and evaluating a group-based approach to calling variants on large numbers of whole human genomes. We explore the influence of many factors that may impact the accuracy and efficiency of group-based variant calling, including group size, the biogeographical backgrounds of the individuals who have been sequenced, and the computing environment used. We make efficient use of the Gordon supercomputer cluster at the San Diego Supercomputer Center by incorporating job-packing and parallelization considerations into our workflow while calling variants on 437 whole human genomes generated as part of a large association study. We ultimately find that our workflow resulted in high-quality variant calls in a computationally efficient manner. We argue that studies like ours should motivate further investigations combining hardware-oriented advances in computing systems with algorithmic developments to tackle emerging 'big data' problems in biomedical research brought on by the expansion of NGS technologies.
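
    The job-packing consideration mentioned above can be illustrated with a generic first-fit-decreasing packer in Python (hypothetical core counts; this is not the authors' Gordon workflow):

    ```python
    # Illustrative sketch: pack per-genome tasks onto nodes to waste fewer node-hours.
    def pack_jobs(core_requests, cores_per_node=16):
        """First-fit-decreasing packing of job core counts onto nodes."""
        nodes = []                                    # remaining free cores per node
        placement = []                                # (job_id, node_index) pairs
        for job_id, cores in sorted(enumerate(core_requests),
                                    key=lambda kv: -kv[1]):
            for i, free in enumerate(nodes):
                if cores <= free:                     # fits on an existing node
                    nodes[i] -= cores
                    placement.append((job_id, i))
                    break
            else:                                     # open a new node
                nodes.append(cores_per_node - cores)
                placement.append((job_id, len(nodes) - 1))
        return placement, len(nodes)

    jobs = [4, 8, 2, 6, 4, 12, 2, 8, 4]               # cores per genome-calling task
    placement, n_nodes = pack_jobs(jobs)
    print(f"packed {len(jobs)} tasks onto {n_nodes} nodes:", placement)
    ```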

  13. An inter-realm, cyber-security infrastructure for virtual supercomputing

    SciTech Connect

    Al-Muhtadi, J.; Feng, W. C.; Fisk, M. E.

    2001-01-01

    Virtual supercomputing (i.e., high-performance grid computing) is poised to revolutionize the way we think about and use computing. However, the security of the links interconnecting the nodes within such an environment will be its Achilles heel, particularly when secure communication is required to tunnel through heterogeneous domains. In this paper we examine existing security mechanisms, show their inadequacy, and design a comprehensive cybersecurity infrastructure that meets the security requirements of virtual supercomputing. Keywords: security, virtual supercomputing, grid computing, high-performance computing, GSS-API, SSL, IPsec, component-based software, dynamic reconfiguration.

  14. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    SciTech Connect

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M; Connor, Carolyn M

    2009-01-01

    This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
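
    For reference, the non-preconditioned Conjugate Gradient algorithm being benchmarked can be sketched in plain NumPy (a textbook implementation, not the Cell or FPGA code):

    ```python
    # Reference sketch of non-preconditioned Conjugate Gradient for A x = b.
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for symmetric positive-definite A."""
        x = np.zeros_like(b)
        r = b - A @ x                       # initial residual
        p = r.copy()                        # initial search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)           # step length along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p       # conjugate direction update
            rs = rs_new
        return x

    rng = np.random.default_rng(0)
    M = rng.normal(size=(200, 200))
    A = M @ M.T + 200 * np.eye(200)         # SPD test matrix
    b = rng.normal(size=200)
    x = conjugate_gradient(A, b)
    print("residual norm:", np.linalg.norm(b - A @ x))
    ```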

  15. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    SciTech Connect

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M; Connor, Carolyn M

    2009-03-10

    This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.

  16. Supercomputers Join the Fight against Cancer – U.S. Department of Energy

    SciTech Connect

    2016-06-29

    The Department of Energy has some of the best supercomputers in the world. Now, they’re joining the fight against cancer. Learn about our new partnership with the National Cancer Institute and GlaxoSmithKline Pharmaceuticals.

  17. Requirements for supercomputing in energy research: The transition to massively parallel computing

    SciTech Connect

    Not Available

    1993-02-01

    This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

  18. Advanced Silicon Photonic Transceivers - the Case of a Wavelength Division and Polarization Multiplexed Quadrature Phase Shift Keying Receiver for Terabit/s Optical Transmission

    DTIC Science & Technology

    2017-03-10

    The receiver operates at 40 Gbaud and can handle advanced modulation formats by the co-integration of a passive 90 degree optical hybrid, high-speed balanced Ge photodetectors, and a high-speed two-channel transimpedance amplifier. The photonic circuit comprises polarization grating couplers, a 2 by 4 multi-mode interferometer (2x4-MMI) acting as a 90° hybrid, and two pairs of balanced germanium photodiodes (Ge PDs).

  19. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
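
    At the heart of this imaging approach is the cross-correlation of noise records between station pairs. Purely as a point of reference, and leaving out the pre-processing, spectral whitening, stacking and GPU offloading a production pipeline would need, a single-pair, frequency-domain cross-correlation can be sketched in NumPy as follows; the traces and lag window here are synthetic.

    import numpy as np

    def noise_cross_correlation(trace_a, trace_b, max_lag):
        """Frequency-domain cross-correlation of two equal-length noise records."""
        n = len(trace_a)
        nfft = 2 * n                                   # zero-pad to avoid circular wrap-around
        spec_a = np.fft.rfft(trace_a, nfft)
        spec_b = np.fft.rfft(trace_b, nfft)
        cc = np.fft.irfft(spec_a * np.conj(spec_b), nfft)
        # Reorder so the output covers lags -max_lag .. +max_lag.
        return np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))

    # Toy check: correlating a trace with a delayed copy of itself peaks at the delay.
    rng = np.random.default_rng(1)
    noise = rng.standard_normal(10_000)
    delayed = np.roll(noise, 50)
    cc = noise_cross_correlation(noise, delayed, max_lag=200)
    print(np.argmax(cc) - 200)                         # -50 with this sign convention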

  20. Supercomputers ready for use as discovery machines for neuroscience.

    PubMed

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.
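
    NEST is driven from Python through the PyNEST interface, so the workflow behind these results can at least be gestured at on a workstation scale. The sketch below builds and simulates a small random network; it assumes a recent NEST 3.x PyNEST API and is nothing like the K-scale runs reported here in size.

    import nest   # PyNEST, the Python front-end of the NEST simulator

    nest.ResetKernel()
    nest.SetKernelStatus({"local_num_threads": 8})     # threads per MPI process

    # A small random network: integrate-and-fire neurons with fixed in-degree.
    neurons = nest.Create("iaf_psc_alpha", 10_000)
    noise = nest.Create("poisson_generator", params={"rate": 8000.0})
    recorder = nest.Create("spike_recorder")

    nest.Connect(neurons, neurons,
                 conn_spec={"rule": "fixed_indegree", "indegree": 100},
                 syn_spec={"weight": 20.0, "delay": 1.5})
    nest.Connect(noise, neurons, syn_spec={"weight": 10.0})
    nest.Connect(neurons[:100], recorder)              # record a sample of the population

    nest.Simulate(1000.0)                              # one biological second (times in ms)
    print(recorder.get("n_events"), "spikes recorded")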

  1. Refinement of herpesvirus B-capsid structure on parallel supercomputers.

    PubMed Central

    Zhou, Z H; Chiu, W; Haskell, K; Spears, H; Jakana, J; Rixon, F J; Scott, L R

    1998-01-01

    Electron cryomicroscopy and icosahedral reconstruction are used to obtain the three-dimensional structure of the 1250-A-diameter herpesvirus B-capsid. The centers and orientations of particles in focal pairs of 400-kV, spot-scan micrographs are determined and iteratively refined by common-lines-based local and global refinement procedures. We describe the rationale behind choosing shared-memory multiprocessor computers for executing the global refinement, which is the most computationally intensive step in the reconstruction procedure. This refinement has been implemented on three different shared-memory supercomputers. The speedup and efficiency are evaluated by using test data sets with different numbers of particles and processors. Using this parallel refinement program, we refine the herpesvirus B-capsid from 355-particle images to 13-A resolution. The map shows new structural features and interactions of the protein subunits in the three distinct morphological units: penton, hexon, and triplex of this T = 16 icosahedral particle. PMID:9449358

  2. Supercomputers Ready for Use as Discovery Machines for Neuroscience

    PubMed Central

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience. PMID:23129998

  3. ASC Supercomputers Predict Effects of Aging on Materials

    SciTech Connect

    Kubota, A; Reisman, D B; Wolfer, W G

    2005-08-25

    In an extensive molecular dynamics (MD) study of shock compression of aluminum containing such microscopic defects as found in aged plutonium, LLNL scientists have demonstrated that ASC supercomputers live up to their promise as powerful tools to predict aging phenomena in the nuclear stockpile. Although these MD investigations are carried out on material samples containing only about 10 to 40 million atoms, not much bigger than a virus particle, they have shown that reliable materials properties and relationships between them can be extracted for density, temperature, pressure, and dynamic strength. This was proven by comparing their predictions with experimental Hugoniot data, with dynamic strength inferred from gas-gun experiments, and with the temperatures behind the shock front as calculated with hydro-codes. The effects of microscopic helium bubbles and of radiation-induced dislocation loops and voids on the equation of state were also determined and found to be small and in agreement with earlier theoretical predictions and recent diamond-anvil-cell experiments. However, these microscopic defects play an essential role in correctly predicting the dynamic strength for these nano-crystalline samples. These simulations also prove that the physics involved in shock compression experiments remains the same from the macroscopic specimens used in gas-gun experiments down to the micrometer samples to be employed in future NIF experiments. Furthermore, a practical way was discovered to reduce plastic instabilities in NIF target materials by introducing finely dispersed defects.

  4. High Resolution Aerospace Applications using the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha

    2005-01-01

    This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.

  5. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Ryabinkin, E.; Wenaus, T.

    2016-02-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  6. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    NASA Astrophysics Data System (ADS)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

  7. Numerical infinities and infinitesimals in a new supercomputing framework

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.

    2016-06-01

    Traditional computers are able to work numerically with finite numbers only. The Infinity Computer, patented recently in the USA and EU, overcomes this limitation. In fact, it is a computational device of a new kind able to work numerically not only with finite quantities but also with infinities and infinitesimals. The new supercomputing methodology is not related to non-standard analysis and does not use either Cantor's infinite cardinals or ordinals. It is founded on Euclid's Common Notion 5 saying 'The whole is greater than the part'. This postulate is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers by a finite number of symbols as numerals belonging to a positional numeral system with an infinite radix described by a specific axiom introduced ad hoc. Numerous examples of the usage of the introduced computational tools are given during the lecture. In particular, algorithms for solving optimization problems and ODEs are considered among the computational applications of the Infinity Computer. Numerical experiments executed on a software prototype of the Infinity Computer are discussed.
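
    The positional system with an infinite radix is only named, not shown, in this abstract. As an illustration drawn from Sergeyev's broader published work rather than from this record, a numeral combining infinite, finite, and infinitesimal parts can be written as follows (LaTeX sketch; the particular digits are arbitrary):

    % A numeral over the infinite radix "grossone", written here as a circled 1.
    % General form: C = c_{p_m} G^{p_m} + ... + c_{p_0} G^{p_0} + c_{p_{-1}} G^{p_{-1}} + ...
    % with finite digits c_i and exponents p_m > ... > p_0 = 0 > p_{-1} > ...
    \newcommand{\gross}{\mbox{\textcircled{\scriptsize 1}}}
    \[
      C \;=\; 5.3\,\gross^{2} \;+\; 7\,\gross^{0} \;-\; 2.4\,\gross^{-1}
    \]
    % i.e. an infinite part (5.3 grossone-squared), a finite part (7), and an
    % infinitesimal part (-2.4 grossone-inverse), all written with finitely many symbols.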

  8. Adventures in supercomputing: An innovative program for high school teachers

    SciTech Connect

    Oliver, C.E.; Hicks, H.R.; Summers, B.G.; Staten, D.G.

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  9. Maintenance and Upgrading of the Richmond Physics Supercomputing Cluster

    NASA Astrophysics Data System (ADS)

    Davda, Vikash

    2003-10-01

    The supercomputing cluster in Physics has been upgraded. It supports nuclear physics research at Jefferson Lab, which focuses on probing the quark-gluon structure of atomic nuclei. We added new slave nodes, increased storage, raised a firewall, and documented the e-mail archive relating to the cluster. The three new slave nodes were physically mounted and configured to join the cluster. A RAID for extra storage was moved from a prototype cluster and configured for this cluster. A firewall was implemented to enhance security using a separate node from the prototype cluster. The software Firewall Builder was used to set communication rules. Documentation consists primarily of e-mails exchanged with the vendor. We wanted web-based, searchable documentation. We used SWISH-E, non-proprietary indexing software designed to search through file collections such as e-mails. SWISH-E works by first creating an index. A built-in module then sets up a Perl interface for the user to define the search; the files in the index are then sorted.

  10. Supercomputing for the parallelization of whole genome analysis

    PubMed Central

    Puckelwartz, Megan J.; Pesce, Lorenzo L.; Nelakuditi, Viswateja; Dellefave-Castillo, Lisa; Golbus, Jessica R.; Day, Sharlene M.; Cappola, Thomas P.; Dorn, Gerald W.; Foster, Ian T.; McNally, Elizabeth M.

    2014-01-01

    Motivation: The declining cost of generating DNA sequence is promoting an increase in whole genome sequencing, especially as applied to the human genome. Whole genome analysis requires the alignment and comparison of raw sequence data, and results in a computational bottleneck because of limited ability to analyze multiple genomes simultaneously. Results: We have now adapted a Cray XE6 supercomputer to achieve the parallelization required for concurrent multiple genome analysis. This approach not only markedly shortens computation time but also results in increased usable sequence per genome. Relying on publicly available software, the Cray XE6 has the capacity to align and call variants on 240 whole genomes in ∼50 h. Multisample variant calling is also accelerated. Availability and implementation: The MegaSeq workflow is designed to harness the size and memory of the Cray XE6, housed at Argonne National Laboratory, for whole genome analysis in a platform designed to better match current and emerging sequencing volume. Contact: emcnally@uchicago.edu PMID:24526712
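
    The published abstract does not show the workflow code. As a rough, generic sketch of the scatter-gather idea behind accelerating multisample calling on many cores (not the actual MegaSeq implementation), one chromosome can be assigned to each worker and the resulting shards merged afterwards; the joint_caller command and file names below are hypothetical.

    import multiprocessing as mp
    import subprocess

    CHROMS = [f"chr{i}" for i in range(1, 23)] + ["chrX", "chrY"]
    BAMS = ["sampleA.bam", "sampleB.bam", "sampleC.bam"]      # hypothetical inputs

    def call_region(chrom: str) -> str:
        """Multisample variant calling restricted to one chromosome (placeholder command)."""
        out = f"{chrom}.vcf"
        subprocess.run(["joint_caller", "--region", chrom, "--output", out] + BAMS, check=True)
        return out

    if __name__ == "__main__":
        # Scatter: one worker per chromosome, bounded by the cores actually available.
        with mp.Pool(processes=min(len(CHROMS), mp.cpu_count())) as pool:
            shards = pool.map(call_region, CHROMS)
        # Gather: the per-chromosome VCF shards are concatenated in a later step.
        print("produced shards:", shards)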

  11. Advanced and flexible multi-carrier receiver architecture for high-count multi-core fiber based space division multiplexed applications

    PubMed Central

    Asif, Rameez

    2016-01-01

    Space division multiplexing (SDM), incorporating multi-core fibers (MCFs), has been demonstrated for effectively maximizing the data capacity in an impending capacity crunch. To achieve high spectral density through multi-carrier encoding while simultaneously maintaining transmission reach, benefits from inter-core crosstalk (XT) and non-linear compensation must be utilized. In this report, we propose a proof-of-concept unified receiver architecture that jointly compensates optical Kerr effects, intra- and inter-core XT in MCFs. The architecture is analysed in a multi-channel 512 Gbit/s dual-carrier DP-16QAM system over an 800 km 19-core MCF to validate the digital compensation of inter-core XT. Through this architecture: (a) we efficiently compensate the inter-core XT, improving the Q-factor by 4.82 dB, and (b) achieve a momentous gain in transmission reach, increasing the maximum achievable distance from 480 km to 1208 km, via analytical analysis. Simulation results confirm that inter-core XT distortions are more relentless for cores fabricated around the central axis of the cladding. Predominantly, the XT-induced Q-penalty can be suppressed to be less than 1 dB up to −11.56 dB of inter-core XT over 800 km MCF, offering flexibility to fabricate dense core structures with the same cladding diameter. Moreover, this report outlines the relationship between core pitch and forward-error correction (FEC). PMID:27270381

  12. Advanced and flexible multi-carrier receiver architecture for high-count multi-core fiber based space division multiplexed applications

    NASA Astrophysics Data System (ADS)

    Asif, Rameez

    2016-06-01

    Space division multiplexing (SDM), incorporating multi-core fibers (MCFs), has been demonstrated for effectively maximizing the data capacity in an impending capacity crunch. To achieve high spectral density through multi-carrier encoding while simultaneously maintaining transmission reach, benefits from inter-core crosstalk (XT) and non-linear compensation must be utilized. In this report, we propose a proof-of-concept unified receiver architecture that jointly compensates optical Kerr effects, intra- and inter-core XT in MCFs. The architecture is analysed in a multi-channel 512 Gbit/s dual-carrier DP-16QAM system over an 800 km 19-core MCF to validate the digital compensation of inter-core XT. Through this architecture: (a) we efficiently compensate the inter-core XT, improving the Q-factor by 4.82 dB, and (b) achieve a momentous gain in transmission reach, increasing the maximum achievable distance from 480 km to 1208 km, via analytical analysis. Simulation results confirm that inter-core XT distortions are more relentless for cores fabricated around the central axis of the cladding. Predominantly, the XT-induced Q-penalty can be suppressed to be less than 1 dB up to −11.56 dB of inter-core XT over 800 km MCF, offering flexibility to fabricate dense core structures with the same cladding diameter. Moreover, this report outlines the relationship between core pitch and forward-error correction (FEC).

  13. Detailing the Division

    NASA Image and Video Library

    2010-10-11

    NASA's Cassini spacecraft looks between Saturn's A and B rings to spy structure in the Cassini Division. The Cassini Division, occupying the middle and left of the image, contains five dim bands of ring material, but not all of the division is shown here.

  14. Implementation of orthogonal frequency division multiplexing (OFDM) and advanced signal processing for elastic optical networking in accordance with networking and transmission constraints

    NASA Astrophysics Data System (ADS)

    Johnson, Stanley

    The increasing adoption of digital signal processing (DSP) in optical fiber telecommunication has brought to the fore several interesting DSP-enabled modulation formats. One such format is orthogonal frequency division multiplexing (OFDM), which has seen great success in wireless and wired RF applications, and is being actively investigated by several research groups for use in optical fiber telecom. In this dissertation, I present three implementations of OFDM for elastic optical networking and distributed network control. The first is a field programmable gate array (FPGA) based real-time implementation of a version of OFDM conventionally known as intensity modulation and direct detection (IMDD) OFDM. I experimentally demonstrate the ability of this transmission system to dynamically adjust bandwidth and modulation format to meet networking constraints in an automated manner. To the best of my knowledge, this is the first real-time software defined networking (SDN) based control of an OFDM system. In the second OFDM implementation, I experimentally demonstrate a novel OFDM transmission scheme that supports both direct detection and coherent detection receivers simultaneously using the same OFDM transmitter. This interchangeable receiver solution enables a trade-off between bit rate and equipment cost in network deployment and upgrades. I show that the proposed transmission scheme can provide a receiver sensitivity improvement of up to 1.73 dB as compared to IMDD OFDM. I also present two novel polarization analyzer based detection schemes, and study their performance using experiment and simulation. In the third implementation, I present an OFDM pilot-tone based scheme for distributed network control. The first instance of an SDN-based OFDM elastic optical network with pilot-tone assisted distributed control is demonstrated. An improvement in spectral efficiency and a fast reconfiguration time of 30 ms have been achieved in this experiment. Finally, I
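
    None of the dissertation's implementations are reproduced here, but the basic OFDM transmit-and-receive round trip the work builds on is compact enough to sketch. The NumPy toy below QPSK-maps bits onto 64 subcarriers, forms one OFDM symbol by IFFT, adds a cyclic prefix, and recovers the subcarriers at the receiver; the parameters are arbitrary and no channel or optical front end is modeled.

    import numpy as np

    N_SC = 64          # subcarriers per OFDM symbol (arbitrary for this toy)
    CP = 16            # cyclic prefix length in samples
    rng = np.random.default_rng(7)

    # QPSK-map random bits onto the subcarriers of one OFDM symbol.
    bits = rng.integers(0, 2, size=(N_SC, 2))
    symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

    # Transmitter: IFFT turns the frequency-domain symbols into a time-domain waveform,
    # then the last CP samples are prepended as a cyclic prefix.
    time_domain = np.fft.ifft(symbols) * np.sqrt(N_SC)
    tx = np.concatenate((time_domain[-CP:], time_domain))

    # Receiver (back-to-back, no channel): drop the prefix and FFT back to subcarriers.
    rx = np.fft.fft(tx[CP:]) / np.sqrt(N_SC)
    print(np.allclose(rx, symbols))   # True: the subcarriers are recovered exactly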

  15. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates

  16. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates

  17. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates

  18. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates

  19. Chemical Sciences Division annual report 1994

    SciTech Connect

    1995-06-01

    The division is one of ten LBL research divisions. It is composed of individual research groups organized into 5 scientific areas: chemical physics, inorganic/organometallic chemistry, actinide chemistry, atomic physics, and chemical engineering. Studies include structure and reactivity of critical reaction intermediates, transients and dynamics of elementary chemical reactions, and heterogeneous and homogeneous catalysis. Work for others included studies of superconducting properties of high-Tc oxides. In FY 1994, the division neared completion of two end-stations and a beamline for the Advanced Light Source, which will be used for combustion and other studies. This document presents summaries of the studies.

  20. Formative cell divisions: principal determinants of plant morphogenesis.

    PubMed

    Smolarkiewicz, Michalina; Dhonukshe, Pankaj

    2013-03-01

    Formative cell divisions utilizing precise rotations of cell division planes generate and spatially place asymmetric daughters to produce different cell layers. Therefore, by shaping tissues and organs, formative cell divisions dictate multicellular morphogenesis. In animal formative cell divisions, the orientation of the mitotic spindle and cell division planes relies on intrinsic and extrinsic cortical polarity cues. Plants lack known key players from animals, and cell division planes are determined prior to the mitotic spindle stage. Therefore, it appears that plants have evolved specialized mechanisms to execute formative cell divisions. Despite their profound influence on plant architecture, molecular players and cellular mechanisms regulating formative divisions in plants are not well understood. This is because formative cell divisions in plants have been difficult to track owing to their submerged positions and imprecise timings of occurrence. However, by identifying a spatiotemporally inducible cell division plane switch system applicable for advanced microscopy techniques, recent studies have begun to uncover molecular modules and mechanisms for formative cell divisions. The identified molecular modules comprise developmentally triggered transcriptional cascades feeding onto microtubule regulators that now allow dissection of the hierarchy of the events at better spatiotemporal resolutions. Here, we survey the current advances in understanding of formative cell divisions in plants in the context of embryogenesis, stem cell functionality and post-embryonic organ formation.

  1. Division Chief Meeting, April, 1929

    NASA Technical Reports Server (NTRS)

    1929-01-01

    Caption: 'LMAL division chiefs confer with the engineer-in-charge in April 1929. Left to right: E.A. Myers, Personnel Division; Edward R. Sharp, Property and Clerical Division; Thomas Carroll, Flight Test Division; Henry J.E. Reid, engineer in chief; Carlton Kemper, Power Plants Division; Elton Miller, aerodynamics division.'

  2. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  3. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  4. Physics division annual report 2000.

    SciTech Connect

    Thayer, K., ed.

    2001-10-04

    impacts the structure of nuclei and extended the exquisite sensitivity of the Atom-Trap-Trace-Analysis technique to new species and applications. All of this progress was built on advances in nuclear theory, which the Division pursues at the quark, hadron, and nuclear collective degrees of freedom levels. These are just a few of the highlights in the Division's research program. The results reflect the talents and dedication of the Physics Division staff and the visitors, guests and students who bring so much to the research.

  5. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    SciTech Connect

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C.

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computation science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms

  6. Data mining method for anomaly detection in the supercomputer task flow

    NASA Astrophysics Data System (ADS)

    Voevodin, Vadim; Voevodin, Vladimir; Shaikhislamov, Denis; Nikitenko, Dmitry

    2016-10-01

    The efficiency of most supercomputer applications is extremely low. At the same time, the user rarely even suspects that their applications may be wasting computing resources. Software tools need to be developed to help detect inefficient applications and report them to the users. We suggest an algorithm for detecting anomalies in the supercomputer's task flow, based on data mining methods. System monitoring is used to calculate integral characteristics for every job executed, and the data is used as input for our classification method based on the Random Forest algorithm. The proposed approach can currently classify an application into one of three classes: normal, suspicious, and definitely anomalous. The approach has been demonstrated on actual applications running on the "Lomonosov" supercomputer.
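
    The paper's own feature set and training data are not given in this record, so the sketch below only illustrates the general shape of such a classifier: a Random Forest trained on per-job integral characteristics and predicting one of the three classes named above. The feature count, the synthetic data, and the choice of scikit-learn are assumptions made for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical integral characteristics per finished job, e.g. mean CPU load,
    # cache misses per second, interconnect traffic, I/O rate, memory footprint, ...
    rng = np.random.default_rng(42)
    X = rng.random((5000, 6))
    # Labels in the spirit of the paper: 0 = normal, 1 = suspicious, 2 = anomalous.
    y = rng.integers(0, 3, size=5000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    # On this random toy data the score is near chance; real monitoring features
    # are what make the classification meaningful.
    print("held-out accuracy:", clf.score(X_test, y_test))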

  7. Division: The Sleeping Dragon

    ERIC Educational Resources Information Center

    Watson, Anne

    2012-01-01

    Of the four mathematical operators, division seems not to sit easily with many learners. Division is often described as "the odd one out". Pupils develop coping strategies that enable them to "get away with it". So, problems, misunderstandings, and misconceptions go unresolved perhaps for a lifetime. Why is this? Is it a case of "out of sight out…

  8. Division: The Sleeping Dragon

    ERIC Educational Resources Information Center

    Watson, Anne

    2012-01-01

    Of the four mathematical operators, division seems not to sit easily with many learners. Division is often described as "the odd one out". Pupils develop coping strategies that enable them to "get away with it". So, problems, misunderstandings, and misconceptions go unresolved perhaps for a lifetime. Why is this? Is it a case of "out of sight out…

  9. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    SciTech Connect

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  10. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    SciTech Connect

    De, K; Jha, S; Klimentov, A; Maeno, T; Nilsson, P; Oleynik, D; Panitkin, S; Wells, Jack C; Wenaus, T

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation
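
    The record only names the light-weight MPI wrapper idea. As an illustration of the pattern (not of PanDA's actual pilot code), the mpi4py sketch below lets each MPI rank launch one single-threaded payload, so that a single batch job fills a whole multi-core node; the payload script and input naming are hypothetical, and the job would be started with the machine's usual MPI launcher (for example aprun or mpirun).

    # Light-weight MPI wrapper: one single-threaded payload per MPI rank.
    from mpi4py import MPI
    import subprocess
    import sys

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Hypothetical list of pre-staged payload commands, one per rank.
    payloads = [["./run_payload.sh", f"input_{i:04d}.tar"] for i in range(comm.Get_size())]

    rc = subprocess.call(payloads[rank])       # each rank executes its own workload
    results = comm.gather(rc, root=0)          # collect exit codes for bookkeeping

    if rank == 0:
        failed = sum(1 for r in results if r != 0)
        print(f"{len(results) - failed} payloads succeeded, {failed} failed")
        sys.exit(1 if failed else 0)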

  11. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  12. Chemical Technology Division annual technical report, 1996

    SciTech Connect

    1997-06-01

    CMT is a diverse technical organization with principal emphases in environmental management and development of advanced energy sources. It conducts R&D in 3 general areas: development of advanced power sources for stationary and transportation applications and for consumer electronics, management of high-level and low-level nuclear wastes and hazardous wastes, and electrometallurgical treatment of spent nuclear fuel. The Division also performs basic research in catalytic chemistry involving molecular energy resources, mechanisms of ion transport in lithium battery electrolytes, materials chemistry of electrified interfaces and molecular sieves, and the theory of materials properties. It also operates the Analytical Chemistry Laboratory, which conducts research in analytical chemistry and provides analytical services for programs at ANL and other organizations. Technical highlights of the Division's activities during 1996 are presented.

  13. A report documenting the completion of the Los Alamos National Laboratory portion of the ASC level II milestone ""Visualization on the supercomputing platform

    SciTech Connect

    Ahrens, James P; Patchett, John M; Lo, Li-Ta; Mitchell, Christopher; DeMarle, David; Brownlee, Carson

    2011-01-24

    This report provides documentation for the completion of the Los Alamos portion of the ASC Level II 'Visualization on the Supercomputing Platform' milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratory and Los Alamos National Laboratory. The milestone text is shown in Figure 1 with the Los Alamos portions highlighted in boldfaced text. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. In conclusion, we improved CPU-based rendering performance by a factor of 2-10 on our tests. In addition, we evaluated CPU- and GPU-based rendering performance. We encourage production visualization experts to consider using CPU

  14. Physics Division activities report, 1986--1987

    SciTech Connect

    Not Available

    1987-01-01

    This report summarizes the research activities of the Physics Division for the years 1986 and 1987. Areas of research discussed in this paper are: research on e+e− interactions; research on p̄p interactions; experiment at TRIUMF; double beta decay; high energy astrophysics; interdisciplinary research; and advanced technology development and the SSC.

  15. National Mapping Division Strategic Direction 1997

    USGS Publications Warehouse

    ,

    1997-01-01

    The mission of the U.S. Geological Survey's (USGS) National Mapping Division (NMD) is to meet the Nation's need for basic geospatial data, ensuring access to and advancing the application of these data and other related earth science information for users worldwide.

  16. Division Iv: Stars

    NASA Astrophysics Data System (ADS)

    Corbally, Christopher; D'Antona, Francesca; Spite, Monique; Asplund, Martin; Charbonnel, Corinne; Docobo, Jose Angel; Gray, Richard O.; Piskunov, Nikolai E.

    2012-04-01

    This Division IV was started on a trial basis at the General Assembly in The Hague 1994 and was formally accepted at the Kyoto General Assembly in 1997. Its broad coverage of "Stars" is reflected in its relatively large number of Commissions and so of members (1266 in late 2011). Its kindred Division V, "Variable Stars", has the same history of its beginning. The thinking at the time was to achieve some kind of balance between the number of members in each of the 12 Divisions. Amid the current discussion of reorganizing the number of Divisions into a more compact form it seems advisable to make this numerical balance less of an issue than the rationalization of the scientific coverage of each Division, so providing more effective interaction within a particular field of astronomy. After all, every star is variable to a certain degree and such variability is becoming an ever more powerful tool to understand the characteristics of every kind of normal and peculiar star. So we may expect, after hearing the reactions of members, that in the restructuring a single Division will result from the current Divisions IV and V.

  17. High energy physics division semiannual report of research activities

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R. )

    1991-08-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1991--June 30, 1991. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  18. [Experience in simulating the structural and dynamic features of small proteins using table supercomputers].

    PubMed

    Kondrat'ev, M S; Kabanov, A V; Komarov, V M; Khechinashvili, N N; Samchenko, A A

    2011-01-01

    We present the results of theoretical studies of the structural and dynamic features of peptides and small proteins, carried out with quantum chemical and molecular dynamics methods on high-performance graphics stations ("table supercomputers") using distributed calculations based on the CUDA technology.

  19. The Age of the Supercomputer Gives Way to the Age of the Super Infrastructure.

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    1997-01-01

    In October 1997, the National Science Foundation will discontinue financial support for two university-based supercomputer facilities to concentrate resources on partnerships led by facilities at the University of California, San Diego and the University of Illinois, Urbana-Champaign. The reconfigured program will develop more user-friendly and…

  20. NSF Says It Will Support Supercomputer Centers in California and Illinois.

    ERIC Educational Resources Information Center

    Strosnider, Kim; Young, Jeffrey R.

    1997-01-01

    The National Science Foundation will increase support for supercomputer centers at the University of California, San Diego and the University of Illinois, Urbana-Champaign, while leaving unclear the status of the program at Cornell University (New York) and a cooperative Carnegie-Mellon University (Pennsylvania) and University of Pittsburgh…

  1. The ChemViz Project: Using a Supercomputer To Illustrate Abstract Concepts in Chemistry.

    ERIC Educational Resources Information Center

    Beckwith, E. Kenneth; Nelson, Christopher

    1998-01-01

    Describes the Chemistry Visualization (ChemViz) Project, a Web venture maintained by the University of Illinois National Center for Supercomputing Applications (NCSA) that enables high school students to use computational chemistry as a technique for understanding abstract concepts. Discusses the evolution of computational chemistry and provides a…

  2. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  3. The impact of the U.S. supercomputing initiative will be global

    SciTech Connect

    Crawford, Dona

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for high performance computing (HPC) research, development, and deployment, called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. HPC.

  6. Supercomputers. 1972-June, 1983 (Citations from the International Aerospace Abstracts Data Base)

    SciTech Connect

    Not Available

    1983-06-01

    This bibliography contains citations concerning algorithms, architecture, and technology of supercomputers. Processing algorithms and architecture, network reliability, and inference and problem solving mechanisms are discussed. Applications in laser techniques, nuclear devices, space technology, and engineering design and operations are included. (Contains 63 citations fully indexed and including a title list.)

  7. NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)

    ScienceCinema

    None

    2016-07-12

    NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.

  8. Oriented divisions, fate decisions

    PubMed Central

    Williams, Scott E.; Fuchs, Elaine

    2013-01-01

    During development, the establishment of proper tissue architecture depends upon the coordinated control of cell divisions not only in space and time, but also direction. Execution of an oriented cell division requires establishment of an axis of polarity and alignment of the mitotic spindle along this axis. Frequently, the cleavage plane also segregates fate determinants, either unequally or equally between daughter cells, the outcome of which is either an asymmetric or symmetric division, respectively. The last few years have witnessed tremendous growth in understanding both the extrinsic and intrinsic cues that position the mitotic spindle, the varied mechanisms in which the spindle orientation machinery is controlled in diverse organisms and organ systems, and the manner in which the division axis influences the signaling pathways that direct cell fate choices. PMID:24021274

  9. Chemical Engineering Division Activities

    ERIC Educational Resources Information Center

    Chemical Engineering Education, 1978

    1978-01-01

    The 1978 ASEE Chemical Engineering Division Lecturer was Theodore Vermeulen of the University of California at Berkeley. Other chemical engineers who received awards or special recognition at a recent ASEE annual conference are mentioned. (BB)

  10. Reconsidering Division Cavalry Squadrons

    DTIC Science & Technology

    2017-05-25

    battalions because they were “the central core of the reconnaissance team,” other cavalry champions, like Major General Robert Wagner, countered that they...29 Starry, Mounted Combat, 221; Robert Wagner, “Division Cavalry: The Broken Saber,” Armor (September-October 1989): 39; Thomas Tait...Cavalry Squadron of 2025.” Armor (January-March 2015): 67-71. Wagner, Robert. “Division Cavalry: The Broken Saber.” Armor (September-October 1989

  11. Structures and Acoustics Division

    NASA Technical Reports Server (NTRS)

    Acquaviva, Cynthia S.

    2001-01-01

    The Structures and Acoustics Division of the NASA Glenn Research Center is an international leader in rotating structures, mechanical components, fatigue and fracture, and structural aeroacoustics. Included in this report are disciplines related to life prediction and reliability, nondestructive evaluation, and mechanical drive systems. Reported is a synopsis of the work and accomplishments completed by the Division during the 1997, 1998, and 1999 calendar years. A bibliography containing 93 citations is provided.

  12. Structures and Acoustics Division

    NASA Technical Reports Server (NTRS)

    Acquaviva, Cynthia S.

    1999-01-01

    The Structures and Acoustics Division of NASA Glenn Research Center is an international leader in rotating structures, mechanical components, fatigue and fracture, and structural aeroacoustics. Included are disciplines related to life prediction and reliability, nondestructive evaluation, and mechanical drive systems. Reported are a synopsis of the work and accomplishments reported by the Division during the 1996 calendar year. A bibliography containing 42 citations is provided.

  13. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    SciTech Connect

    Poole, G.; Heroux, M.

    1994-12-31

    This paper will focus on recent work in two widely used industrial applications codes with iterative methods. The ANSYS program, a general purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative solvers as a general purpose solver will include the topics of robustness, as well as memory requirements and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.

  14. NATO Advanced Workshop on Supercomputing Held in Trondheim, Norway on 19-23 June 1989

    DTIC Science & Technology

    1989-10-27

    The principal vendors of true graphics superworkstations are Apollo, Ardent, Silicon Graphics, and Stellar. AT&T and Pixar have...windows, 3-D); RENDERMAN (Pixar), a proposed standard that separates rendering from geometry, and DORE. B. McCormick

  15. Communications and Intelligent Systems Division - Division Overview

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.

    2017-01-01

    This presentation provides an overview of the research and engineering work being performed in the competency fields of advanced communications and intelligent systems with emphasis on advanced technologies, architecture definition, and systems development for application in current and future aeronautics and space communications systems.

  17. Activities of the Solid State Division

    NASA Astrophysics Data System (ADS)

    Green, P. H.; Hinton, L. W.

    1994-08-01

    This report covers research progress in the Solid State Division from April 1, 1992, to September 30, 1993. During this period, the division conducted a broad, interdisciplinary materials research program with emphasis on theoretical solid state physics, neutron scattering, synthesis and characterization of materials, ion beam and laser processing, and the structure of solids and surfaces. This research effort was enhanced by new capabilities in atomic-scale materials characterization, new emphasis on the synthesis and processing of materials, and increased partnering with industry and universities. The theoretical effort included a broad range of analytical studies, as well as a new emphasis on numerical simulation stimulated by advances in high-performance computing and by strong interest in related division experimental programs. Superconductivity research continued to advance on a broad front from fundamental mechanisms of high-temperature superconductivity to the development of new materials and processing techniques. The Neutron Scattering Program was characterized by a strong scientific user program and growing diversity represented by new initiatives in complex fluids and residual stress. The national emphasis on materials synthesis and processing was mirrored in division research programs in thin-film processing, surface modification, and crystal growth. Research on advanced processing techniques such as laser ablation, ion implantation, and plasma processing was complemented by strong programs in the characterization of materials and surfaces including ultrahigh resolution scanning transmission electron microscopy, atomic-resolution chemical analysis, synchrotron x-ray research, and scanning tunneling microscopy.

  18. Website for the Space Science Division

    NASA Technical Reports Server (NTRS)

    Schilling, James; DeVincenzi, Donald (Technical Monitor)

    2002-01-01

    The Space Science Division at NASA Ames Research Center is dedicated to research in astrophysics, exobiology, advanced life support technologies, and planetary science. These research programs are structured around Astrobiology (the study of life in the universe and the chemical and physical forces and adaptions that influence life's origin, evolution, and destiny), and address some of the most fundamental questions pursued by science. These questions examine the origin of life and our place in the universe. Ames is recognized as a world leader in Astrobiology. In pursuing our mission in Astrobiology, Space Science Division scientists perform pioneering basic research and technology development.

  19. A high-performance FFT algorithm for vector supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1988-01-01

    Many traditional algorithms for computing the fast Fourier transform (FFT) on conventional computers are unacceptable for advanced vector and parallel computers because they involve nonunit, power-of-two memory strides. A practical technique for computing the FFT that avoids all such strides and appears to be near-optimal for a variety of current vector and parallel computers is presented. Performance results of a program based on this technique are given. Notable among these results is that a FORTRAN implementation of this algorithm on the CRAY-2 runs up to 77-percent faster than Cray's assembly-coded library routine.
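
    As an illustration of the factorization such techniques build on, below is a minimal NumPy sketch of the classic four-step FFT (split N = N1*N2, transform columns, apply twiddle factors, transpose, transform again). It shows the algorithmic structure only; the stride-avoiding, vectorized implementation described in the paper is not reproduced, and the function name and the check against numpy.fft are illustrative assumptions.

      import numpy as np

      def four_step_fft(x, n1, n2):
          """DFT of x (length n1*n2) via the four-step factorization."""
          n = n1 * n2
          a = x.reshape(n2, n1)                      # a[j2, j1] = x[j1 + n1*j2]
          y = np.fft.fft(a, axis=0)                  # step 1: length-n2 FFTs down each column
          twiddle = np.exp(-2j * np.pi * np.outer(np.arange(n2), np.arange(n1)) / n)
          t = (y * twiddle).T                        # steps 2-3: twiddle multiply, then transpose
          z = np.fft.fft(t, axis=0)                  # step 4: length-n1 FFTs down each column
          return z.reshape(n)                        # z[k1, k2] corresponds to X[k2 + n2*k1]

      x = np.random.rand(1024) + 1j * np.random.rand(1024)
      print(np.allclose(four_step_fft(x, 32, 32), np.fft.fft(x)))   # True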

  20. Division i: Fundamental Astronomy

    NASA Astrophysics Data System (ADS)

    McCarthy, Dennis D.; Klioner, Sergei A.; Vondrák, Jan; Evans, Dafydd Wyn; Hohenkerk, Catherine Y.; Hosokawa, Mizuhiko; Huang, Cheng-Li; Kaplan, George H.; Knežević, Zoran; Manchester, Richard N.; Morbidelli, Alessandro; Petit, Gérard; Schuh, Harald; Soffel, Michael H.; Zacharias, Norbert

    2012-04-01

    The goal of the division is to address the scientific issues that were developed at the 2009 IAU General Assembly in Rio de Janeiro. These are: • Astronomical constants (Gaussian gravitational constant, Astronomical Unit, GMSun, geodesic precession-nutation); • Astronomical software; • Solar System ephemerides (pulsar research, comparison of dynamical reference frames); • Future optical reference frame; • Future radio reference frame; • Exoplanets (detection, dynamics); • Predictions of Earth orientation; • Units of measurements for astronomical quantities in relativistic context; • Astronomical units in the relativistic framework; • Time-dependent ecliptic in the GCRS; • Asteroid masses; • Review of space missions; • Detection of gravitational waves; • VLBI on the Moon; • Real-time electronic access to UT1-UTC. In pursuit of these goals Division I members have made significant scientific and organizational progress, and are organizing a Joint Discussion on Space-Time Reference Systems for Future Research at the 2012 IAU General Assembly. The details of Division activities and references are provided in the individual Commission and Working Group reports in this volume. A comprehensive list of references related to the work of the Division is available at the IAU Division I website at http://maia.usno.navy.mil/iaudiv1/.

  1. Division II: Sun and Heliosphere

    NASA Astrophysics Data System (ADS)

    Webb, David F.; Melrose, Donald B.; Benz, Arnold O.; Bogdan, Thomas J.; Bougeret, Jean-Louis; Klimchuk, James A.; Martinez Pillet, Valentin

    2007-03-01

    Division II of the IAU provides a forum for astronomers studying a wide range of phenomena related to the structure, radiation and activity of the Sun, and its interaction with the Earth and the rest of the solar system. Division II encompasses three Commissions, 10, 12 and 49, and four working groups. During the last triennia the activities of the division involved some reorganization of the division and its working groups, developing new procedures for election of division and commission officers, promoting annual meetings from within the division and evaluating all the proposed meetings, evaluating the division's representatives for the IAU to international scientific organizations, and participating in general IAU business.

  2. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the
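
    A light-weight MPI wrapper of the kind mentioned above can be pictured as each MPI rank launching one independent single-threaded payload. The sketch below, using mpi4py and a placeholder payload command, is an assumption for illustration rather than the actual PanDA pilot code.

      # Minimal sketch: run one single-threaded payload per MPI rank.
      # Assumes mpi4py is installed; "./payload" and its arguments are placeholders.
      from mpi4py import MPI
      import subprocess

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      # Each rank processes its own chunk, so many serial payloads run
      # side by side on the multi-core worker nodes of the machine.
      result = subprocess.run(["./payload", "--chunk", str(rank)],
                              capture_output=True, text=True)
      print(f"rank {rank}: exit code {result.returncode}")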

  3. Multiscale Hy3S: hybrid stochastic simulation for supercomputers.

    PubMed

    Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N

    2006-02-24

    Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. We
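
    For orientation, the "original stochastic simulation algorithm" referred to here is Gillespie's direct method; a minimal sketch for a simple birth-death system follows. The reaction network and rate constants are made up for illustration, and Hy3S's hybrid and adaptive machinery goes well beyond this.

      import random

      # Gillespie direct method for a birth-death process: 0 -> X (rate k1), X -> 0 (rate k2*X).
      def gillespie_birth_death(k1=10.0, k2=0.1, x0=0, t_end=100.0):
          t, x, trajectory = 0.0, x0, [(0.0, x0)]
          while t < t_end:
              a1, a2 = k1, k2 * x                    # reaction propensities
              a_total = a1 + a2
              if a_total == 0:
                  break
              t += random.expovariate(a_total)       # exponential waiting time to next reaction
              x += 1 if random.random() < a1 / a_total else -1   # pick which reaction fires
              trajectory.append((t, x))
          return trajectory

      print(gillespie_birth_death()[-1])   # final (time, copy number); fluctuates around k1/k2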

  4. Multiscale Hy3S: Hybrid stochastic simulation for supercomputers

    PubMed Central

    Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N

    2006-01-01

    Background Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Results Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems

  5. Multi-processing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    The MIMD concept is applied, through multitasking, with relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. An existing single processor algorithm is mapped without the need for developing a new algorithm. The procedure of designing a code utilizing this approach is automated with the Unix stream editor. A Multiple Processor Multiple Grid (MPMG) code is developed as a demonstration of this approach. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. This solver is applied to a generic, oblique-wing aircraft problem on a four-processor computer using one process for data management and nonparallel computations and three processes for pseudotime advance on three different grid systems.
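
    The division of labor described above (one process for data management, several advancing separate grid systems in pseudotime) can be pictured with Python's multiprocessing module. The solver step and the three "grids" below are placeholders for illustration, not the actual MPMG Navier-Stokes code.

      from multiprocessing import Pool

      def advance_grid(grid):
          """Placeholder for one pseudo-time advance on a single grid system."""
          name, cells = grid
          return name, [c * 0.99 for c in cells]     # stand-in for the implicit solver step

      if __name__ == "__main__":
          grids = [("wing", [1.0] * 8), ("body", [2.0] * 8), ("wake", [3.0] * 8)]
          with Pool(processes=3) as pool:            # three worker processes, one per grid
              for step in range(10):                 # parent process handles data management
                  grids = pool.map(advance_grid, grids)
          print([name for name, _ in grids])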

  6. Monte Carlo shell model studies with massively parallel supercomputers

    NASA Astrophysics Data System (ADS)

    Shimizu, Noritaka; Abe, Takashi; Honma, Michio; Otsuka, Takaharu; Togashi, Tomoaki; Tsunoda, Yusuke; Utsuno, Yutaka; Yoshida, Tooru

    2017-06-01

    We present an overview of the advanced Monte Carlo shell model (MCSM), including its recent applications to no-core shell-model calculations and to large-scale shell-model calculations (LSSM) in the usual sense. For the ab initio no-core MCSM we show recent methodological developments, which include the evaluation of energy eigenvalues in an infinitely large model space by an extrapolation method. As an example of the application of the no-core MCSM, the cluster structure of Be isotopes is discussed. Regarding LSSM applications, the triple shape coexistence in 68Ni and 70Ni and the shape transition of Zr isotopes are clarified with the visualization of the intrinsic deformation of the MCSM wave function. General aspects of the code development of the MCSM on massively parallel computers are also briefly described.

  7. (Super)computing within integrated e-Infrastructures

    NASA Astrophysics Data System (ADS)

    Pasian, F.

    Also in the case of astrophysics, the capability of performing ``Big Science'' requires the availability of advanced computing: large HPC facilities and a Grid infrastructure. But computational resources alone are far from being enough for the community. A whole set of e-infrastructures (network, computing nodes, data repositories, applications) need to interoperate smoothly. In this paper, after a survey of the situation in Europe and Italy in the fields of Grid and HPC, the data processing and simulations of the Planck mission are described as a practical example of the complexity of astrophysics applications. The need is evidenced for a complex e-Infrastructure for astrophysics, combining applications, data, computing, network within an integrated view, in which the Virtual Observatory is expected to bring in its own standards, tools and facilities.

  8. Hurricane Modeling and Supercomputing: Can a global mesoscale model be useful in improving forecasts of tropical cyclogenesis?

    NASA Astrophysics Data System (ADS)

    Shen, B.; Tao, W.; Atlas, R.

    2007-12-01

    Hurricane modeling, along with guidance from observations, has been used to help construct hurricane theories since the 1960s. CISK (conditional instability of the second kind, Charney and Eliassen 1964; Ooyama 1964,1969) and WISHE (wind-induced surface heat exchange, Emanuel 1986) are among the well-known theories being used to understand hurricane intensification. For hurricane genesis, observations have indicated the importance of large-scale flows (e.g., the Madden-Julian Oscillation or MJO, Maloney and Hartmann, 2000) on the modulation of hurricane activity. Recent modeling studies have focused on the role of the MJO and Rossby waves (e.g., Ferreira and Schubert, 1996; Aivyer and Molinari, 2003) and/or the interaction of small-scale vortices (e.g., Holland 1995; Simpson et al. 1997; Hendrick et al. 2004), of which determinism could be also built by large-scale flows. The aforementioned studies suggest a unified view on hurricane formation, consisting of multiscale processes such as scale transition (e.g., from the MJO to Equatorial Rossby Waves and from waves to vortices), and scale interactions among vortices, convection, and surface heat and moisture fluxes. To depict the processes in the unified view, a high-resolution global model is needed. During the past several years, supercomputers have enabled the deployment of ultra-high resolution global models, obtaining remarkable forecasts of hurricane track and intensity (Atlas et al. 2005; Shen et al. 2006). In this work, hurricane genesis is investigated with the aid of a global mesoscale model on the NASA Columbia supercomputer by conducting numerical experiments on the genesis of six consecutive tropical cyclones (TCs) in May 2002. These TCs include two pairs of twin TCs in the Indian Ocean, Supertyphoon Hagibis in the West Pacific Ocean and Hurricane Alma in the East Pacific Ocean. It is found that the model is capable of predicting the genesis of five of these TCs about two to three days in advance. Our

  9. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    NASA Astrophysics Data System (ADS)

    Cabrillo, I.; Cabellos, L.; Marco, J.; Fernandez, J.; Gonzalez, I.

    2014-06-01

    The Altamira supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience of this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order-of-magnitude reduction of the waiting time, is presented.

  10. Improving the Availability of Supercomputer Job Input Data Using Temporal Replication

    SciTech Connect

    Wang, Chao; Zhang, Zhe; Ma, Xiaosong; Vazhkudai, Sudharshan S; Mueller, Frank

    2009-06-01

    Storage systems in supercomputers are a major reason for service interruptions. RAID solutions alone cannot provide sufficient protection as (1) growing average disk recovery times make RAID groups increasingly vulnerable to disk failures during reconstruction, and (2) RAID does not help with higher-level faults such as failed I/O nodes. This paper presents a complementary approach based on the observation that files in the supercomputer scratch space are typically accessed by batch jobs whose execution can be anticipated. Therefore, we propose to transparently, selectively, and temporarily replicate 'active' job input data by coordinating the parallel file system with the batch job scheduler. We have implemented the temporal replication scheme in the popular Lustre parallel file system and evaluated it with real-cluster experiments. Our results show that the scheme allows for fast online data reconstruction, with a reasonably low overall space and I/O bandwidth overhead.
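
    The idea of replicating only "active" job input data, driven by the scheduler, can be pictured with a toy coordinator like the one below. The file paths, the replicate/cleanup hooks, and the job structure are all illustrative assumptions, not the Lustre-level implementation evaluated in the paper.

      import shutil, pathlib

      REPLICA_DIR = pathlib.Path("/tmp/replica")     # stand-in for replica storage

      def on_job_scheduled(job_inputs):
          """Replicate a job's input files shortly before the job is dispatched."""
          REPLICA_DIR.mkdir(parents=True, exist_ok=True)
          return [shutil.copy2(f, REPLICA_DIR / pathlib.Path(f).name) for f in job_inputs]

      def on_job_finished(replicas):
          """Drop the temporary replicas once the job no longer needs its inputs."""
          for r in replicas:
              pathlib.Path(r).unlink(missing_ok=True)

      # Usage: a batch scheduler would call these hooks around each job's lifetime, e.g.
      # replicas = on_job_scheduled(["/scratch/job42/input.dat"]); ...; on_job_finished(replicas)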

  11. 17th Edition of TOP500 List of World's Fastest Supercomputers Released

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack J.; Simon,Horst D.

    2001-06-21

    17th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, GERMANY; KNOXVILLE, TENN.; BERKELEY, CALIF. In what has become a much-anticipated event in the world of high-performance computing, the 17th edition of the TOP500 list of the world's fastest supercomputers was released today (June 21). The latest edition of the twice-yearly ranking finds IBM as the leader in the field, with 40 percent in terms of installed systems and 43 percent in terms of total performance of all the installed systems. In second place in terms of installed systems is Sun Microsystems with 16 percent, while Cray Inc. retained second place in terms of performance (13 percent). SGI Inc. was third both with respect to systems with 63 (12.6 percent) and performance (10.2 percent).

  12. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU and memory intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  13. Nuclear Chemistry Division annual report FY83

    SciTech Connect

    Struble, G.

    1983-01-01

    The purpose of the annual reports of the Nuclear Chemistry Division is to provide a timely summary of research activities pursued by members of the Division during the preceding year. Throughout, details are kept to a minimum; readers desiring additional information are encouraged to read the referenced documents or contact the authors. The Introduction presents an overview of the Division's scientific and technical programs. Next is a section of short articles describing recent upgrades of the Division's major facilities, followed by sections highlighting scientific and technical advances. These are grouped under the following sections: nuclear explosives diagnostics; geochemistry and environmental sciences; safeguards technology and radiation effect; and supporting fundamental science. A brief overview introduces each section. Reports on research supported by a particular program are generally grouped together in the same section. The last section lists the scientific, administrative, and technical staff in the Division, along with visitors, consultants, and postdoctoral fellows. It also contains a list of recent publications and presentations. Some contributions to the annual report are classified and only their abstracts are included in this unclassified portion of the report (UCAR-10062-83/1); the full article appears in the classified portion (UCAR-10062-83/2).

  14. AIC Computations Using Navier-Stokes Equations on Single Image Supercomputers For Design Optimization

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru

    2004-01-01

    A procedure to accurately generate AIC (aerodynamic influence coefficients) using the Navier-Stokes solver, including grid deformation, is presented. Preliminary results show good comparisons between experimental and computed flutter boundaries for a rectangular wing. A full wing-body configuration of an orbital space plane is selected for demonstration on a large number of processors. In the final paper the AIC of the full wing-body configuration will be computed. The scalability of the procedure on a supercomputer will be demonstrated.

  15. Some parallel algorithms on the four processor Cray X-MP4 supercomputer

    SciTech Connect

    Kincaid, D.R.; Oppe, T.C.

    1988-05-01

    Three numerical studies of parallel algorithms on a four processor Cray X-MP4 supercomputer are presented. These numerical experiments involve the following: a parallel version of ITPACKV 2C, a package for solving large sparse linear systems, a parallel version of the conjugate gradient method with line Jacobi preconditioning, and several parallel algorithms for computing the LU-factorization of dense matrices. 27 refs., 4 tabs.
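
    As a reminder of the second algorithm mentioned, here is a compact NumPy sketch of the preconditioned conjugate gradient iteration with a (point) Jacobi preconditioner; the line-Jacobi variant used in the study and all Cray-specific parallelization are omitted, and the function name is an illustrative assumption.

      import numpy as np

      def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
          """Conjugate gradient for SPD A with diagonal (Jacobi) preconditioning."""
          x = np.zeros_like(b)
          M_inv = 1.0 / np.diag(A)                   # Jacobi preconditioner: inverse of the diagonal
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]]); b = np.array([1.0, 2.0])
      print(np.allclose(A @ jacobi_pcg(A, b), b))    # True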

  16. Good Seeing: Best Practices for Sustainable Operations at the Air Force Maui Optical and Supercomputing Site

    DTIC Science & Technology

    2016-01-01

    There are 18 full-time employees, most of whom are technical staff. Operations are dedicated mostly to astronomical research, and outside agencies...Lisa Ruth Rand, Dave Baiocchi. Good Seeing: Best Practices for Sustainable Operations at the Air Force Maui Optical and Supercomputing Site...reuse in another form, any of its research documents for commercial use. For information on reprint and linking permissions, please visit www.rand.org

  17. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    SciTech Connect

    Meneses, Esteban; Ni, Xiang; Jones, Terry R; Maxwell, Don E

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine with executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
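
    The kind of cross-correlation between failure logs and job logs described here can be illustrated in a few lines of Python; the event types, timestamps, and job records below are invented for the sketch and do not correspond to Titan's actual log schema.

      # Toy cross-correlation of failure events with executing jobs.
      # Timestamps are plain floats here; real logs would need parsing first.
      failures = [(1005.0, "GPU_XID"), (1042.0, "LUSTRE"), (2100.0, "MCE")]
      jobs = [("job_A", 1000.0, 1100.0), ("job_B", 1500.0, 2200.0)]

      def failures_during(job, events):
          """Return the failure types whose timestamps fall inside the job's run interval."""
          name, start, end = job
          return [kind for t, kind in events if start <= t <= end]

      for job in jobs:
          hits = failures_during(job, failures)
          print(job[0], "saw", len(hits), "failure(s):", hits)
      # job_A saw 2 failure(s): ['GPU_XID', 'LUSTRE']; job_B saw 1 failure(s): ['MCE']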

  18. Achieving supercomputer performance for neural net simulation with an array of digital signal processors

    SciTech Connect

    Muller, U.A.; Baumle, B.; Kohler, P.; Gunzinger, A.; Guggenbuhl, W.

    1992-10-01

    Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.

  19. ``Tightly coupled'' simulation utilizing the EBR-II LMR: A real-time supercomputing and AI environment

    SciTech Connect

    Makowitz, H.; Barber, D.G.; Cordes, G.A.; Powers, A.K.; Scott, R. Jr.; Ward, L.W.; Sackett, J.I.; King, R.W.; Lehto, W.K.; Lindsay, R.W.; Staffon, J.D.; Gross, K.C.; Doster, J.M.; Edwards, R.M.

    1990-01-01

    An integrated Supercomputing and AI environment utilizing a CRAY X-MP/216, a fiber-optic communications link, a distributed network of workstations and the Experimental Breeder Reactor II (EBR-II) Liquid Metal Reactor (LMR) and its associated instrumentation and control system is being developed at the Idaho National Engineering Laboratory (INEL). This paper summarizes various activities that make up this supercomputing and AI environment. 5 refs., 4 figs.

  20. Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention (DCP) conducts and supports research to determine a person's risk of cancer and to find ways to reduce the risk. This knowledge is critical to making progress against cancer because risk varies over the lifespan as genetic and epigenetic changes can transform healthy tissue into invasive cancer.

  1. Order Division Automated System.

    ERIC Educational Resources Information Center

    Kniemeyer, Justin M.; And Others

    This publication was prepared by the Order Division Automation Project staff to fulfill the Library of Congress' requirement to document all automation efforts. The report was originally intended for internal use only and not for distribution outside the Library. It is now felt that the library community at-large may have an interest in the…

  2. Solid State Division

    SciTech Connect

    Green, P.H.; Watson, D.M.

    1989-08-01

    This report contains brief discussions on work done in the Solid State Division of Oak Ridge National Laboratory. The topics covered are: Theoretical Solid State Physics; Neutron scattering; Physical properties of materials; The synthesis and characterization of materials; Ion beam and laser processing; and Structure of solids and surfaces. (LSP)

  4. Cell division in Corynebacterineae

    PubMed Central

    Donovan, Catriona; Bramkamp, Marc

    2014-01-01

    Bacterial cells must coordinate a number of events during the cell cycle. Spatio-temporal regulation of bacterial cytokinesis is indispensable for the production of viable, genetically identical offspring. In many rod-shaped bacteria, precise midcell assembly of the division machinery relies on inhibitory systems such as Min and Noc. In rod-shaped Actinobacteria, for example Corynebacterium glutamicum and Mycobacterium tuberculosis, the divisome assembles in the proximity of the midcell region, however more spatial flexibility is observed compared to Escherichia coli and Bacillus subtilis. Actinobacteria represent a group of bacteria that spatially regulate cytokinesis in the absence of recognizable Min and Noc homologs. The key cell division steps in E. coli and B. subtilis have been subject to intensive study and are well-understood. In comparison, only a minimal set of positive and negative regulators of cytokinesis are known in Actinobacteria. Nonetheless, the timing of cytokinesis and the placement of the division septum is coordinated with growth as well as initiation of chromosome replication and segregation. We summarize here the current knowledge on cytokinesis and division site selection in the Actinobacteria suborder Corynebacterineae. PMID:24782835

  5. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    SciTech Connect

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  6. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
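
    The sorted k-mer lists mentioned here are simple to picture. The sketch below builds one per sequence and intersects them to find shared k-mers that could seed alignment anchors; the sequences, k value, and function names are illustrative, and the memory-packed Blue Gene/P data structures are not reproduced.

      def sorted_kmer_list(seq, k):
          """Return the sorted list of (k-mer, position) pairs for one sequence."""
          return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

      def shared_kmers(seq_a, seq_b, k):
          """K-mers present in both sequences; candidates for alignment anchors."""
          a = {kmer for kmer, _ in sorted_kmer_list(seq_a, k)}
          b = {kmer for kmer, _ in sorted_kmer_list(seq_b, k)}
          return sorted(a & b)

      print(shared_kmers("ACGTACGGTAC", "TTACGGTACAA", 5))   # ['ACGGT', 'CGGTA', 'GGTAC', 'TACGG']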

  7. TOPICAL REVIEW: Carbon-based nanotechnology on a supercomputer

    NASA Astrophysics Data System (ADS)

    Tománek, David

    2005-04-01

    The quantum nature of phenomena dominating the behaviour of nanostructures raises new challenges when trying to predict and understand the physical behaviour of these systems. Addressing this challenge is imperative in view of the continuous reduction of device sizes, which is rapidly approaching the atomic level. Since even the most advanced experimental observations are subject to being fundamentally influenced by the measurement itself, new approaches must be sought to design and test future building blocks of nanotechnology. In this respect, high-performance computing, allowing predictive large-scale computer simulations, has emerged as an indispensable tool to foresee and interpret the physical behaviour of nanostructures, thus guiding and complementing the experiment. This contribution will review some of the more intriguing phenomena associated with nanostructured carbon, including fullerenes, nanotubes and diamondoids. Due to the stability of the sp2 bond, carbon fullerenes and nanotubes are thermally and mechanically extremely stable and chemically inert. They contract rather than expand at high temperatures, and are unparalleled thermal conductors. Nanotubes may turn into ballistic electron conductors or semiconductors, and even acquire a permanent magnetic moment. In nanostructures that form during a hierarchical self-assembly process, even defects may play a different, often helpful role. sp2 bonded nanostructures may change their shape globally by a sequence of bond rotations, which turn out to be intriguing multi-step processes. At elevated temperatures, and following photo-excitations, efficient self-healing processes may repair defects, thus answering an important concern in molecular electronics.

  8. Chemical Technology Division. Annual technical report, 1995

    SciTech Connect

    Laidler, J.J.; Myles, K.M.; Green, D.W.; McPheeters, C.C.

    1996-06-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1995 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) methods for treatment of hazardous waste and mixed hazardous/radioactive waste; (3) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (4) processes for separating and recovering selected elements from waste streams, concentrating low-level radioactive waste streams with advanced evaporator technology, and producing 99Mo from low-enriched uranium; (5) electrometallurgical treatment of different types of spent nuclear fuel in storage at Department of Energy sites; and (6) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems.

  9. Evaluation of the education program for Supercomputing `95

    SciTech Connect

    Caldwell, G.; Abbott, G.

    1995-12-31

    Evaluation of the SC '95 Education Program indicated a very high level of satisfaction with all aspects of the program. Teachers viewed the hands-on sessions and the opportunity to network with other education professionals as the most valuable aspects of the program. Longer and a greater number of grade-appropriate hands-on lessons were requested for next year's education program. Several suggestions related to programmatic issues for inclusion in future education programs were made by teachers attending SC '95. These include: greater variety of topics for K-5 teachers, a C++ session, repeat sessions for hot topics such as JAVA, and additional sessions on assessment and evaluation. In addition, survey respondents requested structured, small group sessions in which experts present information related to topics such as grant writing, formulating lesson plans, and dealing with technology issues as related to educational reform. If the purpose of the SC Education Program is the advancement of computational science and the education of the nation's youth in its power, then submissions for papers, panels, and hands-on sessions should be critically evaluated with that in mind. One suggestion for future planning includes offering sessions which are consistent with the grade and experience levels of teachers who will be attending the conference, such as more sessions for K-5 teachers. Before accepting sessions for presentation, consideration might also be given to what format (i.e., lecture, hands-on, small group discussion, etc.) would be appropriate to facilitate implementation of these programs in the classroom. As computational science and the use of technology in the classroom mature, the SC Education Program needs to be reexamined so that it targets information not available locally to the education community.

  10. Painless Division with Doc Spitler's Magic Division Estimator.

    ERIC Educational Resources Information Center

    Spitler, Gail

    1981-01-01

    An approach to teaching pupils the long division algorithm that relies heavily on a consistent and logical approach to estimation is reviewed. Once learned, the division estimator can be used to support the standard repeated subtraction algorithm. (MP)
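
    The pairing of an estimator with repeated subtraction can be made concrete in a few lines; the sketch below is a generic "chunking" long-division routine, assumed for illustration rather than taken from the article.

      def divide_by_chunking(dividend, divisor):
          """Long division as repeated subtraction of estimated easy multiples of the divisor."""
          quotient, remainder, steps = 0, dividend, []
          while remainder >= divisor:
              chunk = 1
              while divisor * chunk * 10 <= remainder:   # estimate the largest power-of-ten multiple that fits
                  chunk *= 10
              quotient += chunk
              remainder -= divisor * chunk
              steps.append((chunk, remainder))
          return quotient, remainder, steps

      print(divide_by_chunking(1349, 12))   # (112, 5, [(100, 149), (10, 29), (1, 17), (1, 5)])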

  11. Information Technology Division Technical Paper Abstracts 1995,

    DTIC Science & Technology

    2007-11-02

    This document presents technical paper abstracts from the Information Technology Division (ITD), one of the largest research and development collectives at the Naval Research Laboratory. The abstracts are organized into sections that represent the six branches within ITD: the Navy Center for Applied Research in Artificial Intelligence, Communications Systems, the Center for High Assurance Computer Systems, Transmission Technology, Advanced Information Technology, and the Center for Computational Science. Within each section, a list of branch papers published in 1993 and 1994 has also been included; abstracts

  12. 2016 T Division Lightning Talks

    SciTech Connect

    Ramsey, Marilyn Leann; Adams, Luke Clyde; Ferre, Gregoire Robing; Grantcharov, Vesselin; Iaroshenko, Oleksandr; Krishnapriyan, Aditi; Kurtakoti, Prajvala Kishore; Le Thien, Minh Quan; Lim, Jonathan Ng; Low, Thaddeus Song En; Lystrom, Levi Aaron; Ma, Xiaoyu; Nguyen, Hong T.; Pogue, Sabine Silvia; Orandle, Zoe Ann; Reisner, Andrew Ray; Revard, Benjamin Charles; Roy, Julien; Sandor, Csanad; Slavkova, Kalina Polet; Weichman, Kathleen Joy; Wu, Fei; Yang, Yang

    2016-11-29

    These are the slides for all of the 2016 T Division lightning talks. There are 350 pages worth of slides from different presentations, all of which cover different topics within the theoretical division at Los Alamos National Laboratory (LANL).

  13. Podcast: The Electronic Crimes Division

    EPA Pesticide Factsheets

    Sept 26, 2016. Chris Lukas, the Special Agent in Charge of the Electronic Crimes Division within the OIG's Office of Investigations talks about computer forensics, cybercrime in the EPA and his division's role in criminal investigations.

  14. Simulation Technology Research Division assessment of the IBM RISC SYSTEM/6000 Model 530 workstation

    SciTech Connect

    Valdez, G.D. ); Halbleib, J.A.; Kensek, R.P.; Lorence, L.J. )

    1990-11-01

    A workstation manufactured by International Business Machines Corporation (IBM) was loaned to the Simulation Technology Research Division for evaluation. We have found that these new UNIX workstations from IBM have superior cost to performance ratios compared to the CRAY supercomputers and Digital's VAX machines. Our appraisal of this workstation included floating-point performance, system and environment functionality, and cost effectiveness. Our assessment was based on a suite of radiation transport codes developed at Sandia that constitute the bulk of our division's computing workload. In this report, we also discuss our experience with features that are unique to this machine such as the AIX operating system and the XLF Fortran Compiler. The interoperability of the RS/6000 workstation with Sandia's network of CRAYs and VAXs was also assessed.

  15. Energy Systems Divisions

    NASA Technical Reports Server (NTRS)

    Applewhite, John

    2011-01-01

    This slide presentation reviews the JSC Energy Systems Division's work in propulsion. Specific work in LO2/CH4 propulsion, cryogenic propulsion, low-thrust propulsion for free flyer, robotic, and extravehicular activities, and work on the Morpheus terrestrial free-flyer test bed is reviewed. The back-up slides contain a chart comparing LO2/LCH4 with other propellants and reviewing its advantages, especially for spacecraft propulsion.

  16. Division Quilts: A Measurement Model

    ERIC Educational Resources Information Center

    Pratt, Sarah S.; Lupton, Tina M.; Richardson, Kerri

    2015-01-01

    As teachers seek activities to assist students in understanding division as more than just the algorithm, they find many examples of division as fair sharing. However, teachers have few activities to engage students in a quotative (measurement) model of division. Efraim Fischbein and his colleagues (1985) defined two types of whole-number…

  18. Biorepositories | Division of Cancer Prevention

    Cancer.gov

    Carefully collected and controlled high-quality human biospecimens, annotated with clinical data and properly consented for investigational use, are available through the Division of Cancer Prevention Biorepositories listed in the charts below: Biorepositories Managed by the Division of Cancer Prevention, Biorepositories Supported by the Division of Cancer Prevention, and Related Biorepositories. Information about accessing biospecimens collected from DCP-supported clinical trials and projects is also provided.

  19. Physics division annual report 2005.

    SciTech Connect

    Glover, J.; Physics

    2007-03-12

    trapped in an atom trap for the first time, a major milestone in an innovative search for the violation of time-reversal symmetry. New results from HERMES establish that strange quarks carry little of the spin of the proton, and precise results have been obtained at JLAB on the changes in quark distributions in light nuclei. New theoretical results reveal the nature of the surfaces of strange quark stars. Green's function Monte Carlo techniques have been extended to scattering problems and show great promise for the accurate calculation, from first principles, of important astrophysical reactions. Flame propagation in type Ia supernovae has been simulated, a numerical process that requires considering length scales that vary by eight to twelve orders of magnitude. Argonne continues to lead in the development and exploitation of the new technical concepts that will truly make an advanced exotic beam facility, in the words of NSAC, 'the world-leading facility for research in nuclear structure and nuclear astrophysics'. Our science and our technology continue to point the way to this major advance. It is a tremendously exciting time in science, for these new capabilities hold the keys to unlocking important secrets of nature. The great progress that has been made in meeting the exciting intellectual challenges of modern nuclear physics reflects the talents and dedication of the Physics Division staff and the visitors, guests and students who bring so much to the research.

  20. Towards a bottom-up reconstitution of bacterial cell division.

    PubMed

    Martos, Ariadna; Jiménez, Mercedes; Rivas, Germán; Schwille, Petra

    2012-12-01

    The components of the bacterial division machinery assemble to form a dynamic ring at mid-cell that drives cytokinesis. The nature of most division proteins and their assembly pathway is known. Our knowledge about the biochemical activities and protein interactions of some key division elements, including those responsible for correct ring positioning, has progressed considerably during the past decade. These developments, together with new imaging and membrane reconstitution technologies, have triggered the 'bottom-up' synthetic approach aiming at reconstructing bacterial division in the test tube, which is required to support conclusions derived from cellular and molecular analysis. Here, we describe recent advances in reconstituting Escherichia coli minimal systems able to reproduce essential functions, such as the initial steps of division (proto-ring assembly) and one of the main positioning mechanisms (Min oscillating system), and discuss future perspectives and experimental challenges.

  1. The Advanced Software Development and Commercialization Project

    SciTech Connect

    Gallopoulos, E. (Center for Supercomputing Research and Development); Canfield, T.R.; Minkoff, M.; Mueller, C.; Plaskacz, E.; Weber, D.P.; Anderson, D.M.; Therios, I.U.; Aslam, S.; Bramley, R.; Chen, H.-C.; Cybenko, G.; Gallopoulos, E.; Gao, H.; Malony, A.; Sameh, A. (Center for Supercomputing Research and Development)

    1990-09-01

    This is the first of a series of reports pertaining to progress in the Advanced Software Development and Commercialization Project, a joint collaborative effort between the Center for Supercomputing Research and Development of the University of Illinois and the Computing and Telecommunications Division of Argonne National Laboratory. The purpose of this work is to apply techniques of parallel computing that were pioneered by University of Illinois researchers to mature computational fluid dynamics (CFD) and structural dynamics (SD) computer codes developed at Argonne. The collaboration in this project will bring this unique combination of expertise to bear, for the first time, on industrially important problems. By so doing, it will expose the strengths and weaknesses of existing techniques for parallelizing programs and will identify those problems that need to be solved in order to enable widespread production use of parallel computers. Secondly, the increased efficiency of the CFD and SD codes themselves will enable the simulation of larger, more accurate engineering models that involve fluid and structural dynamics. In order to realize the above two goals, we are considering two production codes that have been developed at ANL and are widely used by both industry and universities. These are COMMIX and WHAMS-3D. The first is a computational fluid dynamics code that is used both for nuclear reactor design and safety and as a design tool for the casting industry. The second is a three-dimensional structural dynamics code used in nuclear reactor safety as well as crashworthiness studies. These codes are currently available only for sequential and vector computers. Our main goal is to port and optimize these two codes on shared-memory multiprocessors. In so doing, we shall establish a process that can be followed in optimizing other sequential or vector engineering codes for parallel processors.

  2. Intel 80860 or I860: The million transistor RISC microprocessor chip with supercomputer capability. April 1988-September 1989 (Citations from the Computer data base). Report for April 1988-September 1989

    SciTech Connect

    Not Available

    1989-10-01

    This bibliography contains citations concerning Intel's new microprocessor, which has more than a million transistors and is capable of performing up to 80 million floating-point operations per second (80 MFLOPS). The I860 (originally code-named the N-10 during development) is to be used in workstation-type applications. It will be suited for problems such as fluid dynamics, molecular modeling, structural analysis, and economic modeling, which require supercomputer number crunching and advanced graphics. (Contains 64 citations fully indexed and including a title list.)

  3. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer, since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We describe the Cray's shared-memory vector architecture and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
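
    As a loose illustration of the kind of restructuring involved (not the actual grassland model or its FORTRAN code), the sketch below contrasts a scalar per-cell update with the equivalent whole-array form that vector hardware, or here NumPy, can exploit; the growth rule is a made-up placeholder.

      # Sketch of loop restructuring for vectorization; the per-cell growth rule
      # is an invented placeholder, not the grassland model in the report.
      import numpy as np

      def scalar_update(biomass, rainfall, growth_rate, dt):
          out = biomass.copy()
          for i in range(biomass.size):                  # one grid cell at a time
              out[i] += dt * growth_rate * biomass[i] * rainfall[i]
          return out

      def vectorized_update(biomass, rainfall, growth_rate, dt):
          # Same arithmetic expressed over whole arrays; maps onto vector hardware.
          return biomass + dt * growth_rate * biomass * rainfall

      rng = np.random.default_rng(0)
      b, r = rng.random(100_000), rng.random(100_000)
      assert np.allclose(scalar_update(b, r, 0.1, 1.0), vectorized_update(b, r, 0.1, 1.0))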

  4. Integration of PanDA workload management system with Titan supercomputer at OLCF

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA a new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
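
    The light-weight wrapper idea can be sketched as follows; this is an illustrative mpi4py analogue under assumed file and executable names, not the PanDA pilot code itself.

      # Hedged sketch of a light-weight MPI wrapper: each rank launches one
      # single-threaded payload so many independent jobs fill a multicore
      # allocation. The payload command and file naming are invented.
      from mpi4py import MPI
      import subprocess

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      # Each rank works on its own event file (hypothetical naming scheme).
      payload = ["./run_single_threaded_sim", f"--input=events_{rank:05d}.dat"]
      result = subprocess.run(payload, capture_output=True, text=True)

      # Gather exit codes on rank 0 so the wrapper can report overall success.
      codes = comm.gather(result.returncode, root=0)
      if rank == 0:
          failed = [i for i, c in enumerate(codes) if c != 0]
          print(f"{len(codes) - len(failed)} payloads succeeded, {len(failed)} failed")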

  5. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  6. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    SciTech Connect

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the ‘Dark Universe’, dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC’s design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  7. The transition of a real-time single-rotor helicopter simulation program to a supercomputer

    NASA Technical Reports Server (NTRS)

    Martinez, Debbie

    1995-01-01

    This report presents the conversion effort and results of a real-time flight simulation application transition to a CONVEX supercomputer. Enclosed is a detailed description of the conversion process and a brief description of the Langley Research Center's (LaRC) flight simulation application program structure. Currently, this simulation program may be configured to represent the Sikorsky S-61 helicopter (a five-blade, single-rotor, commercial passenger-type helicopter) or an Army Cobra helicopter (either the AH-1G or AH-1S model). This report refers to the Sikorsky S-61 simulation program since it is the most frequently used configuration.

  8. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs, as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large and non-linear models in higher resolutions within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively for general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water to larger ones in a reservoir scale. The performance measurement confirmed that both simulators exhibit excellent

  9. Scaling radio astronomy signal correlation on heterogeneous supercomputers using various data distribution methodologies

    NASA Astrophysics Data System (ADS)

    Wang, Ruonan; Harris, Christopher

    2013-12-01

    Next generation radio telescopes will require orders of magnitude more computing power to provide a view of the universe with greater sensitivity. In the initial stages of the signal processing flow of a radio telescope, signal correlation is one of the largest challenges in terms of handling huge data throughput and intensive computations. We implemented a GPU cluster based software correlator with various data distribution models and give a systematic comparison based on testing results obtained using the Fornax supercomputer. By analyzing the scalability and throughput of each model, optimal approaches are identified across a wide range of problem sizes, covering the scale of next generation telescopes.
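
    The core arithmetic of such a software correlator can be sketched in a few lines (an FX-style channelize-then-cross-multiply loop); this simplified NumPy version uses toy data and ignores the GPU implementation and the data distribution models compared in the paper.

      # Simplified FX-style correlation sketch: FFT each antenna stream, then
      # cross-multiply and time-average every baseline. Toy data only.
      import numpy as np
      from itertools import combinations

      n_ant, n_samples, n_chan = 4, 8192, 256
      rng = np.random.default_rng(1)
      voltages = rng.standard_normal((n_ant, n_samples))       # toy voltage streams

      # F stage: channelize each antenna stream into blocks of n_chan samples.
      spectra = np.fft.rfft(voltages.reshape(n_ant, -1, n_chan), axis=2)

      # X stage: cross-multiply and average every antenna pair (baseline).
      visibilities = {}
      for a, b in combinations(range(n_ant), 2):
          visibilities[(a, b)] = np.mean(spectra[a] * np.conj(spectra[b]), axis=0)

      print(len(visibilities), "baselines,", visibilities[(0, 1)].shape, "channels each")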

  10. Learning about supercomputers on a microcomputer with no keyboard: a science museum exhibit

    SciTech Connect

    Stoddard, M.; Buzbee, B.L.

    1984-01-01

    A microcomputer exhibit was developed to acquaint visitors of the Los Alamos National Laboratory's Bradbury Science Museum with supercomputers and computer-graphics applications. The exhibit is highly interactive, yet the visitor uses only the touch panel of the CD 110 microcomputer. The museum environment presented many constraints to the development team, yet the five-minute exhibit has been extremely popular with visitors. Design details of how each constraint was dealt with to produce a motivating and instructional exhibit are provided. Although the program itself deals with a subject area primarily applicable to Los Alamos, the design features are transferable to other courseware where motivational and learning aspects are of equal importance.

  11. Performance of the Widely-Used CFD Code OVERFLOW on the Pleiades Supercomputer

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2017-01-01

    Computational performance studies were made for NASA's widely used Computational Fluid Dynamics code OVERFLOW on the Pleiades Supercomputer. Two test cases were considered: a full launch vehicle with a grid of 286 million points and a full rotorcraft model with a grid of 614 million points. Computations using up to 8000 cores were run on Sandy Bridge and Ivy Bridge nodes. Performance was monitored using times reported in the day files from the Portable Batch System utility. Results for two grid topologies are presented and compared in detail. Observations and suggestions for future work are made.

  12. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    PubMed

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
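
    A task-parallel screening campaign of this kind is often organized as a master/worker loop; the hedged sketch below illustrates that pattern with mpi4py under an invented docking command and ligand naming scheme, and it is not the published MPI Autodock4 source linked above.

      # Master/worker sketch of task-parallel virtual screening. Rank 0 hands out
      # ligand names on demand; workers run one docking job at a time. The
      # docking command, ligand names, and tags are invented for illustration.
      from mpi4py import MPI
      import subprocess

      comm, rank, size = MPI.COMM_WORLD, MPI.COMM_WORLD.Get_rank(), MPI.COMM_WORLD.Get_size()
      WORK_TAG, STOP_TAG = 1, 2

      if rank == 0:                                   # master: dispatch ligand IDs
          ligands = [f"ligand_{i:07d}" for i in range(1000)]   # placeholder list
          status = MPI.Status()
          handed_out, active = 0, size - 1
          while active > 0:
              comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
              worker = status.Get_source()
              if handed_out < len(ligands):
                  comm.send(ligands[handed_out], dest=worker, tag=WORK_TAG)
                  handed_out += 1
              else:
                  comm.send(None, dest=worker, tag=STOP_TAG)
                  active -= 1
      else:                                           # workers: request, dock, repeat
          while True:
              comm.send(None, dest=0)                 # ask the master for work
              status = MPI.Status()
              ligand = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
              if status.Get_tag() == STOP_TAG:
                  break
              subprocess.run(["./dock_one_ligand", ligand])   # hypothetical wrapper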

  13. A New Hydrodynamic Model for Numerical Simulation of Interacting Galaxies on Intel Xeon Phi Supercomputers

    NASA Astrophysics Data System (ADS)

    Kulikov, Igor; Chernykh, Igor; Tutukov, Alexander

    2016-05-01

    This paper presents a new hydrodynamic model of interacting galaxies based on the joint solution of multicomponent hydrodynamic equations, first moments of the collisionless Boltzmann equation and the Poisson equation for gravity. Using this model, it is possible to formulate a unified numerical method for solving hyperbolic equations. This numerical method has been implemented for hybrid supercomputers with Intel Xeon Phi accelerators. The collision of spiral and disk galaxies considering the star formation process, supernova feedback and molecular hydrogen formation is shown as a simulation result.

  14. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    PubMed

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  15. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer

    PubMed Central

    Ellingson, Sally R.; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C.

    2013-01-01

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm. PMID:24729746

  16. Structural analysis of shallow shells on the CRAY Y-MP supercomputer

    NASA Astrophysics Data System (ADS)

    Qatu, M. S.; Bataineh, A. M.

    1992-10-01

    Structural analysis of shallow shells is performed and relatively accurate displacements and stresses are obtained. An energy method, which is an extension of the Ritz method, is used in the analysis. Algebraic polynomials are used as displacement functions. The numerical problems that resulted in inaccurate stresses in previous publications are mitigated by making use of symmetry and by performing the computations on a supercomputer with 29-digit double-precision arithmetic. Curvature effects upon deflections and stress resultants of shallow shells with cantilever and 'semi-cantilever' boundaries are studied.
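
    The Ritz idea with algebraic polynomial trial functions can be illustrated on a much simpler problem than a shallow shell; the sketch below applies it to a cantilever beam under uniform load with assumed unit properties, and is only an analogue of the shell formulation used in the paper.

      # Ritz method with polynomial trial functions on a 1D cantilever beam
      # (assumed unit properties), not the shallow-shell energy functional.
      # Trial functions w = sum(c_i * x**(i+1)) satisfy w(0) = w'(0) = 0;
      # minimizing the potential energy gives the linear system K c = f.
      import numpy as np

      E_I, q, L, N = 1.0, 1.0, 1.0, 4          # assumed stiffness, load, length, terms
      powers = np.arange(2, 2 + N)             # x**2, x**3, ...

      K = np.empty((N, N))
      f = np.empty(N)
      for i, p in enumerate(powers):
          f[i] = q * L**(p + 1) / (p + 1)                      # integral of q * x**p
          for j, r in enumerate(powers):
              # closed-form integral of E_I * w_i'' * w_j'' over [0, L]
              K[i, j] = E_I * p*(p-1) * r*(r-1) * L**(p + r - 3) / (p + r - 3)

      c = np.linalg.solve(K, f)
      tip = np.sum(c * L**powers)              # deflection at the free end
      print(tip, q * L**4 / (8 * E_I))         # matches the exact value q L^4 / (8 EI)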

  17. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'Connection Machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.

  18. Optimizing the Point-In-Box Search Algorithm for the Cray Y-MP(TM) Supercomputer

    SciTech Connect

    Attaway, S.W.; Davis, M.E.; Heinstein, M.W.; Swegle, J.S.

    1998-12-23

    Determining the subset of points (particles) in a problem domain that are contained within certain spatial regions of interest can be one of the most time-consuming parts of some computer simulations. Examples where this 'point-in-box' search can dominate the computation time include (1) finite element contact problems; (2) molecular dynamics simulations; and (3) interactions between particles in numerical methods, such as discrete particle methods or smooth particle hydrodynamics. This paper describes methods to optimize a point-in-box search algorithm developed by Swegle that make optimal use of the architectural features of the Cray Y-MP Supercomputer.
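
    The basic flavor of such a search can be sketched as follows: presort one coordinate so a binary search narrows the candidate set before the exact box test. This is a generic illustration, not Swegle's algorithm or the Cray-specific optimizations described in the paper.

      # Generic point-in-box sketch using one presorted coordinate to limit
      # candidates before exact testing; data are random placeholders.
      import numpy as np

      rng = np.random.default_rng(2)
      pts = rng.random((1_000_000, 3))                 # particle coordinates

      order = np.argsort(pts[:, 0])                    # presort once by x
      xs = pts[order, 0]

      def points_in_box(lo, hi):
          """Return indices (into pts) of points inside the axis-aligned box [lo, hi]."""
          i0 = np.searchsorted(xs, lo[0], side="left")     # binary search on x
          i1 = np.searchsorted(xs, hi[0], side="right")
          cand = order[i0:i1]                              # only x-feasible candidates
          mask = np.all((pts[cand] >= lo) & (pts[cand] <= hi), axis=1)
          return cand[mask]

      inside = points_in_box(np.array([0.4, 0.4, 0.4]), np.array([0.42, 0.42, 0.42]))
      print(inside.size, "points found")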

  19. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems. This algorithm represents a process that is impractical for standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
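
    The paper's synthesis reduces to a single linear matrix inequality; as a rough, swapped-in illustration of full-information state-feedback design, the sketch below instead takes the classical Riccati (LQR) route with SciPy on a made-up two-state plant, since that is easy to reproduce without an SDP solver. It does not implement the LMI formulation of the paper.

      # Riccati-based stand-in for full-state-feedback synthesis (not the LMI
      # approach of the paper). Plant matrices and weights are assumed values.
      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0],
                    [2.0, -1.0]])      # unstable toy plant
      B = np.array([[0.0],
                    [1.0]])
      Q = np.eye(2)                    # state weight
      R = np.eye(1)                    # input weight

      # Full-state-feedback design: u = -K x with K = R^{-1} B' P.
      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)

      closed_loop = A - B @ K
      print("closed-loop eigenvalues:", np.linalg.eigvals(closed_loop))  # all in the left half-plane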

  20. Development of the general interpolants method for the CYBER 200 series of supercomputers

    NASA Technical Reports Server (NTRS)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  1. The Spanish Blue Division

    DTIC Science & Technology

    2005-03-18

    Only fragments of this record's text survived extraction: it mentions the German North Army Group, notes that the division was popularly known as the Blue Division, and cites Torres, Francisco, La División Azul 50 Años Después (Madrid: Editorial Fuerza Nueva, 1991) and Kleinfield, Gerald R. and Tambs, Lewis A., La División española de Hitler (Madrid: Editorial San Martín, 1983).

  2. Artificial cell division.

    PubMed

    Mange, Daniel; Stauffer, André; Petraglio, Enrico; Tempesti, Gianluca

    2004-01-01

    After a survey of the theory and some realizations of self-replicating machines, this paper presents a novel self-replicating loop endowed with universal construction and computation properties. Based on the hardware implementation of the so-called Tom Thumb algorithm, the design of this loop leads to a new kind of cellular automaton made up of a processing unit and a control unit. The self-replication of the Swiss flag serves as an example of artificial cell division by the loop, which, according to autopoietic evaluation criteria, corresponds to a cell showing the phenomenology of a living system.

  3. Chemical Technology Division annual technical report, 1994

    SciTech Connect

    1995-06-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1994 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion; (3) methods for treatment of hazardous waste and mixed hazardous/radioactive waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from waste streams, concentrating radioactive waste streams with advanced evaporator technology, and producing 99Mo from low-enriched uranium for medical applications; (6) electrometallurgical treatment of the many different types of spent nuclear fuel in storage at Department of Energy sites; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources and novel ceramic precursors; materials chemistry of superconducting oxides, electrified metal/solution interfaces, molecular sieve structures, and impurities in scrap copper and steel; and the geochemical processes involved in mineral/fluid interfaces and water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  4. Scheduling Supercomputers.

    DTIC Science & Technology

    1983-02-01

    no task is scheduled with overlap. Let numpi be the total number of preemptions and idle slots of size at most t0 that are introduced. We see that if no usable block remains on Q_{m-k}, then numpi < m-k. Otherwise, numpi <= m-k-1. If j > n when this procedure terminates, then all tasks have been scheduled

  5. Cell division cycle 45 promotes papillary thyroid cancer progression via regulating cell cycle.

    PubMed

    Sun, Jing; Shi, Run; Zhao, Sha; Li, Xiaona; Lu, Shan; Bu, Hemei; Ma, Xianghua

    2017-05-01

    Cell division cycle 45 was reported to be overexpressed in some cancer-derived cell lines and was predicted to be a candidate oncogene in cervical cancer. However, the clinical and biological significance of cell division cycle 45 in papillary thyroid cancer has never been investigated. We determined the expression level and clinical significance of cell division cycle 45 using The Cancer Genome Atlas, quantitative real-time polymerase chain reaction, and immunohistochemistry. A marked upregulation of cell division cycle 45 was observed in papillary thyroid cancer tissues compared with adjacent normal tissues. Furthermore, overexpression of cell division cycle 45 positively correlates with more advanced clinical characteristics. Silencing of cell division cycle 45 suppressed proliferation of papillary thyroid cancer cells via G1-phase arrest and induction of apoptosis. The oncogenic activity of cell division cycle 45 was also confirmed in vivo. In conclusion, cell division cycle 45 may serve as a novel biomarker and a potential therapeutic target for papillary thyroid cancer.

  6. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP v1.0) in a massively parallel supercomputing environment - a case study on JUQUEEN (IBM Blue Gene/Q)

    NASA Astrophysics Data System (ADS)

    Gasper, F.; Goergen, K.; Shrestha, P.; Sulis, M.; Rihani, J.; Geimer, M.; Kollet, S.

    2014-10-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing nonlinear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models that are discussed in this study, utilizing the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP v1.0) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm using the OASIS suite of external couplers, and require memory and load balancing considerations in the exchange of the coupling fields between different component models and the allocation of computational resources, respectively. Using the advanced profiling and tracing tool Scalasca to determine an optimum load balancing leads to a 19% speedup. In massively parallel supercomputer environments, the coupler OASIS-MCT is recommended, which resolves memory limitations that may be significant in case of very large computational domains and exchange fields as they occur in these specific test cases and in many applications in terrestrial research. However, model I/O and initialization in the petascale range still require major attention, as they constitute true big data challenges in light of future exascale computing resources. Based on a factor-two speedup due to compiler optimizations, a refactored coupling interface using OASIS-MCT and an optimum load balancing, the problem size in a weak scaling study can be increased by a factor of 64 from 512 to 32 768 processes while maintaining parallel efficiencies above 80% for the component models.
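
    A generic way to picture the MPMD partitioning of ranks among component models is a communicator split, sketched below with mpi4py; the component names and rank counts are invented for illustration, and the actual coupling in TerrSysMP is handled by the OASIS coupler, not by this mechanism.

      # Generic MPMD-style partitioning of an MPI job among component models via
      # a communicator split; rank counts per component are assumed values.
      from mpi4py import MPI

      world = MPI.COMM_WORLD
      rank, size = world.Get_rank(), world.Get_size()

      # Assumed resource allocation: first block of ranks runs the atmosphere,
      # the next the land surface, the rest the subsurface model.
      n_atm = size // 2
      n_land = size // 4
      if rank < n_atm:
          color, component = 0, "atmosphere"
      elif rank < n_atm + n_land:
          color, component = 1, "land"
      else:
          color, component = 2, "subsurface"

      comp_comm = world.Split(color, key=rank)      # per-component communicator
      print(f"world rank {rank} -> {component} rank {comp_comm.Get_rank()}"
            f" of {comp_comm.Get_size()}")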

  7. Deconstructing Calculation Methods, Part 4: Division

    ERIC Educational Resources Information Center

    Thompson, Ian

    2008-01-01

    In the final article of a series of four, the author deconstructs the primary national strategy's approach to written division. The approach to division is divided into five stages: (1) mental division using partition; (2) short division of TU / U; (3) "expanded" method for HTU / U; (4) short division of HTU / U; and (5) long division.…

  8. Vector performance analysis of three supercomputers - Cray-2, Cray Y-MP, and ETA10-Q

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod A.

    1989-01-01

    Results are presented of a series of experiments to study the single-processor performance of three supercomputers: Cray-2, Cray Y-MP, and ETA10-Q. The main objective of this study is to determine the impact of certain architectural features on the performance of modern supercomputers. Features such as clock period, memory links, memory organization, multiple functional units, and chaining are considered. A simple performance model is used to examine the impact of these features on the performance of a set of basic operations. The results of implementing this set on these machines for three vector lengths and three memory strides are presented and compared. For unit-stride operations, the Cray Y-MP outperformed the Cray-2 by as much as three times and the ETA10-Q by as much as four times. Moreover, unlike the Cray-2 and ETA10-Q, even-numbered strides do not cause a major performance degradation on the Cray Y-MP. Two numerical algorithms are also used for comparison. For three problem sizes of both algorithms, the Cray Y-MP outperformed the Cray-2 by 43 percent to 68 percent and the ETA10-Q by four to eight times.
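
    The stride experiments can be mimicked with a small timing loop like the one below; on a modern machine it mostly probes cache behaviour rather than the vector memory systems of these three machines, so it only illustrates the measurement approach.

      # Timing a triad-like kernel at several memory strides; results on current
      # hardware reflect caches, not the Cray-2/Y-MP/ETA10 memory systems.
      import time
      import numpy as np

      n = 1 << 24
      a = np.ones(n)
      b = np.ones(n)

      for stride in (1, 2, 8):
          x, y = a[::stride], b[::stride]
          t0 = time.perf_counter()
          for _ in range(10):
              z = x + 2.5 * y                    # same arithmetic through strided views
          dt = time.perf_counter() - t0
          print(f"stride {stride}: {10 * x.size / dt / 1e6:.1f} Melem/s")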

  9. Use of high performance networks and supercomputers for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  10. ASCI Red -- Experiences and lessons learned with a massively parallel teraFLOP supercomputer

    SciTech Connect

    Christon, M.A.; Crawford, D.A.; Hertel, E.S.; Peery, J.S.; Robinson, A.C.

    1997-06-01

    The Accelerated Strategic Computing Initiative (ASCI) program involves Sandia, Los Alamos and Lawrence Livermore National Laboratories. At Sandia National Laboratories, ASCI applications include large deformation transient dynamics, shock propagation, electromechanics, and abnormal thermal environments. In order to resolve important physical phenomena in these problems, it is estimated that meshes ranging from 10^6 to 10^9 grid points will be required. The ASCI program is relying on the use of massively parallel supercomputers initially capable of delivering over 1 TFLOPs to perform such demanding computations. The ASCI Red machine at Sandia National Laboratories consists of over 4,500 computational nodes with a peak computational rate of 1.8 TFLOPs, 567 GBytes of memory, and 2 TBytes of disk storage. Regardless of the peak FLOP rate, there are many issues surrounding the use of massively parallel supercomputers in a production environment. These issues include parallel I/O, mesh generation, visualization, archival storage, high-bandwidth networking and the development of parallel algorithms. In order to illustrate these issues and their solution with respect to ASCI Red, demonstration calculations of time-dependent buoyancy-dominated plumes, electromechanics, and shock propagation will be presented.

  11. PREFACE: HITES 2012: 'Horizons of Innovative Theories, Experiments, and Supercomputing in Nuclear Physics'

    NASA Astrophysics Data System (ADS)

    Hecht, K. T.

    2012-12-01

    This volume contains the contributions of the speakers of an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at departmental, university, national and international level. Signed: Ted Hecht, University of Michigan. Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame), Baha Balantekin (University of Wisconsin), Bruce Barrett (University of Arizona), Umit Catalyurek (Ohio State University), David Dean (Oak Ridge National Laboratory), Jutta Escher, Chair (Lawrence Livermore National Laboratory), Jorge Hirsch (UNAM, Mexico), David Rowe (University of Toronto), Brad Sherrill (Michigan State University), Joel Tohline (Louisiana State University), Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University), Mark Caprio (University of Notre Dame), Tomas Dytrych (Louisiana State University), Ana Georgieva (INRNE, Bulgaria), Kristina Launey, Co-chair (Louisiana State University), Gabriella Popa (Ohio University Zanesville), James Vary, Co-chair (Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University), Charlie Rasco (Louisiana State University), Karen Richard, Coordinator (Louisiana State University).

  12. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    SciTech Connect

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a volume-based version of the Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
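
    The fragmentation indicator is easy to sketch: a volume-based Herfindahl-Hirschman Index is the sum of squared volume shares across venues in each time window. The snippet below computes it on synthetic data; the exact windowing used in the study, and the VPIN construction, are more involved.

      # Volume-based HHI on synthetic per-venue volumes; only the basic arithmetic,
      # not the study's exact definitions.
      import numpy as np

      rng = np.random.default_rng(3)
      n_windows, n_venues = 500, 8
      volumes = rng.gamma(shape=2.0, scale=100.0, size=(n_windows, n_venues))

      shares = volumes / volumes.sum(axis=1, keepdims=True)   # per-window volume shares
      hhi = (shares ** 2).sum(axis=1)                          # 1/n_venues (fragmented) .. 1 (concentrated)

      print("mean HHI:", hhi.mean(), "min possible:", 1 / n_venues)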

  13. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    SciTech Connect

    Bethel, E. Wes; van Rosendale, John; Southard, Dale; Gaither, Kelly; Childs, Hank; Brugger, Eric; Ahern, Sean

    2010-12-01

    Supercomputing Centers (SCs) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys does not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons), and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"

  14. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    NASA Technical Reports Server (NTRS)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  15. Groundwater cooling of a supercomputer in Perth, Western Australia: hydrogeological simulations and thermal sustainability

    NASA Astrophysics Data System (ADS)

    Sheldon, Heather A.; Schaubs, Peter M.; Rachakonda, Praveen K.; Trefry, Michael G.; Reid, Lynn B.; Lester, Daniel R.; Metcalfe, Guy; Poulet, Thomas; Regenauer-Lieb, Klaus

    2015-12-01

    Groundwater cooling (GWC) is a sustainable alternative to conventional cooling technologies for supercomputers. A GWC system has been implemented for the Pawsey Supercomputing Centre in Perth, Western Australia. Groundwater is extracted from the Mullaloo Aquifer at 20.8 °C and passes through a heat exchanger before returning to the same aquifer. Hydrogeological simulations of the GWC system were used to assess its performance and sustainability. Simulations were run with cooling capacities of 0.5 or 2.5 megawatts thermal (MWth), with scenarios representing various combinations of pumping rate, injection temperature and hydrogeological parameter values. The simulated system generates a thermal plume in the Mullaloo Aquifer and overlying Superficial Aquifer. Thermal breakthrough (transfer of heat from injection to production wells) occurred in 2.7-4.3 years for a 2.5 MWth system. Shielding (reinjection of cool groundwater between the injection and production wells) resulted in earlier thermal breakthrough but reduced the rate of temperature increase after breakthrough, such that shielding was beneficial after approximately 5 years of pumping. Increasing injection temperature was preferable to increasing flow rate for maintaining cooling capacity after thermal breakthrough. Thermal impacts on existing wells were small, with up to 10 wells experiencing a temperature increase ≥ 0.1 °C (largest increase 6 °C).

  16. The Q Continuum Simulation: Harnessing the Power of GPU Accelerated Supercomputers

    NASA Astrophysics Data System (ADS)

    Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris; Habib, Salman; Pope, Adrian; Finkel, Hal; Rizzi, Silvio; Insley, Joe; Bhattacharya, Suman

    2015-08-01

    Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the “Q Continuum” cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)^3 and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≃ 1.5 × 10^8 M_⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching in a large, cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan’s GPU accelerators.
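
    The quoted particle mass can be sanity-checked from the box size via m_p = Ω_m ρ_crit V / N; the short calculation below does this under assumed cosmological parameters and an assumed particle count of 8192^3 (consistent with "more than half a trillion"), none of which are stated in the abstract.

      # Back-of-the-envelope check of m_p = Omega_m * rho_crit * V / N.
      # Cosmological parameters and the 8192^3 particle count are assumptions.
      OMEGA_M = 0.265                    # assumed matter density parameter
      H0 = 70.0                          # assumed Hubble constant, km/s/Mpc
      RHO_CRIT_100 = 2.775e11            # critical density for h = 1, Msun / Mpc^3

      h = H0 / 100.0
      rho_m = OMEGA_M * RHO_CRIT_100 * h**2      # mean matter density, Msun / Mpc^3

      volume = 1300.0**3                 # simulation volume, Mpc^3
      n_particles = 8192**3              # assumed particle count

      m_p = rho_m * volume / n_particles
      print(f"m_p ~ {m_p:.2e} Msun")     # comes out near the quoted 1.5e8 Msun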

  17. The Q Continuum Simulation: Harnessing the Power of GPU Accelerated Supercomputers

    SciTech Connect

    Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris; Habib, Salman; Pope, Adrian; Finkel, Hal; Rizzi, Silvio; Insley, Joe; Bhattacharya, Suman

    2015-08-21

    Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the "Q Continuum" cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)^3 and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≃ 1.5 × 10^8 M_⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching in a large, cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan's GPU accelerators.

  18. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    SciTech Connect

    Kimball, Clyde; Karonis, Nicholas; Lurio, Laurence; Piot, Philippe; Xiao, Zhili; Glatz, Andreas; Pohlman, Nicholas; Hou, Minmei; Demir, Veysel; Song, Jie; Duffin, Kirk; Johns, Mitrick; Sims, Thomas; Yin, Yanbin

    2012-11-21

    This project establishes an initiative in high-speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to leap well beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained with the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  19. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes it challenging and expensive, thereby making building energy modeling unfeasible for smaller projects. In this paper, we describe the "Autotune" research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
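
    The surrogate idea behind such agents can be sketched by fitting a fast learned model to (parameter vector, simulated energy use) pairs; the example below uses synthetic data and an arbitrary scikit-learn regressor in place of real EnergyPlus runs and the actual Autotune agents.

      # Surrogate-model sketch on synthetic (parameters -> energy use) data; the
      # "simulator" response below is a made-up placeholder, not EnergyPlus.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)
      n_runs, n_params = 20_000, 12
      X = rng.uniform(0.0, 1.0, size=(n_runs, n_params))        # sampled input parameters
      y = 50 + 30 * X[:, 0] + 20 * X[:, 1] * X[:, 2] + rng.normal(0, 1, n_runs)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      agent = RandomForestRegressor(n_estimators=200, n_jobs=-1).fit(X_tr, y_tr)
      print("surrogate R^2 on held-out runs:", round(agent.score(X_te, y_te), 3))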

  20. Supercomputing enabling exhaustive statistical analysis of genome wide association study data: Preliminary results.

    PubMed

    Reumann, Matthias; Makalic, Enes; Goudey, Benjamin W; Inouye, Michael; Bickerstaffe, Adrian; Bui, Minh; Park, Daniel J; Kapuscinski, Miroslaw K; Schmidt, Daniel F; Zhou, Zeyu; Qian, Guoqi; Zobel, Justin; Wagner, John; Hopper, John L

    2012-01-01

    Most published GWAS do not examine SNP interactions due to the high computational complexity of computing p-values for the interaction terms. Our aim is to utilize supercomputing resources to apply complex statistical techniques to the world's accumulating GWAS, epidemiology, survival and pathology data to uncover more information about genetic and environmental risk, biology and aetiology. We performed the Bayesian Posterior Probability test on a pseudo data set with 500,000 single nucleotide polymorphisms and 100 samples as proof of principle. We carried out strong scaling simulations on 2 to 4,096 processing cores with factor-2 increments in partition size. On two processing cores, the run time is 317 h, i.e., almost two weeks, compared to less than 10 minutes on 4,096 processing cores. The speedup factor is 2,020, which is very close to the theoretical value of 2,048. This work demonstrates the feasibility of performing exhaustive higher-order analysis of GWAS studies using independence testing for contingency tables. We are now in a position to employ supercomputers with hundreds of thousands of threads for higher-order analysis of GWAS data using complex statistics.
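
    The per-core work in such an exhaustive pairwise scan can be sketched as follows; a chi-square test of independence on the joint-genotype-by-phenotype contingency table stands in for the Bayesian posterior probability test used in the study, and the genotype data are random placeholders.

      # Pairwise SNP interaction scan over one core's chunk of pairs; chi-square
      # independence testing substitutes for the study's Bayesian test, and the
      # genotype/phenotype data are random placeholders.
      import numpy as np
      from itertools import combinations
      from scipy.stats import chi2_contingency

      rng = np.random.default_rng(5)
      n_snps, n_samples = 50, 100
      genotypes = rng.integers(0, 3, size=(n_snps, n_samples))   # 0/1/2 allele counts
      case = rng.integers(0, 2, size=n_samples)                  # case/control labels

      results = []
      for i, j in combinations(range(n_snps), 2):                # this core's pair chunk
          combo = genotypes[i] * 3 + genotypes[j]                # 9 joint genotype classes
          table = np.zeros((9, 2))
          for c, k in zip(combo, case):
              table[c, k] += 1
          kept = table[table.sum(axis=1) > 0]                    # drop empty rows
          if kept.shape[0] > 1 and (kept.sum(axis=0) > 0).all():
              chi2, p, dof, _ = chi2_contingency(kept)
              results.append(((i, j), p))

      print(len(results), "pairs tested; smallest p =", min(p for _, p in results))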

  1. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    SciTech Connect

    Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby; Worley, Patrick H

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize the communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor-join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership-class supercomputer at Oak Ridge National Laboratory.
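
    One of the named reordering ideas, spectral bisection, can be sketched directly: the Fiedler vector of the communication graph's Laplacian splits the tasks into two weakly communicating halves, which can then be mapped to nearby nodes and split recursively. The matrix below is a random placeholder for measured traffic.

      # Spectral bisection of a communication graph via the Fiedler vector; the
      # communication matrix is a random stand-in for measured mpiP-style traffic.
      import numpy as np

      rng = np.random.default_rng(6)
      n_tasks = 16
      W = rng.random((n_tasks, n_tasks))
      W = (W + W.T) / 2                        # symmetric communication volumes
      np.fill_diagonal(W, 0.0)

      L = np.diag(W.sum(axis=1)) - W           # graph Laplacian of the comm graph
      eigvals, eigvecs = np.linalg.eigh(L)
      fiedler = eigvecs[:, 1]                  # eigenvector of 2nd-smallest eigenvalue

      group = fiedler > np.median(fiedler)     # balanced two-way partition
      print("partition A:", np.where(~group)[0])
      print("partition B:", np.where(group)[0])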

  2. The case of the missing supercomputer performance : achieving optimal performance on the 8, 192 processors of ASCI Q

    SciTech Connect

    Petrini, F.; Kerbyson, D. J.; Pakin, S. D.

    2003-01-01

    In this paper we describe how we improved the effective performance of ASCI Q, the world's second-fastest supercomputer, to meet our expectations. Using an arsenal of performance-analysis techniques including analytical models, custom microbenchmarks, full applications, and simulators, we succeeded in observing a serious, but previously undetectable, performance problem. We identified the source of the problem, eliminated the problem, and 'closed the loop' by demonstrating improved application performance. We present our methodology and provide insight into performance analysis that is immediately applicable to other large-scale cluster-based supercomputers.

  3. Understanding Microbial Divisions of Labor

    PubMed Central

    Zhang, Zheren; Claessen, Dennis; Rozen, Daniel E.

    2016-01-01

    Divisions of labor are ubiquitous in nature and can be found at nearly every level of biological organization, from the individuals of a shared society to the cells of a single multicellular organism. Many different types of microbes have also evolved a division of labor among their colony members. Here we review several examples of microbial divisions of labor, including cases from both multicellular and unicellular microbes. We first discuss evolutionary arguments, derived from kin selection, that allow divisions of labor to be maintained in the face of non-cooperative cheater cells. Next we examine the widespread natural variation within species in their expression of divisions of labor and compare this to the idea of optimal caste ratios in social insects. We highlight gaps in our understanding of microbial caste ratios and argue for a shift in emphasis from understanding the maintenance of divisions of labor, generally, to instead focusing on their specific ecological benefits for microbial genotypes and colonies. Thus, in addition to the canonical divisions of labor between, e.g., reproductive and vegetative tasks, we may also anticipate divisions of labor to evolve to reduce the costly production of secondary metabolites or secreted enzymes, ideas we consider in the context of streptomycetes. The study of microbial divisions of labor offers opportunities for new experimental and molecular insights across both well-studied and novel model systems. PMID:28066387

  4. 2003 Chemical Engineering Division annual technical report.

    SciTech Connect

    Lewis, D.; Graziano, D.; Miller, J. F.; Vandegrift, G.

    2004-04-26

    The Chemical Engineering Division is one of six divisions within the Engineering Research Directorate at Argonne National Laboratory, one of the U.S. government's oldest and largest research laboratories. The University of Chicago oversees the laboratory on behalf of the U.S. Department of Energy (DOE). Argonne's mission is to conduct basic scientific research, to operate national scientific facilities, to enhance the nation's energy resources, to promote national security, and to develop better ways to manage environmental problems. Argonne has the further responsibility of strengthening the nation's technology base by developing innovative technology and transferring it to industry. The Division is a diverse early-stage engineering organization, specializing in the treatment of spent nuclear fuel, development of advanced electrochemical power sources, and management of both high- and low-level nuclear wastes. Additionally, the Division operates the Analytical Chemistry Laboratory, which provides a broad range of analytical services to Argonne and other organizations. The Division is multidisciplinary. Its people have formal training in chemistry; physics; materials science; and electrical, mechanical, chemical, and nuclear engineering. They are specialists in electrochemistry, ceramics, metallurgy, catalysis, materials characterization, nuclear magnetic resonance, repository science, and the nuclear fuel cycle. Our staff have experience working in and collaborating with university, industry and government research and development laboratories throughout the world. Our wide-ranging expertise finds ready application in solving energy, national security, and environmental problems. Division personnel are frequently called on by governmental and industrial organizations for advice and contributions to problem solving in areas that intersect present and past Division programs and activities. Currently, we are engaged in the development of several technologies of

  5. Computational fluid dynamics: Complex flows requiring supercomputers. (Latest citations from the INSPEC: Information services for the Physics and Engineering Communities database). Published Search

    SciTech Connect

    Not Available

    1993-08-01

    The bibliography contains citations concerning computational fluid dynamics (CFD), a new technology in computational science for complex flow simulations using supercomputers. Citations discuss the design, analysis, and performance evaluation of aircraft, rockets and missiles, and automobiles. References to supercomputers, array processors, parallel processors, and computational software packages are included. (Contains 250 citations and includes a subject term index and title list.)

  6. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP) in a massively parallel supercomputing environment - a case study on JUQUEEN (IBM Blue Gene/Q)

    NASA Astrophysics Data System (ADS)

    Gasper, F.; Goergen, K.; Kollet, S.; Shrestha, P.; Sulis, M.; Rihani, J.; Geimer, M.

    2014-06-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing non-linear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models, which are discussed in this study using the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm and require memory and load balancing considerations in the exchange of the coupling fields between different component models and in the allocation of computational resources, respectively. These considerations can be addressed with advanced profiling and tracing tools, leading to the efficient use of massively parallel computing environments, which is then mainly determined by the parallel performance of the individual component models. However, the problem of model I/O and initialization in the peta-scale range requires major attention, because it constitutes a true big-data challenge in the perspective of future exa-scale capabilities, and it remains unsolved.
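
    A common way to organize such an MPMD-coupled system on an MPI machine is to split the global communicator by component model; the mpi4py sketch below shows that pattern in generic form. The component names, rank counts and split logic are illustrative assumptions, not TerrSysMP code.

        from mpi4py import MPI

        world = MPI.COMM_WORLD
        rank, size = world.Get_rank(), world.Get_size()

        # Illustrative partition of the job's ranks among three component models;
        # real coupled systems size these from namelists and load-balance studies.
        n_atmosphere = size // 2
        n_land = size // 4

        if rank < n_atmosphere:
            color, name = 0, "atmosphere"
        elif rank < n_atmosphere + n_land:
            color, name = 1, "land-surface"
        else:
            color, name = 2, "subsurface"

        comp_comm = world.Split(color, key=rank)   # intra-component communicator
        print(f"world rank {rank:4d} -> {name} rank {comp_comm.Get_rank()} "
              f"of {comp_comm.Get_size()}")

        # Coupling fields would then be exchanged between the components, for
        # example through inter-communicators or a coupler library such as OASIS.

    Run under mpirun, this prints the component membership of every rank; the memory and load balancing considerations mentioned above then amount to choosing the per-component rank counts and the exchange schedule well.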

  7. Physics division annual report 1999

    SciTech Connect

    Thayer, K., ed.; Physics

    2000-12-06

    This report summarizes the research performed in the past year in the Argonne Physics Division. The Division's programs include operation of ATLAS as a national heavy-ion user facility, nuclear structure and reaction research with beams of heavy ions, accelerator research and development especially in superconducting radio frequency technology, nuclear theory and medium energy nuclear physics. The Division took significant strides forward in its science and its initiatives for the future in the past year. Major progress was made in developing the concept and the technology for the future advanced facility of beams of short-lived nuclei, the Rare Isotope Accelerator. The scientific program capitalized on important instrumentation initiatives with key advances in nuclear science. In 1999, the nuclear science community adopted the Argonne concept for a multi-beam superconducting linear accelerator driver as the design of choice for the next major facility in the field, a Rare Isotope Accelerator (RIA), as recommended by the Nuclear Science Advisory Committee's 1996 Long Range Plan. Argonne has made significant R&D progress on almost all aspects of the design concept, including the fast gas catcher (to allow fast fragmentation beams to be stopped and reaccelerated) that in large part defined the RIA concept, the superconducting rf technology for the driver accelerator, the multiple-charge-state concept (to permit the facility to meet the design intensity goals with existing ion-source technology), and designs and tests of high-power target concepts to effectively deal with the full beam power of the driver linac. An NSAC subcommittee recommended the Argonne concept and set as the design goal uranium beams of 100-kW power at 400 MeV/u. Argonne demonstrated that this goal can be met with an innovative, but technically in-hand, design. The heavy-ion research program focused on GammaSphere, the premier facility for nuclear structure gamma-ray studies. One example of the

  8. Physics Division annual report 2004.

    SciTech Connect

    Glover, J.

    2006-04-06

    lead in the development and exploitation of the new technical concepts that will truly make RIA, in the words of NSAC, "the world-leading facility for research in nuclear structure and nuclear astrophysics". The performance standards for new classes of superconducting cavities continue to increase. Driver linac transients and faults have been analyzed to understand reliability issues and failure modes. Liquid-lithium targets were shown to successfully survive the full-power deposition of a RIA beam. Our science and our technology continue to point the way to this major advance. It is a tremendously exciting time in science, for RIA holds the keys to unlocking important secrets of nature. The work described here shows how far we have come and makes it clear we know the path to meet these intellectual challenges. The great progress that has been made in meeting the exciting intellectual challenges of modern nuclear physics reflects the talents and dedication of the Physics Division staff and the visitors, guests and students who bring so much to the research.

  9. Lightning Talks 2015: Theoretical Division

    SciTech Connect

    Shlachter, Jack S.

    2015-11-25

    This document is a compilation of slides from a number of student presentations given to LANL Theoretical Division members. The subjects cover the range of activities of the Division, including plasma physics, environmental issues, materials research, bacterial resistance to antibiotics, and computational methods.

  10. The Division of Household Labor.

    ERIC Educational Resources Information Center

    Spitze, Glenna D.; Huber, Joan

    A study was conducted to test the following hypotheses concerning division of household labor (DOHL) between husbands and wives: (1) the division of household labor is somewhat affected by the availability of time, especially the wife's time; (2) there are strong effects of relative power, as measured by market-related resources, marital…

  11. Division II: Sun and Heliosphere

    NASA Astrophysics Data System (ADS)

    Melrose, Donald B.; Martínez Pillet, Valentin; Webb, David F.; van Driel-Gesztelyi, Lidia; Bougeret, Jean-Louis; Klimchuk, James A.; Kosovichev, Alexander; von Steiger, Rudolf

    Division II of the IAU provides a forum for astronomers and astrophysicists studying a wide range of phenomena related to the structure, radiation and activity of the Sun, and its interaction with the Earth and the rest of the solar system. Division II encompasses three Commissions, 10, 12 and 49, and four Working Groups.

  12. Division III--Another Ballgame.

    ERIC Educational Resources Information Center

    Grites, Thomas J.; James, G. Larry

    1986-01-01

    The non-scholarship athletes of Division III represent a substantial group of advisees that are similar to, and yet different from the scholarship athlete. Division III student-athletes, their characteristics, situations, and needs are examined and specific efforts to improve their quality of student life are identified. (MLW)

  13. Chemical Technology Division, Annual technical report, 1991

    SciTech Connect

    Not Available

    1992-03-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1991 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion and coal-fired magnetohydrodynamics; (3) methods for treatment of hazardous and mixed hazardous/radioactive waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources; chemistry of superconducting oxides and other materials of interest with technological application; interfacial processes of importance to corrosion science, catalysis, and high-temperature superconductivity; and the geochemical processes involved in water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  14. Physics Division computer facilities

    SciTech Connect

    Cyborski, D.R.; Teh, K.M.

    1995-08-01

    The Physics Division maintains several computer systems for data analysis, general-purpose computing, and word processing. While the VMS VAX clusters are still used, this past year saw a greater shift to the Unix Cluster with the addition of more RISC-based Unix workstations. The main Divisional VAX cluster, which consists of two VAX 3300s configured as a dual-host system, serves as boot node and disk server to seven other satellite nodes consisting of two VAXstation 3200s, three VAXstation 3100 machines, a VAX-11/750, and a MicroVAX II. There are three 6250/1600 bpi 9-track tape drives, six 8-mm tapes and about 9.1 GB of disk storage served to the cluster by the various satellites. Also, two of the satellites (the MicroVAX and VAX-11/750) have DAPHNE front-end interfaces for data acquisition. Since the tape drives are accessible cluster-wide via a software package, they are, in addition to replay, used for tape-to-tape copies. There is, however, a satellite node outfitted with two 8-mm drives available for this purpose. Although not part of the main cluster, a DEC 3000 Alpha machine obtained for data acquisition is also available for data replay. In one case, users reported a performance increase by a factor of 10 when using this machine.

  15. Mixed precision numerical weather prediction on hybrid GPU-CPU supercomputers

    NASA Astrophysics Data System (ADS)

    Lapillonne, Xavier; Osuna, Carlos; Spoerri, Pascal; Osterried, Katherine; Charpilloz, Christophe; Fuhrer, Oliver

    2017-04-01

    A new version of the climate and weather model COSMO has been developed that runs faster on traditional high performance computing systems with CPUs as well as on heterogeneous architectures using graphics processing units (GPUs). The model was in addition adapted to be able to run in "single precision" mode. After discussing the key changes introduced in this new model version and the tools used in the porting approach, we present three applications, namely the MeteoSwiss operational weather prediction system, COSMO-LEPS and the CALMO project, which already take advantage of the performance improvement of up to a factor of 4 by running on GPU systems and using the single precision mode. We discuss how the code changes open new perspectives for scientific research and can enable researchers to get access to a new class of supercomputers.
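
    The practical concern with a single-precision mode is accumulation error; the numpy sketch below illustrates the kind of check one might run, comparing float32 and float64 summation of a large field. It is purely illustrative and not part of COSMO.

        import numpy as np

        rng = np.random.default_rng(0)
        field = rng.standard_normal(10_000_000) * 1e-3 + 300.0   # e.g. temperatures in K

        ref = field.astype(np.float64).sum()

        # Accumulate 1,000 float32 partial sums naively, as a loop-ordered code might.
        naive32 = np.float32(0.0)
        for chunk in np.array_split(field.astype(np.float32), 1000):
            naive32 += chunk.sum(dtype=np.float32)

        pairwise32 = field.astype(np.float32).sum()   # numpy's pairwise summation

        print(f"float64 reference    : {ref:.3f}")
        print(f"float32 naive chunks : {float(naive32):.3f}")
        print(f"float32 pairwise     : {float(pairwise32):.3f}")
        # Single precision halves memory traffic, which is where much of the
        # speedup on bandwidth-bound stencil codes comes from, but summation
        # order and accumulation strategy then matter for accuracy.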

  16. ExaML version 3: a tool for phylogenomic analyses on supercomputers

    PubMed Central

    Kozlov, Alexey M.; Aberer, Andre J.; Stamatakis, Alexandros

    2015-01-01

    Motivation: Phylogenies are increasingly used in all fields of medical and biological research. Because of the next generation sequencing revolution, datasets used for conducting phylogenetic analyses grow at an unprecedented pace. We present ExaML version 3, a dedicated production-level code for inferring phylogenies on whole-transcriptome and whole-genome alignments using supercomputers. Results: We introduce several improvements and extensions to ExaML: Extensions of substitution models and supported data types, the integration of a novel load balance algorithm as well as a parallel I/O optimization that significantly improve parallel efficiency, and a production-level implementation for Intel MIC-based hardware platforms. Availability and implementation: The code is available under GNU GPL at https://github.com/stamatak/ExaML. Contact: Alexandros.Stamatakis@h-its.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25819675

  17. ExaML version 3: a tool for phylogenomic analyses on supercomputers.

    PubMed

    Kozlov, Alexey M; Aberer, Andre J; Stamatakis, Alexandros

    2015-08-01

    Phylogenies are increasingly used in all fields of medical and biological research. Because of the next generation sequencing revolution, datasets used for conducting phylogenetic analyses grow at an unprecedented pace. We present ExaML version 3, a dedicated production-level code for inferring phylogenies on whole-transcriptome and whole-genome alignments using supercomputers. We introduce several improvements and extensions to ExaML: Extensions of substitution models and supported data types, the integration of a novel load balance algorithm as well as a parallel I/O optimization that significantly improve parallel efficiency, and a production-level implementation for Intel MIC-based hardware platforms. © The Author 2015. Published by Oxford University Press.

  18. Preparing for in situ processing on upcoming leading-edge supercomputers

    DOE PAGES

    Kress, James; Churchill, Randy Michael; Klasky, Scott; ...

    2016-10-01

    High performance computing applications are producing increasingly large amounts of data and placing enormous stress on current capabilities for traditional post-hoc visualization techniques. Because of the growing compute and I/O imbalance, data reductions, including in situ visualization, are required. These reduced data are used for analysis and visualization in a variety of different ways. Many of the visualization and analysis requirements are known a priori, but when they are not, scientists are dependent on the reduced data to accurately represent the simulation in post hoc analysis. The contribution of this paper is a description of the directions we are pursuing to assist a large-scale fusion simulation code to succeed on the next generation of supercomputers. These directions include the role of in situ processing for performing data reductions, as well as the tradeoffs between data size and data integrity within the context of complex operations in a typical scientific workflow.

  19. Preparing for in situ processing on upcoming leading-edge supercomputers

    SciTech Connect

    Kress, James; Churchill, Randy Michael; Klasky, Scott; Kim, Mark; Childs, Hank; Pugmire, David

    2016-10-01

    High performance computing applications are producing increasingly large amounts of data and placing enormous stress on current capabilities for traditional post-hoc visualization techniques. Because of the growing compute and I/O imbalance, data reductions, including in situ visualization, are required. These reduced data are used for analysis and visualization in a variety of different ways. Many of the visualization and analysis requirements are known a priori, but when they are not, scientists are dependent on the reduced data to accurately represent the simulation in post hoc analysis. The contribution of this paper is a description of the directions we are pursuing to assist a large-scale fusion simulation code to succeed on the next generation of supercomputers. These directions include the role of in situ processing for performing data reductions, as well as the tradeoffs between data size and data integrity within the context of complex operations in a typical scientific workflow.
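
    As a toy illustration of the size-versus-integrity tradeoff, the sketch below reduces each simulation step in situ to a fixed-size histogram and a few moments instead of writing the full field; the loop and the choice of reductions are assumptions for illustration, not the fusion workflow described here.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_step(step, n=1_000_000):
            # Stand-in for one timestep of a large simulation field.
            return rng.standard_normal(n) + 0.01 * step

        def in_situ_reduce(field, bins=64, lo=-6.0, hi=6.0):
            # Keep only a histogram and low-order moments: O(bins) instead of O(n).
            hist, _ = np.histogram(field, bins=bins, range=(lo, hi))
            return {"hist": hist, "mean": field.mean(), "std": field.std()}

        reduced = []
        for step in range(100):
            field = simulate_step(step)            # ~8 MB per step in memory
            reduced.append(in_situ_reduce(field))  # ~0.5 KB per step retained

        full_bytes = 100 * 1_000_000 * 8
        kept_bytes = 100 * (64 * 8 + 2 * 8)
        print(f"reduction factor ~{full_bytes / kept_bytes:,.0f}x")
        # The histograms answer questions anticipated a priori; anything else
        # (e.g. spatial structure) is lost, which is the size-versus-integrity
        # tradeoff discussed above.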

  20. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    SciTech Connect

    Widener, Patrick; Jaconette, Steven; Bridges, Patrick G.; Xia, Lei; Dinda, Peter; Cui, Zheng.; Lange, John; Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  1. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPUs. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  2. Comparative performance evaluation of two supercomputers: CDC Cyber-205 and CRI Cray-1

    SciTech Connect

    Bucher, I.Y.; Moore, J.W.

    1981-01-01

    This report compares the performance of Control Data Corporation's newest supercomputer, the Cyber-205, with the Cray Research, Inc. Cray-1, currently the Laboratory's largest mainframe. The rationale of our benchmarking effort is discussed. Results are presented of tests to determine the speed of basic arithmetic operations, of runs using our standard benchmark programs, and of runs using three codes that have been optimized for both machines: a linear system solver, a model hydrodynamics code, and parts of a plasma simulation code. It is concluded that the speed of the Cyber-205 for memory-to-memory operations on vectors stored in consecutive locations is considerably faster than that of the Cray-1. However, the overall performance of the machine is not quite equal to that of the Cray for tasks of interest to the Laboratory as represented by our benchmark set.
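
    The comparison hinges on memory-to-memory vector operations over consecutively stored elements; a minimal, assumed example of such a kernel (a vector triad in the spirit of classic bandwidth benchmarks, not the Laboratory's benchmark set) is sketched below in Python/numpy.

        import time
        import numpy as np

        n = 20_000_000
        a = np.empty(n)
        b = np.random.rand(n)
        c = np.random.rand(n)
        scalar = 3.0

        t0 = time.perf_counter()
        np.multiply(c, scalar, out=a)   # a = scalar * c
        a += b                          # a = b + scalar * c, contiguous vectors
        dt = time.perf_counter() - t0

        bytes_moved = 4 * n * 8         # read b and c, read and write a
        print(f"triad: {bytes_moved / dt / 1e9:.1f} GB/s effective")
        # The Cyber-205 streamed such contiguous vectors memory-to-memory, while
        # the Cray-1 passed them through vector registers, so their relative
        # performance depended strongly on vector length and access pattern.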

  3. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    SciTech Connect

    Gallarno, George; Rogers, James H; Maxwell, Don E

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  4. Division X: Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Taylor, Russ; Chapman, Jessica; Rendong, Nan; Carilli, Christopher; Giovannini, Gabriele; Hills, Richard; Hirabayashi, Hisashi; Jonas, Justin; Lazio, Joseph; Morganti, Raffaella; Rubio, Monica; Shastri, Prajval

    2012-04-01

    This triennium has seen a phenomenal investment in development of observational radio astronomy facilities in all parts of the globe at a scale that significantly impacts the international community. This includes both major enhancements such as the transition from the VLA to the EVLA in North America, and the development of new facilities such as LOFAR, ALMA, FAST, and Square Kilometre Array precursor telescopes in Australia and South Africa. These developments are driven by advances in radio-frequency, digital and information technologies that tremendously enhance the capabilities in radio astronomy. These new developments foreshadow major scientific advances driven by radio observations in the next triennium. We highlight these facility developments in section 3 of this report. A selection of science highlight from this triennium are summarized in section 2.

  5. Conditions and Features of Paleoproterozoic Continental Subduction from Supercomputer Modelling Results

    NASA Astrophysics Data System (ADS)

    Zavyalov, Sergey; Zakharov, Vladimir

    2017-04-01

    A number of issues concerning Precambrian geodynamics remain unsolved because of the uncertainty of many physical (thermal regime, lithosphere thickness, crust thickness, etc.) and chemical (mantle composition, crust composition) parameters, which differed considerably from the present-day values. The presence of ultra-high pressure metamorphic (UHPM) rocks in collisional orogens is considered a reliable indicator of continental subduction. The scarcity of Precambrian UHPM terranes gives reason to believe that subduction of continental crust was not common. In this work, we show results of numerical supercomputations based on a petrological and thermomechanical 2D model, which simulates the process of collision between two continental plates, each 140-250 km thick, with a convergence rate of 5 cm/year. In the model, the upper mantle temperature is 130-150°C higher than the modern value, while the continental crust radiogenic heat production is higher than the present value by a factor of 1.5. The results show that even under Paleoproterozoic conditions continental subduction is a widespread process. The parameter with the most significant influence on continental subduction style is the composition of the continental crust. The following two archetypal continental crust compositions are examined: 1) completely felsic continental crust; 2) basic lower crust and felsic upper crust. Continental subduction with the felsic crust is short-lived and lasts less than 5 Myr. Rocks exhume very fast (< 1 Myr). In the case of a basic lower crust, continental subduction is more stable and lasts over 15 Myr. This work was supported by the Supercomputing Centre of Lomonosov Moscow State University.

  6. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    NASA Astrophysics Data System (ADS)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered commute to other multi-core and accelerated (e.g., via GPU) platforms and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.
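
    The headline figures can be put on a per-core basis with one line of arithmetic each; the sketch below simply divides the quoted aggregate numbers by the quoted core count (treating the 12,240-core figure as the divisor, which glosses over Roadrunner's hybrid Cell/Opteron layout).

        cores = 12240               # full Roadrunner run quoted above
        particles = 1.0e12          # "over one trillion particles"
        flops = 0.374e15            # ">0.374 Pflop/s"

        print(f"~{particles / cores:,.0f} particles per core")
        print(f"~{flops / cores / 1e9:.1f} Gflop/s per core (aggregate / cores)")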

  7. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE PAGES

    Wang, Bei; Ethier, Stephane; Tang, William; ...

    2017-06-29

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization, have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.
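
    To illustrate the flavor of such a multi-level decomposition (in generic form; the dimensions and naming are assumptions, not GTC-P internals), the sketch below maps a flat MPI rank onto a 2D domain patch plus a particle sub-group within that patch.

        def decompose(rank, n_radial, n_toroidal, n_particle_groups):
            """Map a flat MPI rank to (radial domain, toroidal domain, particle group).

            Total ranks = n_radial * n_toroidal * n_particle_groups. Ranks sharing
            a (radial, toroidal) cell hold the same grid patch but own disjoint
            subsets of the particles ("particle decomposition").
            """
            domains_per_plane = n_radial * n_toroidal
            particle_group, cell = divmod(rank, domains_per_plane)
            radial, toroidal = divmod(cell, n_toroidal)
            return radial, toroidal, particle_group

        # Example: 512 ranks = 8 radial x 16 toroidal domains x 4 particle groups.
        for rank in (0, 1, 17, 511):
            print(rank, decompose(rank, 8, 16, 4))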

  8. The Office of the Materials Division

    NASA Technical Reports Server (NTRS)

    Ramsey, amanda J.

    2004-01-01

    I was assigned to the Materials Division, which consists of the following branches: the Advanced Metallics Branch/5120-RMM, Ceramics Branch/5130-RMC, Polymers Branch/5150-RMP, and the Durability and Protective Coatings Branch/5160-RMD. Mrs. Pamela Spinosi is my assigned mentor. She was assisted by Ms. Raysa Rodriguez/5100-RM and Mrs. Denise Prestien/5100-RM, who are both employed by InDyne, Inc. My primary assignment this past summer was working directly with Ms. Rodriguez, assisting her with setting up the Integrated Financial Management Program (IFMP) 5130-RMC/Branch procedures and logs. These duties consisted of creating various spreadsheets for each individual branch member, which were updated daily. It was not hard to familiarize myself with these duties since this is my second summer working with Ms. Rodriguez at NASA Glenn Research Center. I also assisted RMC with ordering laboratory supplies and equipment for the Basic Materials Laboratory (Building 106) using the IFMP/Purchase Card (P-card), a NASA-wide software program. I entered new Travel Authorizations for the 5130-RMC Civil Servant Branch Members into the IFMP/Travel and Requisitions System. I also entered and completed Travel Vouchers for the 5130-RMC Ceramics Branch. I assisted the Division Office in creating a new Emergency Contact list for the Materials Division. I worked with Dr. Hugh Gray, the Division Chief, and Dr. Ajay Misra, the 5130-RMC Branch Chief, on priority action items, with a close deadline, for a large NASA Proposal. Another project was working closely with Ms. Rodriguez in organizing and preparing for Dr. Ajay K. Misra's SESCDP (two-year detail). This consisted of organizing files, file folders, and personal information, recording all data material onto CDs, and printing all presentations for display in binders. I attended numerous Branch meetings and observed many changes in the Branch Management organization.

  10. Accelerator and Fusion Research Division 1989 summary of activities

    SciTech Connect

    Not Available

    1990-06-01

    This report discusses the research being conducted at Lawrence Berkeley Laboratory's Accelerator and Fusion Research Division. The main topics covered are: heavy-ion fusion accelerator research; magnetic fusion energy; advanced light source; center for x-ray optics; exploratory studies; high-energy physics technology; and bevalac operations.

  11. Paradigms in Physics: A New Upper-Division Curriculum.

    ERIC Educational Resources Information Center

    Manogue, Corinne A.; Siemens, Philip J.; Tate, Janet; Browne, Kerry; Niess, Margaret L.; Wolfer, Adam J.

    2001-01-01

    Describes a new curriculum for the final two years of a B.S. program in physics. Defines junior progress from a descriptive, lower-division understanding to an advanced analysis of a topic by phenomenon rather than discipline. (Contains 17 references.) (Author/YDS)

  13. Physics division annual report - October 2000.

    SciTech Connect

    Thayer, K.

    2000-10-16

    This report summarizes the research performed in the past year in the Argonne Physics Division. The Division's programs include operation of ATLAS as a national heavy-ion user facility, nuclear structure and reaction research with beams of heavy ions, accelerator research and development especially in superconducting radio frequency technology, nuclear theory and medium energy nuclear physics. The Division took significant strides forward in its science and its initiatives for the future in the past year. Major progress was made in developing the concept and the technology for the future advanced facility of beams of short-lived nuclei, the Rare Isotope Accelerator. The scientific program capitalized on important instrumentation initiatives with key advances in nuclear science. In 1999, the nuclear science community adopted the Argonne concept for a multi-beam superconducting linear accelerator driver as the design of choice for the next major facility in the field, a Rare Isotope Accelerator (RIA), as recommended by the Nuclear Science Advisory Committee's 1996 Long Range Plan. Argonne has made significant R&D progress on almost all aspects of the design concept, including the fast gas catcher (to allow fast fragmentation beams to be stopped and reaccelerated) that in large part defined the RIA concept, the superconducting rf technology for the driver accelerator, the multiple-charge-state concept (to permit the facility to meet the design intensity goals with existing ion-source technology), and designs and tests of high-power target concepts to effectively deal with the full beam power of the driver linac. An NSAC subcommittee recommended the Argonne concept and set as the design goal uranium beams of 100-kW power at 400 MeV/u. Argonne demonstrated that this goal can be met with an innovative, but technically in-hand, design.

  14. Chemical Technology Division annual technical report, 1993

    SciTech Connect

    Battles, J.E.; Myles, K.M.; Laidler, J.J.; Green, D.W.

    1994-04-01

    During this period, the Chemical Technology (CMT) Division conducted research and development in the following areas: advanced batteries and fuel cells; fluidized-bed combustion and coal-fired magnetohydrodynamics; treatment of hazardous waste and mixed hazardous/radioactive waste; reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; separating and recovering transuranic elements, concentrating radioactive waste streams with advanced evaporators, and producing 99Mo from low-enriched uranium; recovering actinides from IFR core and blanket fuel, removing fission products from recycled fuel, and removal of actinides in spent fuel from commercial water-cooled nuclear reactors; and physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources and novel ceramic precursors; materials chemistry of superconducting oxides, electrified metal/solution interfaces, molecular sieve structures, thin-film diamond surfaces, effluents from wood combustion, and molten silicates; and the geochemical processes involved in water-rock interactions. The Analytical Chemistry Laboratory in CMT also provides a broad range of analytical chemistry support.

  15. Time-division SQUID multiplexers

    NASA Astrophysics Data System (ADS)

    Irwin, K. D.; Vale, L. R.; Bergren, N. E.; Deiker, S.; Grossman, E. N.; Hilton, G. C.; Nam, S. W.; Reintsema, C. D.; Rudman, D. A.; Huber, M. E.

    2002-02-01

    SQUID multiplexers make it possible to build arrays of thousands of low-temperature bolometers and microcalorimeters based on superconducting transition-edge sensors with a manageable number of readout channels. We discuss the technical tradeoffs between proposed time-division multiplexer and frequency-division multiplexer schemes and motivate our choice of time division. Our first-generation SQUID multiplexer is now in use in an astronomical instrument. We describe our second-generation SQUID multiplexer, which is based on a new architecture that significantly reduces the dissipation of power at the first stage, allowing thousands of SQUIDs to be operated at the base temperature of a cryostat.
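
    The essence of time-division multiplexing is that one readout channel visits many sensor rows in turn, fast enough that each row is revisited before its signal changes appreciably. The sketch below is a schematic model of that schedule; the row count and dwell time are illustrative assumptions, not instrument parameters.

        import numpy as np

        n_rows = 32                  # detectors sharing one readout column (assumed)
        row_dwell = 1.25e-6          # seconds spent on each row per visit (assumed)
        frame_time = n_rows * row_dwell

        t = np.arange(0, 0.01, row_dwell)          # 10 ms of row visits
        visited_row = np.arange(t.size) % n_rows   # which detector is read at each slot

        print(f"per-detector sample rate ~ {1.0 / frame_time:,.0f} Hz")
        det0_times = t[visited_row == 0]
        print(f"detector 0 revisited every {np.diff(det0_times)[0] * 1e6:.1f} us")
        # More rows per column means fewer wires and less power dissipated at the
        # cold stage, at the cost of a lower per-detector sampling rate and
        # tighter bandwidth and aliasing constraints.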

  16. Physics division annual report 2006.

    SciTech Connect

    Glover, J.; Physics

    2008-02-28

    This report highlights the activities of the Physics Division of Argonne National Laboratory in 2006. The Division's programs include the operation as a national user facility of ATLAS, the Argonne Tandem Linear Accelerator System, research in nuclear structure and reactions, nuclear astrophysics, nuclear theory, investigations in medium-energy nuclear physics as well as research and development in accelerator technology. The mission of nuclear physics is to understand the origin, evolution and structure of baryonic matter in the universe--the core of matter, the fuel of stars, and the basic constituent of life itself. The Division's research focuses on innovative new ways to address this mission.

  17. Division rules for polygonal cells.

    PubMed

    Cowan, R; Morris, V B

    1988-03-07

    A number of fascinating mathematical problems concerning the division of two-dimensional space are formulated from questions about the planes of cell division in embryonic epithelia. Their solution aids in the quantitative description of cellular arrangement in epithelia. Cells, considered as polygons, site their division line according to stochastic rules, eventually forming a tessellation of the plane. The equilibrium distributions for the resulting mix of polygonal types are explored for a range of stochastic rules. We find surprising links with some classical distributions from the theory of probability.
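
    A minimal Monte Carlo version of this kind of model, under one simple assumed siting rule (the division line joins points on two distinct, uniformly chosen edges), is sketched below; it tracks how the mix of polygonal types evolves over successive rounds of division.

        import random
        from collections import Counter

        def divide(n_sides, rng):
            """Split an n-gon with a chord joining two distinct, randomly chosen edges."""
            i, j = sorted(rng.sample(range(n_sides), 2))
            k = j - i                        # edges separating the two cut edges
            return k + 2, n_sides - k + 2    # the daughters always total n_sides + 4

        def simulate(generations=12, seed=1):
            rng = random.Random(seed)
            cells = [6]                      # start from a single six-sided cell
            for _ in range(generations):
                cells = [m for n in cells for m in divide(n, rng)]
            return Counter(cells)

        dist = simulate()
        total = sum(dist.values())
        for sides in sorted(dist):
            print(f"{sides}-gons: {dist[sides] / total:.3f}")
        # Different stochastic siting rules give different equilibrium mixes of
        # polygonal types; comparing such mixes to observed epithelia is the
        # kind of question the paper treats analytically.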

  18. Division III: Planetary Systems Sciences

    NASA Astrophysics Data System (ADS)

    Meech, Karen; Valsecchi, Giovanni; Bowell, Edward L.; Bockelee-Morvan, Dominique; Boss, Alan; Cellino, Alberto; Consolmagno, Guy; Fernandez, Julio; Irvine, William; Lazzaro, Daniela; Michel, Patrick; Noll, Keith; Schulz, Rita; Watanabe, Jun-ichi; Yoshikawa, Makoto; Zhu, Jin

    2012-04-01

    Division III, with 1126 members, is the third largest of the 12 IAU Divisions, focusing on subject matter related to the physical study of interplanetary dust, comets, minor planets, satellites, planets, planetary systems and astrobiology. Within the Division are very active working groups that are responsible for planetary system and small body nomenclature, as well as a newly created working group on Near Earth Objects, which was established in order to investigate the requirements for international ground- and/or space-based NEO surveys to characterize 90% of all NEOs with diameters >40 m in order to establish a permanent international NEO Early Warning System.

  19. Environmental Sciences Division annual progress report for period ending September 30, 1982. Environmental Sciences Division Publication No. 2090. [Lead abstract

    SciTech Connect

    Not Available

    1983-04-01

    Separate abstracts were prepared for 12 of the 14 sections of the Environmental Sciences Division annual progress report. The other 2 sections deal with educational activities. The programs discussed deal with advanced fuel energy, toxic substances, environmental impacts of various energy technologies, biomass, low-level radioactive waste management, the global carbon cycle, and aquatic and terrestrial ecology. (KRM)

  20. High Energy Physics Division semiannual report of research activities, July 1, 1991--December 31, 1991

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R.

    1992-04-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1991--December 31, 1991. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  1. High Energy Physics Division semiannual report of research activities, July 1, 1993--December 31, 1993

    SciTech Connect

    Wagner, R.; Moonier, P.; Schoessow, P.; Talaga, R.

    1994-05-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1993--December 31, 1993. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  2. High Energy Physics Division semiannual report of research activities, January 1, 1994--June 30, 1994

    SciTech Connect

    Not Available

    1994-09-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1994-June 30, 1994. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  3. High Energy Physics Division semiannual report of research activities, January 1, 1993--June 30, 1993

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R.

    1993-12-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1993--June 30, 1993. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  5. High Energy Physics Division semiannual report of research activities, January 1, 1992--June 30, 1992

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R.

    1992-11-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1992--June 30, 1992. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  6. High Energy Physics Division semiannual report of research activities, July 1, 1992--December 30, 1992

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R.

    1993-07-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1992--December 30, 1992. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  7. High Energy Physics Division. Semiannual report of research activities, January 1, 1995--June 30, 1995

    SciTech Connect

    Wagner, R.; Schoessow, P.; Talaga, R.

    1995-12-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1995-July 31, 1995. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  8. High Energy Physics Division semiannual report of research activities, July 1, 1994--December 31, 1994

    SciTech Connect

    Wagner, R.; Schoessow, P.; Talaga, R.

    1995-04-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1994--December 31, 1994. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  9. High Energy Physics division semiannual report of research activities, January 1, 1998--June 30, 1998.

    SciTech Connect

    Ayres, D. S.; Berger, E. L.; Blair, R.; Bodwin, G. T.; Drake, G.; Goodman, M. C.; Guarino, V.; Klasen, M.; Lagae, J.-F.; Magill, S.; May, E. N.; Nodulman, L.; Norem, J.; Petrelli, A.; Proudfoot, J.; Repond, J.; Schoessow, P. V.; Sinclair, D. K.; Spinka, H. M.; Stanek, R.; Underwood, D.; Wagner, R.; White, A. R.; Yokosawa, A.; Zachos, C.

    1999-03-09

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1998 through June 30, 1998. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of Division publications and colloquia are included.

  10. High Energy Physics Division semiannual report of research activities July 1, 1997 - December 31, 1997.

    SciTech Connect

    Norem, J.; Rezmer, R.; Schuur, C.; Wagner, R.

    1998-08-11

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period July 1, 1997--December 31, 1997. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of Division publications and colloquia are included.

  11. High Energy Physics Division semiannual report of research activities, January 1, 1996--June 30, 1996

    SciTech Connect

    Norem, J.; Rezmer, R.; Wagner, R.

    1997-07-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1 - June 30, 1996. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. List of Division publications and colloquia are included.

  12. "Structure and dynamics in complex chemical systems: Gaining new insights through recent advances in time-resolved spectroscopies.” ACS Division of Physical Chemistry Symposium presented at the Fall National ACS Meeting in Boston, MA, August 2015

    SciTech Connect

    Crawford, Daniel

    2016-09-26

    8-Session Symposium on STRUCTURE AND DYNAMICS IN COMPLEX CHEMICAL SYSTEMS: GAINING NEW INSIGHTS THROUGH RECENT ADVANCES IN TIME-RESOLVED SPECTROSCOPIES. The intricacy of most chemical, biochemical, and material processes and their applications is underscored by the complex nature of the environments in which they occur. Substantial challenges for building a global understanding of a heterogeneous system include (1) identifying unique signatures associated with specific structural motifs within the heterogeneous distribution, and (2) resolving the significance of each of the multiple time scales involved in both small- and large-scale nuclear reorganization. This symposium focuses on the progress in our understanding of dynamics in complex systems driven by recent innovations in time-resolved spectroscopies and theoretical developments. Such advancement is critical for driving discovery at the molecular level and facilitating new applications. Broad areas of interest include structural relaxation and the impact of structure on dynamics in liquids, interfaces, biochemical systems, materials, and other heterogeneous environments.

  13. An Analysis of Division Commander Lessons Learned

    DTIC Science & Technology

    1989-02-24

    1. U.S. Department of the Army. Division Command Lessons Learned Program: Experiences in Division Command. Carlisle Barracks: U.S. Army Military History Institute, 1985. 2. U.S. Department of the Army. Division Command Lessons Learned Program: Experiences in Division Command. Carlisle Barracks: U.S. Army Military History Institute, 1986. 3. U.S. Department of the Army. Division Command Lessons Learned Program: Experiences in Division Command. Carlisle

  14. E-Division activities report

    SciTech Connect

    Barschall, H.H.

    1984-07-01

    E (Experimental Physics) Division carries out basic and applied research in atomic and nuclear physics, in materials science, and in other areas related to the missions of the Laboratory. Some of the activities are cooperative efforts with other divisions of the Laboratory, and, in a few cases, with other laboratories. Many of the experiments are directly applicable to problems in weapons and energy, some have only potential applied uses, and others are in pure physics. This report presents abstracts of papers published by E (Experimental Physics) Division staff members between July 1983 and June 1984. In addition, it lists the members of the scientific staff of the division, including visitors and students, and some of the assignments of staff members on scientific committees. A brief summary of the budget is included.

  15. Division II: Sun and Heliosphere

    NASA Astrophysics Data System (ADS)

    Webb, David F.; Melrose, Donald B.; Benz, Arnold O.; Bogdan, Thomas J.; Bougeret, Jean-Louis; Klimchuk, James A.; Martinez-Pillet, Valentin

    2007-12-01

    Division II provides a forum for astronomers studying a wide range of problems related to the structure, radiation and activity of the Sun, and its interaction with the Earth and the rest of the solar system.

  16. Division II: Sun and Heliosphere

    NASA Astrophysics Data System (ADS)

    Melrose, Donald B.; Martinez Pillet, Valentin; Webb, David F.; Bougeret, Jean-Louis; Klimchuk, James A.; Kosovichev, Alexander; van Driel-Gesztelyi, Lidia; von Steiger, Rudolf

    2010-05-01

    This report is on the activities of the Division at the General Assembly in Rio de Janeiro. Summaries of scientific activities over the past triennium have been published in Transactions A; see Melrose et al. (2008), Klimchuk et al. (2008), Martinez Pillet et al. (2008) and Bougeret et al. (2008). The business meetings of the three Commissions were incorporated into the business meeting of the Division. This report is based in part on the minutes of the business meeting, provided by the Secretary of the Division, Lidia van Driel-Gesztelyi, and it also includes reports provided by the Presidents of the Commissions (C10, C12, C49) and of the Working Groups (WGs) in the Division.

  17. Division III: Planetary Systems Science

    NASA Astrophysics Data System (ADS)

    Bowell, Edward L. G.; Meech, Karen J.; Williams, Iwan P.; Boss, Alan; Courtin, Régis; Gustafson, Bo Å. S.; Levasseur-Regourd, Anny-Chantal; Mayor, Michel; Spurný, Pavel; Watanabe, Jun-ichi; Consolmagno, Guy J.; Fernández, Julio A.; Huebner, Walter F.; Marov, Mikhail Ya.; Schulz, Rita M.; Valsecchi, Giovanni B.; Witt, Adolf N.

    2010-05-01

    The meeting was opened by Ted Bowell, president, at 11 am. The 2006 Division III meetings were reviewed by Guy Consolmagno, secretary; as the minutes of those meetings have already been published, they were assumed to be approved.

  18. Division III: Planetary System Sciences

    NASA Astrophysics Data System (ADS)

    Williams, Iwan P.; Bowell, Edward L. G.; Marov, Mikhail Ya.; Consolmagno, Guy J.; A'Hearn, Michael F.; Boss, Alan P.; Cruikshank, Dale P.; Levasseur-Regord, Anny-Chantal; Morrison, David; Tinney, Christopher G.

    2007-12-01

    Division III gathers astronomers engaged in the study of a comprehensive range of phenomena in the solar system and its bodies, from the major planets via comets to meteorites and interplanetary dust.

  19. The ASCI Network for SC '98: Dense Wave Division Multiplexing for Distributed and Distance Computing

    SciTech Connect

    Adams, R.L.; Butman, W.; Martinez, L.G.; Pratt, T.J.; Vahle, M.O.

    1999-06-01

    This document highlights the activities of DISCOM's distance computing and communication team at the 1998 Supercomputing conference in Orlando, Florida. This conference is sponsored by the IEEE and ACM. Sandia National Laboratories, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory have participated in this conference for ten years. For the last three years, the three laboratories have shared a joint booth at the conference under the DOE's Accelerated Strategic Computing Initiative (ASCI). The DISCOM communication team uses the forum to demonstrate and focus its communications and networking developments. At SC '98, DISCOM demonstrated the capabilities of Dense Wave Division Multiplexing. We exhibited an OC48 ATM encryptor. We also coordinated the other networking activities within the booth. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support overall strategies in ATM networking.

  20. E-Division activities report

    SciTech Connect

    Barschall, H.H.

    1983-07-01

    This report describes some of the activities in E (Experimental Physics) Division during the past year. E-division carries out research and development in areas related to the missions of the Laboratory. Many of the activities are in pure and applied atomic and nuclear physics and in materials science. In addition, this report describes development work on accelerators and on instrumentation for plasma diagnostics, nitrogen exchange rates in tissue, and breakdown in gases by microwave pulses.

  1. E-Division activities report

    SciTech Connect

    Barschall, H.H.

    1981-07-01

    This report describes some of the activities in E (Experimental Physics) Division during the past year. E-Division carries out research and development in areas related to the missions of the Laboratory. Many of the activities are in pure and applied atomic and nuclear physics and in materials science. In addition, this report describes work on accelerators, microwaves, plasma diagnostics, determination of atmospheric oxygen and of nitrogen in tissue.

  2. NASA Planetary Science Division's Instrument Development Programs, PICASSO and MatISSE

    NASA Astrophysics Data System (ADS)

    Gaier, J. R.

    2016-10-01

    The NASA Planetary Science Division's instrument development programs, Planetary Instrument Concept Advancing Solar System Observations (PICASSO), and Maturation of Instruments for Solar System Exploration Program (MatISSE), are described.

  3. Overview of NASA Glenn Research Center's Communications and Intelligent Systems Division

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.

    2016-01-01

    The Communications and Intelligent Systems Division provides expertise, plans, conducts and directs research and engineering development in the competency fields of advanced communications and intelligent systems technologies for application in current and future aeronautics and space systems.

  4. Accelerator & Fusion Research Division: 1993 Summary of activities

    SciTech Connect

    Chew, J.

    1994-04-01

    The Accelerator and Fusion Research Division (AFRD) is not only one of the largest scientific divisions at LBL, but also one of the most diverse. Major efforts include: (1) investigations in both inertial and magnetic fusion energy; (2) operation of the Advanced Light Source, a state-of-the-art synchrotron radiation facility; (3) exploratory investigations of novel radiation sources and colliders; (4) research and development in superconducting magnets for accelerators and other scientific and industrial applications; and (5) ion beam technology development for nuclear physics and for industrial and biomedical applications. Each of these topics is discussed in detail in this book.

  5. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    SciTech Connect

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test ground for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM
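
    As a hedged illustration of the cellular-decomposition idea behind O(N) molecular dynamics (this is not the authors' EDC code; all names, sizes, and parameters below are illustrative assumptions), the Python sketch below bins particles into cells at least one cutoff wide, so each particle only tests neighbors in its own and adjacent cells instead of all N-1 others:

      import numpy as np

      def linked_cell_pairs(positions, box, cutoff):
          """Find all pairs closer than `cutoff` using an O(N) cell decomposition.

          positions : (N, 3) coordinates in a periodic box
          box       : (3,) box edge lengths
          cutoff    : interaction cutoff; cells are at least this wide
          """
          ncell = np.maximum((box // cutoff).astype(int), 1)       # cells per axis
          cell_of = (positions / (box / ncell)).astype(int) % ncell
          buckets = {}                                             # cell -> particle indices
          for i, c in enumerate(map(tuple, cell_of)):
              buckets.setdefault(c, []).append(i)
          offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                                  for dy in (-1, 0, 1)
                                  for dz in (-1, 0, 1)]
          pairs = []
          for c, members in buckets.items():
              # Deduplicate neighbor cells (matters when a box is under 3 cells wide).
              neighbor_cells = {tuple((np.asarray(c) + off) % ncell) for off in offsets}
              for nb in neighbor_cells:
                  for i in members:
                      for j in buckets.get(nb, []):
                          if j <= i:
                              continue                             # count each pair once
                          d = positions[i] - positions[j]
                          d -= box * np.round(d / box)             # minimum-image convention
                          if d @ d < cutoff * cutoff:
                              pairs.append((i, j))
          return pairs

      # Tiny usage example: 1,000 random particles in a 10 x 10 x 10 box, cutoff 1.5.
      rng = np.random.default_rng(0)
      box = np.array([10.0, 10.0, 10.0])
      pos = rng.random((1000, 3)) * box
      print(len(linked_cell_pairs(pos, box, 1.5)), "pairs within cutoff")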

  6. Seismic Sensors to Supercomputers: Internet Mapping and Computational Tools for Teaching and Learning about Earthquakes and the Structure of the Earth from Seismology

    NASA Astrophysics Data System (ADS)

    Meertens, C. M.; Seber, D.; Hamburger, M.

    2004-12-01

    The Internet has become an integral resource in the classrooms and homes of teachers and students. Widespread Web-access to seismic data and analysis tools enhances opportunities for teaching and learning about earthquakes and the structure of the earth from seismic tomography. We will present an overview and demonstration of the UNAVCO Voyager Java- and Javascript-based mapping tools (jules.unavco.org) and the Cornell University/San Diego Supercomputer Center (www.discoverourearth.org) Java-based data analysis and mapping tools. These map tools, datasets, and related educational websites have been developed and tested by collaborative teams of scientific programmers, research scientists, and educators. Dual-use by research and education communities ensures persistence of the tools and data, motivates on-going development, and encourages fresh content. With these tools are curricular materials and on-going evaluation processes that are essential for an effective application in the classroom. The map tools provide not only seismological data and tomographic models of the earth's interior, but also a wealth of associated map data such as topography, gravity, sea-floor age, plate tectonic motions and strain rates determined from GPS geodesy, seismic hazard maps, stress, and a host of geographical data. These additional datasets help to provide context and enable comparisons leading to an integrated view of the planet and the on-going processes that shape it. Emerging Cyberinfrastructure projects such as the NSF-funded GEON Information Technology Research project (www.geongrid.org) are developing grid/web services, advanced visualization software, distributed databases and data sharing methods, concept-based search mechanisms, and grid-computing resources for earth science and education. These developments in infrastructure seek to extend the access to data and to complex modeling tools from the hands of a few researchers to a much broader set of users. The GEON

  7. Progress and supercomputing in computational fluid dynamics; Proceedings of U.S.-Israel Workshop, Jerusalem, Israel, December 1984

    NASA Technical Reports Server (NTRS)

    Murman, E. M. (Editor); Abarbanel, S. S. (Editor)

    1985-01-01

    Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.

  8. Beyond Cookies: Understanding Various Division Models

    ERIC Educational Resources Information Center

    Jong, Cindy; Magruder, Robin

    2014-01-01

    Having a deeper understanding of division derived from multiple models is of great importance for teachers and students. For example, students will benefit from a greater understanding of division contexts as they study long division, fractions, and division of fractions. The purpose of this article is to build on teachers' and students'…

  10. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  11. Chemical technology division: Annual technical report 1987

    SciTech Connect

    Not Available

    1988-05-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1987 are presented. In this period, CMT conducted research and development in the following areas: (1) high-performance batteries--mainly lithium-alloy/metal sulfide and sodium/sulfur; (2) aqueous batteries (lead-acid, nickel/iron, etc.); (3) advanced fuel cells with molten carbonate or solid oxide electrolytes; (4) coal utilization, including the heat and seed recovery technology for coal-fired magnetohydrodynamics plants and the technology for fluidized-bed combustion; (5) methods for the electromagnetic continuous casting of steel sheet and for the purification of ferrous scrap; (6) methods for recovery of energy from municipal waste and techniques for treatment of hazardous organic waste; (7) nuclear technology related to a process for separating and recovering transuranic elements from nuclear waste, the recovery processes for discharged fuel and the uranium blanket in a sodium-cooled fast reactor, and waste management; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of fluid catalysis for converting small molecules to desired products; materials chemistry for liquids and vapors at high temperatures; interfacial processes of importance to corrosion science, high-temperature superconductivity, and catalysis; the thermochemistry of various minerals; and the geochemical processes responsible for trace-element migration within the earth's crust. The Division continued to be the major user of the technical support provided by the Analytical Chemistry Laboratory at ANL. 54 figs., 9 tabs.

  12. Laboratory Astrophysics Division of the AAS (LAD)

    NASA Technical Reports Server (NTRS)

    Salama, Farid; Drake, R. P.; Federman, S. R.; Haxton, W. C.; Savin, D. W.

    2012-01-01

    The purpose of the Laboratory Astrophysics Division (LAD) is to advance our understanding of the Universe through the promotion of fundamental theoretical and experimental research into the underlying processes that drive the Cosmos. LAD represents all areas of astrophysics and planetary sciences. The first new AAS Division in more than 30 years, the LAD traces its history back to the recommendation from the scientific community via the White Paper from the 2006 NASA-sponsored Laboratory Astrophysics Workshop. This recommendation was endorsed by the Astronomy and Astrophysics Advisory Committee (AAAC), which advises the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the U.S. Department of Energy (DOE) on selected issues within the fields of astronomy and astrophysics that are of mutual interest and concern to the agencies. In January 2007, at the 209th AAS meeting, the AAS Council set up a Steering Committee to formulate Bylaws for a Working Group on Laboratory Astrophysics (WGLA). The AAS Council formally established the WGLA with a five-year mandate in May 2007, at the 210th AAS meeting. From 2008 through 2012, the WGLA annually sponsored Meetings in-a-Meeting at the AAS Summer Meetings. In May 2011, at the 218th AAS meeting, the AAS Council voted to convert the WGLA, at the end of its mandate, into a Division of the AAS and requested draft Bylaws from the Steering Committee. In January 2012, at the 219th AAS Meeting, the AAS Council formally approved the Bylaws and the creation of the LAD. The inaugural gathering and the first business meeting of the LAD were held at the 220th AAS meeting in Anchorage in June 2012. You can learn more about LAD by visiting its website at http://lad.aas.org/ and by subscribing to its mailing list.

  13. Laboratory Astrophysics Division of The AAS (LAD)

    NASA Astrophysics Data System (ADS)

    Salama, Farid; Drake, R. P.; Federman, S. R.; Haxton, W. C.; Savin, D. W.

    2012-10-01

    The purpose of the Laboratory Astrophysics Division (LAD) is to advance our understanding of the Universe through the promotion of fundamental theoretical and experimental research into the underlying processes that drive the Cosmos. LAD represents all areas of astrophysics and planetary sciences. The first new AAS Division in more than 30 years, the LAD traces its history back to the recommendation from the scientific community via the White Paper from the 2006 NASA-sponsored Laboratory Astrophysics Workshop. This recommendation was endorsed by the Astronomy and Astrophysics Advisory Committee (AAAC), which advises the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the U.S. Department of Energy (DOE) on selected issues within the fields of astronomy and astrophysics that are of mutual interest and concern to the agencies. In January 2007, at the 209th AAS meeting, the AAS Council set up a Steering Committee to formulate Bylaws for a Working Group on Laboratory Astrophysics (WGLA). The AAS Council formally established the WGLA with a five-year mandate in May 2007, at the 210th AAS meeting. From 2008 through 2012, the WGLA annually sponsored Meetings in-a-Meeting at the AAS Summer Meetings. In May 2011, at the 218th AAS meeting, the AAS Council voted to convert the WGLA, at the end of its mandate, into a Division of the AAS and requested draft Bylaws from the Steering Committee. In January 2012, at the 219th AAS Meeting, the AAS Council formally approved the Bylaws and the creation of the LAD. The inaugural gathering and the first business meeting of the LAD were held at the 220th AAS meeting in Anchorage in June 2012. You can learn more about LAD by visiting its website at http://lad.aas.org/ and by subscribing to its mailing list.

  14. Chemical Technology Division annual technical report 1989

    SciTech Connect

    Not Available

    1990-03-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1989 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including high-performance batteries (mainly lithium/iron sulfide and sodium/metal chloride), aqueous batteries (lead-acid and nickel/iron), and advanced fuel cells with molten carbonate and solid oxide electrolytes; (2) coal utilization, including the heat and seed recovery technology for coal-fired magnetohydrodynamics plants and the technology for fluidized-bed combustion; (3) methods for recovery of energy from municipal waste and techniques for treatment of hazardous organic waste; (4) nuclear technology related to a process for separating and recovering transuranic elements from nuclear waste and for producing 99Mo from low-enriched uranium targets, the recovery processes for discharged fuel and the uranium blanket in a sodium-cooled fast reactor (the Integral Fast Reactor), and waste management; and (5) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of fluid catalysis for converting small molecules to desired products; materials chemistry for superconducting oxides and associated and ordered solutions at high temperatures; interfacial processes of importance to corrosion science, high-temperature superconductivity, and catalysis; and the geochemical processes responsible for trace-element migration within the earth's crust. The Division continued to be administratively responsible for and the major user of the Analytical Chemistry Laboratory at Argonne National Laboratory (ANL).

  15. Chemical Technology Division annual technical report, 1986

    SciTech Connect

    Not Available

    1987-06-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1986 are presented. In this period, CMT conducted research and development in areas that include the following: (1) high-performance batteries - mainly lithium-alloy/metal sulfide and sodium/sulfur; (2) aqueous batteries (lead-acid, nickel/iron, etc.); (3) advanced fuel cells with molten carbonate or solid oxide electrolytes; (4) coal utilization, including the heat and seed recovery technology for coal-fired magnetohydrodynamics plants, the technology for fluidized-bed combustion, and a novel concept for CO2 recovery from fossil fuel combustion; (5) methods for recovery of energy from municipal waste; (6) methods for the electromagnetic continuous casting of steel sheet; (7) techniques for treatment of hazardous waste such as reactive metals and trichloroethylenes; (8) nuclear technology related to waste management, a process for separating and recovering transuranic elements from nuclear waste, and the recovery processes for discharged fuel and the uranium blanket in a sodium-cooled fast reactor; and (9) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of catalytic hydrogenation and catalytic oxidation; materials chemistry for associated and ordered solutions at high temperatures; interfacial processes of importance to corrosion science, surface science, and catalysis; the thermochemistry of zeolites and related silicates; and the geochemical processes responsible for trace-element migration within the earth's crust. The Division continued to be the major user of the technical support provided by the Analytical Chemistry Laboratory at ANL. 127 refs., 71 figs., 8 tabs.

  16. Why do we need supercomputers to understand the electrocardiographic T wave?

    PubMed

    Potse, Mark; LeBlanc, A-Robert; Vinet, Alain

    2007-07-01

    Propagation of depolarisation and repolarisation in myocardium results from an interplay of membrane potential, transmembrane current, and intercellular current. This process can be represented mathematically with a reaction-diffusion (RD) equation. Solving RD equations for a whole heart requires a supercomputer. Therefore, earlier models used predefined action potential (AP) shapes and fixed propagation velocities. We discuss why RD models are important when T waves are studied. We simulated propagating AP with an RD model of the human heart, which included heterogeneity of membrane properties. Computed activation times served as input to a model that used predefined AP, and to a "hybrid model" that computed AP only during repolarisation. The hybrid model was tested with different spatial resolutions. Electrocardiograms (ECGs) were computed with all three models. Computed QRS complexes were practically identical in all models. T waves in the fixed-AP model had 20 to 40% larger amplitudes in leads V1-V3. The hybrid model produced the same T waves as the RD model at 0.25-mm resolution, but underestimated T-wave amplitude at lower resolutions. Fixed AP waveforms in a forward ECG model lead to exaggerated T waves. Hybrid models require the same high spatial resolution as RD models.
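
    For readers unfamiliar with the reaction-diffusion (RD) formulation discussed above, the sketch below is a minimal, hedged illustration in Python, not the authors' model; the FitzHugh-Nagumo kinetics and all parameter values are stand-ins. It integrates a one-dimensional excitable cable by explicit finite differences. A whole human heart at 0.25-mm resolution with realistic ionic membrane models multiplies this work by many orders of magnitude, which is why such simulations require a supercomputer.

      import numpy as np

      # Minimal 1-D reaction-diffusion cable with FitzHugh-Nagumo-style kinetics.
      # Illustrative, dimensionless parameters; real cardiac RD models use detailed
      # ionic-current membrane models and three-dimensional anatomy.
      nx, dt, dx, steps = 400, 0.05, 0.5, 8000
      D, a, eps, beta, gamma = 1.0, 0.13, 0.01, 0.5, 1.0

      v = np.zeros(nx)      # membrane potential (dimensionless)
      w = np.zeros(nx)      # slow recovery variable
      v[:10] = 1.0          # stimulus at the left end

      for _ in range(steps):
          # Diffusion: second spatial difference with no-flux (sealed) ends.
          lap = np.empty_like(v)
          lap[1:-1] = v[2:] - 2.0 * v[1:-1] + v[:-2]
          lap[0], lap[-1] = v[1] - v[0], v[-2] - v[-1]
          # Reaction: excitable FitzHugh-Nagumo kinetics.
          dv = D * lap / dx**2 + v * (v - a) * (1.0 - v) - w
          dw = eps * (beta * v - gamma * w)
          v += dt * dv
          w += dt * dw

      print("fraction of cable depolarised:", float(np.mean(v > 0.5)))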

  17. Simulating the Dynamics of Earth's Core: Using NCCS Supercomputers Speeds Calculations

    NASA Technical Reports Server (NTRS)

    2002-01-01

    If one wanted to study Earth's core directly, one would have to drill through about 1,800 miles of solid rock to reach the liquid core, keeping the tunnel from collapsing under pressures of more than 1 million atmospheres, and then sink an instrument package to the bottom that could operate at 8,000 F with 10,000 tons of force crushing every square inch of its surface. Even then, several of these tunnels would probably be needed to obtain enough data. Faced with difficult or impossible tasks such as these, scientists use other available sources of information, such as seismology, mineralogy, geomagnetism, geodesy, and, above all, physical principles, to derive a model of the core and study it by running computer simulations. One NASA researcher is doing just that on NCCS computers. Physicist and applied mathematician Weijia Kuang, of the Space Geodesy Branch, and his collaborators at Goddard have what he calls the "second-ever" working, usable, self-consistent, fully dynamic, three-dimensional geodynamic model (see "The Geodynamic Theory"). Kuang runs his model simulations on the supercomputers at the NCCS. He and Jeremy Bloxham, of Harvard University, developed the original version, written in Fortran 77, in 1996.

  18. A user-friendly web portal for T-Coffee on supercomputers

    PubMed Central

    2011-01-01

    Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed with Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload large sets of sequences that cannot be aligned on a single machine, due to memory and execution time constraints, and have them aligned by the parallel version of TC. The web portal provides a user-friendly solution. PMID:21569428

  19. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    SciTech Connect

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.
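
    For orientation only, here is a hedged toy sketch in Python (not PFLOTRAN's actual numerics, which are fully implicit and domain-decomposed through PETSc) of the kind of problem a reactive transport code advances in time: a transport step, here explicit first-order upwind advection in 1-D, operator-split with a kinetic reaction step, here first-order decay of a single aqueous species. All grid sizes, units and rate constants are illustrative assumptions.

      import numpy as np

      # Toy operator-split reactive transport: 1-D advection + first-order decay.
      nx, dx, dt, steps = 200, 1.0, 0.4, 500     # cells, m, days (illustrative)
      velocity, k_decay = 1.0, 0.01              # m/day, 1/day (illustrative)

      conc = np.zeros(nx)                        # aqueous concentration (relative)

      for _ in range(steps):
          # Transport sub-step: explicit first-order upwind advection (CFL = v*dt/dx = 0.4).
          flux = velocity * conc                 # flux leaving each cell to the right
          conc[1:] += dt / dx * (flux[:-1] - flux[1:])
          # Reaction sub-step: first-order kinetic decay, integrated exactly over dt.
          conc *= np.exp(-k_decay * dt)
          # Boundary condition: fixed unit concentration at the inlet cell.
          conc[0] = 1.0

      print("cells with concentration above 0.5:", int(np.sum(conc > 0.5)))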

  20. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    SciTech Connect

    Vranas, P; Soltz, R

    2006-10-19

    In summary, our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine; nevertheless, our measurements retain perfect speedup scaling, demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64-rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter is perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16), while the total lattice, 128 x 128 x 256 x 32 sites, is of a size that has long been a goal of lattice QCD thermodynamic studies. This speed is about five times the speed we quoted in our submission. As we have pointed out in our paper, QCD is notoriously sensitive to network and memory latencies, has a relatively high communication-to-computation ratio that cannot be overlapped on BG/L in virtual node mode, and as an application is in a class of its own. These results are thrilling to us and fulfill a 30-year-long dream for lattice QCD.
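
    As a hedged sketch of why the CG inverter is communication-sensitive: every conjugate gradient iteration needs two inner products, and on a machine such as BG/L each of those becomes a global sum across all nodes (the two global reductions mentioned above). The serial NumPy version below marks where those reductions would sit; it is a generic stand-in, not the lattice QCD Dirac-operator inverter, and the test matrix is an arbitrary illustrative example.

      import numpy as np

      def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=500):
          """Solve A x = b for a symmetric positive-definite operator given as a callable.

          In a lattice QCD code, apply_A would be the Dirac normal operator and each
          np.dot below would be a global sum over the whole machine (the reductions
          per iteration that make CG sensitive to network latency).
          """
          x = np.zeros_like(b)
          r = b - apply_A(x)
          p = r.copy()
          rr = np.dot(r, r)                      # reduction (setup)
          bb = np.dot(b, b)                      # reduction (setup)
          for _ in range(max_iter):
              Ap = apply_A(p)
              alpha = rr / np.dot(p, Ap)         # global sum 1 of 2 per iteration
              x += alpha * p
              r -= alpha * Ap
              rr_new = np.dot(r, r)              # global sum 2 of 2 per iteration
              if rr_new < tol * tol * bb:
                  break
              p = r + (rr_new / rr) * p
              rr = rr_new
          return x

      # Tiny usage example with a random symmetric positive-definite matrix.
      rng = np.random.default_rng(1)
      n = 64
      M = rng.random((n, n))
      A = M @ M.T + n * np.eye(n)
      b = rng.random(n)
      x = conjugate_gradient(lambda v: A @ v, b)
      print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))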