Science.gov

Sample records for advanced supercomputing division

  1. NASA Advanced Supercomputing (NAS) User Services Group

    NASA Technical Reports Server (NTRS)

    Pandori, John; Hamilton, Chris; Niggley, C. E.; Parks, John W. (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides an overview of NAS (NASA Advanced Supercomputing), its goals, and its mainframe computer assets. Also covered are its functions, including systems monitoring and technical support.

  2. Advanced Architectures for Astrophysical Supercomputing

    NASA Astrophysics Data System (ADS)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  3. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    ScienceCinema

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2016-07-12

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  4. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    SciTech Connect

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2014-02-06

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  5. Advanced concepts and missions division publications, 1971

    NASA Technical Reports Server (NTRS)

    1971-01-01

    This report is part of a series of annual papers on Advanced Concepts and Missions Division (ACMD) publications. It contains a bibliography and corresponding abstracts of all papers presented or published by personnel of ACMD during the calendar year 1971. Also included are abstracts of final reports of ACMD-contracted studies performed during this time period.

  6. NASA's supercomputing experience

    NASA Technical Reports Server (NTRS)

    Bailey, F. Ron

    1990-01-01

    A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamic Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.

  7. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  8. An assessment of worldwide supercomputer usage

    SciTech Connect

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  9. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e., only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
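
    The core-hour arithmetic quoted above can be checked directly. Below is a minimal Python sketch; the ~16 injections per core-hour, ~2000 injections per target, 16% target fraction, and 200-hour figure come from the abstract, while the total Kepler target count (~200,000) is only an assumed round figure for illustration.

        INJECTIONS_PER_CORE_HOUR = 16        # from the abstract
        INJECTIONS_PER_TARGET = 2000         # from the abstract ("shallow" experiment)
        TOTAL_KEPLER_TARGETS = 200_000       # assumed round figure, illustration only
        TARGET_FRACTION = 0.16               # from the abstract
        WALL_CLOCK_HOURS = 200               # from the abstract

        core_hours_per_target = INJECTIONS_PER_TARGET / INJECTIONS_PER_CORE_HOUR
        targets = TARGET_FRACTION * TOTAL_KEPLER_TARGETS
        total_core_hours = core_hours_per_target * targets
        cores_needed = total_core_hours / WALL_CLOCK_HOURS

        print(f"{core_hours_per_target:.0f} core-hours per target")          # 125
        print(f"{total_core_hours:,.0f} core-hours in total")                # 4,000,000
        print(f"~{cores_needed:,.0f} cores busy for {WALL_CLOCK_HOURS} h")   # ~20,000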

  10. Technology transfer in the NASA Ames Advanced Life Support Division

    NASA Technical Reports Server (NTRS)

    Connell, Kathleen; Schlater, Nelson; Bilardo, Vincent; Masson, Paul

    1992-01-01

    This paper summarizes a representative set of technology transfer activities which are currently underway in the Advanced Life Support Division of the Ames Research Center. Five specific NASA-funded research or technology development projects are synopsized that are resulting in transfer of technology in one or more of four main 'arenas:' (1) intra-NASA, (2) intra-Federal, (3) NASA - aerospace industry, and (4) aerospace industry - broader economy. Each project is summarized as a case history, specific issues are identified, and recommendations are formulated based on the lessons learned as a result of each project.

  11. Supercomputing in Aerospace

    NASA Technical Reports Server (NTRS)

    Kutler, Paul; Yee, Helen

    1987-01-01

    Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics.

  12. Emerging supercomputer architectures

    SciTech Connect

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near future trends for commercially available high-performance computers with architectures that differ from the mainstream "supercomputer" systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  13. The use of supercomputers in stellar dynamics; Proceedings of the Workshop, Institute for Advanced Study, Princeton, NJ, June 2-4, 1986

    NASA Astrophysics Data System (ADS)

    Hut, Piet; McMillan, Stephen L. W.

    Various papers on the use of supercomputers in stellar dynamics are presented. Individual topics addressed include: dynamical evolution of globular clusters, disk galaxy dynamics on the computer, mathematical models of star cluster dynamics, models of hot stellar systems, supercomputers and large cosmological N-body simulations, the architecture of a homogeneous vector supercomputer, the BBN multiprocessors Butterfly and Monarch, the Connection Machine, a digital Orrery, and the outer solar system for 200 million years. Also considered are: application of smooth particle hydrodynamics theory to lunar origin, multiple mesh techniques for modeling interacting galaxies, numerical experiments on galactic halo formation, numerical integration using explicit Taylor series, multiple-mesh-particle scheme for N-body simulation, direct N-body simulation on supercomputers, vectorization of small-N integrators, N-body integrations using supercomputers, a gridless Fourier method, techniques and tricks for N-body computation.

  14. Floating point arithmetic in future supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
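
    The 64-bit layout described above (1 sign bit, 11 exponent bits, 52 mantissa bits) is the standard IEEE 754 binary64 format. The small Python sketch below, not tied to any NASA code, simply pulls the three fields out of a double to make the word layout concrete.

        import struct

        def decompose_double(x: float):
            """Split an IEEE 754 binary64 value into its sign, exponent, and
            mantissa fields (1 + 11 + 52 bits), the layout described above."""
            bits = struct.unpack(">Q", struct.pack(">d", x))[0]
            sign = bits >> 63
            exponent = (bits >> 52) & 0x7FF          # 11-bit biased exponent
            mantissa = bits & ((1 << 52) - 1)        # 52-bit fraction field
            return sign, exponent, mantissa

        sign, exp, frac = decompose_double(-6.25)
        # sign bit, unbiased exponent, and fraction bits: 1 2 0x9000000000000
        print(sign, exp - 1023, hex(frac))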

  15. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  16. Energy Efficient Supercomputing

    SciTech Connect

    Antypas, Katie

    2014-10-17

    Katie Antypas, head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  17. Supercomputing the Climate

    NASA Video Gallery

    Goddard Space Flight Center is the home of a state-of-the-art supercomputing facility called the NASA Center for Climate Simulation (NCCS) that is capable of running highly complex models to help s...

  18. Reversible logic for supercomputing.

    SciTech Connect

    DeBenedictis, Erik P.

    2005-05-01

    This paper is about making reversible logic a reality for supercomputing. Reversible logic offers a way to exceed certain basic limits on the performance of computers, yet a powerful case will have to be made to justify its substantial development expense. This paper explores the limits of current, irreversible logic for supercomputers, thus forming a threshold above which reversible logic is the only solution. Problems above this threshold are discussed, with the science and mitigation of global warming being discussed in detail. To further develop the idea of using reversible logic in supercomputing, a design for a 1 Zettaflops supercomputer as required for addressing global climate warming is presented. However, to create such a design requires deviations from the mainstream of both the software for climate simulation and research directions of reversible logic. These deviations provide direction on how to make reversible logic practical.

  19. Energy Efficient Supercomputing

    ScienceCinema

    Antypas, Katie

    2016-07-12

    Katie Antypas, head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  20. Advances in nickel hydrogen technology at Yardney Battery Division

    NASA Technical Reports Server (NTRS)

    Bentley, J. G.; Hall, A. M.

    1987-01-01

    The current major activities in nickel hydrogen technology being addressed at Yardney Battery Division are outlined. Five basic topics are covered: an update on life cycle testing of ManTech 50 AH NiH2 cells in the LEO regime; an overview of the Air Force/industry briefing; nickel electrode process upgrading; 4.5 inch cell development; and bipolar NiH2 battery development.

  1. A training program for scientific supercomputing users

    SciTech Connect

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090-600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  2. Super problems for supercomputers

    SciTech Connect

    Peterson, I.

    1984-01-01

    This article discusses the ways in which simulations performed on high-speed computers combined with graphics are replacing experiments. Supercomputers ranging from the large, general-purpose Cray-1 and the CYBER 205 to machines designed for a specific type of calculation, are becoming essential research tools in many fields of science and engineering. Topics considered include crystal growth, aerodynamic design, molecular seismology, computer graphics, membrane design, quantum mechanical calculations, Soviet "nuclear winter" maps (modeling climate in a post-nuclear-war environment), and estimating nuclear forest fires. It is pointed out that the $15 million required to buy and support one supercomputer has limited its use in industry and universities.

  3. Energy sciences supercomputing 1990

    SciTech Connect

    Mirin, A.A.; Kaiper, G.V.

    1990-01-01

    This report contains papers on the following topics: meeting the computational challenge; lattice gauge theory: probing the standard model; supercomputing for the superconducting super collider; an overview of ongoing studies in climate model diagnosis and intercomparison; MHD simulation of the fueling of a tokamak fusion reactor through the injection of compact toroids; gyrokinetic particle simulation of tokamak plasmas; analyzing chaos: a visual essay in nonlinear dynamics; supercomputing and research in theoretical chemistry; Monte Carlo simulations of light nuclei; parallel processing; and scientists of the future: learning by doing.

  4. Green Supercomputing at Argonne

    ScienceCinema

    Beckman, Pete

    2016-07-12

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently.

  5. Supercomputers: Super-polluters?

    SciTech Connect

    Mills, Evan; Mills, Evan; Tschudi, William; Shalf, John; Simon, Horst

    2008-04-08

    Thanks to imperatives for limiting waste heat, maximizing performance, and controlling operating cost, energy efficiency has been a driving force in the evolution of supercomputers. The challenge going forward will be to extend these gains to offset the steeply rising demands for computing services and performance.

  6. Ice Storm Supercomputer

    SciTech Connect

    2009-01-01

    "A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed 'Ice Storm,' this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen." For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.

  7. Ice Storm Supercomputer

    ScienceCinema

    None

    2016-07-12

    "A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed 'Ice Storm,' this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen." For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.

  8. Predicting Hurricanes with Supercomputers

    SciTech Connect

    2010-01-01

    Hurricane Emily, formed in the Atlantic Ocean on July 10, 2005, was the strongest hurricane ever to form before August. By checking computer models against the actual path of the storm, researchers can improve hurricane prediction. In 2010, NOAA researchers were awarded 25 million processor-hours on Argonne's BlueGene/P supercomputer for the project. Read more at http://go.usa.gov/OLh

  9. Advanced Reactor Safety Research Division. Quarterly progress report, April 1-June 30, 1980

    SciTech Connect

    Romano, A.J.

    1980-01-01

    The Advanced Reactor Safety Research Programs Quarterly Progress Report describes current activities and technical progress in the programs at Brookhaven National Laboratory sponsored by the USNRC Division of Reactor Safety Research. The projects reported each quarter are the following: HTGR safety evaluation, SSC Code Development, LMFBR Safety Experiments, and Fast Reactor Safety Code Validation.

  10. Advanced Reactor Safety Research Division. Quarterly progress report, January 1-March 31, 1980

    SciTech Connect

    Agrawal, A.K.; Cerbone, R.J.; Sastre, C.

    1980-06-01

    The Advanced Reactor Safety Research Programs quarterly progress report describes current activities and technical progress in the programs at Brookhaven National Laboratory sponsored by the USNRC Division of Reactor Safety Research. The projects reported each quarter are the following: HTGR Safety Evaluation, SSC Code Development, LMFBR Safety Experiments, and Fast Reactor Safety Code Validation.

  11. What Is the Relationship between Emotional Intelligence and Administrative Advancement in an Urban School Division?

    ERIC Educational Resources Information Center

    Roberson, Elizabeth W.

    2010-01-01

    The purpose of this research was to study the relationship between emotional intelligence and administrative advancement in one urban school division; however, data acquired in the course of study may have revealed areas that could be further developed in future studies to increase the efficacy of principals and, perhaps, to inform the selection…

  12. Beowulf Supercomputers: Scope and Trends

    NASA Astrophysics Data System (ADS)

    Ahmed, Maqsood; Saeed, M. Alam; Ahmed, Rashid; Fazal-e-Aleem

    2005-03-01

    As we enter the twenty-first century, a century of information technology, the need for supercomputing is expanding in many fields of science and technology. With the availability of low-cost commodity hardware and free software, Beowulf-style supercomputers have met this need for the scientific community. These supercomputers help solve complex problems and store, process, and manage the huge amounts of scientific data available all over the globe. In this paper we discuss the functioning of a Beowulf-style supercomputer, its scope, and future trends.

  13. Green Supercomputing at Argonne

    ScienceCinema

    Beckman, Pete

    2016-07-12

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/

  14. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
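
    As an illustration of the loop-restructuring and blocking techniques the abstract mentions, here is a hedged Python sketch of a blocked matrix multiply. The function name and block size are hypothetical, and this is not the authors' dynamic-programming code; it only shows the general idea of operating on sub-blocks at a time.

        import numpy as np

        def blocked_matmul(A, B, block=64):
            """Illustrative cache-blocking: accumulate C = A @ B one block panel
            at a time. (NumPy's A @ B is already blocked internally; this only
            demonstrates the loop-restructuring idea named in the abstract.)"""
            n, k = A.shape
            k2, m = B.shape
            assert k == k2
            C = np.zeros((n, m))
            for i in range(0, n, block):
                for j in range(0, m, block):
                    for p in range(0, k, block):
                        C[i:i+block, j:j+block] += (
                            A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
                        )
            return C

        A, B = np.random.rand(200, 300), np.random.rand(300, 150)
        assert np.allclose(blocked_matmul(A, B), A @ B)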

  15. Overview of the I-way : wide area visual supercomputing.

    SciTech Connect

    DeFanti, T. A.; Foster, I.; Papka, M. E.; Stevens, R.; Kuhfuss, T.; Univ. of Illinois at Chicago

    1996-01-01

    This paper discusses the I-WAY project and provides an overview of the papers in this issue of IJSA. The I-WAY is an experimental environment for building distributed virtual reality applications and for exploring issues of distributed wide area resource management and scheduling. The goal of the I-WAY project is to enable researchers to use multiple internetworked supercomputers and advanced visualization systems to conduct very large-scale computations. By connecting a dozen ATM testbeds, seventeen supercomputer centers, five virtual reality research sites, and over sixty applications groups, the I-WAY project has created an extremely diverse wide area environment for exploring advanced applications. This environment has provided a glimpse of the future for advanced scientific and engineering computing. The I-WAY, or Information Wide Area Year, was a year-long effort to link existing national testbeds based on ATM (asynchronous transfer mode) to interconnect supercomputer centers, virtual reality (VR) research locations, and applications development sites. The I-WAY was successfully demonstrated at Supercomputing '95 and included over sixty distributed supercomputing applications that used a variety of supercomputing resources and VR displays.

  16. Enabling department-scale supercomputing

    SciTech Connect

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  17. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  18. Supercomputing Sheds Light on the Dark Universe

    SciTech Connect

    Salman Habib

    2012-11-15

    At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named a finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.

  19. Sandia's network for Supercomputer '96: Linking supercomputers in a wide area Asynchronous Transfer Mode (ATM) network

    SciTech Connect

    Pratt, T.J.; Martinez, L.G.; Vahle, M.O.

    1997-04-01

    The advanced networking department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past several years as a forum to demonstrate and focus communication and networking developments. At Supercomputing 96, for the first time, Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory combined their Supercomputing 96 activities within a single research booth under the ASO banner. Sandia provided the network design and coordinated the networking activities within the booth. At Supercomputing 96, Sandia elected to demonstrate wide area network connected Massively Parallel Processors, to demonstrate the functionality and capability of Sandia's new edge architecture, to demonstrate inter-continental collaboration tools, and to demonstrate ATM video capabilities. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  20. Supercomputer debugging workshop 1991 proceedings

    SciTech Connect

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  1. Supercomputer debugging workshop 1991 proceedings

    SciTech Connect

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  2. Advancing Research and Practice: The Revised APA Division 30 Definition of Hypnosis.

    PubMed

    Elkins, Gary R; Barabasz, Arreed F; Council, James R; Spiegel, David

    2015-04-01

    This article describes the history, rationale, and guidelines for developing a new definition of hypnosis by the Society of Psychological Hypnosis, Division 30 of the American Psychological Association. The definition was developed with the aim of being concise, being heuristic, and allowing for alternative theories of the mechanisms (to be determined in empirical scientific study). The definition of hypnosis is presented as well as definitions of the following related terms: hypnotic induction, hypnotizability, and hypnotherapy. The implications for advancing research and practice are discussed. The definitions are presented within the article.

  3. Advancing research and practice: the revised APA Division 30 definition of hypnosis.

    PubMed

    Elkins, Gary R; Barabasz, Arreed F; Council, James R; Spiegel, David

    2015-01-01

    This article describes the history, rationale, and guidelines for developing a new definition of hypnosis by the Society of Psychological Hypnosis, Division 30 of the American Psychological Association. The definition was developed with the aim of being concise, heuristic, and allowing for alternative theories of the mechanisms (to be determined in empirical scientific study). The definition of hypnosis is presented as well as definitions of the following related terms: hypnotic induction, hypnotizability, and hypnotherapy. The implications for advancing research and practice are discussed. The definitions are presented within the article.

  4. Storage needs in future supercomputer environments

    NASA Technical Reports Server (NTRS)

    Coleman, Sam

    1992-01-01

    The Lawrence Livermore National Laboratory (LLNL) is a Department of Energy contractor, managed by the University of California since 1952. Major projects at the Laboratory include the Strategic Defense Initiative, nuclear weapon design, magnetic and laser fusion, laser isotope separation, and weather modeling. The Laboratory employs about 8,000 people. There are two major computer centers: The Livermore Computer Center and the National Energy Research Supercomputer Center. As we increase the computing capacity of LLNL systems and develop new applications, the need for archival capacity will increase. Rather than quantify that increase, I will discuss the hardware and software architectures that we will need to support advanced applications.

  5. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing and Modernization Program (HPCMP) and the NASA Advanced Supercomputing Division (NAS), a study is conducted to assess the role of supercomputers in the computational aeroelasticity of aerospace vehicles. The study is mostly based on responses to a web-based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  6. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2016-07-12

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  7. AICD -- Advanced Industrial Concepts Division Biological and Chemical Technologies Research Program. 1993 Annual summary report

    SciTech Connect

    Petersen, G.; Bair, K.; Ross, J.

    1994-03-01

    The annual summary report presents the fiscal year (FY) 1993 research activities and accomplishments for the United States Department of Energy (DOE) Biological and Chemical Technologies Research (BCTR) Program of the Advanced Industrial Concepts Division (AICD). This AICD program resides within the Office of Industrial Technologies (OIT) of the Office of Energy Efficiency and Renewable Energy (EE). The annual summary report for 1993 (ASR 93) contains the following: A program description (including BCTR program mission statement, historical background, relevance, goals and objectives), program structure and organization, selected technical and programmatic highlights for 1993, detailed descriptions of individual projects, a listing of program output, including a bibliography of published work, patents, and awards arising from work supported by BCTR.

  8. Super technology for tomorrow's supercomputers

    SciTech Connect

    Steiner, L.K.; Tate, D.P.

    1982-01-01

    In the past, it has been possible to achieve significant performance improvements in large computers simply by using newer, faster, or higher density components. However, as the rate of component improvement has slowed, we are being forced to rely on system architectural change to gain performance improvement. The authors examine the technologies required to design more parallel processing features into future supercomputers. 3 references.

  9. Tutorial: Parallel Simulation on Supercomputers

    SciTech Connect

    Perumalla, Kalyan S

    2012-01-01

    This tutorial introduces typical hardware and software characteristics of extant and emerging supercomputing platforms, and presents issues and solutions in executing large-scale parallel discrete event simulation scenarios on such high performance computing systems. Covered topics include synchronization, model organization, example applications, and observed performance from illustrative large-scale runs.

  10. Multi-petascale highly efficient parallel supercomputer

    DOEpatents

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power, and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis, and preferably adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
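
    To make the five-dimensional torus interconnect concrete, here is a small, hypothetical Python sketch of nearest-neighbor addressing with wrap-around in every dimension. The example node shape is invented for illustration and is not the machine's actual partition geometry.

        def torus_neighbors(coord, dims):
            """Return the 2 * len(dims) nearest neighbors of a node on a torus
            with periodic (wrap-around) boundaries in every dimension. For the
            5-D torus described in the abstract, dims has five entries."""
            neighbors = []
            for axis in range(len(dims)):
                for step in (-1, +1):
                    n = list(coord)
                    n[axis] = (n[axis] + step) % dims[axis]
                    neighbors.append(tuple(n))
            return neighbors

        # Example (invented) 5-D shape: every node has exactly 10 neighbors.
        print(torus_neighbors((0, 0, 0, 0, 0), dims=(4, 4, 4, 8, 2)))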

  11. Putting the super in supercomputers

    SciTech Connect

    Schulbach, C.

    1985-08-01

    Computers used for numerical simulations of physical phenomena, e.g., flowfields, meteorology, structural analysis, etc., replace physical experiments that are too expensive or impossible to perform. The problems considered continually become more complex and thus demand faster processing times to do all necessary computations. The gains that component technologies contribute to computer speed are leveling off, leaving new architectures and programming as the only currently viable means of increasing speed. Parallel computation, whether in the form of array processors, assembly-line processing, or multiprocessors, is being explored using existing microprocessor technologies. Slower hardware configurations can also be made equivalent to faster supercomputers by economical programming. The availability of rudimentary parallel-architecture supercomputers for general industrial use is increasing. Scientific applications continue to drive the development of more sophisticated parallel machines.

  12. A Long History of Supercomputing

    SciTech Connect

    Grider, Gary

    2016-11-16

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  13. Seismic signal processing on heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPUs) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient; however, they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is the design of a prototype of such a library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that
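
    As a rough sketch of the correlation kernel at the heart of ambient noise interferometry (not the prototype library described above), the following Python/NumPy function computes a frequency-domain cross-correlation of two noise traces over a limited lag window. The trace lengths, sampling rate, and lag window in the usage example are assumed purely for illustration.

        import numpy as np

        def noise_cross_correlation(trace_a, trace_b, max_lag):
            """Frequency-domain cross-correlation of two ambient-noise traces,
            returned for lags in [-max_lag, +max_lag] samples. This is the basic
            kernel that accelerated implementations parallelize over many
            station pairs and time windows."""
            n = len(trace_a) + len(trace_b) - 1
            nfft = 1 << (n - 1).bit_length()                  # next power of two
            spec = np.fft.rfft(trace_a, nfft) * np.conj(np.fft.rfft(trace_b, nfft))
            cc = np.fft.irfft(spec, nfft)
            # Negative lags wrap to the end of the array; stitch them in front.
            return np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))

        a, b = np.random.randn(86400), np.random.randn(86400)  # e.g. one day at 1 Hz
        cc = noise_cross_correlation(a, b, max_lag=600)
        print(cc.shape)  # (1201,)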

  14. TOP500 Supercomputers for June 2005

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  15. TOP500 Supercomputers for June 2004

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  16. A Long History of Supercomputing

    ScienceCinema

    Grider, Gary

    2016-11-30

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  17. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer Cray T3D. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.
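
    Below is a minimal, generic Python sketch of the "explicit method with variable time step" idea, using simple step doubling for error control on a scalar relaxation ODE. It is only an illustration of the strategy under assumed tolerances and a made-up test equation, not the Luo-Rudy implementation on the Cray T3D.

        import numpy as np

        def explicit_variable_step(f, y0, t_end, dt0=1e-3, tol=1e-5):
            """Forward-Euler integrator with a step-doubling error estimate:
            take one full step and two half steps, compare, then accept and
            grow the step or reject and shrink it."""
            t, y, dt = 0.0, y0, dt0
            ts, ys = [t], [y]
            while t < t_end:
                dt = min(dt, t_end - t)
                y_full = y + dt * f(t, y)                        # one full step
                y_half = y + 0.5 * dt * f(t, y)                  # two half steps
                y_two = y_half + 0.5 * dt * f(t + 0.5 * dt, y_half)
                if abs(y_two - y_full) <= tol:
                    t, y = t + dt, y_two
                    ts.append(t); ys.append(y)
                    dt *= 1.5                                    # accept, grow step
                else:
                    dt *= 0.5                                    # reject, shrink step
            return np.array(ts), np.array(ys)

        # Hypothetical fast-relaxing membrane-like variable: dy/dt = -50*(y - 1).
        ts, ys = explicit_variable_step(lambda t, y: -50.0 * (y - 1.0), y0=0.0, t_end=1.0)
        print(len(ts), ys[-1])  # number of accepted steps; final value close to 1.0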

  18. Naval Surface Warfare Center Dahlgren Division Technical Digest. Advanced Materials Technology

    DTIC Science & Technology

    1993-09-01

    Dahlgren Division has demonstrated that certain lanthanide elements, e.g., terbium (Tb), dysprosium (Dy), and samarium (Sm), used individually or in... values of magnetostriction vs. applied field. Temperature = 77 K. ... Dahlgren Division alloys with x ~ 0.3. Samarium compounds, observed until... coil of wire. We place the coil in a ferrite cup, where skin depth, δ, is in inches and resistivity, ρ, ..., to concentrate the electromagnetic field.

  19. 61 FR 41181 - Vector Supercomputers From Japan

    Federal Register 2010, 2011, 2012, 2013, 2014

    1996-08-07

    ... From the Federal Register Online via the Government Publishing Office. INTERNATIONAL TRADE COMMISSION. Vector Supercomputers From Japan. AGENCY: United States International Trade Commission. ACTION: ..., by reason of imports from Japan of vector supercomputers that are alleged to be sold in the...

  20. Low Cost Supercomputer for Applications in Physics

    NASA Astrophysics Data System (ADS)

    Ahmed, Maqsood; Ahmed, Rashid; Saeed, M. Alam; Rashid, Haris; Fazal-e-Aleem

    2007-02-01

    Using parallel processing techniques and commodity hardware, Beowulf supercomputers can be built at much lower cost. Research organizations and educational institutions are using this technique to build their own high performance clusters. In this paper we discuss the architecture and design of a Beowulf supercomputer and our own experience of building the BURRAQ cluster.

  1. Radioactive waste shipments to Hanford retrievable storage from Westinghouse Advanced Reactors and Nuclear Fuels Divisions, Cheswick, Pennsylvania

    SciTech Connect

    Duncan, D.; Pottmeyer, J.A.; Weyns, M.I.; Dicenso, K.D.; DeLorenzo, D.S.

    1994-04-01

    During the next two decades the transuranic (TRU) waste now stored in the burial trenches and storage facilities at the Hanford Site in southeastern Washington State is to be retrieved, processed at the Waste Receiving and Processing Facility, and shipped to the Waste Isolation Pilot Plant (WIPP), near Carlsbad, New Mexico, for final disposal. Approximately 5.7 percent of the TRU waste to be retrieved for shipment to WIPP was generated by the decontamination and decommissioning (D&D) of the Westinghouse Advanced Reactors Division (WARD) and the Westinghouse Nuclear Fuels Division (WNFD) in Cheswick, Pennsylvania, and shipped to the Hanford Site for storage. This report characterizes these radioactive solid wastes using process knowledge, existing records, and oral history interviews.

  2. Supercomputing 2002: NAS Demo Abstracts

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor)

    2002-01-01

    The hyperwall is a new concept in visual supercomputing, conceived and developed by the NAS Exploratory Computing Group. The hyperwall will allow simultaneous and coordinated visualization and interaction of an array of processes, such as the computations of a parameter study or the parallel evolutions of a genetic algorithm population. Making over 65 million pixels available to the user, the hyperwall will enable and elicit qualitatively new ways of leveraging computers to accomplish science. It is currently still unclear whether we will be able to transport the hyperwall to SC02. The crucial display frame still has not been completed by the metal fabrication shop, although they promised an August delivery. Also, we are still working the fragile node issue, which may require transplantation of the compute nodes from the present 2U cases into 3U cases. This modification will increase the present 3-rack configuration to 5 racks.

  3. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
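
    For reference, a compact, unpreconditioned conjugate gradient in Python/NumPy is sketched below. It is the textbook algorithm whose matrix-vector products and vector updates are the kernels such surveys target for vectorization and parallelization, not code from the survey itself; the test matrix is an assumed toy problem.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Textbook conjugate gradient for a symmetric positive definite A.
            Each iteration costs one matrix-vector product plus a few vector
            updates and dot products."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        # Toy SPD test problem: a 1-D Laplacian.
        n = 100
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = conjugate_gradient(A, b)
        print(np.linalg.norm(A @ x - b))  # residual near the tolerance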

  4. Spack: the Supercomputing Package Manager

    SciTech Connect

    Gamblin, T.

    2013-11-09

    The HPC software ecosystem is growing larger and more complex, but software distribution mechanisms have not kept up with this trend. Tools, libraries, and applications need to run on multiple platforms and build with multiple compilers. Increasingly, packages leverage common software components, and building any one component requires building all of its dependencies. In HPC environments, ABI-incompatible interfaces (like MPI), binary-incompatible compilers, and cross-compiled environments converge to make the build process a combinatoric nightmare. This obstacle deters many users from adopting useful tools, and others waste countless hours building and rebuilding tools. Many package managers exist to solve these problems for typical desktop environments, but none suits the unique needs of supercomputing facilities or users. To address these problems, we have built Spack, a package manager that eases the task of managing software for end-users, across multiple platforms, package versions, compilers, and ABI incompatibilities.

  5. What does Titan tell us about preparing for exascale supercomputers?

    NASA Astrophysics Data System (ADS)

    Wells, Jack

    2014-04-01

    Significant advances in computational astrophysics have occurred over the past half-decade with the appearance of supercomputers with petascale performance capabilities and beyond. Significant technology developments are also occurring beyond traditional CPU-based architectures, in response to the growing energy requirements of these architectures, including graphical processing units (GPUs), Cell processors, and other highly parallel, many-core processor technologies. There have been significant efforts to exploit these resources in the computational astrophysics research community. This talk will focus on recent results from breakthrough astrophysics simulations made possible by modern application software and leadership-class compute and data resources, give prospects and opportunities for today's petascale problems, and highlight the computation needs of astrophysics requiring an order of magnitude greater compute and data capability. We will focus on the early outcomes from the Department of Energy's Titan supercomputer managed by the Oak Ridge Leadership Computing Facility. Titan's Cray XK7 architecture has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. With its hybrid, accelerated architecture, Titan allows advanced scientific applications to reach speeds exceeding 20 petaflops with a marginal increase in electrical power demand over the previous-generation leadership-class supercomputer.

  6. TOP500 Supercomputers for November 2003

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  7. The Pawsey Supercomputer geothermal cooling project

    NASA Astrophysics Data System (ADS)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  8. Proceedings of the first energy research power supercomputer users symposium

    SciTech Connect

    Not Available

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, goes beyond merely speeding up computations. Today the work often directly contributes to the advancement of conceptual developments in their fields, and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  9. Multiple crossbar network: Integrated supercomputing framework

    SciTech Connect

    Hoebelheinrich, R.

    1989-01-01

    At Los Alamos National Laboratory, site of one of the world's most powerful scientific supercomputing facilities, a prototype network for an environment that links supercomputers and workstations is being developed. Driven by a need to provide graphics data at movie rates across a network from a Cray supercomputer to a Sun scientific workstation, the network is called the Multiple Crossbar Network (MCN). It is intended to be a coarsely grained, loosely coupled, general-purpose interconnection network that will vastly increase the speed at which supercomputers communicate with each other in large networks. The components of the network are described, as well as work done in collaboration with vendors who are interested in providing commercial products. 9 refs.

  10. Funding boost for Santos Dumont supercomputer

    NASA Astrophysics Data System (ADS)

    Leite Vieira, Cássio

    2016-09-01

    The fastest supercomputer in Latin America returned to full usage last month following two months of minimal operations after Gilberto Kassab, Brazil's new science minister, agreed to plug a R$4.6m (1.5m) funding gap.

  11. Graphics Flip Cube for Supercomputing 1998

    NASA Technical Reports Server (NTRS)

    Gong, Chris; Reid, Lisa (Technical Monitor)

    1998-01-01

    Flip cube (constructed of heavy plastic) displays 11 graphics representing current projects or demos from 5 NASA centers participating in Supercomputing '98 (SC98). Included with the images are the URLs and names of the NASA centers.

  12. Supercomputing activities at the SSC Laboratory

    SciTech Connect

    Yan, Y.; Bourianoff, G.

    1991-09-01

    Supercomputers are used to simulate and track particle motion around the collider rings and the associated energy boosters of the Superconducting Super Collider (SSC). These numerical studies will aid in determining the best design for the SSC.

  13. Introducing Mira, Argonne's Next-Generation Supercomputer

    SciTech Connect

    2013-03-19

    Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.

  14. Simulating performance sensitivity of supercomputer job parameters.

    SciTech Connect

    Clearwater, Scott Harvey; Kleban, Stephen David

    2003-03-01

    We report on the use of a supercomputer simulation to study the performance sensitivity to systematic changes in the job parameters of run time, number of CPUs, and interarrival time. We also examine the effect of changes in share allocation and service ratio for job prioritization under a Fair Share queuing algorithm to see the effect on facility figures of merit. We used log data from the ASCI supercomputer Blue Mountain and the ASCI simulator BIRMinator to perform this study. The key finding is that the performance of the supercomputer is quite sensitive to all the job parameters, with the interarrival rate of the jobs being the most sensitive (especially at the highest rates) and run time the least sensitive job parameter with respect to utilization and rapid turnaround. We also find that this facility is running near its maximum practical utilization. Finally, we show the importance of the use of simulation in understanding the performance sensitivity of a supercomputer.
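
    A minimal event-driven sketch of this kind of sensitivity experiment, assuming a toy fair-share priority; the scheduler, share weights, and job distributions below are illustrative placeholders, not the BIRMinator simulator or the Blue Mountain workload.

      import random
      from dataclasses import dataclass

      TOTAL_CPUS = 1024   # hypothetical machine size

      @dataclass
      class Job:
          arrival: float
          runtime: float
          cpus: int
          group: str

      def simulate(jobs, shares, horizon):
          """Toy scheduler: groups that have used less than their share go first."""
          t, free, queue, running = 0.0, TOTAL_CPUS, [], []
          used = {g: 1e-9 for g in shares}          # avoid divide-by-zero
          busy_cpu_time = 0.0
          pending = sorted(jobs, key=lambda j: j.arrival)
          while t < horizon and (pending or queue or running):
              while pending and pending[0].arrival <= t:     # admit arrivals
                  queue.append(pending.pop(0))
              for job in [r for r in running if r[0] <= t]:  # retire finished jobs
                  running.remove(job)
                  free += job[1]
              queue.sort(key=lambda j: used[j.group] / shares[j.group])
              for j in list(queue):                          # dispatch in priority order
                  if j.cpus <= free:
                      queue.remove(j)
                      free -= j.cpus
                      running.append((t + j.runtime, j.cpus))
                      used[j.group] += j.runtime * j.cpus
                      busy_cpu_time += j.runtime * j.cpus
              t += 1.0                                       # coarse time step
          return busy_cpu_time / (TOTAL_CPUS * horizon)      # approximate utilization

      random.seed(0)
      t_arr, jobs = 0.0, []
      for _ in range(500):
          t_arr += random.expovariate(1 / 5.0)               # interarrival time
          jobs.append(Job(t_arr, random.expovariate(1 / 30.0),
                          random.choice([32, 64, 128]), random.choice("AB")))
      print("utilization:", simulate(jobs, {"A": 0.7, "B": 0.3}, horizon=10_000.0))

    Rescaling the interarrival or runtime draws and re-running gives the kind of sensitivity curves the study describes.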

  15. Simulating Galactic Winds on Supercomputers

    NASA Astrophysics Data System (ADS)

    Schneider, Evan

    2017-01-01

    Galactic winds are a ubiquitous feature of rapidly star-forming galaxies. Observations of nearby galaxies have shown that winds are complex, multiphase phenomena, comprised of outflowing gas at a large range of densities, temperatures, and velocities. Describing how starburst-driven outflows originate, evolve, and affect the circumgalactic medium and gas supply of galaxies is an important challenge for theories of galaxy evolution. In this talk, I will discuss how we are using a new hydrodynamics code, Cholla, to improve our understanding of galactic winds. Cholla is a massively parallel, GPU-based code that takes advantage of specialized hardware on the newest generation of supercomputers. With Cholla, we can perform large, three-dimensional simulations of multiphase outflows, allowing us to track the coupling of mass and momentum between gas phases across hundreds of parsecs at sub-parsec resolution. The results of our recent simulations demonstrate that the evolution of cool gas in galactic winds is highly dependent on the initial structure of embedded clouds. In particular, we find that turbulent density structures lead to more efficient mass transfer from cool to hot phases of the wind. I will discuss the implications of our results both for the incorporation of winds into cosmological simulations, and for interpretations of observed multiphase winds and the circumgalactic medium of nearby galaxies.

  16. TOP500 Supercomputers for November 2004

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  17. Misleading Performance Reporting in the Supercomputing Field

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Kutler, Paul (Technical Monitor)

    1992-01-01

    In a previous humorous note, I outlined twelve ways in which performance figures for scientific supercomputers can be distorted. In this paper, the problem of potentially misleading performance reporting is discussed in detail. Included are some examples that have appeared in recent published scientific papers. This paper also includes some proposed guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion in the field of supercomputing.

  18. Taking ASCI supercomputing to the end game.

    SciTech Connect

    DeBenedictis, Erik P.

    2004-03-01

    The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zettaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancement to microprocessor functionality and the power-efficiency of both the processor and memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate space; computing with reversible logic, analog computers, and other ways to address stockpile stewardship are outside the scope of this report.

  19. Academic and Career Advancement for Black Male Athletes at NCAA Division I Institutions

    ERIC Educational Resources Information Center

    Baker, Ashley R.; Hawkins, Billy J.

    2016-01-01

    This chapter examines the structural arrangements and challenges many Black male athletes encounter as a result of their use of sport for upward social mobility. Recommendations to enhance their preparation and advancement are provided.

  20. A modified orthodontic protocol for advanced periodontal disease in Class II division 1 malocclusion.

    PubMed

    Janson, Marcos; Janson, Guilherme; Murillo-Goizueta, Oscar Edwin Francisco

    2011-04-01

    An interdisciplinary approach is often the best option for achieving a predictable outcome for an adult patient with complex clinical problems. This case report demonstrates the combined periodontal/orthodontic treatment for a 49-year-old woman presenting with a Class II Division 1 malocclusion with moderate maxillary anterior crowding, a 9-mm overjet, and moderate to severe bone loss as the main characteristics of the periodontal disease. The orthodontic treatment included 2 maxillary first premolar extractions through forced extrusion. Active orthodontic treatment was completed in 30 months. The treatment outcomes, including the periodontal condition, were stable 17 months after active orthodontic treatment. The advantages of this interdisciplinary approach are discussed. Periodontally compromised orthodontic patients can be satisfactorily treated, achieving most of the conventional orthodontic goals, if a combined orthodontic/periodontic approach is used.

  1. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    SciTech Connect

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, the authors present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
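
    As a quick check, the quoted figures correspond to a performance-power ratio at load of roughly

      \frac{14\ \mathrm{Gflops}}{185\ \mathrm{W}} \approx 75.7\ \mathrm{Mflops/W},

    which is the quantity being compared against the reference SMP platform.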

  2. Use of Convex supercomputers for flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1992-01-01

    The use of the Convex Computer Corporation supercomputers for flight simulation is discussed, focusing on a real-time input/output system for supporting the flight simulation. The flight simulation computing system is based on two single-processor Control Data Corporation CYBER 175 computers, coupled through extended memory. The Advanced Real-Time Simulation System for digital data distribution and signal conversion is a state-of-the-art, high-speed, fiber-optic-based ring network system based on computer-automated measurement and control technology.

  3. 11th Annual NIH Pain Consortium Symposium on Advances in Pain Research | Division of Cancer Prevention

    Cancer.gov

    The NIH Pain Consortium will convene the 11th Annual NIH Pain Consortium Symposium on Advances in Pain Research, featuring keynote speakers and expert panel sessions on Innovative Models and Methods. The first keynote address will be delivered by David J. Clark, MD, PhD, Stanford University, entitled “Challenges of Translational Pain Research: What Makes a Good Model?”

  4. Palliative Care Improves Survival, Quality of Life in Advanced Lung Cancer | Division of Cancer Prevention

    Cancer.gov

    Results from the first randomized clinical trial of its kind have revealed a surprising and welcome benefit of early palliative care for patients with advanced lung cancer—longer median survival. Although several researchers said that the finding needs to be confirmed in other trials of patients with other cancer types, they were cautiously optimistic that the trial results could influence oncologists’ perceptions and use of palliative care.

  5. TOP500 Supercomputers for June 2002

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  6. TOP500 Supercomputers for June 2003

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  7. Characterizing output bottlenecks in a supercomputer

    SciTech Connect

    Xie, Bing; Chase, Jeffrey; Dillow, David A; Drokin, Oleg; Klasky, Scott A; Oral, H Sarp; Podhorszki, Norbert

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
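
    A minimal sketch of the sampling idea described above, assuming per-interval bandwidth observations have already been collected for compute-node/storage-target pairs; the array shape, gamma distribution, and rates are placeholders rather than Jaguar measurements.

      import numpy as np

      # Hypothetical samples: delivered write bandwidth (MB/s) for every
      # (client node, storage target, time interval) triple.
      rng = np.random.default_rng(1)
      bw = rng.gamma(shape=4.0, scale=150.0, size=(64, 16, 200))

      def summarize(label, axis):
          """Distribution of mean bandwidth when collapsed onto one dimension."""
          collapsed = bw.mean(axis=axis).ravel()
          p5, p50, p95 = np.percentile(collapsed, [5, 50, 95])
          print(f"{label:>12}: p5={p5:7.1f}  median={p50:7.1f}  p95={p95:7.1f} MB/s")

      summarize("per-node", axis=(1, 2))      # straggler clients show up as a low p5
      summarize("per-target", axis=(0, 2))    # contended storage targets
      summarize("per-interval", axis=(0, 1))  # competing traffic over time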

  8. Input/output behavior of supercomputing applications

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
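
    A toy write-behind model in the same spirit, assuming a fixed staging capacity and drain rate; all sizes and rates below are invented for illustration and are unrelated to the traced workloads.

      # Toy model: an application alternates compute phases with output bursts.
      # A write-behind buffer (e.g., a solid-state staging area) absorbs bursts
      # so the CPU stalls only when the buffer overflows.
      BUFFER_MB      = 4096      # staging capacity
      DRAIN_MB_PER_S = 200.0     # backing-store bandwidth
      CPU_BURST_S    = 1.0       # compute time between output bursts
      BURST_MB       = 500.0     # size of each output burst

      def cpu_utilization(n_bursts, buffered):
          clock, stall, fill = 0.0, 0.0, 0.0
          for _ in range(n_bursts):
              clock += CPU_BURST_S
              fill = max(0.0, fill - CPU_BURST_S * DRAIN_MB_PER_S)  # drain during compute
              if buffered:
                  overflow = max(0.0, fill + BURST_MB - BUFFER_MB)
                  wait = overflow / DRAIN_MB_PER_S      # stall only on overflow
                  fill = min(BUFFER_MB, fill + BURST_MB)
              else:
                  wait = BURST_MB / DRAIN_MB_PER_S      # synchronous write
              stall += wait
              clock += wait
          return 1.0 - stall / clock

      print("unbuffered:", round(cpu_utilization(100, buffered=False), 3))
      print("buffered:  ", round(cpu_utilization(100, buffered=True), 3))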

  9. TOP500 Supercomputers for November 2002

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-11-15

    20th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer installed earlier this year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second), retains the number one position. The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems are built by Hewlett-Packard and based on the AlphaServer SC computer system.

  10. Advanced Spatial-Division Multiplexed Measurement Systems Propositions-From Telecommunication to Sensing Applications: A Review.

    PubMed

    Weng, Yi; Ip, Ezra; Pan, Zhongqi; Wang, Ting

    2016-08-30

    The concepts of spatial-division multiplexing (SDM) technology were first proposed in the telecommunications industry as an indispensable solution to reduce the cost-per-bit of optical fiber transmission. Recently, such spatial channels and modes have been applied in optical sensing applications where the returned echo is analyzed for the collection of essential environmental information. The key advantages of implementing SDM techniques in optical measurement systems include the multi-parameter discriminative capability and accuracy improvement. In this paper, to help readers without a telecommunication background better understand how the SDM-based sensing systems can be incorporated, the crucial components of SDM techniques, such as laser beam shaping, mode generation and conversion, multimode or multicore elements using special fibers and multiplexers are introduced, along with the recent developments in SDM amplifiers, opto-electronic sources and detection units of sensing systems. The examples of SDM-based sensing systems not only include Brillouin optical time-domain reflectometry or Brillouin optical time-domain analysis (BOTDR/BOTDA) using few-mode fibers (FMF) and the multicore fiber (MCF) based integrated fiber Bragg grating (FBG) sensors, but also involve the widely used components with their whole information used in the full multimode constructions, such as the whispering gallery modes for fiber profiling and chemical species measurements, the screw/twisted modes for examining water quality, as well as the optical beam shaping to improve cantilever deflection measurements. Besides, the various applications of SDM sensors, the cost efficiency issue, as well as how these complex mode multiplexing techniques might improve the standard fiber-optic sensor approaches using single-mode fibers (SMF) and photonic crystal fibers (PCF) have also been summarized. Finally, we conclude with a prospective outlook for the opportunities and challenges of SDM

  11. Advanced time and wavelength division multiplexing for metropolitan area optical data communication networks

    NASA Astrophysics Data System (ADS)

    Watford, M.; DeCusatis, C.

    2005-09-01

    With the advent of new regulations governing the protection and recovery of sensitive business data, including the Sarbanes-Oxley Act, there has been a renewed interest in business continuity and disaster recovery applications for metropolitan area networks. Specifically, there has been a need for more efficient bandwidth utilization and lower cost per channel to facilitate mirroring of multi-terabit data bases. These applications have further blurred the boundary between metropolitan and wide area networks, with synchronous disaster recovery applications running up to 100 km and asynchronous solutions extending to 300 km or more. In this paper, we discuss recent enhancements in the Nortel Optical Metro 5200 Dense Wavelength Division Multiplexing (DWDM) platform, including features recently qualified for data communication applications such as Metro Mirror, Global Mirror, and Geographically Distributed Parallel Sysplex (GDPS). Using a 10 Gigabit/second (Gbit/s) backbone, this solution transports significantly more Fibre Channel protocol traffic with up to five times greater hardware density in the same physical package. This is also among the first platforms to utilize forward error correction (FEC) on the aggregate signals to improve bit error rate (BER) performance beyond industry standards. When combined with encapsulation into wide area network protocols, the use of FEC can compensate for impairments in BER across a service provider infrastructure without impacting application level performance. Design and implementation of these features will be discussed, including results from experimental test beds which validate these solutions for a number of applications. Future extensions of this environment will also be considered, including ways to provide configurable bandwidth on demand, mitigate Fibre Channel buffer credit management issues, and support for other GDPS protocols.

  12. Advanced Spatial-Division Multiplexed Measurement Systems Propositions—From Telecommunication to Sensing Applications: A Review

    PubMed Central

    Weng, Yi; Ip, Ezra; Pan, Zhongqi; Wang, Ting

    2016-01-01

    The concepts of spatial-division multiplexing (SDM) technology were first proposed in the telecommunications industry as an indispensable solution to reduce the cost-per-bit of optical fiber transmission. Recently, such spatial channels and modes have been applied in optical sensing applications where the returned echo is analyzed for the collection of essential environmental information. The key advantages of implementing SDM techniques in optical measurement systems include the multi-parameter discriminative capability and accuracy improvement. In this paper, to help readers without a telecommunication background better understand how the SDM-based sensing systems can be incorporated, the crucial components of SDM techniques, such as laser beam shaping, mode generation and conversion, multimode or multicore elements using special fibers and multiplexers are introduced, along with the recent developments in SDM amplifiers, opto-electronic sources and detection units of sensing systems. The examples of SDM-based sensing systems not only include Brillouin optical time-domain reflectometry or Brillouin optical time-domain analysis (BOTDR/BOTDA) using few-mode fibers (FMF) and the multicore fiber (MCF) based integrated fiber Bragg grating (FBG) sensors, but also involve the widely used components with their whole information used in the full multimode constructions, such as the whispering gallery modes for fiber profiling and chemical species measurements, the screw/twisted modes for examining water quality, as well as the optical beam shaping to improve cantilever deflection measurements. Besides, the various applications of SDM sensors, the cost efficiency issue, as well as how these complex mode multiplexing techniques might improve the standard fiber-optic sensor approaches using single-mode fibers (SMF) and photonic crystal fibers (PCF) have also been summarized. Finally, we conclude with a prospective outlook for the opportunities and challenges of SDM

  13. Intelligent supercomputers: the Japanese computer sputnik

    SciTech Connect

    Walter, G.

    1983-11-01

    Japan's government-supported fifth-generation computer project has had a pronounced effect on the American computer and information systems industry. The US firms are intensifying their research on and production of intelligent supercomputers, a combination of computer architecture and artificial intelligence software programs. While the present generation of computers is built for the processing of numbers, the new supercomputers will be designed specifically for the solution of symbolic problems and the use of artificial intelligence software. This article discusses new and exciting developments that will increase computer capabilities in the 1990s. 4 references.

  14. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7
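
    The level-set expansion benchmark mentioned above is essentially breadth-first frontier growth from a seed vertex; a small in-memory sketch is given below (the out-of-core streaming, iotrace profiling, and Flash hardware are not modeled, and the edge list is a stand-in).

      from collections import defaultdict

      # Level-set (frontier) expansion: level k+1 is the set of unvisited
      # neighbors of level k. The real benchmark streams edges from disk;
      # here the edge list is a tiny in-memory placeholder.
      edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 0)]
      adj = defaultdict(list)
      for u, v in edges:
          adj[u].append(v)
          adj[v].append(u)

      def level_sets(seed):
          visited, frontier, levels = {seed}, {seed}, [{seed}]
          while frontier:
              nxt = {w for v in frontier for w in adj[v]} - visited
              if not nxt:
                  break
              visited |= nxt
              levels.append(nxt)
              frontier = nxt
          return levels

      for k, level in enumerate(level_sets(0)):
          print(f"level {k}: {sorted(level)}")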

  15. Roadrunner Supercomputer Breaks the Petaflop Barrier

    ScienceCinema

    Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin

    2016-07-12

    At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. The sustained performance makes Roadrunner more than twice as fast as the current number 1

  16. Roadrunner Supercomputer Breaks the Petaflop Barrier

    SciTech Connect

    Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin

    2008-06-09

    At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. The sustained performance makes Roadrunner more than twice as fast as the current number 1

  17. Virtual supercomputing on Macintosh desktop computers

    SciTech Connect

    Krovchuck, K.

    1996-05-01

    Many computing problems of today require supercomputer performance, but do not justify the costs needed to run such applications on supercomputers. In order to fill this need, networks of high-end workstations are often linked together to act as a single virtual parallel supercomputer. This project attempts to develop software that will allow less expensive 'desktop' computers to emulate a parallel supercomputer. To demonstrate the viability of the software, it is being integrated with POV, a ray-tracing package that is both computationally expensive and easily modified for parallel systems. The software was developed using the Metrowerks CodeWarrior Version 6.0 compiler on a Power Macintosh 7500 computer. The software is designed to run on a cluster of Power Macs running System 7.1 or greater on an Ethernet network. Currently, because of limitations of both the operating system and the Metrowerks compiler, the software is forced to make use of slower, high-level communication interfaces. Both the operating system and the compiler software are under revision, however, and these revisions will increase the performance of the system as a whole.

  18. Visualization of supercomputer simulations in physics

    NASA Technical Reports Server (NTRS)

    Watson, Val; Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Walatka, Pamela P.

    1989-01-01

    A description is given of the hardware and software tools and techniques in use at NASA's Numerical Aerodynamic Simulation Facility for visualization of computational fluid dynamics. The hardware consists of high-performance graphics workstations connected to the supercomputer with high-bandwidth lines, a frame buffer connected to the supercomputer with UltraNet, a digital video recording system, and film recorders. The software permits the scientist to view the three-dimensional scenes dynamically, to zoom into a region of interest, and to rotate his viewing position to study any region of interest in more detail. The software also provides automated animation and video recording of the scenes. The digital effects unit on the video system facilitates comparison of computer simulations with flight or wind tunnel experiments.

  19. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment including a desktop personal computer, PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith were also purchased. A reading room has been converted to a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  20. Adventures in Supercomputing: An innovative program

    SciTech Connect

    Summers, B.G.; Hicks, H.R.; Oliver, C.E.

    1995-06-01

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology and serve as a spur to systemic reform. The Adventures in Supercomputing (AiS) program, sponsored by the Department of Energy, is such a program. Adventures in Supercomputing is a program for high school and middle school teachers. It has helped to change the teaching paradigm of many of the teachers involved in the program from a teacher-centered classroom to a student-centered classroom. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode." Not only is the process of teaching changed, but evidences of systemic reform are beginning to surface. After describing the program, the authors discuss the teaching strategies being used and the evidences of systemic change in many of the AiS schools in Tennessee.

  1. Travel from a supercomputer to killer micros

    SciTech Connect

    Werner, N.E.

    1991-03-01

    I describe my effort to convert a Fortran application that runs on a parallel supercomputer (Cray Y-MP) to run on a set of BBN TC2000 killer micros. I used both shared-memory parallel processing options available at MPCI for the BBN TC2000: the Parallel Fortran Preprocessor (PFP) and the Uniform System extended Fortran compiler (US). I describe how I used the BBN Xtra programming tools for analysis and debugging during this conversion process. My ultimate goal for this hands-on experiment was to gain insight into the type of tools that might be helpful for porting existing programs from a supercomputer environment to a killer micro environment. 5 refs., 9 figs.

  2. A Layered Solution for Supercomputing Storage

    SciTech Connect

    Grider, Gary

    2016-11-16

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  3. A Layered Solution for Supercomputing Storage

    ScienceCinema

    Grider, Gary

    2016-11-30

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  4. Mantle convection on modern supercomputers

    NASA Astrophysics Data System (ADS)

    Weismüller, Jens; Gmeiner, Björn; Mohr, Marcus; Waluga, Christian; Wohlmuth, Barbara; Rüde, Ulrich; Bunge, Hans-Peter

    2015-04-01

    Mantle convection is the cause for plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic to mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next generation large-scale architectures demand an interdisciplinary co-design. Here we report on recent advances of the TERRA-NEO project, which is part of the high visibility SPPEXA program, and a joint effort of four research groups in computer sciences, mathematics and geophysical application under the leadership of FAU Erlangen. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection assessing the impact of small scale processes on global mantle flow.
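
    The governing system referred to above is often written, in nondimensional Boussinesq form at infinite Prandtl number (a common textbook formulation; TERRA-NEO's exact rheology and heating terms may differ), as

      \nabla \cdot \mathbf{u} = 0, \qquad
      -\nabla p + \nabla \cdot \left[ \eta \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \right) \right] + \mathrm{Ra}\, T\, \hat{\mathbf{e}}_r = 0, \qquad
      \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \nabla^{2} T + H,

    with velocity u, dynamic pressure p, viscosity η, temperature T, Rayleigh number Ra, radial unit vector ê_r, and internal heating H.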

  5. Demonstration of NICT Space Weather Cloud --Integration of Supercomputer into Analysis and Visualization Environment--

    NASA Astrophysics Data System (ADS)

    Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.

    2010-12-01

    In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations keeps increasing because of the tremendous advancement of supercomputers. A further advance is Grid computing, which integrates distributed computational resources to provide scalable computing capacity. In simulation research it is effective for researchers to design their own physical models, perform calculations on a supercomputer, and analyze and visualize the results with their accustomed tools. A supercomputer, however, is usually far from the analysis and visualization environment: researchers generally analyze and visualize on workstations (WS) managed at hand, because installing and operating software on a WS is easy, so data must be copied manually from the supercomputer to the WS, and the time spent transferring data over a long-delay network in practice hinders high-accuracy simulations. In terms of usability, it is therefore important to integrate the supercomputer and the analysis and visualization environment seamlessly with the researcher's familiar methods. NICT has been developing a cloud computing environment, the NICT Space Weather Cloud, in which disk servers are located near its supercomputer and the WSs used for data analysis and visualization. They are connected to JGN2plus, a high-speed research and development network. Distributed virtual high-capacity storage is also constructed with Grid Datafarm (Gfarm v2). Large data sets output from the supercomputer are transferred to the virtual storage through JGN2plus, so a researcher can concentrate on the research with familiar methods regardless of the distance between the supercomputer and the analysis and visualization environment. Currently, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected on

  6. Data-intensive computing on numerically-insensitive supercomputers

    SciTech Connect

    Ahrens, James P; Fasel, Patricia K; Habib, Salman; Heitmann, Katrin; Lo, Li - Ta; Patchett, John M; Williams, Sean J; Woodring, Jonathan L; Wu, Joshua; Hsu, Chung - Hsing

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  7. Spatiotemporal modeling of node temperatures in supercomputers

    DOE PAGES

    Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; ...

    2016-06-10

    Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good-practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers, as well.
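
    One standard way to write such a spliced marginal, with a Normal body below a threshold u and a generalized Pareto upper tail (a sketch of the general construction; the paper's exact parameterization may differ), is

      F(x) =
      \begin{cases}
        \Phi\!\left(\dfrac{x-\mu}{\sigma}\right), & x \le u,\\[1ex]
        \Phi\!\left(\dfrac{u-\mu}{\sigma}\right) + \left[1-\Phi\!\left(\dfrac{u-\mu}{\sigma}\right)\right]\left[1-\left(1+\xi\,\dfrac{x-u}{\sigma_u}\right)^{-1/\xi}\right], & x > u,
      \end{cases}

    where Φ is the standard Normal CDF and (ξ, σ_u) are the generalized Pareto shape and scale for exceedances over u.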

  8. Spatiotemporal modeling of node temperatures in supercomputers

    SciTech Connect

    Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; Ticknor, Lawrence O.; Bonnie, Amanda Marie; Montoya, Andrew J.; Michalak, Sarah E.

    2016-06-10

    Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good-practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers, as well.

  9. Compositional reservoir simulation in parallel supercomputing environments

    SciTech Connect

    Briens, F.J.L.; Wu, C.H.; Gazdag, J.; Wang, H.H.

    1991-09-01

    A large-scale compositional reservoir simulation (>1,000 cells) is not often run on a conventional mainframe computer owing to excessive turnaround times. This paper presents programming and computational techniques that fully exploit the capabilities of parallel supercomputers for a large-scale compositional simulation. A novel algorithm called sequential staging of tasks (SST) that can take full advantage of parallel-vector processing to speed up the solution of a large linear system is introduced. The effectiveness of SST is illustrated with results from computer experiments conducted on an IBM 3090-600E.

  10. Towards 21st century stellar models: Star clusters, supercomputing and asteroseismology

    NASA Astrophysics Data System (ADS)

    Campbell, S. W.; Constantino, T. N.; D'Orazi, V.; Meakin, C.; Stello, D.; Christensen-Dalsgaard, J.; Kuehn, C.; De Silva, G. M.; Arnett, W. D.; Lattanzio, J. C.; MacLean, B. T.

    2016-09-01

    Stellar models provide a vital basis for many aspects of astronomy and astrophysics. Recent advances in observational astronomy - through asteroseismology, precision photometry, high-resolution spectroscopy, and large-scale surveys - are placing stellar models under greater quantitative scrutiny than ever. The model limitations are being exposed and the next generation of stellar models is needed as soon as possible. The current uncertainties in the models propagate to the later phases of stellar evolution, hindering our understanding of stellar populations and chemical evolution. Here we give a brief overview of the evolution, importance, and substantial uncertainties of core helium burning stars in particular and then briefly discuss a range of methods, both theoretical and observational, that we are using to advance the modelling. This study uses observational data from HST, VLT, AAT, and Kepler, and supercomputing resources in Australia provided by the National Computational Infrastructure (NCI) and Pawsey Supercomputing Centre.

  11. Good Seeing: Best Practices for Sustainable Operations at the Air Force Maui Optical and Supercomputing Site

    DTIC Science & Technology

    2016-01-01

    [Fragmentary record text. The report concerns best practices for sustainable operations at the Air Force Maui Optical and Supercomputing Site (AMOS), drawing on 2013 telephone and e-mail interviews with Advanced Technology Solar Telescope (ATST) and Atacama Large Millimeter/Submillimeter Array (ALMA) personnel; AMOS operates large ground-based optical telescopes designed to identify and track ballistic missile tests and orbiting human-made objects.]

  12. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    SciTech Connect

    Not Available

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  13. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on mainframe computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and in-lining capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  14. A network architecture for Petaflops supercomputers.

    SciTech Connect

    DeBenedictis, Erik P.

    2003-09-01

    If we are to build a supercomputer with a speed of 10^15 floating operations per second (1 PetaFLOPS), interconnect technology will need to be improved considerably over what it is today. In this report, we explore one possible interconnect design for such a network. The guiding principle in this design is the optimization of all components for the finiteness of the speed of light. To achieve a linear speedup in time over well-tested supercomputers of today's designs will require scaling up of processor power and bandwidth and scaling down of latency. Latency scaling is the most challenging: it requires a 100 ns user-to-user latency for messages traveling the full diameter of the machine. To meet this constraint requires simultaneously minimizing wire length through 3D packaging, new low-latency electrical signaling mechanisms, extremely fast routers, and new network interfaces. In this report, we outline approaches and implementations that will meet the requirements when implemented as a system. No technology breakthroughs are required.
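
    The 100 ns figure translates directly into a length budget. Even at the vacuum speed of light,

      c\,\Delta t = (3 \times 10^{8}\ \mathrm{m/s})(100 \times 10^{-9}\ \mathrm{s}) = 30\ \mathrm{m},

    and signals in copper or fiber propagate at roughly two-thirds of that, so the entire end-to-end path across the machine's diameter, including every router and interface hop, must fit within a few tens of meters of equivalent signal path; this is what drives the 3D packaging and low-latency signaling requirements above.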

  15. Supercomputer Provides Molecular Insight into Cellulose (Fact Sheet)

    SciTech Connect

    Not Available

    2011-02-01

    Groundbreaking research at the National Renewable Energy Laboratory (NREL) has used supercomputing simulations to calculate the work that enzymes must do to deconstruct cellulose, which is a fundamental step in biomass conversion technologies for biofuels production. NREL used the new high-performance supercomputer Red Mesa to conduct several million central processing unit (CPU) hours of simulation.

  16. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.

  17. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques on appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix-by-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and the disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
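
    A compact sketch of the kernel these projection methods rely on: a compressed-sparse-row matrix-vector product feeding a basic (unrestarted, no reorthogonalization) Lanczos recurrence. This is illustrative only, not the paper's Cray implementation; the test matrix is a 1D Laplacian chosen for convenience.

      import numpy as np

      def csr_matvec(indptr, indices, data, x):
          """y = A @ x for a symmetric matrix stored in CSR form."""
          y = np.zeros(len(indptr) - 1)
          for i in range(len(y)):
              start, end = indptr[i], indptr[i + 1]
              y[i] = data[start:end] @ x[indices[start:end]]
          return y

      def lanczos(matvec, n, m, rng=np.random.default_rng(0)):
          """m steps of the symmetric Lanczos recurrence; returns Ritz values."""
          alphas, betas = [], []
          q_prev, q = np.zeros(n), rng.standard_normal(n)
          q /= np.linalg.norm(q)
          beta = 0.0
          for _ in range(m):
              w = matvec(q) - beta * q_prev
              alpha = q @ w
              w -= alpha * q
              beta = np.linalg.norm(w)
              alphas.append(alpha)
              betas.append(beta)
              if beta == 0.0:
                  break
              q_prev, q = q, w / beta
          T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
          return np.linalg.eigvalsh(T)        # approximations to extreme eigenvalues

      # Small symmetric test matrix in CSR form (tridiagonal 1D Laplacian).
      n = 50
      rows, cols, vals = [], [], []
      for i in range(n):
          for j, v in ((i, 2.0), (i - 1, -1.0), (i + 1, -1.0)):
              if 0 <= j < n:
                  rows.append(i); cols.append(j); vals.append(v)
      indptr = np.searchsorted(rows, np.arange(n + 1))
      ritz = lanczos(lambda x: csr_matvec(indptr, np.array(cols), np.array(vals), x), n, 20)
      print("largest Ritz values:", ritz[-3:])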

  18. Stochastic simulation of electron avalanches on supercomputers

    SciTech Connect

    Rogasinsky, S. V.; Marchenko, M. A.

    2014-12-09

    In the paper, we present a three-dimensional parallel Monte Carlo algorithm named ELSHOW, developed for the simulation of electron avalanches in gases. The parallel implementation of ELSHOW was carried out on supercomputers with different architectures (massively parallel and hybrid). Using ELSHOW, integral characteristics such as the number of particles in an avalanche, the impact ionization coefficient, and the drift velocity were calculated. Also, special precise computations were made to select an appropriate size of the time step using the technique of dependent statistical tests. In particular, the algorithm comprises special methods for modeling distributions, a lexicographic implementation scheme for "branching" of trajectories, and justified estimation of functionals. The results obtained for nitrogen were compared with previously published theoretical and experimental data.
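
    A toy branching-trajectory Monte Carlo in the spirit of the description above, assuming a constant ionization probability per step and no real gas physics; the probability, step count, and trial count are placeholders, not ELSHOW's model.

      import random

      # Each electron advances one step per time unit; with probability P_ION it
      # ionizes a neutral and spawns a second free electron ("branching").
      random.seed(3)
      P_ION, STEPS, TRIALS = 0.05, 100, 2000

      def avalanche_size():
          electrons = 1
          for _ in range(STEPS):
              electrons += sum(1 for _ in range(electrons) if random.random() < P_ION)
          return electrons

      mean = sum(avalanche_size() for _ in range(TRIALS)) / TRIALS
      print(f"mean avalanche size after {STEPS} steps: {mean:.1f}")
      print(f"expected (1 + p)^N growth:              {(1 + P_ION) ** STEPS:.1f}")

    The branching-process expectation (1 + p)^N gives a quick sanity check on the sample mean.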

  19. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    SciTech Connect

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  20. Sandia's network for supercomputing '95: Validating the progress of Asynchronous Transfer Mode (ATM) switching

    SciTech Connect

    Pratt, T.J.; Vahle, O.; Gossage, S.A.

    1996-04-01

    The Advanced Networking Integration Department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past three years as a forum to demonstrate and focus communication and networking developments. For Supercomputing '95, Sandia elected to demonstrate the functionality and capability of an AT&T Globeview 20 Gbps Asynchronous Transfer Mode (ATM) switch, which represents the core of Sandia's corporate network; to build and utilize a three-node 622 megabit per second Paragon network; and to extend the DOD's ACTS ATM Internet from Sandia, New Mexico to the conference's show floor in San Diego, California, for video demonstrations. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  1. Post-remedial-action radiological survey of the Westinghouse Advanced Reactors Division Plutonium Fuel Laboratories, Cheswick, Pennsylvania, October 1-8, 1981

    SciTech Connect

    Flynn, K.F.; Justus, A.L.; Sholeen, C.M.; Smith, W.H.; Wynveen, R.A.

    1984-01-01

    The post-remedial-action radiological assessment conducted by the ANL Radiological Survey Group in October 1981, following decommissioning and decontamination efforts by Westinghouse personnel, indicated that except for the Advanced Fuels Laboratory exhaust ductwork and north wall, the interior surfaces of the Plutonium Laboratory and associated areas within Building 7 and the Advanced Fuels Laboratory within Building 8 were below both the ANSI Draft Standard N13.12 and NRC Guideline criteria for acceptable surface contamination levels. Hence, with the exceptions noted above, the interior surfaces of those areas within Buildings 7 and 8 that were included in the assessment are suitable for unrestricted use. Air samples collected at the involved areas within Buildings 7 and 8 indicated that the radon, thoron, and progeny concentrations within the air were well below the limits prescribed by the US Surgeon General, the Environmental Protection Agency, and the Department of Energy. The Building 7 drain lines are contaminated with uranium, plutonium, and americium. Radiochemical analysis of water and dirt/sludge samples collected from accessible Low-Bay, High-Bay, Shower Room, and Sodium laboratory drains revealed uranium, plutonium, and americium contaminants. The Building 7 drain lines are hence unsuitable for release for unrestricted use in their present condition. Low levels of enriched uranium, plutonium, and americium were detected in an environmental soil coring near Building 8, indicating release or spillage due to Advanced Reactors Division activities or Nuclear Fuel Division activities under NRC licensure. ⁶⁰Co contamination was detected within the Building 7 Shower Room and in soil corings from the environs of Building 7. All other radionuclide concentrations measured in soil corings and the storm sewer outfall sample collected from the environs about Buildings 7 and 8 were within the range of normally expected background concentrations.

  2. Developing Fortran Code for Kriging on the Stampede Supercomputer

    NASA Astrophysics Data System (ADS)

    Hodgess, Erin

    2016-04-01

    Kriging is easily accessible in the open source statistical language R (R Core Team, 2015) in the gstat (Pebesma, 2004) package. It works very well, but can be slow on large data sets, particularly if the prediction space is large as well. We are working on the Stampede supercomputer at the Texas Advanced Computing Center to develop code using a combination of R and the Message Passing Interface (MPI) bindings to Fortran. We have a function similar to autofitVariogram, found in the automap (Hiemstra et al., 2008) package, and it is very effective. We are comparing R with MPI/Fortran, MPI/Fortran alone, and R with the Rmpi package, which uses bindings to C. We will present results from simulation studies and real-world examples. References: Hiemstra, P.H., Pebesma, E.J., Twenhofel, C.J.W. and G.B.M. Heuvelink, 2008. Real-time automatic interpolation of ambient gamma dose rates from the Dutch Radioactivity Monitoring Network. Computers and Geosciences, accepted for publication. Pebesma, E.J., 2004. Multivariable geostatistics in S: the gstat package. Computers and Geosciences, 30: 683-691. R Core Team, 2015. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
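
    The study's R/gstat and MPI/Fortran codes are not shown here; a minimal ordinary-kriging sketch in Python with an assumed exponential covariance model illustrates the interpolation step being parallelized:

      # Ordinary kriging at one prediction point (illustrative; covariance
      # parameters would normally come from a fitted variogram).
      import numpy as np

      def ordinary_krige(xy, z, xy0, sill=1.0, corr_len=50.0, nugget=1e-6):
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          C = sill * np.exp(-d / corr_len) + nugget * np.eye(len(z))
          c0 = sill * np.exp(-np.linalg.norm(xy - xy0, axis=1) / corr_len)
          # Lagrange multiplier enforces unbiasedness (weights sum to one).
          A = np.block([[C, np.ones((len(z), 1))],
                        [np.ones((1, len(z))), np.zeros((1, 1))]])
          b = np.append(c0, 1.0)
          weights = np.linalg.solve(A, b)[:-1]
          return weights @ z

      rng = np.random.default_rng(0)
      xy = rng.uniform(0, 100, size=(200, 2))
      z = np.sin(xy[:, 0] / 20) + 0.1 * rng.standard_normal(200)
      print(ordinary_krige(xy, z, np.array([50.0, 50.0])))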

  3. Towards Efficient Supercomputing: Searching for the Right Efficiency Metric

    SciTech Connect

    Hsu, Chung-Hsing; Kuehn, Jeffery A; Poole, Stephen W

    2012-01-01

    The efficiency of supercomputing has traditionally been measured by execution time. In the early 2000s, the concept of total cost of ownership was re-introduced, broadening the efficiency measure to include aspects such as energy and space. Yet the supercomputing community has never agreed upon a metric that can cover these aspects altogether and also provide a fair basis for comparison. This paper examines the metrics that have been proposed in the past decade, and proposes a vector-valued metric for efficient supercomputing. Using this metric, the paper presents a study of where the supercomputing industry has been and how it stands today with respect to efficient supercomputing.
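
    As a rough sketch of the vector-valued idea (the paper's actual metric definition may differ), efficiency can be recorded as a vector of costs and systems compared by dominance rather than by a single number:

      # Hypothetical vector-valued efficiency record and a Pareto-dominance check.
      from dataclasses import dataclass, astuple

      @dataclass
      class EfficiencyVector:
          time_to_solution_s: float   # execution time
          energy_kwh: float           # energy to solution
          floor_space_m2: float       # machine-room footprint

      def dominates(a, b):
          """True if a is no worse in every component and better in at least one."""
          av, bv = astuple(a), astuple(b)
          return all(x <= y for x, y in zip(av, bv)) and any(x < y for x, y in zip(av, bv))

      print(dominates(EfficiencyVector(3600, 500, 200), EfficiencyVector(4000, 650, 200)))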

  4. UNIX security in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1989-01-01

    The author critiques some security mechanisms in most versions of the Unix operating system and suggests more effective tools that either have working prototypes or have been implemented, for example in secure Unix systems. Although no computer (not even a secure one) is impenetrable, breaking into systems with these alternate mechanisms will cost more, require more skill, and be more easily detected than penetrations of systems without these mechanisms. The mechanisms described fall into four classes (with considerable overlap). User authentication at the local host affirms the identity of the person using the computer. The principle of least privilege dictates that properly authenticated users should have rights precisely sufficient to perform their tasks, and system administration functions should be compartmentalized; to this end, access control lists or capabilities should either replace or augment the default Unix protection system, and mandatory access controls implementing multilevel security models and integrity mechanisms should be available. Since most users access supercomputing environments using networks, the third class of mechanisms augments authentication (where feasible). As no security is perfect, the fourth class of mechanism logs events that may indicate possible security violations; this will allow the reconstruction of a successful penetration (if discovered), or possibly the detection of an attempted penetration.

  5. Advances in the discovery of novel antimicrobials targeting the assembly of bacterial cell division protein FtsZ.

    PubMed

    Li, Xin; Ma, Shutao

    2015-05-05

    Currently, widespread antimicrobial resistance among bacterial pathogens remains a dramatically increasing and serious threat to public health, and thus there is a pressing need to develop new antimicrobials to keep pace with bacterial resistance. Filamentous temperature-sensitive protein Z (FtsZ), a prokaryotic cytoskeleton protein, plays an important role in bacterial cell division. As a new and promising target, it has garnered special attention in antibacterial research in recent years. This review describes not only the function and dynamic behaviors of FtsZ, but also the known natural and synthetic inhibitors of FtsZ. In particular, the recently developed small molecules and the future directions for ideal candidates are highlighted.

  6. An orthogonal wavelet division multiple-access processor architecture for LTE-advanced wireless/radio-over-fiber systems over heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Mahapatra, Chinmaya; Leung, Victor CM; Stouraitis, Thanos

    2014-12-01

    The increase in internet traffic, number of users, and availability of mobile devices poses a challenge to wireless technologies. In long-term evolution (LTE) advanced systems, heterogeneous networks (HetNet) using centralized coordinated multipoint (CoMP) transmitting radio over optical fibers (LTE A-ROF) have provided a feasible way of satisfying user demands. In this paper, an orthogonal wavelet division multiple-access (OWDMA) processor architecture is proposed, which is shown to be better suited to LTE advanced systems than the orthogonal frequency division multiple access (OFDMA) used in LTE 3GPP rel.8 systems (3GPP, http://www.3gpp.org/DynaReport/36300.htm). ROF systems are a viable alternative to satisfy large data demands; hence, the performance in ROF systems is also evaluated. To validate the architecture, the circuit is designed and synthesized on a Xilinx Virtex-6 field-programmable gate array (FPGA). The synthesis results show that the circuit performs with a clock period as short as 7.036 ns (i.e., a maximum clock frequency of 142.13 MHz) for a transform size of 512. A pipelined version of the architecture reduces the power consumption by approximately 89%. We compare our architecture with similar available architectures for resource utilization and timing and provide a performance comparison with OFDMA systems for various quality metrics of communication systems. The OWDMA architecture is found to perform better than OFDMA for bit error rate (BER) performance versus signal-to-noise ratio (SNR) in wireless channels as well as ROF media. It also gives higher throughput and mitigates the adverse effect of the peak-to-average power ratio (PAPR).

  7. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability of 100 MFLOPS or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple-instruction, multiple-data (MIMD) execution through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.

  8. Dense LU Factorization on Multicore Supercomputer Nodes

    SciTech Connect

    Lifflander, Jonathan; Miller, Phil; Venkataraman, Ramprasad; Arya, Anshu; Jones, Terry R; Kale, Laxmikant V

    2012-01-01

    Dense LU factorization is a prominent benchmark used to rank the performance of supercomputers. Many implementations, including the reference code HPL, use block-cyclic distributions of matrix blocks onto a two-dimensional process grid. The process grid dimensions drive a trade-off between communication and computation and are architecture- and implementation-sensitive. We show how the critical panel factorization steps can be made less communication-bound by overlapping asynchronous collectives for pivot identification and exchange with the computation of rank-k updates. By shifting this trade-off, a modified block-cyclic distribution can beneficially exploit more available parallelism on the critical path, and reduce panel factorization's memory hierarchy contention on now-ubiquitous multi-core architectures. The missed parallelism in traditional block-cyclic distributions arises because active panel factorization, triangular solves, and subsequent broadcasts are spread over single process columns or rows (respectively) of the process grid. Increasing one dimension of the process grid decreases the number of distinct processes in the other dimension. To increase parallelism in both dimensions, periodic 'rotation' is applied to the process grid to recover the row-parallelism lost by a tall process grid. During active panel factorization, rank-1 updates stream through memory with minimal reuse. In a column-major process grid, the performance of this access pattern degrades as too many streaming processors contend for access to memory. A block-cyclic mapping in the more popular row-major order does not encounter this problem, but consequently sacrifices node and network locality in the critical pivoting steps. We introduce 'striding' to vary between the two extremes of row- and column-major process grids. As a test-bed for further mapping experiments, we describe a dense LU implementation that allows a block distribution to be defined as a general function of block
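
    A minimal sketch of the block-cyclic owner computation, with a simple striding variant (a loose interpretation of the rotation idea; the grid sizes are illustrative):

      # Which process in a P x Q grid owns matrix block (i, j)?
      def block_cyclic_owner(i, j, p_rows, p_cols):
          return (i % p_rows, j % p_cols)

      def owner_with_striding(i, j, p_rows, p_cols, stride=1):
          # Shift the row mapping as we move across block columns so that a tall
          # process grid regains some row parallelism on the critical path.
          return ((i + stride * j) % p_rows, j % p_cols)

      P, Q = 4, 2
      for i in range(4):
          print([owner_with_striding(i, j, P, Q) for j in range(4)])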

  9. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    SciTech Connect

    Geveci, Berk; Fabian, Nathan; Marion, Patrick; Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four
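
    A conceptual in-situ coupling sketch (illustrative only; the milestone work uses production analysis and rendering libraries rather than this toy loop) shows a solver handing every timestep to an analysis callback while writing to disk only occasionally:

      import numpy as np

      def analyze(step, field):
          # cheap in-situ "analysis": a global reduction every timestep
          return {"step": step, "max": float(field.max()), "mean": float(field.mean())}

      def run_solver(n_steps=100, n=512, io_stride=25):
          u = np.random.default_rng(0).random((n, n))
          stats = []
          for step in range(n_steps):
              u = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                          np.roll(u, 1, 1) + np.roll(u, -1, 1))  # stand-in for a real solve
              stats.append(analyze(step, u))          # rich information from every step
              if step % io_stride == 0:
                  np.save(f"checkpoint_{step:04d}.npy", u)  # sparse, I/O-limited output
          return stats

      print(run_solver()[-1])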

  10. An integrated distributed processing interface for supercomputers and workstations

    SciTech Connect

    Campbell, J.; McGavran, L.

    1989-01-01

    Access to documentation, communication between multiple processes running on heterogeneous computers, and animation of simulations of engineering problems are typically weak in most supercomputer environments. This presentation will describe how we are improving this situation in the Computer Research and Applications group at Los Alamos National Laboratory. We have developed a tool using UNIX filters and a SunView interface that allows users simple access to documentation via mouse-driven menus. We have also developed a distributed application that integrates a two-point boundary value problem on one of our Cray supercomputers. It is controlled and displayed graphically by a window interface running on a workstation screen. Our motivation for this research has been to improve the usual typewriter/static interface using language independent controls to show capabilities of the workstation/supercomputer combination. 8 refs.

  11. Large-Scale Graph Processing Analysis using Supercomputer Cluster

    NASA Astrophysics Data System (ADS)

    Vildario, Alfrido; Fitriyani; Nugraha Nurkahfi, Galih

    2017-01-01

    Graph processing is widely used in various sectors such as automotive, traffic, image processing, and many more. These applications produce large-scale graphs, so processing requires long computational times and high-specification resources. This research addresses the analysis of large-scale graph processing using a supercomputer cluster. We implemented graph processing using the Breadth-First Search (BFS) algorithm for the single-destination shortest path problem. The parallel BFS implementation with the Message Passing Interface (MPI) used the supercomputer cluster at the High Performance Computing Laboratory, Computational Science, Telkom University, and the Stanford Large Network Dataset Collection. The results showed that the implementation gives an average speedup of more than 30 times and an efficiency of almost 90%.
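
    A serial single-source BFS sketch in Python (the study's implementation is parallel and MPI-based) shows the hop-distance computation underlying the single-destination shortest path:

      # Level-by-level BFS over an adjacency-list graph; hop counts double as
      # shortest-path lengths in an unweighted graph.
      from collections import deque

      def bfs_distances(adj, source):
          dist = {source: 0}
          frontier = deque([source])
          while frontier:
              u = frontier.popleft()
              for v in adj.get(u, ()):
                  if v not in dist:
                      dist[v] = dist[u] + 1
                      frontier.append(v)
          return dist

      adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
      print(bfs_distances(adj, source=0)[4])   # shortest hop count from node 0 to node 4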

  12. Graph visualization for the analysis of the structure and dynamics of extreme-scale supercomputers

    SciTech Connect

    Berkbigler, K. P.; Bush, B. W.; Davis, Kei; Hoisie, A.; Smith, S. A.

    2002-01-01

    We are exploring the development and application of information visualization techniques for the analysis of new extreme-scale supercomputer architectures. Modern supercomputers typically comprise very large clusters of commodity SMPs interconnected by possibly dense and often nonstandard networks. The scale, complexity, and inherent nonlocality of the structure and dynamics of this hardware, and the systems and applications distributed over it, challenge traditional analysis methods. As part of the a la carte team at Los Alamos National Laboratory, which is simulating these advanced architectures, we are exploring advanced visualization techniques and creating tools to provide intuitive exploration, discovery, and analysis of these simulations. This work complements existing and emerging algorithmic analysis tools. Here we give background on the problem domain, a description of a prototypical computer architecture of interest (on the order of 10,000 processors connected by a quaternary fat-tree network), and presentations of several visualizations of the simulation data that make clear the flow of data in the interconnection network.

  13. Two wavelength division multiplexing WAN trials

    SciTech Connect

    Lennon, W.J.; Thombley, R.L.

    1995-01-20

    Lawrence Livermore National Laboratory, as a super-user, supercomputer, and super-application site, is anticipating the future bandwidth and protocol requirements necessary to connect to other such sites as well as to connect to remote-sited control centers and experiments. In this paper the authors discuss their vision of the future of Wide Area Networking, describe the plans for a wavelength division multiplexed link connecting Livermore with the University of California at Berkeley and describe plans for a transparent, approximately 10 Gb/s ring around San Francisco Bay.

  14. When supercomputers go over to the dark side

    NASA Astrophysics Data System (ADS)

    White, Martin; Scott, Pat

    2017-03-01

    Despite oodles of data and plenty of theories, we still don't know what dark matter is. Martin White and Pat Scott describe how a new software tool called GAMBIT – run on supercomputers such as Prometheus – will test how novel theories stack up when confronted with real data.

  15. Supercomputer Simulations Help Develop New Approach to Fight Antibiotic Resistance

    ScienceCinema

    Zgurskaya, Helen; Smith, Jeremy

    2016-11-23

    ORNL leveraged powerful supercomputing to support research led by University of Oklahoma scientists to identify chemicals that seek out and disrupt bacterial proteins called efflux pumps, known to be a major cause of antibiotic resistance. By running simulations on Titan, the team selected molecules most likely to target and potentially disable the assembly of efflux pumps found in E. coli bacteria cells.

  16. Supercomputer Simulations Help Develop New Approach to Fight Antibiotic Resistance

    SciTech Connect

    Zgurskaya, Helen; Smith, Jeremy

    2016-11-17

    ORNL leveraged powerful supercomputing to support research led by University of Oklahoma scientists to identify chemicals that seek out and disrupt bacterial proteins called efflux pumps, known to be a major cause of antibiotic resistance. By running simulations on Titan, the team selected molecules most likely to target and potentially disable the assembly of efflux pumps found in E. coli bacteria cells.

  17. Recent results from the Swinburne supercomputer software correlator

    NASA Astrophysics Data System (ADS)

    Tingay, Steven; et al.

    I will describe the development of software correlators on the Swinburne Beowulf supercomputer and recent work using the Cray XD-1 machine. I will also describe recent Australian and global VLBI experiments that have been processed on the Swinburne software correlator, along with imaging results from these data. The role of the software correlator in Australia's eVLBI project will be discussed.

  18. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    SciTech Connect

    De, K; Jha, S; Maeno, T; Mashinistov, R.; Nilsson, P; Novikov, A.; Oleynik, D; Panitkin, S; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tsulaia, V.; Velikhov, V.; Wen, G.; Wells, Jack C; Wenaus, T

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of

  19. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
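
    A rough sketch of the light-weight MPI wrapper idea using mpi4py (the payload command and job list are hypothetical; the real PanDA pilot integration is considerably more involved):

      # Each MPI rank launches one single-threaded payload, so a batch of
      # Grid-style jobs fills a single large supercomputer allocation.
      from mpi4py import MPI
      import subprocess

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # hypothetical list of independent single-threaded jobs
      jobs = [f"./run_payload.sh --event-chunk {i}" for i in range(size)]

      result = subprocess.run(jobs[rank], shell=True)      # each rank runs its own payload
      statuses = comm.gather(result.returncode, root=0)    # collect exit codes on rank 0
      if rank == 0:
          print("failed payloads:", [i for i, rc in enumerate(statuses) if rc != 0])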

  20. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures become more common, numerical developments in earthquake system research are particularly challenged by the dependence on the accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is based on Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in the optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
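
    A conceptual sketch of the latency-hiding idea with non-blocking MPI (mpi4py; not AWP-ODC itself): halo exchanges are posted first, the interior is updated while the messages are in flight, and the boundary points are finished after the waits complete:

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      left, right = (rank - 1) % size, (rank + 1) % size

      u = np.full(1000, float(rank))
      halo_l, halo_r = np.empty(1), np.empty(1)

      # post the halo exchange, then compute the interior while messages travel
      reqs = [comm.Irecv(halo_l, source=left), comm.Irecv(halo_r, source=right),
              comm.Isend(u[:1].copy(), dest=left), comm.Isend(u[-1:].copy(), dest=right)]
      interior = 0.5 * (u[2:] + u[:-2])
      MPI.Request.Waitall(reqs)                # halos are guaranteed to have arrived

      u_new = u.copy()
      u_new[1:-1] = interior
      u_new[0] = 0.5 * (halo_l[0] + u[1])      # boundary points need the halo values
      u_new[-1] = 0.5 * (u[-2] + halo_r[0])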

  1. Cyberdyn supercomputer - a tool for imaging geodynamic processes

    NASA Astrophysics Data System (ADS)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes that develop within the deep interior of our planet, but have a significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high performance computing facilities. Nowadays, an increasing number of research centers worldwide decide to make use of such powerful and fast computers to simulate complex phenomena involving fluid dynamics and to gain deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible has been jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted for about three years, ending October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a Qlogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, employing only a fraction of the computing power (20%). After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the East Carpathians tectonic and geodynamic evolution, including the Neogene magmatic activity, and the intriguing

  2. Internal fluid mechanics research on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.

    1988-01-01

    The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.

  3. New Mexico High School supercomputer challenge

    SciTech Connect

    Cohen, M.; Foster, M.; Kratzer, D.; Malone, P.; Solem, A.

    1991-01-01

    The national need for well trained scientists and engineers is more urgent today than ever before. Scientists who are trained in advanced computational techniques and have experience with multidisciplinary scientific collaboration are needed for both research and commercial applications if the United States is to maintain its productivity and technical edge in the world market. Many capable high school students, however, lose interest in pursuing scientific academic subjects or in considering science or engineering as a possible career. An academic contest that progresses from a state-sponsored program to a national competition is a way of developing science and computing knowledge among high school students and teachers as well as instilling enthusiasm for science. This paper describes an academic-year long program for high school students in New Mexico. The unique features, method, and evaluation of the program are discussed.

  4. Finite element methods on supercomputers - The scatter-problem

    NASA Technical Reports Server (NTRS)

    Loehner, R.; Morgan, K.

    1985-01-01

    Certain problems arise in connection with the use of supercomputers for the implementation of finite-element methods. These problems are related to the desirability of utilizing the power of the supercomputer as fully as possible for the rapid execution of the required computations, taking into account the gain in speed possible with the aid of pipelining operations. For the finite-element method, the time-consuming operations may be divided into three categories. The first two present no problems, while the third type of operation can be a reason for the inefficient performance of finite-element programs. Two possibilities for overcoming certain difficulties are proposed, giving attention to a scatter-process.
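
    A small NumPy sketch of the scatter step (illustrative; the paper targets vector pipelines rather than NumPy): because several elements update the same node, the scatter-add must tolerate duplicate indices:

      import numpy as np

      n_nodes = 6
      elements = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]])  # connectivity
      elem_contrib = np.ones_like(elements, dtype=float)                 # per-element loads

      # A naive vectorized write would lose updates when indices repeat;
      # np.add.at performs the scatter-add safely.
      global_vec = np.zeros(n_nodes)
      np.add.at(global_vec, elements.ravel(), elem_contrib.ravel())
      print(global_vec)   # interior nodes accumulate contributions from several elements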

  5. Study of ATLAS TRT performance with GRID and supercomputers

    NASA Astrophysics Data System (ADS)

    Krasnopevtsev, D. V.; Klimentov, A. A.; Mashinistov, R. Yu.; Belyaev, N. L.; Ryabinkin, E. A.

    2016-09-01

    One of the most important problems to be solved for ATLAS physics analysis is the reconstruction of proton-proton events with a large number of interactions in the Transition Radiation Tracker. The paper includes Transition Radiation Tracker performance results obtained using the ATLAS GRID and the Kurchatov Institute's Data Processing Center, including its Tier-1 grid site and supercomputer, as well as an analysis of CPU efficiency during these studies.

  6. Development of seismic tomography software for hybrid supercomputers

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
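
    A tiny linearized tomography sketch in Python/SciPy (the ray geometry and damping value are invented for illustration): rows of the tomographic matrix hold per-cell path lengths, and a damped least-squares solve maps travel-time residuals to slowness adjustments:

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.linalg import lsqr

      n_cells, n_rays = 100, 400
      rng = np.random.default_rng(0)

      # hypothetical ray geometry: each ray crosses a few cells with some length
      rows, cols, lens = [], [], []
      for r in range(n_rays):
          cells = rng.choice(n_cells, size=8, replace=False)
          rows += [r] * 8
          cols += list(cells)
          lens += list(rng.uniform(0.5, 1.5, 8))
      G = csr_matrix((lens, (rows, cols)), shape=(n_rays, n_cells))

      true_ds = 0.01 * rng.standard_normal(n_cells)            # true slowness perturbation
      residuals = G @ true_ds + 1e-4 * rng.standard_normal(n_rays)

      ds = lsqr(G, residuals, damp=0.1)[0]                     # regularized model update
      print(np.corrcoef(ds, true_ds)[0, 1])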

  7. A color graphics environment in support of supercomputer systems

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, R.

    1985-01-01

    An initial step in the integration of an upgrade of a VPS-32 supercomputer to 16 million 64-bit words, to be closely followed by a further upgrade to 32 million words, was to develop a graphics language commonality with other computers at the Langley Center. The power of the upgraded supercomputer is made available to users at individual workstations, who will aid in defining the direction for future expansions in both graphics software and workstation requirements for the supercomputers. The LAN used is an ETHERNET configuration featuring both CYBER mainframe and PDP 11/34 image generator computers. The system includes a film recorder for image production in slide, CRT, 16 mm film, 35 mm film or polaroid film images. The workstations have screen resolutions of 1024 x 1024 with each pixel being one of 256 colors selected from a palette of 16 million colors. Each screen can have up to 8 windows open at a time, and is driven by an MC68000 microprocessor drawing on 4.5 Mb RAM, a 40 Mb hard disk and two floppy drives. Input is from a keyboard, digitizer pad, joystick or light pen. The system now allows researchers to view computed results in video time before printing out selected data.

  8. QMachine: commodity supercomputing in web browsers

    PubMed Central

    2014-01-01

    Background Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics’ “Big Data” from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. Results QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running “download and install” software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. Conclusions QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments. PMID:24913605

  9. Supercomputing Aspects for Simulating Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kris, Cetin C.

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on explicit message passing across processors and is primarily suited for distributed memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared memory systems. For the entire turbo-pump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data will be obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other, and provides great flexibility when the boundary movement creates large displacements. Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier

  10. Impact of the Columbia Supercomputer on NASA Space and Exploration Mission

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kwak, Dochan; Kiris, Cetin; Lawrence, Scott

    2006-01-01

    NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold, and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions, including space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprised of massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing HEC challenges that must be overcome to solve space-related science problems in the future.

  11. Structures Division

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The NASA Lewis Research Center Structures Division is an international leader and pioneer in developing new structural analysis, life prediction, and failure analysis related to rotating machinery and more specifically to hot section components in air-breathing aircraft engines and spacecraft propulsion systems. The research consists of both deterministic and probabilistic methodology. Studies include, but are not limited to, high-cycle and low-cycle fatigue as well as material creep. Studies of structural failure are at both the micro- and macrolevels. Nondestructive evaluation methods related to structural reliability are developed, applied, and evaluated. Materials from which structural components are made, studied, and tested are monolithics and metal-matrix, polymer-matrix, and ceramic-matrix composites. Aeroelastic models are developed and used to determine the cyclic loading and life of fan and turbine blades. Life models are developed and tested for bearings, seals, and other mechanical components, such as magnetic suspensions. Results of these studies are published in NASA technical papers and reference publications as well as in technical society journal articles. The results of the work of the Structures Division and the bibliography of its publications for calendar year 1995 are presented.

  12. Scalability Test of multiscale fluid-platelet model for three top supercomputers

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-07-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementations of multiple time-stepping (MTS) algorithm improved the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach performing complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of testing the performance characteristics of supercomputers with advanced computational algorithms that offer optimal trade-off to achieve enhanced computational performance serves to demonstrate that such simulations are feasible with currently available HPC resources.
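
    A minimal multiple time-stepping (MTS) sketch (the forces and step counts are illustrative and far simpler than the platelet model's force split): the expensive slow force is recomputed only once per outer step and reused across the inner fast steps:

      import numpy as np

      def fast_force(x):  return -10.0 * x            # stiff local term (cheap, small step)
      def slow_force(x):  return -0.1 * np.sin(x)     # long-range term (expensive)

      def mts_integrate(x=1.0, v=0.0, dt=1e-3, n_inner=10, n_outer=1000):
          for _ in range(n_outer):
              f_slow = slow_force(x)                  # evaluated once per outer step
              for _ in range(n_inner):
                  v += (fast_force(x) + f_slow) * dt  # reuse the frozen slow force
                  x += v * dt
          return x, v

      print(mts_integrate())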

  13. New Mexico Supercomputing Challenge 1993 evaluation report. Progress report

    SciTech Connect

    Trainor, M.; Eker, P.; Kratzer, D.; Foster, M.; Anderson, M.

    1993-11-01

    This report provides the evaluation of the third year (1993) of the New Mexico High School Supercomputing Challenge. It includes data to determine whether we met the program objectives, measures participation, and compares progress from the first to the third years. This year's report is a more complete assessment than last year's, providing both formative and summative evaluation data. Data indicates that the 1993 Challenge significantly changed many students' career plans and attitudes toward science, provided professional development for teachers, and caused some changes in computer offerings in several participating schools.

  14. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  15. Supercomputer predictive modeling for ensuring space flight safety

    NASA Astrophysics Data System (ADS)

    Betelin, V. B.; Smirnov, N. N.; Nikitin, V. F.

    2015-04-01

    Development of new types of rocket engines, as well as upgrading of existing engines, needs computer-aided design and mathematical tools for supercomputer modeling of all the basic processes of mixing, ignition, combustion and outflow through the nozzle. Even small upgrades and changes introduced in existing rocket engines without proper simulations have caused severe accidents at launch sites, as witnessed recently. The paper presents the results of computer code development, verification and validation, making it possible to simulate unsteady processes of ignition and combustion in rocket engines.

  16. Chemical Technology Division annual technical report 1997

    SciTech Connect

    1998-06-01

    The Chemical Technology (CMT) Division is a diverse technical organization with principal emphases in environmental management and development of advanced energy sources. The Division conducts research and development in three general areas: (1) development of advanced power sources for stationary and transportation applications and for consumer electronics, (2) management of high-level and low-level nuclear wastes and hazardous wastes, and (3) electrometallurgical treatment of spent nuclear fuel. The Division also performs basic research in catalytic chemistry involving molecular energy resources, mechanisms of ion transport in lithium battery electrolytes, and the chemistry of technology-relevant materials and electrified interfaces. In addition, the Division operates the Analytical Chemistry Laboratory, which conducts research in analytical chemistry and provides analytical services for programs at Argonne National Laboratory (ANL) and other organizations. Technical highlights of the Division's activities during 1997 are presented.

  17. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    NASA Astrophysics Data System (ADS)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain can vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solving memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases

  18. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard; Parker, Lynne Edwards

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus often extends to the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This calibration is challenging and expensive, making building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research effort, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers, and the results are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
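
    A surrogate-training sketch (not the Autotune code; the parameters and the response function are invented stand-ins for EnergyPlus inputs and outputs) shows the basic agent idea of fitting a fast regressor to simulation results:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      # hypothetical samples: insulation R-value, window U-factor, cooling setpoint
      params = rng.uniform([10, 0.2, 20], [40, 3.0, 26], size=(5000, 3))
      # stand-in for simulated annual energy use (plus noise)
      energy = 200 - 2 * params[:, 0] + 30 * params[:, 1] + 5 * (params[:, 2] - 22) ** 2
      energy += rng.normal(0, 2, size=len(energy))

      agent = RandomForestRegressor(n_estimators=200, random_state=0).fit(params, energy)
      print(agent.predict([[25, 1.5, 23]]))   # near-instant estimate, no simulation needed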

  19. Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Nitzberg, Bill

    1999-01-01

    The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet, utilization was affected little. In particular these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
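
    A simplified scheduling sketch (the reservation bookkeeping used by real backfill schedulers is omitted): under strict FIFO a blocked head job idles free nodes, while backfilling lets smaller jobs behind it start:

      def pick_jobs(queue, free_nodes, backfill=True):
          started = []
          for job in list(queue):
              if job["nodes"] <= free_nodes:
                  started.append(job["name"])
                  free_nodes -= job["nodes"]
                  queue.remove(job)
              elif not backfill:
                  break            # strict FIFO: nothing may jump the blocked head job
          return started, free_nodes

      queue = [{"name": "big", "nodes": 96}, {"name": "small1", "nodes": 8},
               {"name": "small2", "nodes": 16}]
      print(pick_jobs([dict(j) for j in queue], free_nodes=32, backfill=False))  # ([], 32)
      print(pick_jobs([dict(j) for j in queue], free_nodes=32, backfill=True))   # smalls run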

  20. A special purpose silicon compiler for designing supercomputing VLSI systems

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should be integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate are reduced relative to silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  1. Automatic discovery of the communication network topology for building a supercomputer model

    NASA Astrophysics Data System (ADS)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
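
    As a simplified illustration of the modeling step (not Octotron's own data format), the sketch below turns a discovered neighbor table into a graph of compute nodes, switches, and links; the neighbor data here is a hypothetical stand-in for what SNMP/LLDP queries would return.

      import networkx as nx

      neighbor_table = {                      # hypothetical discovery output
          "switch-1": ["node-001", "node-002", "switch-2"],
          "switch-2": ["node-003", "node-004", "switch-1"],
      }

      g = nx.Graph()
      for device, peers in neighbor_table.items():
          for name in [device] + peers:
              g.add_node(name, kind="switch" if name.startswith("switch") else "node")
          for peer in peers:
              g.add_edge(device, peer)

      # The monitoring model can now reason about the topology, e.g. which
      # compute nodes share a switch or become unreachable if one fails.
      print(sorted(nx.node_connected_component(g, "switch-1")))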

  2. Modeling the weather with a data flow supercomputer

    NASA Technical Reports Server (NTRS)

    Dennis, J. B.; Gao, G.-R.; Todd, K. W.

    1984-01-01

    A static concept of data flow architecture is considered for a supercomputer for weather modeling. The machine level instructions are loaded into specific memory locations before computation is initiated, with only one instruction active at a time. The machine would have processing element, functional unit, array memory, memory routing and distribution routing network elements all contained on microprocessors. A value-oriented algorithmic language (VAL) would be employed and would have, as basic operations, simple functions deriving results from operand values. Details of the machine language format, computations with an array and file processing procedures are outlined. A global weather model is discussed in terms of a static architecture and the potential computation rate is analyzed. The results indicate that detailed design studies are warranted to quantify costs and parts fabrication requirements.

  3. Direct numerical simulation of turbulence using GPU accelerated supercomputers

    NASA Astrophysics Data System (ADS)

    Khajeh-Saeed, Ali; Blair Perot, J.

    2013-02-01

    Direct numerical simulations of turbulence are optimized for up to 192 graphics processors. The results from two large GPU clusters are compared to the performance of corresponding CPU clusters. A number of important algorithm changes are necessary to access the full computational power of graphics processors and these adaptations are discussed. It is shown that the handling of subdomain communication becomes even more critical when using GPU-based supercomputers. The potential for overlap of MPI communication with GPU computation is analyzed and then optimized. Detailed timings reveal that the internal calculations are now so efficient that the operations related to MPI communication are the primary scaling bottleneck at all but the very largest problem sizes that can fit on the hardware. This work gives a glimpse of the CFD performance issues that will dominate many hardware platforms in the near future.

  4. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.

  5. Supercomputer requirements for selected disciplines important to aerospace

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1989-01-01

    Speed and memory requirements placed on supercomputers by five different disciplines important to aerospace are discussed and compared with the capabilities of various existing computers and those projected to be available before the end of this century. The disciplines chosen for consideration are turbulence physics, aerodynamics, aerothermodynamics, chemistry, and human vision modeling. Example results for problems illustrative of those currently being solved in each of the disciplines are presented and discussed. Limitations imposed on physical modeling and geometrical complexity by the need to obtain solutions in practical amounts of time are identified. Computational challenges for the future, for which either some or all of the current limitations are removed, are described. Meeting some of the challenges will require computer speeds in excess of 1 exaflop/s (10^18 flop/s) and memories in excess of 1 petaword (10^15 words).

  6. An analysis of file migration in a UNIX supercomputing environment

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1992-01-01

    The supercomputer center at the National Center for Atmospheric Research (NCAR) migrates large numbers of files to and from its mass storage system (MSS) because there is insufficient space to store them on the Cray supercomputer's local disks. This paper presents an analysis of file migration data collected over two years. The analysis shows that requests to the MSS are periodic, with one-day and one-week periods. Read requests to the MSS account for the majority of the periodicity, whereas write requests are relatively constant over the course of a week. Additionally, reads show a far greater fluctuation than writes over a day and week, since reads are driven by human users while writes are machine-driven.

  7. Rekindle the Fire: Building Supercomputers to Solve Dynamic Problems

    SciTech Connect

    Studham, Scott S.

    2004-02-16

    Seymour Cray had a "Let's go to the moon" attitude when it came to building high-performance computers. His drive was to create architectures designed to solve the most challenging problems. Modern high-performance computer architects, however, seem to be focusing on building the largest floating-point-generation machines by using truckloads of commodity parts. Don't get me wrong; current clusters can solve a class of problems that are untouchable by any other system in the world, including the supercomputers of yesteryear. Many of the world's fastest clusters provide new insights into weather forecasting and the fundamental sciences, and provide the ability to model our nuclear stockpiles. Let's call this class of problem a first-principles simulation because the simulations are based on a fundamental physical understanding or model.

  8. Solving global shallow water equations on heterogeneous supercomputers.

    PubMed

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for the model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Arrays) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement in performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation on the programming paradigm of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture about both the potential performance benefits and the programming efforts involved.
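
    One way to picture the balanced partition idea is a simple throughput-proportional split of each node's subdomain between the CPU cores and the accelerator; this is an illustrative sketch under assumed rates, not the authors' framework.

      def split_rows(total_rows, cpu_rate, acc_rate):
          """Split a subdomain's rows so CPU and accelerator finish together.
          cpu_rate / acc_rate: sustained grid points per second of each side."""
          acc_rows = round(total_rows * acc_rate / (cpu_rate + acc_rate))
          return total_rows - acc_rows, acc_rows   # (cpu_rows, accelerator_rows)

      # e.g. a 12-core CPU versus an accelerator ~10x faster on this kernel
      cpu_rows, gpu_rows = split_rows(total_rows=3000, cpu_rate=1.0, acc_rate=10.0)
      print(cpu_rows, gpu_rows)   # 273 rows stay on the CPU, 2727 go to the GPU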

  9. Solving global shallow water equations on heterogeneous supercomputers

    PubMed Central

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for the model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Arrays) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement in performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation on the programming paradigm of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture about both the potential performance benefits and the programming efforts involved. PMID:28282428

  10. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes benchmarking for hardware selection, the software architecture, and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real-time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and consumed watts.

  11. 1998 Chemical Technology Division Annual Technical Report.

    SciTech Connect

    Ackerman, J.P.; Einziger, R.E.; Gay, E.C.; Green, D.W.; Miller, J.F.

    1999-08-06

    The Chemical Technology (CMT) Division is a diverse technical organization with principal emphases in environmental management and development of advanced energy sources. The Division conducts research and development in three general areas: (1) development of advanced power sources for stationary and transportation applications and for consumer electronics, (2) management of high-level and low-level nuclear wastes and hazardous wastes, and (3) electrometallurgical treatment of spent nuclear fuel. The Division also performs basic research in catalytic chemistry involving molecular energy resources, mechanisms of ion transport in lithium battery electrolytes, and the chemistry of technology-relevant materials. In addition, the Division operates the Analytical Chemistry Laboratory, which conducts research in analytical chemistry and provides analytical services for programs at Argonne National Laboratory (ANL) and other organizations. Technical highlights of the Division's activities during 1998 are presented.

  12. Accelerator Technology Division

    NASA Astrophysics Data System (ADS)

    1992-04-01

    In fiscal year (FY) 1991, the Accelerator Technology (AT) division continued fulfilling its mission to pursue accelerator science and technology and to develop new accelerator concepts for application to research, defense, energy, industry, and other areas of national interest. This report discusses the following programs: The Ground Test Accelerator Program; APLE Free-Electron Laser Program; Accelerator Transmutation of Waste; JAERI, OMEGA Project, and Intense Neutron Source for Materials Testing; Advanced Free-Electron Laser Initiative; Superconducting Super Collider; The High-Power Microwave Program; (Phi) Factory Collaboration; Neutral Particle Beam Power System Highlights; Accelerator Physics and Special Projects; Magnetic Optics and Beam Diagnostics; Accelerator Design and Engineering; Radio-Frequency Technology; Free-Electron Laser Technology; Accelerator Controls and Automation; Very High-Power Microwave Sources and Effects; and GTA Installation, Commissioning, and Operations.

  13. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    SciTech Connect

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M; Connor, Carolyn M

    2009-03-10

    This work presents a detailed implementation of a double precision, Non-Preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
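
    For reference, the algorithm itself is compact; the sketch below is a plain NumPy version of non-preconditioned Conjugate Gradient (the Cell, FPGA, and Opteron ports benchmarked above implement the same steps, with the transfer overhead coming from moving the vectors between host and accelerator).

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          """Solve A x = b for symmetric positive-definite A."""
          x = np.zeros_like(b)
          r = b - A @ x                 # residual
          p = r.copy()                  # search direction
          rs_old = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs_old / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test problem
      b = np.array([1.0, 2.0])
      print(conjugate_gradient(A, b))          # approx [0.0909, 0.6364]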

  14. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    SciTech Connect

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M; Connor, Carolyn M

    2009-01-01

    This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.

  15. Supercomputers Join the Fight against Cancer – U.S. Department of Energy

    SciTech Connect

    2016-06-29

    The Department of Energy has some of the best supercomputers in the world. Now, they’re joining the fight against cancer. Learn about our new partnership with the National Cancer Institute and GlaxoSmithKline Pharmaceuticals.

  16. Requirements for supercomputing in energy research: The transition to massively parallel computing

    SciTech Connect

    Not Available

    1993-02-01

    This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

  17. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    NASA Astrophysics Data System (ADS)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

  18. Maintenance and Upgrading of the Richmond Physics Supercomputing Cluster

    NASA Astrophysics Data System (ADS)

    Davda, Vikash

    2003-10-01

    The supercomputing cluster in Physics has been upgraded. It supports nuclear physics research at Jefferson Lab, which focuses on probing the quark-gluon structure of atomic nuclei. We added new slave nodes, increased storage, raised a firewall, and documented the e-mail archive relating to the cluster. The three new slave nodes were physically mounted and configured to join the cluster. A RAID for extra storage was moved from a prototype cluster and configured for this cluster. A firewall was implemented to enhance security using a separate node from the prototype cluster. The software Firewall Builder was used to set communication rules. Documentation consists primarily of e-mails exchanged with the vendor. We wanted web-based, searchable documentation. We used SWISH-E, non-proprietary indexing software designed to search through file collections such as e-mails. SWISH-E works by first creating an index. A built-in module then sets up a Perl interface for the user to define the search; the files in the index are then sorted.

  19. High Resolution Aerospace Applications using the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha

    2005-01-01

    This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.

  20. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Ryabinkin, E.; Wenaus, T.

    2016-02-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers, including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  1. Adventures in supercomputing: An innovative program for high school teachers

    SciTech Connect

    Oliver, C.E.; Hicks, H.R.; Summers, B.G.; Staten, D.G.

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  2. ASC Supercomputers Predict Effects of Aging on Materials

    SciTech Connect

    Kubota, A; Reisman, D B; Wolfer, W G

    2005-08-25

    In an extensive molecular dynamics (MD) study of shock compression of aluminum containing such microscopic defects as found in aged plutonium, LLNL scientists have demonstrated that ASC supercomputers live up to their promise as powerful tools to predict aging phenomena in the nuclear stockpile. Although these MD investigations are carried out on material samples containing only about 10 to 40 million atoms, and being not much bigger than a virus particle, they have shown that reliable materials properties and relationships between them can be extracted for density, temperature, pressure, and dynamic strength. This was proven by comparing their predictions with experimental data of the Hugoniot, with dynamic strength inferred from gas-gun experiments, and with the temperatures behind the shock front as calculated with hydro-codes. The effects of microscopic helium bubbles and of radiation-induced dislocation loops and voids on the equation of state were also determined and found to be small and in agreement with earlier theoretical predictions and recent diamond-anvil-cell experiments. However, these microscopic defects play an essential role in correctly predicting the dynamic strength for these nano-crystalline samples. These simulations also prove that the physics involved in shock compression experiments remains the same for macroscopic specimens used in gas-gun experiments down to micrometer samples to be employed in future NIF experiments. Furthermore, a practical way was discovered to reduce plastic instabilities in NIF target materials by introducing finely dispersed defects.

  3. Supercomputing for the parallelization of whole genome analysis

    PubMed Central

    Puckelwartz, Megan J.; Pesce, Lorenzo L.; Nelakuditi, Viswateja; Dellefave-Castillo, Lisa; Golbus, Jessica R.; Day, Sharlene M.; Cappola, Thomas P.; Dorn, Gerald W.; Foster, Ian T.; McNally, Elizabeth M.

    2014-01-01

    Motivation: The declining cost of generating DNA sequence is promoting an increase in whole genome sequencing, especially as applied to the human genome. Whole genome analysis requires the alignment and comparison of raw sequence data, and results in a computational bottleneck because of limited ability to analyze multiple genomes simultaneously. Results: We adapted a Cray XE6 supercomputer to achieve the parallelization required for concurrent multiple genome analysis. This approach not only markedly speeds computational time but also results in increased usable sequence per genome. Relying on publicly available software, the Cray XE6 has the capacity to align and call variants on 240 whole genomes in ∼50 h. Multisample variant calling is also accelerated. Availability and implementation: The MegaSeq workflow is designed to harness the size and memory of the Cray XE6, housed at Argonne National Laboratory, for whole genome analysis in a platform designed to better match current and emerging sequencing volume. Contact: emcnally@uchicago.edu PMID:24526712
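
    The parallelization pattern itself is the familiar fan-out of independent per-genome pipelines; the sketch below shows the shape of that idea on a single machine (it is not the MegaSeq workflow, and the per-sample command is a placeholder rather than a real aligner or variant caller).

      import subprocess
      from concurrent.futures import ProcessPoolExecutor

      genomes = ["sample_%03d" % i for i in range(1, 9)]   # placeholder sample IDs

      def process_genome(sample):
          # A real pipeline would run alignment and variant calling here;
          # echo keeps the sketch self-contained and runnable.
          cmd = ["echo", "aligning and calling variants for " + sample]
          return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

      if __name__ == "__main__":
          with ProcessPoolExecutor(max_workers=4) as pool:
              for line in pool.map(process_genome, genomes):
                  print(line)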

  4. Advanced and flexible multi-carrier receiver architecture for high-count multi-core fiber based space division multiplexed applications

    PubMed Central

    Asif, Rameez

    2016-01-01

    Space division multiplexing (SDM), incorporating multi-core fibers (MCFs), has been demonstrated for effectively maximizing the data capacity in an impending capacity crunch. To achieve high spectral-density through multi-carrier encoding while simultaneously maintaining transmission reach, benefits from inter-core crosstalk (XT) and non-linear compensation must be utilized. In this report, we propose a proof-of-concept unified receiver architecture that jointly compensates optical Kerr effects, intra- and inter-core XT in MCFs. The architecture is analysed in a multi-channel 512 Gbit/s dual-carrier DP-16QAM system over 800 km 19-core MCF to validate the digital compensation of inter-core XT. Through this architecture: (a) we efficiently compensate the inter-core XT, improving Q-factor by 4.82 dB, and (b) achieve a momentous gain in transmission reach, increasing the maximum achievable distance from 480 km to 1208 km, via analytical analysis. Simulation results confirm that inter-core XT distortions are more relentless for cores fabricated around the central axis of cladding. Predominantly, XT induced Q-penalty can be suppressed to be less than 1 dB up to −11.56 dB of inter-core XT over 800 km MCF, offering flexibility to fabricate dense core structures with the same cladding diameter. Moreover, this report outlines the relationship between core pitch and forward-error correction (FEC). PMID:27270381

  5. Advanced and flexible multi-carrier receiver architecture for high-count multi-core fiber based space division multiplexed applications

    NASA Astrophysics Data System (ADS)

    Asif, Rameez

    2016-06-01

    Space division multiplexing (SDM), incorporating multi-core fibers (MCFs), has been demonstrated for effectively maximizing the data capacity in an impending capacity crunch. To achieve high spectral-density through multi-carrier encoding while simultaneously maintaining transmission reach, benefits from inter-core crosstalk (XT) and non-linear compensation must be utilized. In this report, we propose a proof-of-concept unified receiver architecture that jointly compensates optical Kerr effects, intra- and inter-core XT in MCFs. The architecture is analysed in a multi-channel 512 Gbit/s dual-carrier DP-16QAM system over 800 km 19-core MCF to validate the digital compensation of inter-core XT. Through this architecture: (a) we efficiently compensate the inter-core XT, improving Q-factor by 4.82 dB, and (b) achieve a momentous gain in transmission reach, increasing the maximum achievable distance from 480 km to 1208 km, via analytical analysis. Simulation results confirm that inter-core XT distortions are more relentless for cores fabricated around the central axis of cladding. Predominantly, XT induced Q-penalty can be suppressed to be less than 1 dB up to −11.56 dB of inter-core XT over 800 km MCF, offering flexibility to fabricate dense core structures with the same cladding diameter. Moreover, this report outlines the relationship between core pitch and forward-error correction (FEC).

  6. Implementation of orthogonal frequency division multiplexing (OFDM) and advanced signal processing for elastic optical networking in accordance with networking and transmission constraints

    NASA Astrophysics Data System (ADS)

    Johnson, Stanley

    An increasing adoption of digital signal processing (DSP) in optical fiber telecommunication has brought to the fore several interesting DSP enabled modulation formats. One such format is orthogonal frequency division multiplexing (OFDM), which has seen great success in wireless and wired RF applications, and is being actively investigated by several research groups for use in optical fiber telecom. In this dissertation, I present three implementations of OFDM for elastic optical networking and distributed network control. The first is a field programmable gate array (FPGA) based real-time implementation of a version of OFDM conventionally known as intensity modulation and direct detection (IMDD) OFDM. I experimentally demonstrate the ability of this transmission system to dynamically adjust bandwidth and modulation format to meet networking constraints in an automated manner. To the best of my knowledge, this is the first real-time software defined networking (SDN) based control of an OFDM system. In the second OFDM implementation, I experimentally demonstrate a novel OFDM transmission scheme that supports both direct detection and coherent detection receivers simultaneously using the same OFDM transmitter. This interchangeable receiver solution enables a trade-off between bit rate and equipment cost in network deployment and upgrades. I show that the proposed transmission scheme can provide a receiver sensitivity improvement of up to 1.73 dB as compared to IMDD OFDM. I also present two novel polarization analyzer based detection schemes, and study their performance using experiment and simulation. In the third implementation, I present an OFDM pilot-tone based scheme for distributed network control. The first instance of an SDN-based OFDM elastic optical network with pilot-tone assisted distributed control is demonstrated. An improvement in spectral efficiency and a fast reconfiguration time of 30 ms have been achieved in this experiment. Finally, I
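
    As background for readers unfamiliar with the format, the sketch below generates one baseband OFDM symbol (QAM mapping, IFFT, cyclic prefix) and checks back-to-back recovery; it is a generic illustration, not the real-time FPGA or optical implementation described in the dissertation, and the subcarrier count and constellation are arbitrary.

      import numpy as np

      n_subcarriers, cp_len = 64, 16
      rng = np.random.default_rng(0)

      # Map random bits onto 4-QAM (QPSK) symbols, one per subcarrier.
      bits = rng.integers(0, 2, size=(n_subcarriers, 2))
      symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

      # IFFT produces the time-domain OFDM symbol; the cyclic prefix copies the
      # tail to the front so dispersion does not break subcarrier orthogonality.
      time_domain = np.fft.ifft(symbols)
      ofdm_symbol = np.concatenate([time_domain[-cp_len:], time_domain])

      # Back-to-back receiver: strip the prefix and FFT back to the subcarriers.
      recovered = np.fft.fft(ofdm_symbol[cp_len:])
      print(np.allclose(recovered, symbols))   # True in this noiseless sketch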

  7. Chemical Sciences Division annual report 1994

    SciTech Connect

    1995-06-01

    The division is one of ten LBL research divisions. It is composed of individual research groups organized into 5 scientific areas: chemical physics, inorganic/organometallic chemistry, actinide chemistry, atomic physics, and chemical engineering. Studies include structure and reactivity of critical reaction intermediates, transients and dynamics of elementary chemical reactions, and heterogeneous and homogeneous catalysis. Work for others included studies of superconducting properties of high-Tc oxides. In FY 1994, the division neared completion of two end-stations and a beamline for the Advanced Light Source, which will be used for combustion and other studies. This document presents summaries of the studies.

  8. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates

  9. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates

  10. Formative cell divisions: principal determinants of plant morphogenesis.

    PubMed

    Smolarkiewicz, Michalina; Dhonukshe, Pankaj

    2013-03-01

    Formative cell divisions utilizing precise rotations of cell division planes generate and spatially place asymmetric daughters to produce different cell layers. Therefore, by shaping tissues and organs, formative cell divisions dictate multicellular morphogenesis. In animal formative cell divisions, the orientation of the mitotic spindle and cell division planes relies on intrinsic and extrinsic cortical polarity cues. Plants lack known key players from animals, and cell division planes are determined prior to the mitotic spindle stage. Therefore, it appears that plants have evolved specialized mechanisms to execute formative cell divisions. Despite their profound influence on plant architecture, molecular players and cellular mechanisms regulating formative divisions in plants are not well understood. This is because formative cell divisions in plants have been difficult to track owing to their submerged positions and imprecise timings of occurrence. However, by identifying a spatiotemporally inducible cell division plane switch system applicable for advanced microscopy techniques, recent studies have begun to uncover molecular modules and mechanisms for formative cell divisions. The identified molecular modules comprise developmentally triggered transcriptional cascades feeding onto microtubule regulators that now allow dissection of the hierarchy of the events at better spatiotemporal resolutions. Here, we survey the current advances in understanding of formative cell divisions in plants in the context of embryogenesis, stem cell functionality and post-embryonic organ formation.

  11. Division Chief Meeting, April, 1929

    NASA Technical Reports Server (NTRS)

    1929-01-01

    Caption: 'LMAL division chiefs confer with the engineer-in-charge in April 1929. Left to right: E.A. Myers, Personnel Division; Edward R. Sharp, Property and Clerical Division; Thomas Carroll, Flight Test Division; Henry J.E. Reid, engineer in chief; Carlton Kemper, Power Plants Division; Elton Miller, aerodynamics division.'

  12. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly increasing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon Machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  13. Physics division annual report 2000.

    SciTech Connect

    Thayer, K., ed.

    2001-10-04

    impacts the structure of nuclei and extended the exquisite sensitivity of the Atom-Trap-Trace-Analysis technique to new species and applications. All of this progress was built on advances in nuclear theory, which the Division pursues at the quark, hadron, and nuclear collective degrees of freedom levels. These are just a few of the highlights in the Division's research program. The results reflect the talents and dedication of the Physics Division staff and the visitors, guests and students who bring so much to the research.

  14. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    SciTech Connect

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C.

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computation science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms

  15. Data mining method for anomaly detection in the supercomputer task flow

    NASA Astrophysics Data System (ADS)

    Voevodin, Vadim; Voevodin, Vladimir; Shaikhislamov, Denis; Nikitenko, Dmitry

    2016-10-01

    The efficiency of most supercomputer applications is extremely low. At the same time, the user rarely even suspects that their applications may be wasting computing resources. Software tools need to be developed to help detect inefficient applications and report them to the users. We suggest an algorithm for detecting anomalies in the supercomputer's task flow, based on data mining methods. System monitoring is used to calculate integral characteristics for every job executed, and the data is used as input for our classification method based on the Random Forest algorithm. The proposed approach can currently classify an application into one of three classes: normal, suspicious, and definitely anomalous. The proposed approach has been demonstrated on actual applications running on the "Lomonosov" supercomputer.
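
    The classification step can be pictured with a few lines of scikit-learn; the sketch below uses synthetic placeholder data, and the feature set and labels are only illustrative of the integral job characteristics the abstract mentions.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)

      # Hypothetical training set: rows are finished jobs, columns are integral
      # monitoring metrics (mean CPU load, cache misses, MPI bytes sent, ...).
      X_train = rng.random((500, 6))
      y_train = rng.choice(["normal", "suspicious", "anomalous"],
                           size=500, p=[0.8, 0.15, 0.05])

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      clf.fit(X_train, y_train)

      new_jobs = rng.random((3, 6))        # metrics of freshly finished jobs
      print(clf.predict(new_jobs))         # e.g. ['normal' 'normal' 'normal']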

  16. Division: The Sleeping Dragon

    ERIC Educational Resources Information Center

    Watson, Anne

    2012-01-01

    Of the four mathematical operators, division seems to not sit easily for many learners. Division is often described as "the odd one out". Pupils develop coping strategies that enable them to "get away with it". So, problems, misunderstandings, and misconceptions go unresolved perhaps for a lifetime. Why is this? Is it a case of "out of sight out…

  17. Transitional Division Algorithms.

    ERIC Educational Resources Information Center

    Laing, Robert A.; Meyer, Ruth Ann

    1982-01-01

    A survey of general mathematics students whose teachers were taking an inservice workshop revealed that they had not yet mastered division. More direct introduction of the standard division algorithm is favored in elementary grades, with instruction of transitional processes curtailed. Weaknesses in transitional algorithms appear to outweigh…

  18. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    SciTech Connect

    De, K; Jha, S; Klimentov, A; Maeno, T; Nilsson, P; Oleynik, D; Panitkin, S; Wells, Jack C; Wenaus, T

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation

  19. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    SciTech Connect

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  20. Chemical Technology Division annual technical report, 1996

    SciTech Connect

    1997-06-01

    CMT is a diverse technical organization with principal emphases in environmental management and development of advanced energy sources. It conducts R&D in 3 general areas: development of advanced power sources for stationary and transportation applications and for consumer electronics, management of high-level and low-level nuclear wastes and hazardous wastes, and electrometallurgical treatment of spent nuclear fuel. The Division also performs basic research in catalytic chemistry involving molecular energy resources, mechanisms of ion transport in lithium battery electrolytes, materials chemistry of electrified interfaces and molecular sieves, and the theory of materials properties. It also operates the Analytical Chemistry Laboratory, which conducts research in analytical chemistry and provides analytical services for programs at ANL and other organizations. Technical highlights of the Division's activities during 1996 are presented.

  1. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  2. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  3. A report documenting the completion of the Los Alamos National Laboratory portion of the ASC level II milestone "Visualization on the supercomputing platform"

    SciTech Connect

    Ahrens, James P; Patchett, John M; Lo, Li - Ta; Mitchell, Christopher; Mr Marle, David; Brownlee, Carson

    2011-01-24

    This report provides documentation for the completion of the Los Alamos portion of the ASC Level II 'Visualization on the Supercomputing Platform' milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratory and Los Alamos National Laboratory. The milestone text is shown in Figure 1 with the Los Alamos portions highlighted in boldfaced text. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. In conclusion, we improved CPU-based rendering performance by a factor of 2-10 on our tests. In addition, we evaluated CPU- and GPU-based rendering performance. We encourage production visualization experts to consider using CPU

  4. Physics Division activities report, 1986--1987

    SciTech Connect

    Not Available

    1987-01-01

    This report summarizes the research activities of the Physics Division for the years 1986 and 1987. Areas of research discussed in this paper are: research on e+e- interactions; research on p-pbar interactions; experiment at TRIUMF; double beta decay; high energy astrophysics; interdisciplinary research; and advanced technology development and the SSC.

  5. Division Iv: Stars

    NASA Astrophysics Data System (ADS)

    Corbally, Christopher; D'Antona, Francesca; Spite, Monique; Asplund, Martin; Charbonnel, Corinne; Docobo, Jose Angel; Gray, Richard O.; Piskunov, Nikolai E.

    2012-04-01

    This Division IV was started on a trial basis at the General Assembly in The Hague 1994 and was formally accepted at the Kyoto General Assembly in 1997. Its broad coverage of "Stars" is reflected in its relatively large number of Commissions and so of members (1266 in late 2011). Its kindred Division V, "Variable Stars", has the same history of its beginning. The thinking at the time was to achieve some kind of balance between the number of members in each of the 12 Divisions. Amid the current discussion of reorganizing the number of Divisions into a more compact form it seems advisable to make this numerical balance less of an issue than the rationalization of the scientific coverage of each Division, so providing more effective interaction within a particular field of astronomy. After all, every star is variable to a certain degree and such variability is becoming an ever more powerful tool to understand the characteristics of every kind of normal and peculiar star. So we may expect, after hearing the reactions of members, that in the restructuring a single Division will result from the current Divisions IV and V.

  6. High energy physics division semiannual report of research activities

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R. )

    1991-08-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1991--June 30, 1991. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  7. NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)

    ScienceCinema

    None

    2016-07-12

    NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.

  8. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  9. The ChemViz Project: Using a Supercomputer To Illustrate Abstract Concepts in Chemistry.

    ERIC Educational Resources Information Center

    Beckwith, E. Kenneth; Nelson, Christopher

    1998-01-01

    Describes the Chemistry Visualization (ChemViz) Project, a Web venture maintained by the University of Illinois National Center for Supercomputing Applications (NCSA) that enables high school students to use computational chemistry as a technique for understanding abstract concepts. Discusses the evolution of computational chemistry and provides a…

  10. The impact of the U.S. supercomputing initiative will be global

    SciTech Connect

    Crawford, Dona

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

  11. Oriented divisions, fate decisions

    PubMed Central

    Williams, Scott E.; Fuchs, Elaine

    2013-01-01

    During development, the establishment of proper tissue architecture depends upon the coordinated control of cell divisions not only in space and time, but also in direction. Execution of an oriented cell division requires establishment of an axis of polarity and alignment of the mitotic spindle along this axis. Frequently, the cleavage plane also segregates fate determinants, either unequally or equally between daughter cells, the outcome of which is either an asymmetric or symmetric division, respectively. The last few years have witnessed tremendous growth in understanding both the extrinsic and intrinsic cues that position the mitotic spindle, the varied mechanisms by which the spindle orientation machinery is controlled in diverse organisms and organ systems, and the manner in which the division axis influences the signaling pathways that direct cell fate choices. PMID:24021274

  12. Structures and Acoustics Division

    NASA Technical Reports Server (NTRS)

    Acquaviva, Cynthia S.

    1999-01-01

    The Structures and Acoustics Division of NASA Glenn Research Center is an international leader in rotating structures, mechanical components, fatigue and fracture, and structural aeroacoustics. Included are disciplines related to life prediction and reliability, nondestructive evaluation, and mechanical drive systems. Reported are a synopsis of the work and accomplishments reported by the Division during the 1996 calendar year. A bibliography containing 42 citations is provided.

  13. Structures and Acoustics Division

    NASA Technical Reports Server (NTRS)

    Acquaviva, Cynthia S.

    2001-01-01

    The Structures and Acoustics Division of the NASA Glenn Research Center is an international leader in rotating structures, mechanical components, fatigue and fracture, and structural aeroacoustics. Included in this report are disciplines related to life prediction and reliability, nondestructive evaluation, and mechanical drive systems. Reported is a synopsis of the work and accomplishments completed by the Division during the 1997, 1998, and 1999 calendar years. A bibliography containing 93 citations is provided.

  14. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    SciTech Connect

    Poole, G.; Heroux, M.

    1994-12-31

    This paper will focus on recent work in two widely used industrial applications codes with iterative methods. The ANSYS program, a general purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative solvers as a general purpose solver will include the topics of robustness, as well as memory requirements and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.

  15. Activities of the Solid State Division

    NASA Astrophysics Data System (ADS)

    Green, P. H.; Hinton, L. W.

    1994-08-01

    This report covers research progress in the Solid State Division from April 1, 1992, to September 30, 1993. During this period, the division conducted a broad, interdisciplinary materials research program with emphasis on theoretical solid state physics, neutron scattering, synthesis and characterization of materials, ion beam and laser processing, and the structure of solids and surfaces. This research effort was enhanced by new capabilities in atomic-scale materials characterization, new emphasis on the synthesis and processing of materials, and increased partnering with industry and universities. The theoretical effort included a broad range of analytical studies, as well as a new emphasis on numerical simulation stimulated by advances in high-performance computing and by strong interest in related division experimental programs. Superconductivity research continued to advance on a broad front from fundamental mechanisms of high-temperature superconductivity to the development of new materials and processing techniques. The Neutron Scattering Program was characterized by a strong scientific user program and growing diversity represented by new initiatives in complex fluids and residual stress. The national emphasis on materials synthesis and processing was mirrored in division research programs in thin-film processing, surface modification, and crystal growth. Research on advanced processing techniques such as laser ablation, ion implantation, and plasma processing was complemented by strong programs in the characterization of materials and surfaces including ultrahigh resolution scanning transmission electron microscopy, atomic-resolution chemical analysis, synchrotron x-ray research, and scanning tunneling microscopy.

  16. Website for the Space Science Division

    NASA Technical Reports Server (NTRS)

    Schilling, James; DeVincenzi, Donald (Technical Monitor)

    2002-01-01

    The Space Science Division at NASA Ames Research Center is dedicated to research in astrophysics, exobiology, advanced life support technologies, and planetary science. These research programs are structured around Astrobiology (the study of life in the universe and the chemical and physical forces and adaptations that influence life's origin, evolution, and destiny), and address some of the most fundamental questions pursued by science. These questions examine the origin of life and our place in the universe. Ames is recognized as a world leader in Astrobiology. In pursuing our mission in Astrobiology, Space Science Division scientists perform pioneering basic research and technology development.

  17. A high-performance FFT algorithm for vector supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1988-01-01

    Many traditional algorithms for computing the fast Fourier transform (FFT) on conventional computers are unacceptable for advanced vector and parallel computers because they involve nonunit, power-of-two memory strides. A practical technique for computing the FFT that avoids all such strides and appears to be near-optimal for a variety of current vector and parallel computers is presented. Performance results of a program based on this technique are given. Notable among these results is that a FORTRAN implementation of this algorithm on the CRAY-2 runs up to 77-percent faster than Cray's assembly-coded library routine.
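
    The stride problem the abstract refers to can be illustrated (in NumPy rather than the paper's Fortran, and without reproducing Bailey's actual algorithm) by padding one dimension of a 2-D work array so that stepping down a column no longer advances memory by a power-of-two number of elements; the sizes below are arbitrary.

```python
# Illustration of the padding idea behind stride-avoiding FFTs: when stepping
# down a column strides memory in power-of-two steps, accesses map badly onto
# banked or interleaved memory. Padding one array dimension by a single element
# breaks the pattern. This shows the layout trick only, not the cited algorithm.
import numpy as np

n = 1024                      # power-of-two problem size
pad = 1                       # one extra (unused) column of padding

work = np.empty((n, n + pad), dtype=np.complex128)
data = work[:, :n]            # logical n x n view of the padded storage

# Moving down a column of `data` now advances (n + pad) elements in memory,
# which is not a power of two:
assert data.strides[0] // data.itemsize == n + pad

# FFTs over columns of the padded view give the same answer as the packed case.
data[:] = np.random.rand(n, n) + 1j * np.random.rand(n, n)
ref = np.fft.fft(np.ascontiguousarray(data), axis=0)
assert np.allclose(np.fft.fft(data, axis=0), ref)
```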

  18. Division i: Fundamental Astronomy

    NASA Astrophysics Data System (ADS)

    McCarthy, Dennis D.; Klioner, Sergei A.; Vondrák, Jan; Evans, Dafydd Wyn; Hohenkerk, Catherine Y.; Hosokawa, Mizuhiko; Huang, Cheng-Li; Kaplan, George H.; Knežević, Zoran; Manchester, Richard N.; Morbidelli, Alessandro; Petit, Gérard; Schuh, Harald; Soffel, Michael H.; Zacharias, Norbert

    2012-04-01

    The goal of the division is to address the scientific issues that were developed at the 2009 IAU General Assembly in Rio de Janeiro. These are: astronomical constants (Gaussian gravitational constant, Astronomical Unit, GM(Sun), geodesic precession-nutation); astronomical software; Solar System ephemerides (pulsar research, comparison of dynamical reference frames); the future optical reference frame; the future radio reference frame; exoplanets (detection, dynamics); predictions of Earth orientation; units of measurement for astronomical quantities in a relativistic context; astronomical units in the relativistic framework; the time-dependent ecliptic in the GCRS; asteroid masses; review of space missions; detection of gravitational waves; VLBI on the Moon; and real-time electronic access to UT1-UTC. In pursuit of these goals Division I members have made significant scientific and organizational progress, and are organizing a Joint Discussion on Space-Time Reference Systems for Future Research at the 2012 IAU General Assembly. The details of Division activities and references are provided in the individual Commission and Working Group reports in this volume. A comprehensive list of references related to the work of the Division is available at the IAU Division I website at http://maia.usno.navy.mil/iaudiv1/.

  19. Multiscale Hy3S: Hybrid stochastic simulation for supercomputers

    PubMed Central

    Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N

    2006-01-01

    Background Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Results Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems
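
    For context on the jump-Markov description that hybrid methods such as those in Hy3S partition and approximate, the sketch below is a minimal exact Gillespie SSA for a toy birth-death reaction; it is illustrative only and is not Hy3S code.

```python
# Minimal Gillespie stochastic simulation algorithm (SSA) for a toy
# birth-death system: 0 -> X at rate k_birth, X -> 0 at rate k_death * X.
# Hybrid methods partition systems like this into stochastic and
# continuous/deterministic subsets; here everything stays stochastic.
import random

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0, seed=1):
    random.seed(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        a1 = k_birth            # propensity of birth
        a2 = k_death * x        # propensity of death
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += random.expovariate(a0)        # exponential waiting time to next event
        if random.random() * a0 < a1:      # choose which reaction fired
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

traj = ssa_birth_death()
print("final time %.1f, final copy number %d" % traj[-1])
```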

  20. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  1. Multi-processing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    The MIMD concept is applied, through multitasking, with relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. An existing single processor algorithm is mapped without the need for developing a new algorithm. The procedure of designing a code utilizing this approach is automated with the Unix stream editor. A Multiple Processor Multiple Grid (MPMG) code is developed as a demonstration of this approach. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. This solver is applied to a generic, oblique-wing aircraft problem on a four-processor computer using one process for data management and nonparallel computations and three processes for pseudotime advance on three different grid systems.

  2. Hurricane Modeling and Supercomputing: Can a global mesoscale model be useful in improving forecasts of tropical cyclogenesis?

    NASA Astrophysics Data System (ADS)

    Shen, B.; Tao, W.; Atlas, R.

    2007-12-01

    Hurricane modeling, along with guidance from observations, has been used to help construct hurricane theories since the 1960s. CISK (conditional instability of the second kind, Charney and Eliassen 1964; Ooyama 1964,1969) and WISHE (wind-induced surface heat exchange, Emanuel 1986) are among the well-known theories being used to understand hurricane intensification. For hurricane genesis, observations have indicated the importance of large-scale flows (e.g., the Madden-Julian Oscillation or MJO, Maloney and Hartmann, 2000) on the modulation of hurricane activity. Recent modeling studies have focused on the role of the MJO and Rossby waves (e.g., Ferreira and Schubert, 1996; Aivyer and Molinari, 2003) and/or the interaction of small-scale vortices (e.g., Holland 1995; Simpson et al. 1997; Hendrick et al. 2004), of which determinism could be also built by large-scale flows. The aforementioned studies suggest a unified view on hurricane formation, consisting of multiscale processes such as scale transition (e.g., from the MJO to Equatorial Rossby Waves and from waves to vortices), and scale interactions among vortices, convection, and surface heat and moisture fluxes. To depict the processes in the unified view, a high-resolution global model is needed. During the past several years, supercomputers have enabled the deployment of ultra-high resolution global models, obtaining remarkable forecasts of hurricane track and intensity (Atlas et al. 2005; Shen et al. 2006). In this work, hurricane genesis is investigated with the aid of a global mesoscale model on the NASA Columbia supercomputer by conducting numerical experiments on the genesis of six consecutive tropical cyclones (TCs) in May 2002. These TCs include two pairs of twin TCs in the Indian Ocean, Supertyphoon Hagibis in the West Pacific Ocean and Hurricane Alma in the East Pacific Ocean. It is found that the model is capable of predicting the genesis of five of these TCs about two to three days in advance. Our

  3. Nuclear Chemistry Division annual report FY83

    SciTech Connect

    Struble, G.

    1983-01-01

    The purpose of the annual reports of the Nuclear Chemistry Division is to provide a timely summary of research activities pursued by members of the Division during the preceding year. Throughout, details are kept to a minimum; readers desiring additional information are encouraged to read the referenced documents or contact the authors. The Introduction presents an overview of the Division's scientific and technical programs. Next is a section of short articles describing recent upgrades of the Division's major facilities, followed by sections highlighting scientific and technical advances. These are grouped under the following sections: nuclear explosives diagnostics; geochemistry and environmental sciences; safeguards technology and radiation effect; and supporting fundamental science. A brief overview introduces each section. Reports on research supported by a particular program are generally grouped together in the same section. The last section lists the scientific, administrative, and technical staff in the Division, along with visitors, consultants, and postdoctoral fellows. It also contains a list of recent publications and presentations. Some contributions to the annual report are classified and only their abstracts are included in this unclassified portion of the report (UCAR-10062-83/1); the full article appears in the classified portion (UCAR-10062-83/2).

  4. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    NASA Astrophysics Data System (ADS)

    Cabrillo, I.; Cabellos, L.; Marco, J.; Fernandez, J.; Gonzalez, I.

    2014-06-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  5. Improving the Availability of Supercomputer Job Input Data Using Temporal Replication

    SciTech Connect

    Wang, Chao; Zhang, Zhe; Ma, Xiaosong; Vazhkudai, Sudharshan S; Mueller, Frank

    2009-06-01

    Storage systems in supercomputers are a major reason for service interruptions. RAID solutions alone cannot provide sufficient protection as (1) growing average disk recovery times make RAID groups increasingly vulnerable to disk failures during reconstruction, and (2) RAID does not help with higher-level faults such as failed I/O nodes. This paper presents a complementary approach based on the observation that files in the supercomputer scratch space are typically accessed by batch jobs whose execution can be anticipated. Therefore, we propose to transparently, selectively, and temporarily replicate 'active' job input data by coordinating the parallel file system with the batch job scheduler. We have implemented the temporal replication scheme in the popular Lustre parallel file system and evaluated it with real-cluster experiments. Our results show that the scheme allows for fast online data reconstruction, with a reasonably low overall space and I/O bandwidth overhead.
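
    A hedged sketch of the coordination idea (replicate a queued job's input files shortly before it runs, release the replicas afterwards) is shown below; the paths, job-record format, and hook names are invented for illustration, whereas the actual work modified Lustre and the batch scheduler directly.

```python
# Hedged sketch of temporal replication: copy a queued job's declared input
# files to a replica area just before the job starts, and drop the replicas
# once the job ends. All names here are hypothetical stand-ins.
import shutil
from pathlib import Path

REPLICA_ROOT = Path("/scratch/.replicas")    # hypothetical replica area

def replicate_inputs(job):
    """Called by a scheduler hook when `job` nears its anticipated start time."""
    dest = REPLICA_ROOT / job["id"]
    dest.mkdir(parents=True, exist_ok=True)
    for src in map(Path, job["inputs"]):
        shutil.copy2(src, dest / src.name)   # temporary second copy of the input
    return dest

def release_replicas(job):
    """Called when the job completes and its inputs are no longer 'active'."""
    shutil.rmtree(REPLICA_ROOT / job["id"], ignore_errors=True)

# Example job record as a plain dict (stand-in for scheduler metadata):
job = {"id": "123456", "inputs": ["/scratch/user/mesh.bin", "/scratch/user/deck.in"]}
# replicate_inputs(job) ... run job ... release_replicas(job)
```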

  6. 17th Edition of TOP500 List of World's Fastest Supercomputers Released

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack J.; Simon,Horst D.

    2001-06-21

    MANNHEIM, GERMANY; KNOXVILLE, TENN.; BERKELEY, CALIF. In what has become a much-anticipated event in the world of high-performance computing, the 17th edition of the TOP500 list of the world's fastest supercomputers was released today (June 21). The latest edition of the twice-yearly ranking finds IBM as the leader in the field, with 40 percent in terms of installed systems and 43 percent in terms of total performance of all the installed systems. In second place in terms of installed systems is Sun Microsystems with 16 percent, while Cray Inc. retained second place in terms of performance (13 percent). SGI Inc. was third both with respect to systems, with 63 (12.6 percent), and performance (10.2 percent).

  7. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  8. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    SciTech Connect

    Meneses, Esteban; Ni, Xiang; Jones, Terry R; Maxwell, Don E

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
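
    A toy version of the failure-to-job cross-correlation step might look like the following; the record formats are invented for illustration, whereas the study itself worked from Titan's RAS logs and job submission files.

```python
# Hedged sketch of cross-correlating failure events with executing jobs: a
# failure is attributed to every job whose execution interval contains the
# failure timestamp. Timestamps are plain numbers here for simplicity.
def jobs_hit_by_failures(jobs, failures):
    """jobs: list of (job_id, start, end); failures: list of (time, category)."""
    hits = {}
    for f_time, category in failures:
        for job_id, start, end in jobs:
            if start <= f_time <= end:
                hits.setdefault(job_id, []).append(category)
    return hits

jobs = [("job-1", 100, 500), ("job-2", 300, 900), ("job-3", 950, 1200)]
failures = [(450, "GPU DBE"), (875, "Lustre timeout"), (1300, "node heartbeat")]
print(jobs_hit_by_failures(jobs, failures))
# {'job-1': ['GPU DBE'], 'job-2': ['GPU DBE', 'Lustre timeout']}
```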

  9. Some parallel algorithms on the four processor Cray X-MP4 supercomputer

    SciTech Connect

    Kincaid, D.R.; Oppe, T.C.

    1988-05-01

    Three numerical studies of parallel algorithms on a four processor Cray X-MP4 supercomputer are presented. These numerical experiments involve the following: a parallel version of ITPACKV 2C, a package for solving large sparse linear systems, a parallel version of the conjugate gradient method with line Jacobi preconditioning, and several parallel algorithms for computing the LU-factorization of dense matrices. 27 refs., 4 tabs.
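
    As a point of reference for the second study item, here is a textbook conjugate gradient method with diagonal (point) Jacobi preconditioning in NumPy; the cited experiments used line-Jacobi preconditioning and Cray-specific parallel code, which this serial sketch does not reproduce.

```python
# Conjugate gradient with point-Jacobi preconditioning (M = diag(A)).
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=500):
    n = len(b)
    M_inv = 1.0 / np.diag(A)              # diagonal preconditioner
    x = np.zeros(n)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test problem: 1-D Laplacian.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi_pcg(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```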

  10. Achieving supercomputer performance for neural net simulation with an array of digital signal processors

    SciTech Connect

    Muller, U.A.; Baumle, B.; Kohler, P.; Gunzinger, A.; Guggenbuhl, W.

    1992-10-01

    Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.

  11. TOPICAL REVIEW: Carbon-based nanotechnology on a supercomputer

    NASA Astrophysics Data System (ADS)

    Tománek, David

    2005-04-01

    The quantum nature of phenomena dominating the behaviour of nanostructures raises new challenges when trying to predict and understand the physical behaviour of these systems. Addressing this challenge is imperative in view of the continuous reduction of device sizes, which is rapidly approaching the atomic level. Since even the most advanced experimental observations are subject to being fundamentally influenced by the measurement itself, new approaches must be sought to design and test future building blocks of nanotechnology. In this respect, high-performance computing, allowing predictive large-scale computer simulations, has emerged as an indispensable tool to foresee and interpret the physical behaviour of nanostructures, thus guiding and complementing the experiment. This contribution will review some of the more intriguing phenomena associated with nanostructured carbon, including fullerenes, nanotubes and diamondoids. Due to the stability of the sp2 bond, carbon fullerenes and nanotubes are thermally and mechanically extremely stable and chemically inert. They contract rather than expand at high temperatures, and are unparalleled thermal conductors. Nanotubes may turn into ballistic electron conductors or semiconductors, and even acquire a permanent magnetic moment. In nanostructures that form during a hierarchical self-assembly process, even defects may play a different, often helpful role. sp2 bonded nanostructures may change their shape globally by a sequence of bond rotations, which turn out to be intriguing multi-step processes. At elevated temperatures, and following photo-excitations, efficient self-healing processes may repair defects, thus answering an important concern in molecular electronics.

  12. Solid State Division

    SciTech Connect

    Green, P.H.; Watson, D.M.

    1989-08-01

    This report contains brief discussions on work done in the Solid State Division of Oak Ridge National Laboratory. The topics covered are: Theoretical Solid State Physics; Neutron scattering; Physical properties of materials; The synthesis and characterization of materials; Ion beam and laser processing; and Structure of solids and surfaces. (LSP)

  13. Order Division Automated System.

    ERIC Educational Resources Information Center

    Kniemeyer, Justin M.; And Others

    This publication was prepared by the Order Division Automation Project staff to fulfill the Library of Congress' requirement to document all automation efforts. The report was originally intended for internal use only and not for distribution outside the Library. It is now felt that the library community at-large may have an interest in the…

  14. Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention (DCP) conducts and supports research to determine a person's risk of cancer and to find ways to reduce the risk. This knowledge is critical to making progress against cancer because risk varies over the lifespan as genetic and epigenetic changes can transform healthy tissue into invasive cancer.

  15. 'Tightly coupled' simulation utilizing the EBR-II LMR: A real-time supercomputing and AI environment

    SciTech Connect

    Makowitz, H.; Barber, D.G.; Cordes, G.A.; Powers, A.K.; Scott, R. Jr.; Ward, L.W.; Sackett, J.I.; King, R.W.; Lehto, W.K.; Lindsay, R.W.; Staffon, J.D.; Gross, K.C.; Doster, J.M.; Edwards, R.M. (Pennsylvania State Univ., University P

    1990-01-01

    An integrated Supercomputing and AI environment utilizing a CRAY X-MP/216, a fiber-optic communications link, a distributed network of workstations and the Experimental Breeder Reactor II (EBR-II) Liquid Metal Reactor (LMR) and its associated instrumentation and control system is being developed at the Idaho National Engineering Laboratory (INEL). This paper summarizes various activities that make up this supercomputing and AI environment. 5 refs., 4 figs.

  16. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
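
    One of the data structures mentioned, a sorted k-mer list per sequence, can be sketched as follows on toy strings; matching k-mers between two sorted lists by a linear merge is what makes the structure useful for seeding alignments. The real implementation targets whole bacterial genomes on Blue Gene/P and handles repeated k-mers more carefully.

```python
# Sorted (k-mer, position) lists and a linear merge to find shared seeds.
def sorted_kmer_list(seq, k):
    return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

def shared_seeds(seq_a, seq_b, k):
    """Merge two sorted k-mer lists; report one position pair per matching step
    (duplicate k-mers are handled simplistically in this sketch)."""
    a, b = sorted_kmer_list(seq_a, k), sorted_kmer_list(seq_b, k)
    i = j = 0
    seeds = []
    while i < len(a) and j < len(b):
        if a[i][0] == b[j][0]:
            seeds.append((a[i][1], b[j][1]))
            i += 1
            j += 1
        elif a[i][0] < b[j][0]:
            i += 1
        else:
            j += 1
    return seeds

print(shared_seeds("ACGTACGGA", "TTACGTAC", k=4))   # [(0, 2), (1, 3), (2, 4), (3, 1)]
```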

  17. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    SciTech Connect

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  18. Chemical Technology Division. Annual technical report, 1995

    SciTech Connect

    Laidler, J.J.; Myles, K.M.; Green, D.W.; McPheeters, C.C.

    1996-06-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1995 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) methods for treatment of hazardous waste and mixed hazardous/radioactive waste; (3) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (4) processes for separating and recovering selected elements from waste streams, concentrating low-level radioactive waste streams with advanced evaporator technology, and producing 99Mo from low-enriched uranium; (5) electrometallurgical treatment of different types of spent nuclear fuel in storage at Department of Energy sites; and (6) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems.

  19. Information Technology Division Technical Paper Abstracts 1995,

    DTIC Science & Technology

    2007-11-02

    This document presents abstracts of technical papers produced by the Information Technology Division (ITD), one of the largest research and development collectives at the Naval Research Laboratory. The abstracts are organized into sections that represent the six branches within ITD: the Navy Center for Applied Research in Artificial Intelligence, Communications Systems, the Center for High Assurance Computer Systems, Transmission Technology, Advanced Information Technology, and the Center for Computational Science. Within each section, a list of branch papers published in 1993 and 1994 has also been included; abstracts

  20. Painless Division with Doc Spitler's Magic Division Estimator.

    ERIC Educational Resources Information Center

    Spitler, Gail

    1981-01-01

    An approach to teaching pupils the long division algorithm that relies heavily on a consistent and logical approach to estimation is reviewed. Once learned, the division estimator can be used to support the standard repeated subtraction algorithm. (MP)
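
    A small program can model the flavor of this approach: estimate a convenient multiple of the divisor, subtract it, and repeat. The estimation rule below (largest power of ten of the divisor that still fits) is only a plausible stand-in for the article's estimator, which is described in the article itself.

```python
# Repeated-subtraction division supported by a simple estimation rule.
def repeated_subtraction_division(dividend, divisor):
    quotient, remainder = 0, dividend
    steps = []
    while remainder >= divisor:
        chunk = 1
        while remainder >= divisor * chunk * 10:   # estimate: biggest easy multiple
            chunk *= 10
        quotient += chunk
        remainder -= divisor * chunk
        steps.append((chunk, remainder))
    return quotient, remainder, steps

q, r, steps = repeated_subtraction_division(987, 4)
print(q, r)          # 246 3
print(steps)         # each step records (multiple subtracted, remainder left)
```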

  1. 2016 T Division Lightning Talks

    SciTech Connect

    Ramsey, Marilyn Leann; Adams, Luke Clyde; Ferre, Gregoire Robing; Grantcharov, Vesselin; Iaroshenko, Oleksandr; Krishnapriyan, Aditi; Kurtakoti, Prajvala Kishore; Le Thien, Minh Quan; Lim, Jonathan Ng; Low, Thaddeus Song En; Lystrom, Levi Aaron; Ma, Xiaoyu; Nguyen, Hong T.; Pogue, Sabine Silvia; Orandle, Zoe Ann; Reisner, Andrew Ray; Revard, Benjamin Charles; Roy, Julien; Sandor, Csanad; Slavkova, Kalina Polet; Weichman, Kathleen Joy; Wu, Fei; Yang, Yang

    2016-11-29

    These are the slides for all of the 2016 T Division lightning talks. There are 350 pages worth of slides from different presentations, all of which cover different topics within the theoretical division at Los Alamos National Laboratory (LANL).

  2. Energy Systems Divisions

    NASA Technical Reports Server (NTRS)

    Applewhite, John

    2011-01-01

    This slide presentation reviews the JSC Energy Systems Division's work in propulsion. Specific work in LO2/CH4 propulsion, cryogenic propulsion, low-thrust propulsion for Free Flyer, robotic, and Extra Vehicular Activities, and work on the Morpheus terrestrial free flyer test bed is reviewed. The back-up slides contain a chart comparing LO2/LCH4 with other propellants and reviewing the advantages, especially for spacecraft propulsion.

  3. Physics division annual report 2005.

    SciTech Connect

    Glover, J.; Physics

    2007-03-12

    trapped in an atom trap for the first time, a major milestone in an innovative search for the violation of time-reversal symmetry. New results from HERMES establish that strange quarks carry little of the spin of the proton, and precise results have been obtained at JLAB on the changes in quark distributions in light nuclei. New theoretical results reveal the nature of the surfaces of strange quark stars. Green's function Monte Carlo techniques have been extended to scattering problems and show great promise for the accurate calculation, from first principles, of important astrophysical reactions. Flame propagation in type Ia supernovae has been simulated, a numerical process that requires considering length scales that vary by eight to twelve orders of magnitude. Argonne continues to lead in the development and exploitation of the new technical concepts that will truly make an advanced exotic beam facility, in the words of NSAC, 'the world-leading facility for research in nuclear structure and nuclear astrophysics'. Our science and our technology continue to point the way to this major advance. It is a tremendously exciting time in science, for these new capabilities hold the keys to unlocking important secrets of nature. The great progress that has been made in meeting the exciting intellectual challenges of modern nuclear physics reflects the talents and dedication of the Physics Division staff and the visitors, guests and students who bring so much to the research.

  4. Towards a bottom-up reconstitution of bacterial cell division.

    PubMed

    Martos, Ariadna; Jiménez, Mercedes; Rivas, Germán; Schwille, Petra

    2012-12-01

    The components of the bacterial division machinery assemble to form a dynamic ring at mid-cell that drives cytokinesis. The nature of most division proteins and their assembly pathway is known. Our knowledge about the biochemical activities and protein interactions of some key division elements, including those responsible for correct ring positioning, has progressed considerably during the past decade. These developments, together with new imaging and membrane reconstitution technologies, have triggered the 'bottom-up' synthetic approach aiming at reconstructing bacterial division in the test tube, which is required to support conclusions derived from cellular and molecular analysis. Here, we describe recent advances in reconstituting Escherichia coli minimal systems able to reproduce essential functions, such as the initial steps of division (proto-ring assembly) and one of the main positioning mechanisms (Min oscillating system), and discuss future perspectives and experimental challenges.

  5. Biorepositories | Division of Cancer Prevention

    Cancer.gov

    Carefully collected and controlled high-quality human biospecimens, annotated with clinical data and properly consented for investigational use, are available through the Division of Cancer Prevention biorepositories listed in the charts below: biorepositories managed by the Division of Cancer Prevention, biorepositories supported by the Division of Cancer Prevention, and related biorepositories, with information about accessing biospecimens collected from DCP-supported clinical trials and projects.

  6. Division Quilts: A Measurement Model

    ERIC Educational Resources Information Center

    Pratt, Sarah S.; Lupton, Tina M.; Richardson, Kerri

    2015-01-01

    As teachers seek activities to assist students in understanding division as more than just the algorithm, they find many examples of division as fair sharing. However, teachers have few activities to engage students in a quotative (measurement) model of division. Efraim Fischbein and his colleagues (1985) defined two types of whole-number…

  7. The Advanced Software Development and Commercialization Project

    SciTech Connect

    Gallopoulos, E. (Center for Supercomputing Research and Development); Canfield, T.R.; Minkoff, M.; Mueller, C.; Plaskacz, E.; Weber, D.P.; Anderson, D.M.; Therios, I.U.; Aslam, S.; Bramley, R.; Chen, H.-C.; Cybenko, G.; Gallopoulos, E.; Gao, H.; Malony, A.; Sameh, A. (Center for Supercomputing Research

    1990-09-01

    This is the first of a series of reports pertaining to progress in the Advanced Software Development and Commercialization Project, a joint collaborative effort between the Center for Supercomputing Research and Development of the University of Illinois and the Computing and Telecommunications Division of Argonne National Laboratory. The purpose of this work is to apply techniques of parallel computing that were pioneered by University of Illinois researchers to mature computational fluid dynamics (CFD) and structural dynamics (SD) computer codes developed at Argonne. The collaboration in this project will bring this unique combination of expertise to bear, for the first time, on industrially important problems. By so doing, it will expose the strengths and weaknesses of existing techniques for parallelizing programs and will identify those problems that need to be solved in order to enable widespread production use of parallel computers. Secondly, the increased efficiency of the CFD and SD codes themselves will enable the simulation of larger, more accurate engineering models that involve fluid and structural dynamics. In order to realize the above two goals, we are considering two production codes that have been developed at ANL and are widely used by both industry and universities. These are COMMIX and WHAMS-3D. The first is a computational fluid dynamics code that is used for both nuclear reactor design and safety and as a design tool for the casting industry. The second is a three-dimensional structural dynamics code used in nuclear reactor safety as well as crashworthiness studies. These codes are currently available for both sequential and vector computers only. Our main goal is to port and optimize these two codes on shared memory multiprocessors. In so doing, we shall establish a process that can be followed in optimizing other sequential or vector engineering codes for parallel processors.

  8. Integration of PanDA workload management system with Titan supercomputer at OLCF

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA a new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
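
    The 'light-weight MPI wrapper' idea can be sketched with mpi4py as below, where each rank launches one single-threaded payload on its own input; the payload command and input naming are hypothetical placeholders, and this is not the actual PanDA pilot code.

```python
# Sketch of a light-weight MPI wrapper: one batch job, many MPI ranks, each
# rank running an independent single-threaded payload so a multi-core node
# is filled with serial workloads. Payload and inputs are hypothetical.
import subprocess
import sys
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

inputs = ["event_%04d.inp" % i for i in range(size)]   # one input file per rank
cmd = ["./payload.exe", inputs[rank]]                  # hypothetical serial executable

result = subprocess.run(cmd, capture_output=True, text=True)
status = result.returncode

# Rank 0 gathers exit codes so the batch job can report a single outcome.
statuses = comm.gather(status, root=0)
if rank == 0:
    failed = [i for i, s in enumerate(statuses) if s != 0]
    print("ranks failed:", failed or "none")
    sys.exit(1 if failed else 0)
```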

  9. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    SciTech Connect

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the ‘Dark Universe’, dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC’s design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  10. Chemical Technology Division annual technical report, 1994

    SciTech Connect

    1995-06-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1994 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion; (3) methods for treatment of hazardous waste and mixed hazardous/radioactive waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from waste streams, concentrating radioactive waste streams with advanced evaporator technology, and producing 99Mo from low-enriched uranium for medical applications; (6) electrometallurgical treatment of the many different types of spent nuclear fuel in storage at Department of Energy sites; and (7) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources and novel ceramic precursors; materials chemistry of superconducting oxides, electrified metal/solution interfaces, molecular sieve structures, and impurities in scrap copper and steel; and the geochemical processes involved in mineral/fluid interfaces and water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  11. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'Connection Machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.

  12. Optimizing the Point-In-Box Search Algorithm for the Cray Y-MP(TM) Supercomputer

    SciTech Connect

    Attaway, S.W.; Davis, M.E.; Heinstein, M.W.; Swegle, J.S.

    1998-12-23

    Determining the subset of points (particles) in a problem domain that are contained within certain spatial regions of interest can be one of the most time-consuming parts of some computer simulations. Examples where this 'point-in-box' search can dominate the computation time include (1) finite element contact problems; (2) molecular dynamics simulations; and (3) interactions between particles in numerical methods, such as discrete particle methods or smooth particle hydrodynamics. This paper describes methods to optimize a point-in-box search algorithm developed by Swegle that make optimal use of the architectural features of the Cray Y-MP Supercomputer.
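
    A generic cell-binning formulation of the point-in-box problem is sketched below; it illustrates the class of search being optimized, not the paper's vectorized Cray Y-MP implementation.

```python
# Cell-binning point-in-box search: bucket points by coarse grid cell once,
# then test only the points whose cells overlap the query box.
from collections import defaultdict

def build_bins(points, cell):
    bins = defaultdict(list)
    for idx, (x, y, z) in enumerate(points):
        bins[(int(x // cell), int(y // cell), int(z // cell))].append(idx)
    return bins

def points_in_box(points, bins, cell, lo, hi):
    found = []
    ranges = [range(int(lo[d] // cell), int(hi[d] // cell) + 1) for d in range(3)]
    for i in ranges[0]:
        for j in ranges[1]:
            for k in ranges[2]:
                for idx in bins.get((i, j, k), ()):
                    p = points[idx]
                    if all(lo[d] <= p[d] <= hi[d] for d in range(3)):
                        found.append(idx)
    return found

points = [(0.1, 0.2, 0.3), (2.5, 2.5, 2.5), (0.9, 0.9, 0.1)]
bins = build_bins(points, cell=1.0)
print(points_in_box(points, bins, 1.0, lo=(0, 0, 0), hi=(1, 1, 1)))   # [0, 2]
```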

  13. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer

    PubMed Central

    Ellingson, Sally R.; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C.

    2013-01-01

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm. PMID:24729746
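
    The task-parallel structure of such a screen can be sketched with mpi4py as a master-worker farm in which rank 0 hands out ligand identifiers on demand; the scoring function is a placeholder and this is not the Autodock4 MPI source.

```python
# Master-worker task farm for a virtual screen (run with >= 2 MPI ranks,
# e.g. mpiexec -n 4 python screen.py). The dock() function is a placeholder.
from mpi4py import MPI

def dock(ligand_id):
    return {"ligand": ligand_id, "score": -float(ligand_id % 7)}  # placeholder score

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TASK, STOP = 1, 2

if rank == 0:
    ligands = list(range(100))                  # stand-in for a compound library
    results, next_task, active = [], 0, size - 1
    for w in range(1, size):                    # seed every worker with one task
        if next_task < len(ligands):
            comm.send(ligands[next_task], dest=w, tag=TASK)
            next_task += 1
        else:
            comm.send(None, dest=w, tag=STOP)
            active -= 1
    while active:
        status = MPI.Status()
        results.append(comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status))
        if next_task < len(ligands):            # hand the idle worker another ligand
            comm.send(ligands[next_task], dest=status.Get_source(), tag=TASK)
            next_task += 1
        else:
            comm.send(None, dest=status.Get_source(), tag=STOP)
            active -= 1
    print("best score:", min((r["score"] for r in results), default=None))
else:
    while True:
        status = MPI.Status()
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP:
            break
        comm.send(dock(task), dest=0)
```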

  14. Structural analysis of shallow shells on the CRAY Y-MP supercomputer

    NASA Astrophysics Data System (ADS)

    Qatu, M. S.; Bataineh, A. M.

    1992-10-01

    Structural analysis of shallow shells is performed, and relatively accurate displacements and stresses are obtained. An energy method, which is an extension of the Ritz method, is used in the analysis. Algebraic polynomials are used as displacement functions. The numerical problems which resulted in inaccurate stresses in previous publications are improved by making use of symmetry and performing the computations on a supercomputer which has 29-digit double-precision arithmetic. Curvature effects upon deflections and stress resultants of shallow shells with cantilever and 'semi-cantilever' boundaries are studied.

  15. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling works. Two-phase flow simulations in heterogeneous media usually require much longer computational time than that in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors become available in scientific and engineering communities. Such supercomputers may attract attentions from geoscientist and reservoir engineers for solving the large and non-linear models in higher resolutions within a reasonable time. However, for making it a useful tool, it is essential to tackle several practical obstacles to utilize large number of processors effectively for general-purpose reservoir simulators. We have implemented massively-parallel versions of two TOUGH2 family codes (a multi-phase flow simulator TOUGH2 and a chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water to larger ones in a reservoir scale. The performance measurement confirmed that the both simulators exhibit excellent

  16. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    PubMed

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
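
    The task-parallel MPI organization described above is, at heart, a master/worker job farm. A minimal sketch of that pattern is given below, written in Python with mpi4py rather than the authors' MPI code; the dock() stub, the message tags, and the ligand file names are placeholders for an actual AutoDock4 invocation, not part of the published software.

      # Hypothetical task-farm sketch (not the AutoDockMPI source): rank 0 hands out
      # ligand jobs, worker ranks "dock" them and send a score back.
      from mpi4py import MPI

      def dock(ligand):
          # Placeholder for launching an AutoDock4 run on one ligand file.
          return (ligand, hash(ligand) % 100)

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      TAG_WORK, TAG_STOP = 1, 2

      if rank == 0:
          ligands = [f"ligand_{i:06d}.pdbqt" for i in range(1000)]  # stand-in compound list
          results, pending = [], 0
          for worker in range(1, size):          # seed each worker with one job
              if ligands:
                  comm.send(ligands.pop(), dest=worker, tag=TAG_WORK)
                  pending += 1
              else:
                  comm.send(None, dest=worker, tag=TAG_STOP)
          while pending:                         # refill workers as results return
              status = MPI.Status()
              results.append(comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status))
              pending -= 1
              if ligands:
                  comm.send(ligands.pop(), dest=status.Get_source(), tag=TAG_WORK)
                  pending += 1
              else:
                  comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
          print(f"collected {len(results)} docking scores")
      else:
          while True:
              status = MPI.Status()
              job = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
              if status.Get_tag() == TAG_STOP:
                  break
              comm.send(dock(job), dest=0, tag=TAG_WORK)

    Run with, for example, mpirun -n 16 python taskfarm.py; the point is only that idle workers are immediately handed the next compound, which is what keeps a million-compound screen efficient on a large machine.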

  17. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems, making the algorithm impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
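
    The reduction to a single linear matrix inequality means the synthesis can be handed to any generic semidefinite-programming solver. As a hedged illustration of that workflow (a toy Lyapunov-type LMI, not the full-information synthesis LMI from the paper, with CVXPY used purely as an example solver interface and an arbitrary small system matrix):

      # Toy LMI feasibility problem: find P > 0 with A^T P + P A < 0.
      # Stands in for the much larger synthesis LMI discussed in the abstract.
      import numpy as np
      import cvxpy as cp

      A = np.array([[0.0, 1.0],
                    [-2.0, -3.0]])      # small example system matrix (assumed, not from the paper)
      n = A.shape[0]
      eps = 1e-6                        # strict inequalities enforced with a small margin

      P = cp.Variable((n, n), symmetric=True)
      constraints = [P >> eps * np.eye(n),
                     A.T @ P + P @ A << -eps * np.eye(n)]
      problem = cp.Problem(cp.Minimize(0), constraints)
      problem.solve()

      print(problem.status)
      print(np.round(P.value, 3))

    The abstract's point about problem dimension shows up directly here: the number of scalar decision variables in P grows roughly with the square of the state dimension, which is what pushes large flexible-structure models onto a supercomputer.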

  18. Development of the general interpolants method for the CYBER 200 series of supercomputers

    NASA Technical Reports Server (NTRS)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential-equation based, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  19. Artificial cell division.

    PubMed

    Mange, Daniel; Stauffer, André; Petraglio, Enrico; Tempesti, Gianluca

    2004-01-01

    After a survey of the theory and some realizations of self-replicating machines, this paper presents a novel self-replicating loop endowed with universal construction and computation properties. Based on the hardware implementation of the so-called Tom Thumb algorithm, the design of this loop leads to a new kind of cellular automaton made of a processing unit and a control unit. The self-replication of the Swiss flag serves as an example of artificial cell division by the loop, which, according to autopoietic evaluation criteria, corresponds to a cell showing the phenomenology of a living system.

  20. Scheduling Supercomputers.

    DTIC Science & Technology

    1983-02-01

    no task is scheduled with overlap. Let numpi be the total number of preemptions and idle slots of size at most t0 that are introduced. We see that if no usable block remains on Qm-k, then numpi < m-k. Otherwise, numpi ≤ m-k-1. If j > n when this procedure terminates, then all tasks have been scheduled

  1. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP v1.0) in a massively parallel supercomputing environment - a case study on JUQUEEN (IBM Blue Gene/Q)

    NASA Astrophysics Data System (ADS)

    Gasper, F.; Goergen, K.; Shrestha, P.; Sulis, M.; Rihani, J.; Geimer, M.; Kollet, S.

    2014-10-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing nonlinear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models that are discussed in this study, utilizing the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP v1.0) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm using the OASIS suite of external couplers, and require memory and load balancing considerations in the exchange of the coupling fields between different component models and the allocation of computational resources, respectively. Using the advanced profiling and tracing tool Scalasca to determine an optimum load balancing leads to a 19% speedup. In massively parallel supercomputer environments, the coupler OASIS-MCT is recommended, which resolves memory limitations that may be significant in case of very large computational domains and exchange fields as they occur in these specific test cases and in many applications in terrestrial research. However, model I/O and initialization in the petascale range still require major attention, as they constitute true big data challenges in light of future exascale computing resources. Based on a factor-two speedup due to compiler optimizations, a refactored coupling interface using OASIS-MCT and an optimum load balancing, the problem size in a weak scaling study can be increased by a factor of 64 from 512 to 32 768 processes while maintaining parallel efficiencies above 80% for the component models.

  2. Deconstructing Calculation Methods, Part 4: Division

    ERIC Educational Resources Information Center

    Thompson, Ian

    2008-01-01

    In the final article of a series of four, the author deconstructs the primary national strategy's approach to written division. The approach to division is divided into five stages: (1) mental division using partition; (2) short division of TU / U; (3) "expanded" method for HTU / U; (4) short division of HTU / U; and (5) long division.…
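
    For readers unfamiliar with the stages listed above, the "expanded" method at stage (3) rests on chunking: repeatedly subtracting convenient multiples of the divisor. The sketch below illustrates only that arithmetic idea; it is not the written layout prescribed by the strategy.

      # Chunking arithmetic behind the "expanded" written method for HTU / U.
      def expanded_division(dividend, divisor):
          remainder, quotient, steps = dividend, 0, []
          for chunk in (100, 10, 1):                 # take out hundreds, then tens, then ones
              count = remainder // (divisor * chunk)
              if count:
                  steps.append(f"subtract {count * chunk} x {divisor} = {count * chunk * divisor}")
                  remainder -= count * chunk * divisor
                  quotient += count * chunk
          return quotient, remainder, steps

      q, r, steps = expanded_division(486, 6)        # an HTU / U example
      print(q, r)                                    # 81 0
      print(steps)                                   # ['subtract 80 x 6 = 480', 'subtract 1 x 6 = 6']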

  3. ASCI Red -- Experiences and lessons learned with a massively parallel teraFLOP supercomputer

    SciTech Connect

    Christon, M.A.; Crawford, D.A.; Hertel, E.S.; Peery, J.S.; Robinson, A.C.

    1997-06-01

    The Accelerated Strategic Computing Initiative (ASCI) program involves Sandia, Los Alamos and Lawrence Livermore National Laboratories. At Sandia National Laboratories, ASCI applications include large deformation transient dynamics, shock propagation, electromechanics, and abnormal thermal environments. In order to resolve important physical phenomena in these problems, it is estimated that meshes ranging from 10{sup 6} to 10{sup 9} grid points will be required. The ASCI program is relying on the use of massively parallel supercomputers initially capable of delivering over 1 TFLOPs to perform such demanding computations. The ASCI Red machine at Sandia National Laboratories consists of over 4,500 computational nodes with a peak computational rate of 1.8 TFLOPs, 567 GBytes of memory, and 2 TBytes of disk storage. Regardless of the peak FLOP rate, there are many issues surrounding the use of massively parallel supercomputers in a production environment. These issues include parallel I/O, mesh generation, visualization, archival storage, high-bandwidth networking and the development of parallel algorithms. In order to illustrate these issues and their solution with respect to ASCI Red, demonstration calculations of time-dependent buoyancy-dominated plumes, electromechanics, and shock propagation will be presented.

  4. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend in the order of a few thousand parameters which have to be calibrated manually by an expert for realistic energy modeling. This makes it challenging and expensive thereby making building energy modeling unfeasible for smaller projects. In this paper, we describe the "Autotune" research which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers which are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time thereby allowing cost-effective calibration of building models.
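
    A minimal sketch of the surrogate-agent idea follows, assuming (hypothetically) that each simulation has been reduced to a vector of input parameters and a single energy-use output; the random data and the RandomForestRegressor below stand in for the millions of EnergyPlus runs and for whatever learning algorithms Autotune actually uses.

      # Hypothetical surrogate sketch: learn a fast approximation of a slow simulator
      # from (parameters -> energy use) pairs, then reuse it for cheap calibration.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.uniform(size=(5000, 12))                   # stand-in for sampled simulation inputs
      y = X @ rng.uniform(size=12) + 0.1 * rng.standard_normal(5000)   # stand-in outputs

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      agent = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
      print("held-out R^2:", round(agent.score(X_test, y_test), 3))

      # Once trained, the agent evaluates a candidate calibration in milliseconds
      # instead of re-running the full simulation.
      candidate = rng.uniform(size=(1, 12))
      print("predicted energy use:", float(agent.predict(candidate)[0]))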

  5. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    SciTech Connect

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanisms for preventing catastrophic market action are “circuit breakers.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
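
    The Herfindahl-Hirschman Index mentioned above has a simple definition: the sum of squared volume shares across trading venues, with values near 1 indicating concentration and small values indicating fragmentation. A generic sketch (illustrative volumes only, not data from the study, and not necessarily the exact variant the authors compute):

      # Generic volume HHI: sum of squared volume shares across venues.
      def volume_hhi(venue_volumes):
          total = sum(venue_volumes)
          shares = (v / total for v in venue_volumes)
          return sum(s * s for s in shares)

      print(volume_hhi([500, 300, 200]))   # 0.38 -> trading concentrated in a few venues
      print(volume_hhi([100] * 10))        # 0.10 -> trading fragmented across ten venues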

  6. Supercomputing enabling exhaustive statistical analysis of genome wide association study data: Preliminary results.

    PubMed

    Reumann, Matthias; Makalic, Enes; Goudey, Benjamin W; Inouye, Michael; Bickerstaffe, Adrian; Bui, Minh; Park, Daniel J; Kapuscinski, Miroslaw K; Schmidt, Daniel F; Zhou, Zeyu; Qian, Guoqi; Zobel, Justin; Wagner, John; Hopper, John L

    2012-01-01

    Most published GWAS do not examine SNP interactions due to the high computational complexity of computing p-values for the interaction terms. Our aim is to utilize supercomputing resources to apply complex statistical techniques to the world's accumulating GWAS, epidemiology, survival and pathology data to uncover more information about genetic and environmental risk, biology and aetiology. We performed the Bayesian Posterior Probability test on a pseudo data set with 500,000 single nucleotide polymorphisms and 100 samples as proof of principle. We carried out strong scaling simulations on 2 to 4,096 processing cores with factor 2 increments in partition size. On two processing cores, the run time is 317 h, i.e., almost two weeks, compared to less than 10 minutes on 4,096 processing cores. The speedup factor is 2,020, which is very close to the theoretical value of 2,048. This work demonstrates the feasibility of performing exhaustive higher order analysis of GWAS studies using independence testing for contingency tables. We are now in a position to employ supercomputers with hundreds of thousands of threads for higher order analysis of GWAS data using complex statistics.
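
    The scaling figures quoted above are internally consistent and easy to check: a speedup of 2,020 relative to the 2-core, 317-hour baseline implies roughly 9.4 minutes on 4,096 cores and a parallel efficiency near 99%. A small sketch of that arithmetic:

      # Strong-scaling check using the numbers reported in the abstract.
      base_cores, base_hours = 2, 317.0
      cores, speedup = 4096, 2020.0

      ideal_speedup = cores / base_cores             # 2048
      efficiency = speedup / ideal_speedup           # ~0.986
      runtime_minutes = base_hours * 60 / speedup    # ~9.4 minutes

      print(f"ideal speedup: {ideal_speedup:.0f}")
      print(f"parallel efficiency: {efficiency:.1%}")
      print(f"runtime on {cores} cores: {runtime_minutes:.1f} min")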

  7. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    NASA Technical Reports Server (NTRS)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  8. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    SciTech Connect

    Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby; Worley, Patrick H

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
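
    Of the reordering methods named above, spectral bisection is the simplest to sketch: partition the ranks by the Fiedler vector (the eigenvector of the second-smallest eigenvalue) of the communication graph's Laplacian. The dense NumPy illustration below is generic, not the authors' implementation, and the toy graph is assumed for demonstration.

      # Generic spectral bisection of a communication graph: split MPI ranks by the
      # median of the Fiedler vector of the graph Laplacian.
      import numpy as np

      def spectral_bisection(adjacency):
          laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
          _, eigvecs = np.linalg.eigh(laplacian)      # symmetric matrix -> real spectrum
          fiedler = eigvecs[:, 1]                     # eigenvector of the 2nd-smallest eigenvalue
          return fiedler >= np.median(fiedler)        # boolean partition of the ranks

      # Toy communication graph: two 4-rank cliques joined by one light edge.
      A = np.zeros((8, 8))
      for group in (range(0, 4), range(4, 8)):
          for i in group:
              for j in group:
                  if i != j:
                      A[i, j] = 1.0
      A[3, 4] = A[4, 3] = 0.5
      print(spectral_bisection(A))   # expected: one clique True, the other False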

  9. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    SciTech Connect

    Bethel, E. Wes; van Rosendale, John; Southard, Dale; Gaither, Kelly; Childs, Hank; Brugger, Eric; Ahern, Sean

    2010-12-01

    Supercomputing Centers (SC's) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys does not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"

  10. Groundwater cooling of a supercomputer in Perth, Western Australia: hydrogeological simulations and thermal sustainability

    NASA Astrophysics Data System (ADS)

    Sheldon, Heather A.; Schaubs, Peter M.; Rachakonda, Praveen K.; Trefry, Michael G.; Reid, Lynn B.; Lester, Daniel R.; Metcalfe, Guy; Poulet, Thomas; Regenauer-Lieb, Klaus

    2015-12-01

    Groundwater cooling (GWC) is a sustainable alternative to conventional cooling technologies for supercomputers. A GWC system has been implemented for the Pawsey Supercomputing Centre in Perth, Western Australia. Groundwater is extracted from the Mullaloo Aquifer at 20.8 °C and passes through a heat exchanger before returning to the same aquifer. Hydrogeological simulations of the GWC system were used to assess its performance and sustainability. Simulations were run with cooling capacities of 0.5 or 2.5 megawatts thermal (MWth), with scenarios representing various combinations of pumping rate, injection temperature and hydrogeological parameter values. The simulated system generates a thermal plume in the Mullaloo Aquifer and overlying Superficial Aquifer. Thermal breakthrough (transfer of heat from injection to production wells) occurred in 2.7-4.3 years for a 2.5 MWth system. Shielding (reinjection of cool groundwater between the injection and production wells) resulted in earlier thermal breakthrough but reduced the rate of temperature increase after breakthrough, such that shielding was beneficial after approximately 5 years of pumping. Increasing the injection temperature was preferable to increasing the flow rate for maintaining cooling capacity after thermal breakthrough. Thermal impacts on existing wells were small, with up to 10 wells experiencing a temperature increase ≥ 0.1 °C (largest increase 6 °C).

  11. PREFACE: HITES 2012: 'Horizons of Innovative Theories, Experiments, and Supercomputing in Nuclear Physics'

    NASA Astrophysics Data System (ADS)

    Hecht, K. T.

    2012-12-01

    This volume contains the contributions of the speakers of an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at departmental, university, national and international level. Signed: Ted Hecht, University of Michigan. Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame), Baha Balantekin (University of Wisconsin), Bruce Barrett (University of Arizona), Umit Catalyurek (Ohio State University), David Dean (Oak Ridge National Laboratory), Jutta Escher, Chair (Lawrence Livermore National Laboratory), Jorge Hirsch (UNAM, Mexico), David Rowe (University of Toronto), Brad Sherill (Michigan State University), Joel Tohline (Louisiana State University), Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University), Mark Caprio (University of Notre Dame), Tomas Dytrych (Louisiana State University), Ana Georgieva (INRNE, Bulgaria), Kristina Launey, Co-chair (Louisiana State University), Gabriella Popa (Ohio University Zanesville), James Vary, Co-chair (Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University), Charlie Rasco (Louisiana State University), Karen Richard, Coordinator (Louisiana State University).

  12. Use of high performance networks and supercomputers for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.
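
    A toy sketch of the time-critical constraint described above: a fixed-rate loop in which the model update and simulator I/O must finish before each frame deadline, with overruns counted rather than silently absorbed. The frame rate and workload here are placeholders, not parameters of the LaRC simulators.

      # Toy fixed-rate real-time loop with deadline accounting.
      import time

      FRAME_HZ = 50                        # placeholder frame rate
      FRAME_DT = 1.0 / FRAME_HZ

      def update_model_and_io():
          time.sleep(0.005)                # stand-in for model computation plus simulator I/O

      overruns = 0
      next_deadline = time.perf_counter() + FRAME_DT
      for frame in range(200):
          update_model_and_io()
          now = time.perf_counter()
          if now > next_deadline:
              overruns += 1                # this frame missed its deadline
          else:
              time.sleep(next_deadline - now)
          next_deadline += FRAME_DT

      print(f"missed {overruns} of 200 frames at {FRAME_HZ} Hz")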

  13. Understanding Microbial Divisions of Labor

    PubMed Central

    Zhang, Zheren; Claessen, Dennis; Rozen, Daniel E.

    2016-01-01

    Divisions of labor are ubiquitous in nature and can be found at nearly every level of biological organization, from the individuals of a shared society to the cells of a single multicellular organism. Many different types of microbes have also evolved a division of labor among their colony members. Here we review several examples of microbial divisions of labor, including cases from both multicellular and unicellular microbes. We first discuss evolutionary arguments, derived from kin selection, that allow divisions of labor to be maintained in the face of non-cooperative cheater cells. Next we examine the widespread natural variation within species in their expression of divisions of labor and compare this to the idea of optimal caste ratios in social insects. We highlight gaps in our understanding of microbial caste ratios and argue for a shift in emphasis from understanding the maintenance of divisions of labor, generally, to instead focusing on its specific ecological benefits for microbial genotypes and colonies. Thus, in addition to the canonical divisions of labor between, e.g., reproductive and vegetative tasks, we may also anticipate divisions of labor to evolve to reduce the costly production of secondary metabolites or secreted enzymes, ideas we consider in the context of streptomycetes. The study of microbial divisions of labor offers opportunities for new experimental and molecular insights across both well-studied and novel model systems. PMID:28066387

  14. Physics division annual report 1999

    SciTech Connect

    Thayer, K., ed.; Physics

    2000-12-06

    This report summarizes the research performed in the past year in the Argonne Physics Division. The Division's programs include operation of ATLAS as a national heavy-ion user facility, nuclear structure and reaction research with beams of heavy ions, accelerator research and development especially in superconducting radio frequency technology, nuclear theory and medium energy nuclear physics. The Division took significant strides forward in its science and its initiatives for the future in the past year. Major progress was made in developing the concept and the technology for the future advanced facility of beams of short-lived nuclei, the Rare Isotope Accelerator. The scientific program capitalized on important instrumentation initiatives with key advances in nuclear science. In 1999, the nuclear science community adopted the Argonne concept for a multi-beam superconducting linear accelerator driver as the design of choice for the next major facility in the field, a Rare Isotope Accelerator (RIA), as recommended by the Nuclear Science Advisory Committee's 1996 Long Range Plan. Argonne has made significant R&D progress on almost all aspects of the design concept, including the fast gas catcher (to allow fast fragmentation beams to be stopped and reaccelerated) that in large part defined the RIA concept, the superconducting rf technology for the driver accelerator, the multiple-charge-state concept (to permit the facility to meet the design intensity goals with existing ion-source technology), and designs and tests of high-power target concepts to effectively deal with the full beam power of the driver linac. An NSAC subcommittee recommended the Argonne concept and set as the design goal uranium beams of 100-kW power at 400 MeV/u. Argonne demonstrated that this goal can be met with an innovative, but technically in-hand, design. The heavy-ion research program focused on GammaSphere, the premier facility for nuclear structure gamma-ray studies. One example of the

  15. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP) in a massively parallel supercomputing environment - a case study on JUQUEEN (IBM Blue Gene/Q)

    NASA Astrophysics Data System (ADS)

    Gasper, F.; Goergen, K.; Kollet, S.; Shrestha, P.; Sulis, M.; Rihani, J.; Geimer, M.

    2014-06-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing non-linear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models that are discussed in this study, utilizing the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm and require memory and load balancing considerations in the exchange of the coupling fields between different component models and allocation of computational resources, respectively. These considerations can be reached with advanced profiling and tracing tools leading to the efficient use of massively parallel computing environments, which is then mainly determined by the parallel performance of individual component models. However, the problem of model I/O and initialization in the peta-scale range requires major attention, because this constitutes a true big data challenge in the perspective of future exa-scale capabilities, which is unsolved.

  16. Physics Division annual report 2004.

    SciTech Connect

    Glover, J.

    2006-04-06

    lead in the development and exploitation of the new technical concepts that will truly make RIA, in the words of NSAC, ''the world-leading facility for research in nuclear structure and nuclear astrophysics''. The performance standards for new classes of superconducting cavities continue to increase. Driver linac transients and faults have been analyzed to understand reliability issues and failure modes. Liquid-lithium targets were shown to successfully survive the full-power deposition of a RIA beam. Our science and our technology continue to point the way to this major advance. It is a tremendously exciting time in science, for RIA holds the keys to unlocking important secrets of nature. The work described here shows how far we have come and makes it clear we know the path to meet these intellectual challenges. The great progress that has been made in meeting the exciting intellectual challenges of modern nuclear physics reflects the talents and dedication of the Physics Division staff and the visitors, guests and students who bring so much to the research.

  17. The Division of Household Labor.

    ERIC Educational Resources Information Center

    Spitze, Glenna D.; Huber, Joan

    A study was conducted to test the following hypotheses concerning division of household labor (DOHL) between husbands and wives: (1) the division of household labor is somewhat affected by the availability of time, especially the wife's time; (2) there are strong effects of relative power, as measured by market-related resources, marital…

  18. Lightning Talks 2015: Theoretical Division

    SciTech Connect

    Shlachter, Jack S.

    2015-11-25

    This document is a compilation of slides from a number of student presentations given to LANL Theoretical Division members. The subjects cover the range of activities of the Division, including plasma physics, environmental issues, materials research, bacterial resistance to antibiotics, and computational methods.

  19. Chemical Technology Division, Annual technical report, 1991

    SciTech Connect

    Not Available

    1992-03-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1991 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion and coal-fired magnetohydrodynamics; (3) methods for treatment of hazardous and mixed hazardous/radioactive waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources; chemistry of superconducting oxides and other materials of interest with technological application; interfacial processes of importance to corrosion science, catalysis, and high-temperature superconductivity; and the geochemical processes involved in water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  20. Physics Division computer facilities

    SciTech Connect

    Cyborski, D.R.; Teh, K.M.

    1995-08-01

    The Physics Division maintains several computer systems for data analysis, general-purpose computing, and word processing. While the VMS VAX clusters are still used, this past year saw a greater shift to the Unix Cluster with the addition of more RISC-based Unix workstations. The main Divisional VAX cluster which consists of two VAX 3300s configured as a dual-host system serves as boot nodes and disk servers to seven other satellite nodes consisting of two VAXstation 3200s, three VAXstation 3100 machines, a VAX-11/750, and a MicroVAX II. There are three 6250/1600 bpi 9-track tape drives, six 8-mm tapes and about 9.1 GB of disk storage served to the cluster by the various satellites. Also, two of the satellites (the MicroVAX and VAX-11/750) have DAPHNE front-end interfaces for data acquisition. Since the tape drives are accessible cluster-wide via a software package, they are, in addition to replay, used for tape-to-tape copies. There is however, a satellite node outfitted with two 8 mm drives available for this purpose. Although not part of the main cluster, a DEC 3000 Alpha machine obtained for data acquisition is also available for data replay. In one case, users reported a performance increase by a factor of 10 when using this machine.

  1. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPU's. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Tera-Op program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  2. Preparing for in situ processing on upcoming leading-edge supercomputers

    DOE PAGES

    Kress, James; Churchill, Randy Michael; Klasky, Scott; ...

    2016-10-01

    High performance computing applications are producing increasingly large amounts of data and placing enormous stress on current capabilities for traditional post-hoc visualization techniques. Because of the growing compute and I/O imbalance, data reductions, including in situ visualization, are required. These reduced data are used for analysis and visualization in a variety of different ways. Many of the visualization and analysis requirements are known a priori, but when they are not, scientists are dependent on the reduced data to accurately represent the simulation in post hoc analysis. The contribution of this paper is a description of the directions we are pursuing to assist a large scale fusion simulation code succeed on the next generation of supercomputers. These directions include the role of in situ processing for performing data reductions, as well as the tradeoffs between data size and data integrity within the context of complex operations in a typical scientific workflow.

  3. Preparing for in situ processing on upcoming leading-edge supercomputers

    SciTech Connect

    Kress, James; Churchill, Randy Michael; Klasky, Scott; Kim, Mark; Childs, Hank; Pugmire, David

    2016-10-01

    High performance computing applications are producing increasingly large amounts of data and placing enormous stress on current capabilities for traditional post-hoc visualization techniques. Because of the growing compute and I/O imbalance, data reductions, including in situ visualization, are required. These reduced data are used for analysis and visualization in a variety of different ways. Many of the visualization and analysis requirements are known a priori, but when they are not, scientists are dependent on the reduced data to accurately represent the simulation in post hoc analysis. The contribution of this paper is a description of the directions we are pursuing to assist a large scale fusion simulation code succeed on the next generation of supercomputers. These directions include the role of in situ processing for performing data reductions, as well as the tradeoffs between data size and data integrity within the context of complex operations in a typical scientific workflow.

  4. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    SciTech Connect

    Widener, Patrick; Jaconette, Steven; Bridges, Patrick G.; Xia, Lei; Dinda, Peter; Cui, Zheng.; Lange, John; Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  5. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    SciTech Connect

    Gallarno, George; Rogers, James H; Maxwell, Don E

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  6. The Office of the Materials Division

    NASA Technical Reports Server (NTRS)

    Ramsey, Amanda J.

    2004-01-01

    I was assigned to the Materials Division, which consists of the following branches: the Advanced Metallics Branch/5120-RMM, the Ceramics Branch/5130-RMC, the Polymers Branch/5150-RMP, and the Durability and Protective Coatings Branch/5160-RMD. Mrs. Pamela Spinosi is my assigned mentor. She was assisted by Ms. Raysa Rodriguez/5100-RM and Mrs. Denise Prestien/5100-RM, who are both employed by InDyne, Inc. My primary assignment this past summer was working directly with Ms. Rodriguez, assisting her with setting up the Integrated Financial Management Program (IFMP) 5130-RMC/Branch procedures and logs. These duties consisted of creating various spreadsheets for each individual branch member, which were updated daily. It was not hard to familiarize myself with these duties since this is my second summer working with Ms. Rodriguez at NASA Glenn Research Center. I assisted RMC with ordering laboratory supplies and equipment for the Basic Materials Laboratory (Building 106) using the IFMP/Purchase Card (P-card), a NASA-wide software program. I entered new Travel Authorizations for the 5130-RMC Civil Servant Branch Members into the IFMP/Travel and Requisitions System. I also entered and completed Travel Vouchers for the 5130-RMC Ceramics Branch. I assisted the Division Office in creating a new Emergency Contact list for the Materials Division. I worked with Dr. Hugh Gray, the Division Chief, and Dr. Ajay Misra, the 5130-RMC Branch Chief, on priority action items, with a close deadline, for a large NASA proposal. Another project was working closely with Ms. Rodriguez in organizing and preparing for Dr. Ajay K. Misra's SESCDP (two-year detail). This consisted of organizing files, file folders, and personal information, recording all data material onto CDs, and printing all presentations for display in binders. I attended numerous Branch meetings and observed many changes in the Branch Management organization.

  7. Accelerator and Fusion Research Division 1989 summary of activities

    SciTech Connect

    Not Available

    1990-06-01

    This report discusses the research being conducted at Lawrence Berkeley Laboratory's Accelerator and Fusion Research Division. The main topics covered are: heavy-ion fusion accelerator research; magnetic fusion energy; advanced light source; center for x-ray optics; exploratory studies; high-energy physics technology; and bevalac operations.

  8. Paradigms in Physics: A New Upper-Division Curriculum.

    ERIC Educational Resources Information Center

    Manogue, Corinne A.; Siemens, Philip J.; Tate, Janet; Browne, Kerry; Niess, Margaret L.; Wolfer, Adam J.

    2001-01-01

    Describes a new curriculum for the final two years of a B.S. program in physics. Defines junior progress from a descriptive, lower-division understanding to an advanced analysis of a topic by phenomenon rather than discipline. (Contains 17 references.) (Author/YDS)

  9. Physics division annual report - October 2000.

    SciTech Connect

    Thayer, K.

    2000-10-16

    This report summarizes the research performed in the past year in the Argonne Physics Division. The Division's programs include operation of ATLAS as a national heavy-ion user facility, nuclear structure and reaction research with beams of heavy ions, accelerator research and development especially in superconducting radio frequency technology, nuclear theory and medium energy nuclear physics. The Division took significant strides forward in its science and its initiatives for the future in the past year. Major progress was made in developing the concept and the technology for the future advanced facility of beams of short-lived nuclei, the Rare Isotope Accelerator. The scientific program capitalized on important instrumentation initiatives with key advances in nuclear science. In 1999, the nuclear science community adopted the Argonne concept for a multi-beam superconducting linear accelerator driver as the design of choice for the next major facility in the field, a Rare Isotope Accelerator (RIA), as recommended by the Nuclear Science Advisory Committee's 1996 Long Range Plan. Argonne has made significant R&D progress on almost all aspects of the design concept including the fast gas catcher (to allow fast fragmentation beams to be stopped and reaccelerated) that in large part defined the RIA concept, the superconducting rf technology for the driver accelerator, the multiple-charge-state concept (to permit the facility to meet the design intensity goals with existing ion-source technology), and designs and tests of high-power target concepts to effectively deal with the full beam power of the driver linac. An NSAC subcommittee recommended the Argonne concept and set as the design goal uranium beams of 100-kW power at 400 MeV/u. Argonne demonstrated that this goal can be met with an innovative, but technically in-hand, design.

  10. Chemical Technology Division annual technical report, 1993

    SciTech Connect

    Battles, J.E.; Myles, K.M.; Laidler, J.J.; Green, D.W.

    1994-04-01

    During this period, the Chemical Technology (CMT) Division conducted research and development in the following areas: advanced batteries and fuel cells; fluidized-bed combustion and coal-fired magnetohydrodynamics; treatment of hazardous waste and mixed hazardous/radioactive waste; reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; separating and recovering transuranic elements, concentrating radioactive waste streams with advanced evaporators, and producing {sup 99}Mo from low-enriched uranium; recovering actinides from IFR core and blanket fuel, removing fission products from recycled fuel, and removal of actinides in spent fuel from commercial water-cooled nuclear reactors; and physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources and novel ceramic precursors; materials chemistry of superconducting oxides, electrified metal/solution interfaces, molecular sieve structures, thin-film diamond surfaces, effluents from wood combustion, and molten silicates; and the geochemical processes involved in water-rock interactions. The Analytical Chemistry Laboratory in CMT also provides a broad range of analytical chemistry support.

  11. Environmental Sciences Division annual progress report for period ending September 30, 1982. Environmental Sciences Division Publication No. 2090. [Lead abstract

    SciTech Connect

    Not Available

    1983-04-01

    Separate abstracts were prepared for 12 of the 14 sections of the Environmental Sciences Division annual progress report. The other 2 sections deal with educational activities. The programs discussed deal with advanced fuel energy, toxic substances, environmental impacts of various energy technologies, biomass, low-level radioactive waste management, the global carbon cycle, and aquatic and terrestrial ecology. (KRM)

  12. High Energy Physics Division semiannual report of research activities, January 1, 1992--June 30, 1992

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R.

    1992-11-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1992--June 30, 1992. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  13. High Energy Physics Division semiannual report of research activities, July 1, 1992--December 30, 1992

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R.

    1993-07-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1992--December 30, 1992. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  14. High Energy Physics Division semiannual report of research activities, July 1, 1993--December 31, 1993

    SciTech Connect

    Wagner, R.; Moonier, P.; Schoessow, P.; Talaga, R.

    1994-05-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1993--December 31, 1993. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  15. High Energy Physics division semiannual report of research activities, January 1, 1998--June 30, 1998.

    SciTech Connect

    Ayres, D. S.; Berger, E. L.; Blair, R.; Bodwin, G. T.; Drake, G.; Goodman, M. C.; Guarino, V.; Klasen, M.; Lagae, J.-F.; Magill, S.; May, E. N.; Nodulman, L.; Norem, J.; Petrelli, A.; Proudfoot, J.; Repond, J.; Schoessow, P. V.; Sinclair, D. K.; Spinka, H. M.; Stanek, R.; Underwood, D.; Wagner, R.; White, A. R.; Yokosawa, A.; Zachos, C.

    1999-03-09

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1998 through June 30, 1998. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of Division publications and colloquia are included.

  16. High Energy Physics Division semiannual report of research activities, July 1, 1991--December 31, 1991

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R.

    1992-04-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1991--December 31, 1991. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  17. High Energy Physics Division semiannual report of research activities July 1, 1997 - December 31, 1997.

    SciTech Connect

    Norem, J.; Rezmer, R.; Schuur, C.; Wagner, R.

    1998-08-11

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period July 1, 1997--December 31, 1997. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of Division publications and colloquia are included.

  18. High Energy Physics Division semiannual report of research activities, January 1, 1996--June 30, 1996

    SciTech Connect

    Norem, J.; Rezmer, R.; Wagner, R.

    1997-07-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1 - June 30, 1996. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. List of Division publications and colloquia are included.

  19. Division rules for polygonal cells.

    PubMed

    Cowan, R; Morris, V B

    1988-03-07

    A number of fascinating mathematical problems concerning the division of two-dimensional space are formulated from questions about the planes of cell division in embryonic epithelia. Their solution aids in the quantitative description of cellular arrangement in epithelia. Cells, considered as polygons, site their division line according to stochastic rules, eventually forming a tessellation of the plane. The equilibrium distributions for the resulting mix of polygonal types are explored for a range of stochastic rules. We find surprising links with some classical distributions from the theory of probability.
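
    As a purely illustrative sketch of the kind of stochastic model described (one arbitrary rule, invented here for demonstration and not one of the rules analysed in the paper): divide an n-sided cell by a chord joining interior points of two distinct, uniformly chosen edges, so the daughters have k+3 and n+1-k sides, and track the resulting mix of polygon types. Note that this toy version follows isolated cells only; it ignores the extra side a neighbouring cell gains when a division endpoint lands on a shared edge, which a full tessellation model must include.

      # Illustrative cell-division simulation with one arbitrary stochastic rule.
      import random
      from collections import Counter

      def divide(n_sides, rng):
          k = rng.randrange(n_sides - 1)     # edges strictly between the two cut edges, one way round
          return k + 3, n_sides + 1 - k      # side counts of the two daughter cells

      rng = random.Random(0)
      cells = [4] * 100                      # start from quadrilateral cells
      for _ in range(20):                    # every cell divides once per round
          cells = [side for n in cells for side in divide(n, rng)]

      mix = Counter(cells)
      for sides in sorted(mix):
          print(f"{sides}-gons: {mix[sides] / len(cells):.3f}")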

  20. Physics division annual report 2006.

    SciTech Connect

    Glover, J.; Physics

    2008-02-28

    This report highlights the activities of the Physics Division of Argonne National Laboratory in 2006. The Division's programs include the operation as a national user facility of ATLAS, the Argonne Tandem Linear Accelerator System, research in nuclear structure and reactions, nuclear astrophysics, nuclear theory, investigations in medium-energy nuclear physics as well as research and development in accelerator technology. The mission of nuclear physics is to understand the origin, evolution and structure of baryonic matter in the universe--the core of matter, the fuel of stars, and the basic constituent of life itself. The Division's research focuses on innovative new ways to address this mission.

  1. "Structure and dynamics in complex chemical systems: Gaining new insights through recent advances in time-resolved spectroscopies.” ACS Division of Physical Chemistry Symposium presented at the Fall National ACS Meeting in Boston, MA, August 2015

    SciTech Connect

    Crawford, Daniel

    2016-09-26

    8-Session Symposium on STRUCTURE AND DYNAMICS IN COMPLEX CHEMICAL SYSTEMS: GAINING NEW INSIGHTS THROUGH RECENT ADVANCES IN TIME-RESOLVED SPECTROSCOPIES. The intricacy of most chemical, biochemical, and material processes and their applications is underscored by the complex nature of the environments in which they occur. Substantial challenges for building a global understanding of a heterogeneous system include (1) identifying unique signatures associated with specific structural motifs within the heterogeneous distribution, and (2) resolving the significance of each of multiple time scales involved in both small- and large-scale nuclear reorganization. This symposium focuses on the progress in our understanding of dynamics in complex systems driven by recent innovations in time-resolved spectroscopies and theoretical developments. Such advancement is critical for driving discovery at the molecular level, facilitating new applications. Broad areas of interest include: structural relaxation and the impact of structure on dynamics in liquids, interfaces, biochemical systems, materials, and other heterogeneous environments.

  2. NASA Planetary Science Division's Instrument Development Programs, PICASSO and MatISSE

    NASA Astrophysics Data System (ADS)

    Gaier, J. R.

    2016-10-01

    The NASA Planetary Science Division's instrument development programs, Planetary Instrument Concept Advancing Solar System Observations (PICASSO), and Maturation of Instruments for Solar System Exploration Program (MatISSE), are described.

  3. Overview of NASA Glenn Research Center's Communications and Intelligent Systems Division

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.

    2016-01-01

    The Communications and Intelligent Systems Division provides expertise, plans, conducts and directs research and engineering development in the competency fields of advanced communications and intelligent systems technologies for application in current and future aeronautics and space systems.

  4. E-Division activities report

    SciTech Connect

    Barschall, H.H.

    1983-07-01

    This report describes some of the activities in E (Experimental Physics) Division during the past year. E-division carries out research and development in areas related to the missions of the Laboratory. Many of the activities are in pure and applied atomic and nuclear physics and in materials science. In addition, this report describes development work on accelerators and on instrumentation for plasma diagnostics, nitrogen exchange rates in tissue, and breakdown in gases by microwave pulses.

  5. E-Division activities report

    SciTech Connect

    Barschall, H.H.

    1981-07-01

    This report describes some of the activities in E (Experimental Physics) Division during the past year. E-Division carries out research and development in areas related to the missions of the Laboratory. Many of the activities are in pure and applied atomic and nuclear physics and in material science. In addition this report describes work on accelerators, microwaves, plasma diagnostics, determination of atmospheric oxygen and of nitrogen in tissue.

  6. The ASCI Network for SC '98: Dense Wave Division Multiplexing for Distributed and Distance Computing

    SciTech Connect

    Adams, R.L.; Butman, W.; Martinez, L.G.; Pratt, T.J.; Vahle, M.O.

    1999-06-01

    This document highlights the DISCOM Distance Computing and Communication team's activities at the 1998 Supercomputing conference in Orlando, Florida. This conference is sponsored by the IEEE and ACM. Sandia National Laboratories, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory have participated in this conference for ten years. For the last three years, the three laboratories have had a joint booth at the conference under the DOE's Accelerated Strategic Computing Initiative (ASCI). The DISCOM communication team uses the forum to demonstrate and focus communications and networking developments. At SC '98, DISCOM demonstrated the capabilities of Dense Wave Division Multiplexing. We exhibited an OC48 ATM encryptor. We also coordinated the other networking activities within the booth. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support overall strategies in ATM networking.

  7. Accelerator & Fusion Research Division: 1993 Summary of activities

    SciTech Connect

    Chew, J.

    1994-04-01

    The Accelerator and Fusion Research Division (AFRD) is not only one of the largest scientific divisions at LBL, but also one of the most diverse. Major efforts include: (1) investigations in both inertial and magnetic fusion energy; (2) operation of the Advanced Light Source, a state-of-the-art synchrotron radiation facility; (3) exploratory investigations of novel radiation sources and colliders; (4) research and development in superconducting magnets for accelerators and other scientific and industrial applications; and (5) ion beam technology development for nuclear physics and for industrial and biomedical applications. Each of these topics is discussed in detail in this book.

  8. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    SciTech Connect

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test ground for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with the parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM

  9. Seismic Sensors to Supercomputers: Internet Mapping and Computational Tools for Teaching and Learning about Earthquakes and the Structure of the Earth from Seismology

    NASA Astrophysics Data System (ADS)

    Meertens, C. M.; Seber, D.; Hamburger, M.

    2004-12-01

    The Internet has become an integral resource in the classrooms and homes of teachers and students. Widespread Web-access to seismic data and analysis tools enhances opportunities for teaching and learning about earthquakes and the structure of the earth from seismic tomography. We will present an overview and demonstration of the UNAVCO Voyager Java- and Javascript-based mapping tools (jules.unavco.org) and the Cornell University/San Diego Supercomputer Center (www.discoverourearth.org) Java-based data analysis and mapping tools. These map tools, datasets, and related educational websites have been developed and tested by collaborative teams of scientific programmers, research scientists, and educators. Dual-use by research and education communities ensures persistence of the tools and data, motivates on-going development, and encourages fresh content. Accompanying these tools are curricular materials and on-going evaluation processes that are essential for effective application in the classroom. The map tools provide not only seismological data and tomographic models of the earth's interior, but also a wealth of associated map data such as topography, gravity, sea-floor age, plate tectonic motions and strain rates determined from GPS geodesy, seismic hazard maps, stress, and a host of geographical data. These additional datasets help to provide context and enable comparisons leading to an integrated view of the planet and the on-going processes that shape it. Emerging Cyberinfrastructure projects such as the NSF-funded GEON Information Technology Research project (www.geongrid.org) are developing grid/web services, advanced visualization software, distributed databases and data sharing methods, concept-based search mechanisms, and grid-computing resources for earth science and education. These developments in infrastructure seek to extend the access to data and to complex modeling tools from the hands of a few researchers to a much broader set of users. The GEON

  10. Progress and supercomputing in computational fluid dynamics; Proceedings of U.S.-Israel Workshop, Jerusalem, Israel, December 1984

    NASA Technical Reports Server (NTRS)

    Murman, E. M. (Editor); Abarbanel, S. S. (Editor)

    1985-01-01

    Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.

  11. Beyond Cookies: Understanding Various Division Models

    ERIC Educational Resources Information Center

    Jong, Cindy; Magruder, Robin

    2014-01-01

    Having a deeper understanding of division derived from multiple models is of great importance for teachers and students. For example, students will benefit from a greater understanding of division contexts as they study long division, fractions, and division of fractions. The purpose of this article is to build on teachers' and students'…

  12. Chemical Technology Division annual technical report 1989

    SciTech Connect

    Not Available

    1990-03-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1989 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including high-performance batteries (mainly lithium/iron sulfide and sodium/metal chloride), aqueous batteries (lead-acid and nickel/iron), and advanced fuel cells with molten carbonate and solid oxide electrolytes; (2) coal utilization, including the heat and seed recovery technology for coal-fired magnetohydrodynamics plants and the technology for fluidized-bed combustion; (3) methods for recovery of energy from municipal waste and techniques for treatment of hazardous organic waste; (4) nuclear technology related to a process for separating and recovering transuranic elements from nuclear waste and for producing 99Mo from low-enriched uranium targets, the recovery processes for discharged fuel and the uranium blanket in a sodium-cooled fast reactor (the Integral Fast Reactor), and waste management; and (5) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of fluid catalysis for converting small molecules to desired products; materials chemistry for superconducting oxides and associated and ordered solutions at high temperatures; interfacial processes of importance to corrosion science, high-temperature superconductivity, and catalysis; and the geochemical processes responsible for trace-element migration within the earth's crust. The Division continued to be administratively responsible for and the major user of the Analytical Chemistry Laboratory at Argonne National Laboratory (ANL).

  13. Laboratory Astrophysics Division of The AAS (LAD)

    NASA Astrophysics Data System (ADS)

    Salama, Farid; Drake, R. P.; Federman, S. R.; Haxton, W. C.; Savin, D. W.

    2012-10-01

    The purpose of the Laboratory Astrophysics Division (LAD) is to advance our understanding of the Universe through the promotion of fundamental theoretical and experimental research into the underlying processes that drive the Cosmos. LAD represents all areas of astrophysics and planetary sciences. The first new AAS Division in more than 30 years, the LAD traces its history back to the recommendation from the scientific community via the White Paper from the 2006 NASA-sponsored Laboratory Astrophysics Workshop. This recommendation was endorsed by the Astronomy and Astrophysics Advisory Committee (AAAC), which advises the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the U.S. Department of Energy (DOE) on selected issues within the fields of astronomy and astrophysics that are of mutual interest and concern to the agencies. In January 2007, at the 209th AAS meeting, the AAS Council set up a Steering Committee to formulate Bylaws for a Working Group on Laboratory Astrophysics (WGLA). The AAS Council formally established the WGLA with a five-year mandate in May 2007, at the 210th AAS meeting. From 2008 through 2012, the WGLA annually sponsored Meetings in-a-Meeting at the AAS Summer Meetings. In May 2011, at the 218th AAS meeting, the AAS Council voted to convert the WGLA, at the end of its mandate, into a Division of the AAS and requested draft Bylaws from the Steering Committee. In January 2012, at the 219th AAS Meeting, the AAS Council formally approved the Bylaws and the creation of the LAD. The inaugural gathering and the first business meeting of the LAD were held at the 220th AAS meeting in Anchorage in June 2012. You can learn more about LAD by visiting its website at http://lad.aas.org/ and by subscribing to its mailing list.

  14. Laboratory Astrophysics Division of the AAS (LAD)

    NASA Technical Reports Server (NTRS)

    Salama, Farid; Drake, R. P.; Federman, S. R.; Haxton, W. C.; Savin, D. W.

    2012-01-01

    The purpose of the Laboratory Astrophysics Division (LAD) is to advance our understanding of the Universe through the promotion of fundamental theoretical and experimental research into the underlying processes that drive the Cosmos. LAD represents all areas of astrophysics and planetary sciences. The first new AAS Division in more than 30 years, the LAD traces its history back to the recommendation from the scientific community via the White Paper from the 2006 NASA-sponsored Laboratory Astrophysics Workshop. This recommendation was endorsed by the Astronomy and Astrophysics Advisory Committee (AAAC), which advises the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the U.S. Department of Energy (DOE) on selected issues within the fields of astronomy and astrophysics that are of mutual interest and concern to the agencies. In January 2007, at the 209th AAS meeting, the AAS Council set up a Steering Committee to formulate Bylaws for a Working Group on Laboratory Astrophysics (WGLA). The AAS Council formally established the WGLA with a five-year mandate in May 2007, at the 210th AAS meeting. From 2008 through 2012, the WGLA annually sponsored Meetings in-a-Meeting at the AAS Summer Meetings. In May 2011, at the 218th AAS meeting, the AAS Council voted to convert the WGLA, at the end of its mandate, into a Division of the AAS and requested draft Bylaws from the Steering Committee. In January 2012, at the 219th AAS Meeting, the AAS Council formally approved the Bylaws and the creation of the LAD. The inaugural gathering and the first business meeting of the LAD were held at the 220th AAS meeting in Anchorage in June 2012. You can learn more about LAD by visiting its website at http://lad.aas.org/ and by subscribing to its mailing list.

  15. Chemical technology division: Annual technical report 1987

    SciTech Connect

    Not Available

    1988-05-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1987 are presented. In this period, CMT conducted research and development in the following areas: (1) high-performance batteries--mainly lithium-alloy/metal sulfide and sodium/sulfur; (2) aqueous batteries (lead-acid, nickel/iron, etc.); (3) advanced fuel cells with molten carbonate or solid oxide electrolytes; (4) coal utilization, including the heat and seed recovery technology for coal-fired magnetohydrodynamics plants and the technology for fluidized-bed combustion; (5) methods for the electromagnetic continuous casting of steel sheet and for the purification of ferrous scrap; (6) methods for recovery of energy from municipal waste and techniques for treatment of hazardous organic waste; (7) nuclear technology related to a process for separating and recovering transuranic elements from nuclear waste, the recovery processes for discharged fuel and the uranium blanket in a sodium-cooled fast reactor, and waste management; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of fluid catalysis for converting small molecules to desired products; materials chemistry for liquids and vapors at high temperatures; interfacial processes of importance to corrosion science, high-temperature superconductivity, and catalysis; the thermochemistry of various minerals; and the geochemical processes responsible for trace-element migration within the earth's crust. The Division continued to be the major user of the technical support provided by the Analytical Chemistry Laboratory at ANL. 54 figs., 9 tabs.

  16. Chemical Technology Division annual technical report, 1986

    SciTech Connect

    Not Available

    1987-06-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1986 are presented. In this period, CMT conducted research and development in areas that include the following: (1) high-performance batteries - mainly lithium-alloy/metal sulfide and sodium/sulfur; (2) aqueous batteries (lead-acid, nickel/iron, etc.); (3) advanced fuel cells with molten carbonate or solid oxide electrolytes; (4) coal utilization, including the heat and seed recovery technology for coal-fired magnetohydrodynamics plants, the technology for fluidized-bed combustion, and a novel concept for CO2 recovery from fossil fuel combustion; (5) methods for recovery of energy from municipal waste; (6) methods for the electromagnetic continuous casting of steel sheet; (7) techniques for treatment of hazardous waste such as reactive metals and trichloroethylenes; (8) nuclear technology related to waste management, a process for separating and recovering transuranic elements from nuclear waste, and the recovery processes for discharged fuel and the uranium blanket in a sodium-cooled fast reactor; and (9) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of catalytic hydrogenation and catalytic oxidation; materials chemistry for associated and ordered solutions at high temperatures; interfacial processes of importance to corrosion science, surface science, and catalysis; the thermochemistry of zeolites and related silicates; and the geochemical processes responsible for trace-element migration within the earth's crust. The Division continued to be the major user of the technical support provided by the Analytical Chemistry Laboratory at ANL. 127 refs., 71 figs., 8 tabs.

  17. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  18. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer.

    PubMed

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-10-13

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ∼30k cores, producing ∼30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.
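
    The reaction-field (RF) treatment referenced above can be made concrete with a minimal sketch, in reduced units, of the truncated and shifted RF pair interaction commonly used in molecular dynamics. This is illustrative only and is not the benchmark code from the study; the cutoff, dielectric constant, particle count, and box size below are arbitrary placeholders.

```python
# Minimal reaction-field electrostatics sketch (reduced units, illustrative).
import numpy as np

def reaction_field_energy(positions, charges, box, r_cut=1.2, eps_rf=78.0):
    """Pairwise RF electrostatic energy with a cutoff and minimum-image PBC.

    E_ij = q_i q_j * (1/r + k_rf r^2 - c_rf) for r < r_cut, else 0, with
    k_rf = (eps_rf - 1) / ((2 eps_rf + 1) r_cut^3) and
    c_rf = 1/r_cut + k_rf r_cut^2 (shifts the energy to zero at the cutoff).
    """
    k_rf = (eps_rf - 1.0) / ((2.0 * eps_rf + 1.0) * r_cut**3)
    c_rf = 1.0 / r_cut + k_rf * r_cut**2
    energy = 0.0
    for i in range(len(charges) - 1):
        # Minimum-image displacements from particle i to all later particles.
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)
        r = np.linalg.norm(d, axis=1)
        mask = r < r_cut
        qq = charges[i] * charges[i + 1:][mask]
        rm = r[mask]
        energy += np.sum(qq * (1.0 / rm + k_rf * rm**2 - c_rf))
    return energy

# Example: a small, overall-neutral random system in a cubic box of side 3.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 3.0, size=(100, 3))
q = np.tile([1.0, -1.0], 50)
print(reaction_field_energy(pos, q, box=3.0))
```

    Because the c_rf term makes the interaction vanish smoothly at the cutoff, the sum stays strictly short-ranged, which is what allows the kind of efficient scaling to many thousands of cores described in the record above.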

  19. Hurricane Intensity Forecasts with a Global Mesoscale Model on the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Atlas, Robert

    2006-01-01

    It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. The increasing capabilities of high-end computers (e.g., the NASA Columbia Supercomputer) have changed this. In 2004, the finite-volume General Circulation Model at a 1/4 degree resolution, doubling the resolution used by most operational NWP centers at that time, was implemented and run to obtain promising landfall predictions for major hurricanes (e.g., Charley, Frances, Ivan, and Jeanne). In 2005, we have successfully implemented the 1/8 degree version, and demonstrated its performance on intensity forecasts with hurricane Katrina (2005). It is found that the 1/8 degree model is capable of simulating the radius of maximum wind and near-eye wind structure, and thereby promising intensity forecasts. In this study, we will further evaluate the model's performance on intensity forecasts of hurricanes Ivan, Jeanne, Karl in 2004. Suggestions for further model development will be made at the end.

  20. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    SciTech Connect

    Helland, B.; Summers, B.G.

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  1. Architecture and design of a 500-MHz gallium-arsenide processing element for a parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Fouts, Douglas J.; Butner, Steven E.

    1991-01-01

    The design of the processing element of GASP, a GaAs supercomputer with a 500-MHz instruction issue rate and 1-GHz subsystem clocks, is presented. The novel, functionally modular, block data flow architecture of GASP is described. The architecture and design of a GASP processing element is then presented. The processing element (PE) is implemented in a hybrid semiconductor module with 152 custom GaAs ICs of eight different types. The effects of the implementation technology on both the system-level architecture and the PE design are discussed. SPICE simulations indicate that parts of the PE are capable of being clocked at 1 GHz, while the rest of the PE uses a 500-MHz clock. The architecture utilizes data flow techniques at a program block level, which allows efficient execution of parallel programs while maintaining reasonably good performance on sequential programs. A simulation study of the architecture indicates that an instruction execution rate of over 30,000 MIPS can be attained with 65 PEs.

  2. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    SciTech Connect

    Swaminarayan, Sriram; Germann, Timothy C; Kadau, Kai; Fossum, Gordon C

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/s per Watt at a price of approximately 3.69 MFlops/dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
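
    For reference, the Lennard-Jones pair potential named in the benchmark is the standard 12-6 form. The short sketch below (reduced units, arbitrary parameters, not the SPaSM/Cell implementation) shows the energy and force evaluated for a single pair.

```python
# Lennard-Jones 12-6 pair energy and force, reduced units (illustrative).
import numpy as np

def lj_force_energy(r_vec, epsilon=1.0, sigma=1.0, r_cut=2.5):
    """Return (energy, force-on-particle-1) for one pair separated by r_vec.

    V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6], truncated at r_cut.
    """
    r2 = np.dot(r_vec, r_vec)
    if r2 > r_cut * r_cut:
        return 0.0, np.zeros_like(r_vec)
    inv_r2 = sigma * sigma / r2          # (sigma/r)^2
    inv_r6 = inv_r2**3                   # (sigma/r)^6
    energy = 4.0 * epsilon * (inv_r6**2 - inv_r6)
    # F = -dV/dr * r_hat = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2 * r_vec
    force = 24.0 * epsilon * (2.0 * inv_r6**2 - inv_r6) / r2 * r_vec
    return energy, force

e, f = lj_force_energy(np.array([1.1, 0.0, 0.0]))
print(e, f)
```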

  3. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    NASA Astrophysics Data System (ADS)

    de Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu, Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-02-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding by synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for 2 implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA) running on clusters of multicore supercomputers and NVIDIA graphical processing units respectively. A global spiking list that represents at each instant the state of the neural network is described. This list indexes each neuron that fires during the current simulation time so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows a good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers a better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost to performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel with 64 nodes (128 cores).
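
    The "global spiking list" idea described above can be illustrated with a small serial sketch: at every time step, the indices of neurons crossing threshold are collected into one list, and the influence of all of those spikes is applied to the whole network in a single pass. The dynamics, weights, and sizes below are placeholders, not the ODLM model or its MPI/CUDA implementations.

```python
# Serial sketch of a global spiking list with toy integrate-and-fire dynamics.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 1000
weights = rng.normal(0.0, 0.05, size=(n_neurons, n_neurons))  # synaptic matrix
v = rng.uniform(0.0, 1.0, size=n_neurons)                      # membrane potentials
threshold, decay, n_steps = 1.0, 0.95, 200

for step in range(n_steps):
    spiking_list = np.flatnonzero(v >= threshold)   # neurons firing this step
    v[spiking_list] = 0.0                           # reset fired neurons
    # Influence of all spikes processed together: sum the weight-matrix
    # columns corresponding to the firing neurons.
    v += weights[:, spiking_list].sum(axis=1)
    v = decay * v + rng.uniform(0.0, 0.05, size=n_neurons)  # leak + drive
    if step % 50 == 0:
        print(f"step {step}: {spiking_list.size} neurons fired")
```

    In a distributed setting, each rank would build its local portion of the list and exchange it (e.g., with an all-gather) before the joint update, which is the step the paper parallelizes with MPI and CUDA.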

  4. A user-friendly web portal for T-Coffee on supercomputers

    PubMed Central

    2011-01-01

    Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of the large-scale sequence alignments. It can be run on distributed memory clusters allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of TC that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution. PMID:21569428

  5. Massively-parallel electrical-conductivity imaging of hydrocarbons using the Blue Gene/L supercomputer

    SciTech Connect

    Commer, M.; Newman, G.A.; Carazzone, J.J.; Dickens, T.A.; Green, K.E.; Wahrmund, L.A.; Willen, D.E.; Shiu, J.

    2007-05-16

    Large-scale controlled source electromagnetic (CSEM) three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. To cope with the typically large computational requirements of the 3D CSEM imaging problem, our strategies exploit computational parallelism and optimized finite-difference meshing. We report on an imaging experiment, utilizing 32,768 tasks/processors on the IBM Watson Research Blue Gene/L (BG/L) supercomputer. Over a 24-hour period, we were able to image a large scale marine CSEM field data set that previously required over four months of computing time on distributed clusters utilizing 1024 tasks on an Infiniband fabric. The total initial data misfit could be decreased by 67 percent within 72 completed inversion iterations, indicating an electrically resistive region in the southern survey area below a depth of 1500 m below the seafloor. The major part of the residual misfit stems from transmitter-parallel receiver components that have an offset from the transmitter sail line (broadside configuration). Modeling confirms that improved broadside data fits can be achieved by considering anisotropic electrical conductivities. While delivering a satisfactory gross scale image for the depths of interest, the experiment provides important evidence for the necessity of discriminating between horizontal and vertical conductivities for maximally consistent 3D CSEM inversions.

  6. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    SciTech Connect

    Bethel, E Wes; Brugger, Eric

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources - the 'Big Iron.' Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be - that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  7. Simulating the Dynamics of Earth's Core: Using NCCS Supercomputers Speeds Calculations

    NASA Technical Reports Server (NTRS)

    2002-01-01

    If one wanted to study Earth's core directly, one would have to drill through about 1,800 miles of solid rock to reach the liquid core, keeping the tunnel from collapsing under pressures that are more than 1 million atmospheres, and then sink an instrument package to the bottom that could operate at 8,000 F with 10,000 tons of force crushing every square inch of its surface. Even then, several of these tunnels would probably be needed to obtain enough data. Faced with difficult or impossible tasks such as these, scientists use other available sources of information - such as seismology, mineralogy, geomagnetism, geodesy, and, above all, physical principles - to derive a model of the core and study it by running computer simulations. One NASA researcher is doing just that on NCCS computers. Physicist and applied mathematician Weijia Kuang, of the Space Geodesy Branch, and his collaborators at Goddard have what he calls the "second-ever" working, usable, self-consistent, fully dynamic, three-dimensional geodynamic model (see "The Geodynamic Theory"). Kuang runs his model simulations on the supercomputers at the NCCS. He and Jeremy Bloxham, of Harvard University, developed the original version, written in Fortran 77, in 1996.

  8. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    PubMed

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources-the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be-that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  9. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    SciTech Connect

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  10. A Framework for HI Spectral Source Finding Using Distributed-Memory Supercomputing

    NASA Astrophysics Data System (ADS)

    Westerlund, Stefan; Harris, Christopher

    2014-05-01

    The latest generation of radio astronomy interferometers will conduct all sky surveys with data products consisting of petabytes of spectral line data. Traditional approaches to identifying and parameterising the astrophysical sources within this data will not scale to datasets of this magnitude, since the performance of workstations will not keep up with the real-time generation of data. For this reason, it is necessary to employ high performance computing systems consisting of a large number of processors connected by a high-bandwidth network. In order to make use of such supercomputers, substantial modifications must be made to serial source finding code. To ease the transition, this work presents the Scalable Source Finder Framework, a framework providing storage access, networking communication and data composition functionality, which can support a wide range of source finding algorithms provided they can be applied to subsets of the entire image. Additionally, the Parallel Gaussian Source Finder was implemented using SSoFF, utilising Gaussian filters, thresholding, and local statistics. PGSF was able to search a 256 GB simulated dataset in under 24 minutes, significantly less than the 8- to 12-hour observation that would generate such a dataset.
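
    As a rough illustration of the filter-and-threshold stage attributed to the Parallel Gaussian Source Finder, the sketch below runs on a single small in-memory cube. The SSoFF distributed-memory machinery, local statistics, and source parameterisation are omitted, and all sizes, sigmas, and thresholds are invented for the example.

```python
# Toy spectral-cube source finding: smooth, threshold, label (illustrative).
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(2)
cube = rng.normal(0.0, 1.0, size=(64, 128, 128))    # (channel, y, x) noise
cube[30:34, 60:66, 60:66] += 4.0                      # one fake spectral-line source

smoothed = gaussian_filter(cube, sigma=(2, 3, 3))     # match the expected source size
noise = np.std(smoothed)                              # crude global noise estimate
mask = smoothed > 5.0 * noise                         # threshold in units of sigma

labels, n_sources = label(mask)                       # group connected voxels
print(f"detected {n_sources} candidate source(s)")
for k in range(1, n_sources + 1):
    voxels = np.argwhere(labels == k)
    print("source", k, "spans channels", voxels[:, 0].min(), "to", voxels[:, 0].max())
```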

  11. Scalability study of parallel spatial direct numerical simulation code on IBM SP1 parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad

    1994-01-01

    The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances that are associated with the laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.

  12. Hurricane Intensity Forecasts with a Global Mesoscale Model on the NASA Columbia Supercomputer

    NASA Astrophysics Data System (ADS)

    Shen, B.; Tao, W.; Atlas, R.

    2006-12-01

    It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. The increasing capabilities of high-end computers (e.g., the NASA Columbia Supercomputer) have changed this. In 2004, the finite-volume General Circulation Model at a 1/4 degree resolution, doubling the resolution used by most operational NWP centers at that time, was implemented and run to obtain promising landfall predictions for major hurricanes (e.g., Charley, Frances, Ivan, and Jeanne). In 2005, we have successfully implemented the 1/8 degree version, and demonstrated its performance on intensity forecasts with hurricane Katrina (2005). It is found that the 1/8 degree model is capable of simulating the radius of maximum wind and near-eye wind structure, and thereby promising intensity forecasts. In this study, we will further evaluate the model's performance on intensity forecasts of hurricanes Ivan, Jeanne, Karl in 2004. Suggestions for further model development will be made at the end.

  13. Divisions of geologic time-major chronostratigraphic and geochronologic units

    USGS Publications Warehouse

    ,

    2010-01-01

    Effective communication in the geosciences requires consistent uses of stratigraphic nomenclature, especially divisions of geologic time. A geologic time scale is composed of standard stratigraphic divisions based on rock sequences and is calibrated in years. Over the years, the development of new dating methods and the refinement of previous methods have stimulated revisions to geologic time scales. Advances in stratigraphy and geochronology require that any time scale be periodically updated. Therefore, Divisions of Geologic Time, which shows the major chronostratigraphic (position) and geochronologic (time) units, is intended to be a dynamic resource that will be modified to include accepted changes of unit names and boundary age estimates. This fact sheet is a modification of USGS Fact Sheet 2007-3015 by the U.S. Geological Survey Geologic Names Committee.

  14. Performance Evaluation of the Electrostatic Particle-in-Cell Code hPIC on the Blue Waters Supercomputer

    NASA Astrophysics Data System (ADS)

    Khaziev, Rinat; Mokos, Ryan; Curreli, Davide

    2016-10-01

    The newly-developed hPIC code is a kinetic-kinetic electrostatic Particle-in-Cell application, targeted at large-scale simulations of Plasma-Material Interactions. The code can simulate multi-component strongly-magnetized plasmas in a region close to the wall, including the magnetic sheath/presheath and the first surface layers, which release material impurities. The Poisson solver is based on PETSc conjugate gradient with BoomerAMG algebraic multigrid preconditioners. Scaling tests on the Blue Waters supercomputer have demonstrated good strong-scaling up to 262,144 cores and excellent weak-scaling (tested up to 64,000 cores). In this presentation, we will give an overview of the on-node optimization activities and the main code features, as well as provide a detailed analysis of the results of the verification tests performed. Work supported by the NCSA Faculty Fellowship Program at the National Center for Supercomputing Applications; supercomputing resources provided by Exploratory Blue Waters Allocation.
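
    For orientation, the sketch below shows the deposit/solve/push cycle of a minimal 1D electrostatic particle-in-cell step in normalized units. It is not hPIC: the PETSc conjugate-gradient solver with BoomerAMG preconditioning mentioned above is replaced by a periodic FFT Poisson solve for brevity, and the grid size, particle count, and time step are arbitrary.

```python
# Minimal 1D electrostatic PIC cycle (periodic, normalized units, illustrative).
import numpy as np

ng, n_part, L, dt = 64, 10000, 2 * np.pi, 0.1
dx = L / ng
rng = np.random.default_rng(3)
x = rng.uniform(0, L, n_part)
v = rng.normal(0.0, 1.0, n_part)
q_per_particle = L / n_part           # neutralizing ion background of density 1

k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
k[0] = 1.0                            # avoid division by zero for the mean mode

for step in range(100):
    # 1) Charge deposition: nearest-grid-point weighting.
    idx = (x / dx).astype(int) % ng
    rho = np.bincount(idx, minlength=ng) * q_per_particle / dx - 1.0
    # 2) Poisson solve d^2(phi)/dx^2 = -rho via FFT, then E = -d(phi)/dx.
    rho_k = np.fft.fft(rho)
    phi_k = rho_k / k**2
    phi_k[0] = 0.0
    E = np.real(np.fft.ifft(-1j * k * phi_k))
    # 3) Gather field to particles and push (electrons, q/m = -1).
    v += -E[idx] * dt
    x = (x + v * dt) % L

print("final field energy:", 0.5 * np.sum(E**2) * dx)
```

    A production sheath code works on a bounded, often multidimensional and magnetized domain, which is why an iterative preconditioned solver of the kind named in the record is used there instead of the periodic FFT shortcut.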

  15. Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

    NASA Astrophysics Data System (ADS)

    Kennedy, J. A.; Kluth, S.; Mazzaferro, L.; Walker, Rodney

    2015-12-01

    The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems, and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems to a more generic Linux-type platform. This change means that the deployment of non-HPC-specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high luminosity data taking in 2015. This high luminosity phase will be accompanied by a need for increasing amounts of simulated data which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP/RZG to provide access to the Hydra supercomputer facility. Hydra, the supercomputer of the Max Planck Society, is a Linux-based system with over 80,000 cores and 4,000 physical nodes located at the RZG near Munich. This paper describes the work undertaken to integrate Hydra into the ATLAS production system by using the Nordugrid ARC-CE and other standard Grid components. The customization of these components and the strategies for HPC usage are discussed as well as possibilities for future directions.

  16. Chemical Technology Division, Annual technical report, 1991

    SciTech Connect

    Not Available

    1992-03-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1991 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion and coal-fired magnetohydrodynamics; (3) methods for treatment of hazardous and mixed hazardous/radioactive waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources; chemistry of superconducting oxides and other materials of interest with technological application; interfacial processes of importance to corrosion science, catalysis, and high-temperature superconductivity; and the geochemical processes involved in water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  17. Effects of Polyhydroxybutyrate Production on Cell Division

    NASA Technical Reports Server (NTRS)

    Miller, Kathleen; Rahman, Asif; Hadi, Masood Z.

    2015-01-01

    Synthetic biological engineering can be utilized to aid the advancement of improved long-term space flight. The potential to use synthetic biology as a platform to biomanufacture desired equipment on demand using the three dimensional (3D) printer on the International Space Station (ISS) gives long-term NASA missions the flexibility to produce materials as needed on site. Polyhydroxybutyrates (PHBs) are biodegradable, have properties similar to plastics, and can be produced in Escherichia coli using genetic engineering. Using PHBs during space flight could assist mission success by providing a valuable source of biomaterials that can have many potential applications, particularly through 3D printing. It is well documented that during PHB production E. coli cells can become significantly elongated. The elongation of cells reduces the ability of the cells to divide and thus to produce PHB. I aim to better understand cell division during PHB production through the design, building, and testing of synthetic biological circuits, and to identify how to potentially increase yields of PHB by overexpressing FtsZ, the gene responsible for cell division. Ultimately, an increase in the yield will allow more products to be created using the 3D printer on the ISS and beyond, thus aiding astronauts in their missions.

  18. Building an Academic Colorectal Division

    PubMed Central

    Koltun, Walter A.

    2014-01-01

    Colon and rectal surgery is fully justified as a valid subspecialty within academic university health centers, but such formal recognition at the organizational level is not the norm. Creating a colon and rectal division within a greater department of surgery requires an unfailing commitment to academic concepts while promulgating the improvements that come in patient care, research, and teaching from a specialty service perspective. The creation of divisional identity then opens the door for a strategic process that will grow the division even more as well as provide benefits to the institution within which it resides. The fundamentals of core values, academic commitment, and shared success reinforced by receptive leadership are critical. Attention to culture, commitment, collaboration, control, cost, and compensation leads to a successful academic division of colon and rectal surgery. PMID:25067922

  19. A site oriented supercomputer for theoretical physics: The Fermilab Advanced Computer Program Multi Array Processor System (ACMAPS)

    SciTech Connect

    Nash, T.; Atac, R.; Cook, A.; Deppe, J.; Fischler, M.; Gaines, I.; Husby, D.; Pham, T.; Zmuda, T.; Eichten, E.

    1989-03-06

    The ACPMAPS multiprocessor is a highly cost-effective, local-memory parallel computer with a hypercube or compound hypercube architecture. Communication requires the attention of only the two communicating nodes. The design is aimed at floating-point-intensive, grid-like problems, particularly those with extreme computing requirements. The processing nodes of the system are single board array processors, each with a peak power of 20 Mflops, supported by 8 Mbytes of data and 2 Mbytes of instruction memory. The system currently being assembled has a peak power of 5 Gflops. The nodes are based on the Weitek XL Chip set. The system delivers performance at approximately $300/Mflop. 8 refs., 4 figs.
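
    Reading the quoted figures together (a back-of-the-envelope interpretation, not stated in the record): at 20 Mflops peak per node, a 5 Gflops system implies on the order of 250 processing nodes, and at roughly $300/Mflop the full system corresponds to roughly $1.5M:

```latex
\[
\frac{5000\ \text{Mflops}}{20\ \text{Mflops/node}} \approx 250\ \text{nodes},
\qquad
5000\ \text{Mflops} \times \$300/\text{Mflop} \approx \$1.5\ \text{million}.
\]
```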

  20. Advanced planetary studies

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Results of planetary advanced studies and planning support provided by Science Applications, Inc. staff members to Earth and Planetary Exploration Division, OSSA/NASA, for the period 1 February 1981 to 30 April 1982 are summarized. The scope of analyses includes cost estimation, planetary missions performance, solar system exploration committee support, Mars program planning, Galilean satellite mission concepts, and advanced propulsion data base. The work covers 80 man-months of research. Study reports and related publications are included in a bibliography section.

  1. GSFC Heliophysics Science Division 2009 Science Highlights

    NASA Technical Reports Server (NTRS)

    Strong, Keith T.; Saba, Julia L. R.; Strong, Yvonne M.

    2009-01-01

    This report is intended to record and communicate to our colleagues, stakeholders, and the public at large about heliophysics scientific and flight program achievements and milestones for 2009, for which NASA Goddard Space Flight Center's Heliophysics Science Division (HSD) made important contributions. HSD comprises approximately 299 scientists, technologists, and administrative personnel dedicated to the goal of advancing our knowledge and understanding of the Sun and the wide variety of domains that its variability influences. Our activities include: Leading science investigations involving flight hardware, theory, and data analysis and modeling that will answer the strategic questions posed in the Heliophysics Roadmap; Leading the development of new solar and space physics mission concepts and supporting their implementation as Project Scientists; Providing access to measurements from the Heliophysics Great Observatory through our Science Information Systems; and Communicating science results to the public and inspiring the next generation of scientists and explorers.

  2. GSFC Heliophysics Science Division 2008 Science Highlights

    NASA Technical Reports Server (NTRS)

    Gilbert, Holly R.; Strong, Keith T.; Saba, Julia L. R.; Firestone, Elaine R.

    2009-01-01

    This report is intended to record and communicate to our colleagues, stakeholders, and the public at large about heliophysics scientific and flight program achievements and milestones for 2008, for which NASA Goddard Space Flight Center's Heliophysics Science Division (HSD) made important contributions. HSD comprises approximately 261 scientists, technologists, and administrative personnel dedicated to the goal of advancing our knowledge and understanding of the Sun and the wide variety of domains that its variability influences. Our activities include: Leading science investigations involving flight hardware, theory, and data analysis and modeling that will answer the strategic questions posed in the Heliophysics Roadmap; Leading the development of new solar and space physics mission concepts and supporting their implementation as Project Scientists; Providing access to measurements from the Heliophysics Great Observatory through our Science Information Systems; and Communicating science results to the public and inspiring the next generation of scientists and explorers.

  3. Advanced fossil energy utilization

    SciTech Connect

    Shekhawat, D.; Berry, D.; Spivey, J.; Pennline, H.; Granite, E.

    2010-01-01

    This special issue of Fuel is a selection of papers presented at the symposium ‘Advanced Fossil Energy Utilization’, co-sponsored by the Fuels and Petrochemicals Division and the Research and New Technology Committee at the 2009 American Institute of Chemical Engineers (AIChE) Spring National Meeting, Tampa, FL, April 26–30, 2009.

  4. Advanced Aerospace Materials by Design

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Djomehri, Jahed; Wei, Chen-Yu

    2004-01-01

    The advances in the emerging field of nanophase thermal and structural composite materials; materials with embedded sensors and actuators for morphing structures; light-weight composite materials for energy and power storage; and large surface area materials for in-situ resource generation and waste recycling, are expected to revolutionize the capabilities of virtually every system comprising future robotic and human Moon and Mars exploration missions. A high-performance multiscale simulation platform, including the computational capabilities and resources of Columbia - the new supercomputer, is being developed to discover, validate, and prototype the next generation of such advanced materials. This exhibit will describe the porting and scaling of multiscale physics-based core computer simulation codes for discovering and designing carbon nanotube-polymer composite materials for light-weight load-bearing structural and thermal protection applications.

  5. Manpower Division Looks at CETA

    ERIC Educational Resources Information Center

    American Vocational Journal, 1977

    1977-01-01

    The Manpower Division at the American Vocational Association (AVA) convention in Houston was concerned about youth unemployment and about the Comprehensive Employment and Training Act (CETA)--its problems and possibilities. The panel discussion reported here reveals some differing perspectives and a general consensus--that to improve their role in…

  6. Home | Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention (DCP) conducts and supports research to determine a person's risk of cancer and to find ways to reduce the risk. This knowledge is critical to making progress against cancer because risk varies over the lifespan as genetic and epigenetic changes can transform healthy tissue into cancer.

  7. Divisions of geologic time (Bookmark)

    USGS Publications Warehouse

    ,

    2012-05-03

    This bookmark, designed for use with U.S. Geological Survey activities at the second USA Science and Engineering Festival (April 26–29, 2012), is adapted from the more detailed Fact Sheet 2010–3059 "Divisions of Geologic Time." The information that it presents is widely sought by educators and students.

  8. Understanding Partitive Division of Fractions.

    ERIC Educational Resources Information Center

    Ott, Jack M.; And Others

    1991-01-01

    Concrete experience should be a first step in the development of new abstract concepts and their symbolization. Presents concrete activities based on Hyde and Nelson's work with egg cartons and Steiner's work with money to develop students' understanding of partitive division when using fractions. (MDH)
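
    A worked example of the partitive (sharing) interpretation, in the spirit of the egg-carton activities mentioned above (the specific numbers are illustrative, not taken from the article): sharing 3/4 of a dozen eggs equally among 3 people gives each person 1/4 of a dozen,

```latex
\[
\frac{3}{4} \div 3 \;=\; \frac{3}{4} \times \frac{1}{3} \;=\; \frac{1}{4},
\qquad \text{i.e., 9 eggs shared among 3 people is 3 eggs each, and } 3 \text{ eggs} = \tfrac{1}{4} \text{ of a dozen.}
\]
```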

  9. Environmental Transport Division: 1979 report

    SciTech Connect

    Murphy, C.E. Jr.; Schubert, J.F.; Bowman, W.W.; Adams, S.E.

    1980-03-01

    During 1979, the Environmental Transport Division (ETD) of the Savannah River Laboratory conducted atmospheric, terrestrial, aquatic, and marine studies, which are described in a series of articles. Separate abstracts were prepared for each. Publications written about the 1979 research are listed at the end of the report.

  10. 75 FR 16178 - Antitrust Division

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-31

    ..., 15 U.S.C. 4301 et seq. ("the Act"), Joint Venture Agreement Between Cambridge Major Laboratories... Antitrust Division Notice Pursuant to the National Cooperative Research and Production Act of 1993--Joint Venture Agreement Between Cambridge Major Laboratories, Inc. and Konarka Technologies, Inc.,...

  11. Technology advances for magnetic bearings

    NASA Astrophysics Data System (ADS)

    Nolan, Steve; Hung, John Y.

    1996-03-01

    This paper describes the state-of-the-art in magnetic bearing technology and applications, and some of the advances under development through the joint efforts of the Rocketdyne Division of Rockwell International and Auburn University. Advances in the areas of nonlinear control systems design, digital controller implementation, and power electronics are discussed.

  12. Parallel simulation of tsunami inundation on a large-scale supercomputer

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers is helpful to enable faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
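
    The load-balancing scheme described above (CPUs allocated to nested layers in proportion to their grid-point counts, then a 1-D decomposition within each layer) can be sketched as follows. The layer sizes and CPU count are invented for illustration, and the actual TUNAMI-N2-based solver is not reproduced.

```python
# Sketch of proportional CPU allocation over nested grids plus 1-D slab split.
def allocate_cpus(layer_points, total_cpus):
    """Allocate CPUs to layers in proportion to grid points (>=1 per layer)."""
    total = sum(layer_points)
    alloc = [max(1, round(total_cpus * n / total)) for n in layer_points]
    # Adjust the largest allocation so the counts sum exactly to total_cpus.
    alloc[alloc.index(max(alloc))] += total_cpus - sum(alloc)
    return alloc

def slab_ranges(ny, n_cpus):
    """1-D decomposition of ny rows into n_cpus contiguous slabs."""
    base, extra = divmod(ny, n_cpus)
    ranges, start = [], 0
    for r in range(n_cpus):
        size = base + (1 if r < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

# Hypothetical nesting: a coarse ocean grid and two finer coastal grids.
layers = [1200 * 900, 2400 * 1800, 4800 * 3600]
cpus = allocate_cpus(layers, total_cpus=64)
print("CPUs per layer:", cpus)
print("first slabs of finest layer:", slab_ranges(3600, cpus[-1])[:4], "...")
```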

  13. Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Ethier, Stephane

    2007-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than implied by peak performance in order to achieve desired performance. The latest generation of custom-built parallel vector systems has the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: magnetic fusion (GTC), plasma physics (LBMHD3D), astrophysics (Cactus), and material science (PARATEC). We compare the performance of the vector-based Cray X1, Cray X1E, Earth Simulator, and NEC SX-8 with that of three leading commodity-based superscalar platforms utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our work makes several significant contributions: a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the Teraflop barrier; the introduction of a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence that achieves over 26 Tflop/s on 4800 ES processors; the highest per processor performance (by far) achieved by the full-production version of the Cactus ADM-BSSN; and the largest PARATEC cell size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.
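
    The aggregate figures quoted above translate directly into per-processor rates; the arithmetic below simply restates the LBMHD3D number from the abstract (26 Tflop/s on 4800 Earth Simulator processors) to make the scale concrete.

        # Per-processor rate implied by the LBMHD3D figure quoted in the abstract.
        aggregate_tflops = 26.0       # sustained aggregate rate on the Earth Simulator
        processors = 4800
        per_proc_gflops = aggregate_tflops / processors * 1000.0
        print(f"{per_proc_gflops:.1f} GFlop/s sustained per processor")   # ~5.4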

  14. Influence of Earth crust composition on continental collision style in Precambrian conditions: Results of supercomputer modelling

    NASA Astrophysics Data System (ADS)

    Zavyalov, Sergey; Zakharov, Vladimir

    2016-04-01

    A number of issues concerning Precambrian geodynamics remain unsolved because of the uncertainty in many physical (thermal regime, lithosphere thickness, crust thickness, etc.) and chemical (mantle composition, crust composition) parameters, which differed considerably from their present-day values. In this work, we show results of numerical supercomputer simulations based on a petrological and thermomechanical 2D model, which simulates the process of collision between two continental plates, each 80-160 km thick, with convergence rates ranging from 5 to 15 cm/year. In the model, the upper mantle temperature is 150-200 °C higher than the modern value, while the continental crust radiogenic heat production is higher than the present value by a factor of 1.5. These settings correspond to Archean conditions. The present study investigates the dependence of collision style on various continental crust parameters, especially on crust composition. The following three archetypal settings of continental crust composition are examined: 1) a completely felsic continental crust; 2) a basic lower crust and felsic upper crust; 3) a basic upper crust and felsic lower crust (hereinafter referred to as inverted crust). Modeling results show that collision with a completely felsic crust is unlikely. In the case of a basic lower crust, continental subduction and subsequent exhumation of continental rocks can take place. Therefore, the formation of ultra-high-pressure metamorphic rocks is possible. Continental subduction also occurs in the case of an inverted continental crust. However, in the latter case, the exhumation of felsic rocks is blocked by the upper basic layer, and their subsequent interaction depends on their volume ratio. Thus, if the total inverted crust thickness is about 15 km and the thicknesses of the two layers are equal, felsic rocks cannot be exhumed. If the total thickness is 30 to 40 km and that of the felsic layer is 20 to 25 km, it breaks through the basic layer leading to

  15. Adventures in supercomputing, a K-12 program in computational science: An assessment

    SciTech Connect

    Oliver, C.E.; Hicks, H.R.; Iles-Brechak, K.D.; Honey, M.; McMillan, K.

    1994-10-01

    In this paper, the authors describe only those elements of the Department of Energy Adventures in Supercomputing (AiS) program for high school teachers, such as school selection, which have a direct bearing on assessment. Schools submit an application to participate in the AiS program. They propose a team of at least two teachers to implement the AiS curriculum. The applications are evaluated by selection committees in each of the five participating states to determine which schools are the most qualified to carry out the program and reach a significant number of women, minorities, and economically disadvantaged students, all of whom have historically been underrepresented in the sciences. Typically, selected schools either have a large disadvantaged student population, or the applying teachers propose specific means to attract these segments of their student body into AiS classes. Some areas with AiS schools have significant numbers of minority students, some have economically disadvantaged, usually rural, students, and all areas have the potential to reach a higher proportion of women than technical classes usually attract. This report presents preliminary findings based on three types of data: demographic, student journals, and contextual. Demographic information is obtained for both students and teachers. Students have been asked to maintain journals that include replies to specific questions posed each month. An analysis of the answers to these questions helps to form a picture of how students progress through the course of the school year. Onsite visits by assessment professionals, who conduct student and teacher interviews, provide a more in-depth, qualitative basis for understanding student motivations.

  16. Young Kim, PhD | Division of Cancer Prevention

    Cancer.gov

    Young S Kim, PhD, joined the Division of Cancer Prevention at the National Cancer Institute in 1998 as a Program Director who oversees and monitors NCI grants in the area of Nutrition and Cancer. She serves as an expert in nutrition, molecular biology, and genomics as they relate to cancer prevention. Dr. Kim assists with research initiatives that will advance nutritional science and lead to human health benefits.

  17. 76 FR 4724 - Emerson Transportation Division, a Division of Emerson Electric, Including Workers Located...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-26

    ..., Including Workers Located Throughout the United States; Bridgeton, MO; Amended Certification Regarding... Emerson Transportation Division, a division of Emerson Electric, including workers located throughout...

  18. Advanced Pacemaker

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Synchrony, developed by St. Jude Medical's Cardiac Rhythm Management Division (formerly known as Pacesetter Systems, Inc.) is an advanced state-of-the-art implantable pacemaker that closely matches the natural rhythm of the heart. The companion element of the Synchrony Pacemaker System is the Programmer Analyzer APS-II which allows a doctor to reprogram and fine tune the pacemaker to each user's special requirements without surgery. The two-way communications capability that allows the physician to instruct and query the pacemaker is accomplished by bidirectional telemetry. APS-II features 28 pacing functions and thousands of programming combinations to accommodate diverse lifestyles. Microprocessor unit also records and stores pertinent patient data up to a year.

  19. Advanced Computing for Manufacturing.

    ERIC Educational Resources Information Center

    Erisman, Albert M.; Neves, Kenneth W.

    1987-01-01

    Discusses ways that supercomputers are being used in the manufacturing industry, including the design and production of airplanes and automobiles. Describes problems that need to be solved in the next few years for supercomputers to assume a major role in industry. (TW)

  20. Health, Safety, and Environment Division

    SciTech Connect

    Wade, C

    1992-01-01

    The primary responsibility of the Health, Safety, and Environmental (HSE) Division at the Los Alamos National Laboratory is to provide comprehensive occupational health and safety programs, waste processing, and environmental protection. These activities are designed to protect the worker, the public, and the environment. Meeting these responsibilities requires expertise in many disciplines, including radiation protection, industrial hygiene, safety, occupational medicine, environmental science and engineering, analytical chemistry, epidemiology, and waste management. New and challenging health, safety, and environmental problems occasionally arise from the diverse research and development work of the Laboratory, and research programs in HSE Division often stem from these applied needs. These programs continue but are also extended, as needed, to study specific problems for the Department of Energy. The results of these programs help develop better practices in occupational health and safety, radiation protection, and environmental science.

  1. Advanced Electronic Technology.

    DTIC Science & Technology

    1978-11-15

    Massachusetts Institute of Technology, Lexington, Lincoln Laboratory: Advanced Electronic Technology, quarterly technical summary report to the Air Force, November 1978 (A. J. McLaughlin, A. L. McWhorter, ...), covering the work of Division 8 (Solid State) on the Advanced Electronic Technology Program.

  2. Water Resources Division training catalog

    USGS Publications Warehouse

    Hotchkiss, W.R.; Foxhoven, L.A.

    1984-01-01

    The National Training Center provides the technical and management sessions necessary for conducting the U.S. Geological Survey's training programs. This catalog describes the facilities and staff at the Lakewood Training Center and describes Water Resources Division training courses available through the center. In addition, the catalog describes the procedures for gaining admission and the formulas for calculating fees, and includes a discussion of course evaluations. (USGS)

  3. The PVM (Parallel Virtual Machine) system: Supercomputer level concurrent computation on a network of IBM RS/6000 power stations

    SciTech Connect

    Sunderam, V.S. (Dept. of Mathematics and Computer Science); Geist, G.A.

    1991-01-01

    The PVM (Parallel Virtual Machine) system enables supercomputer level concurrent computations to be performed on interconnected networks of heterogeneous computer systems. Specifically, a network of 13 IBM RS/6000 powerstations has been successfully used to execute production quality runs of superconductor modeling codes at more than 250 Mflops. This work demonstrates the effectiveness of cooperative concurrent processing for high performance applications, and shows that supercomputer level computations may be attained at a fraction of the cost on distributed computing platforms. This paper describes the PVM programming environment and user facilities, as they apply to hardware platforms comprising a network of IBM RS/6000 powerstations. The salient design features of PVM will be discussed, including heterogeneity, scalability, multilanguage support, provisions for fault tolerance, the use of multiprocessors and scalar machines, an interactive graphical front end, and support for profiling, tracing, and visual analysis. The PVM system has been used extensively, and a range of production quality concurrent applications have been successfully executed using PVM on a variety of networked platforms. The paper will mention representative examples, and discuss two in detail. The first is a material sciences problem that was originally developed on a Cray 2. This application code calculates the electronic structure of metallic alloys from first principles and is based on the KKR-CPA algorithm. The second is a molecular dynamics simulation for calculating materials properties. Performance results for both applications on networks of RS/6000 powerstations will be presented, accompanied by discussions of the other advantages of PVM and its potential as a complement or alternative to conventional supercomputers.

  4. High Energy Physics Division semiannual report of research activities. Semi-annual progress report, July 1, 1995--December 31, 1995

    SciTech Connect

    Norem, J.; Bajt, D.; Rezmer, R.; Wagner, R.

    1996-10-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period July 1, 1995 - December 31, 1995. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  5. Connecting the dots of the bacterial cell cycle: Coordinating chromosome replication and segregation with cell division.

    PubMed

    Hajduk, Isabella V; Rodrigues, Christopher D A; Harry, Elizabeth J

    2016-05-01

    Proper division site selection is crucial for the survival of all organisms. What still eludes us is how bacteria position their division site with high precision, and in tight coordination with chromosome replication and segregation. Until recently, the general belief, at least in the model organisms Bacillus subtilis and Escherichia coli, was that spatial regulation of division comes about by the combined negative regulatory mechanisms of the Min system and nucleoid occlusion. However, as we review here, these two systems cannot be solely responsible for division site selection and we highlight additional regulatory mechanisms that are at play. In this review, we put forward evidence of how chromosome replication and segregation may have direct links with cell division in these bacteria and the benefit of recent advances in chromosome conformation capture techniques in providing important information about how these three processes mechanistically work together to achieve accurate generation of progenitor cells.

  6. Novel insights into mammalian embryonic neural stem cell division: focus on microtubules.

    PubMed

    Mora-Bermúdez, Felipe; Huttner, Wieland B

    2015-12-01

    During stem cell divisions, mitotic microtubules do more than just segregate the chromosomes. They also determine whether a cell divides virtually symmetrically or asymmetrically by establishing spindle orientation and the plane of cell division. This can be decisive for the fate of the stem cell progeny. Spindle defects have been linked to neurodevelopmental disorders, yet the role of spindle orientation for mammalian neurogenesis has remained controversial. Here we explore recent advances in understanding how the microtubule cytoskeleton influences mammalian neural stem cell division. Our focus is primarily on the role of spindle microtubules in the development of the cerebral cortex. We also highlight unique characteristics in the architecture and dynamics of cortical stem cells that are tightly linked to their mode of division. These features contribute to setting these cells apart as mitotic "rule breakers," control how asymmetric a division is, and, we argue, are sufficient to determine the fate of the neural stem cell progeny in mammals.

  7. IFLA Advisory Group on Division 8.

    ERIC Educational Resources Information Center

    Bloss, Marjorie E.; Hegedus, Peter; Law, Derek; Nilsen, Sissel; Raseroka, Kay; Rodriguez, Adolfo; Wu, Jianzhong

    Following the 1999 IFLA (International Federation of Library Associations and Institutions) Conference, the Executive Board established an Advisory group to examine issues that were raised concerning Division 8, specifically the recommendation to mainstream Section 8 activities with the other seven divisions, thus dissolving this division. This…

  8. 7 CFR 29.16 - Division.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... INSPECTION Regulations Definitions § 29.16 Division. Tobacco Division, Agricultural Marketing Service, U.S... 7 Agriculture 2 2010-01-01 2010-01-01 false Division. 29.16 Section 29.16 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections,...

  9. Unemployment and Household Division of Labor.

    ERIC Educational Resources Information Center

    Shamir, Boas

    1986-01-01

    Addresses the relationship between unemployment of men and women and the division of labor in their households and how the psychological well-being of unemployed individuals related to the division of labor in their families. Changes in the employment status of men and women had only limited effects on household division of labor. (Author/ABL)

  10. 7 CFR 29.16 - Division.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Division. 29.16 Section 29.16 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Regulations Definitions § 29.16 Division. Tobacco Division, Agricultural Marketing Service,...

  11. Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.

    PubMed

    Hines, Michael; Kumar, Sameer; Schürmann, Felix

    2011-01-01

    For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8-128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4·10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method either implemented via non-blocking MPI_Isend, or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend due to the high overhead of initiating a spike communication. The two best performing methods (the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework DCMF_Multicast, and a two-phase multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores) had similar performance with very low overhead for the initiation of spike communication. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in number of cells that fire on each processor in the interval between synchronization. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will be ultimately limited by imbalance between incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect
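
    For readers unfamiliar with the baseline method named above, the sketch below shows what an MPI_Allgather-style spike exchange looks like in mpi4py: every rank contributes the spikes its cells fired during the interval and receives everyone else's. This is a generic illustration, not the Blue Gene/P code; the Multisend/DCMF variants rely on hardware-specific DMA and are not reproduced here, and the spike-generation step is a placeholder.

        # Baseline Allgather-style spike exchange between synchronization intervals.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        def local_spikes_this_interval(rank):
            # Placeholder: a real simulator would return (gid, spike_time) pairs for
            # cells owned by this rank that fired since the last synchronization.
            return [(rank * 1000 + i, 0.1 * i) for i in range(rank % 3)]

        spikes = local_spikes_this_interval(rank)

        # Every rank receives every other rank's spikes; simple, but each rank pays
        # for the full global spike volume, which is what the Multisend methods avoid.
        all_spikes = comm.allgather(spikes)
        incoming = [s for per_rank in all_spikes for s in per_rank]

        if rank == 0:
            print(f"delivered {len(incoming)} spikes to every rank")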

  12. Chemical Technology Division annual technical report, 1990

    SciTech Connect

    Not Available

    1991-05-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1990 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for coal- fired magnetohydrodynamics and fluidized-bed combustion; (3) methods for recovery of energy from municipal waste and techniques for treatment of hazardous organic waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for a high-level waste repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams, concentrating plutonium solids in pyrochemical residues by aqueous biphase extraction, and treating natural and process waters contaminated by volatile organic compounds; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of fluid catalysis for converting small molecules to desired products; materials chemistry for superconducting oxides and associated and ordered solutions at high temperatures; interfacial processes of importance to corrosion science, high-temperature superconductivity, and catalysis; and the geochemical processes responsible for trace-element migration within the earth's crust. The Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the scientific and engineering programs at Argonne National Laboratory (ANL). 66 refs., 69 figs., 6 tabs.

  13. 75 FR 16843 - Core Manufacturing, Multi-Plastics, Inc., Division, Sipco, Inc., Division, Including Leased...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-02

    ... Employment and Training Administration Core Manufacturing, Multi-Plastics, Inc., Division, Sipco, Inc..., 2009, applicable to workers of Core Manufacturing, Multi-Plastics, Inc., Division and Sipco, Inc... of Core Manufacturing, Multi-Plastics, Inc., Division and Sipco, Inc., Division, including...

  14. Activities of the Structures Division, Lewis Research Center

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The purpose of the NASA Lewis Research Center, Structures Division's 1990 Annual Report is to give a brief, but comprehensive, review of the technical accomplishments of the Division during the past calendar year. The report is organized topically to match the Center's Strategic Plan. Over the years, the Structures Division has developed the technology base necessary for improving the future of aeronautical and space propulsion systems. In the future, propulsion systems will need to be lighter, to operate at higher temperatures and to be more reliable in order to achieve higher performance. Achieving these goals is complex and challenging. Our approach has been to work cooperatively with both industry and universities to develop the technology necessary for state-of-the-art advancement in aeronautical and space propulsion systems. The Structures Division consists of four branches: Structural Mechanics, Fatigue and Fracture, Structural Dynamics, and Structural Integrity. This publication describes the work of the four branches by three topic areas of Research: (1) Basic Discipline; (2) Aeropropulsion; and (3) Space Propulsion. Each topic area is further divided into the following: (1) Materials; (2) Structural Mechanics; (3) Life Prediction; (4) Instruments, Controls, and Testing Techniques; and (5) Mechanisms. The publication covers 78 separate topics with a bibliography containing 159 citations. We hope you will find the publication interesting as well as useful.

  15. Annual Advances in Cancer Prevention Lecture | Division of Cancer Prevention

    Cancer.gov

    2016 Keynote Lecture: Polyvalent Vaccines Targeting Oncogenic Driver Pathways. A special keynote lecture became part of the NCI Summer Curriculum in Cancer Prevention in 2000. This lecture will be held on Thursday, July 21, 2016 at 1:30pm at Masur Auditorium, Building 10, NIH Main Campus, Bethesda, MD. This year’s keynote speaker is Dr. Mary L. (Nora) Disis, MD.

  16. Annual Advances in Cancer Prevention Lecture | Division of Cancer Prevention

    Cancer.gov

    2015 Keynote Lecture: HPV Vaccination: Preventing More with Less. A special keynote lecture became part of the NCI Summer Curriculum in Cancer Prevention in 2000. This lecture will be held on Thursday, July 23, 2015 at 3:00pm at Masur Auditorium, Building 10, NIH Main Campus, Bethesda, MD. This year’s keynote speaker is Dr. Douglas Lowy, NCI Acting Director.

  17. [Construction and application of bioinformatic analysis platform for aquatic pathogen based on the MilkyWay-2 supercomputer].

    PubMed

    Xiang, Fang; Ningqiu, Li; Xiaozhe, Fu; Kaibin, Li; Qiang, Lin; Lihui, Liu; Cunbin, Shi; Shuqin, Wu

    2015-07-01

    As a key component of life science, bioinformatics has been widely applied in genomics, transcriptomics, and proteomics. However, the requirement of high-performance computers rather than common personal computers for constructing a bioinformatics platform has significantly limited the application of bioinformatics in aquatic science. In this study, we constructed a bioinformatic analysis platform for aquatic pathogens based on the MilkyWay-2 supercomputer. The platform consisted of three functional modules, including genomic and transcriptomic sequencing data analysis, protein structure prediction, and molecular dynamics simulations. To validate the practicability of the platform, we performed bioinformatic analysis on aquatic pathogenic organisms. For example, genes of Flavobacterium johnsoniae M168 were identified and annotated via BLAST searches and GO and InterPro annotations. Protein structural models for five small segments of grass carp reovirus HZ-08 were constructed by homology modeling. Molecular dynamics simulations were performed on outer membrane protein A of Aeromonas hydrophila, and the changes in system temperature, total energy, root mean square deviation, and loop conformation during equilibration were also observed. These results showed that the bioinformatic analysis platform for aquatic pathogens has been successfully built on the MilkyWay-2 supercomputer. This study will provide insights into the construction of bioinformatic analysis platforms for other subjects.

  18. Massively parallel algorithm and implementation of RI-MP2 energy calculation for peta-scale many-core supercomputers.

    PubMed

    Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito

    2016-11-15

    A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing peta-flop-class many-core supercomputers are presented. Some improvements from the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been performed: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.
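
    The dual-level hierarchical parallelization mentioned above can be pictured as splitting the world communicator into groups: batches of work are distributed across groups, and the ranks inside a group share one batch. The sketch below shows only that generic pattern in mpi4py; it is not the RI-MP2 implementation itself, and the group size, batch list, and dummy inner loop are illustrative.

        # Generic two-level (inter-group / intra-group) work distribution with MPI.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        GROUP_SIZE = 4                          # illustrative; tuned per machine in practice
        color = rank // GROUP_SIZE              # which group this rank belongs to
        group_comm = comm.Split(color=color, key=rank)
        grank, gsize = group_comm.Get_rank(), group_comm.Get_size()
        n_groups = (size + GROUP_SIZE - 1) // GROUP_SIZE

        batches = list(range(32))               # e.g. blocks of orbital-pair work

        for b in batches[color::n_groups]:      # level 1: round-robin batches over groups
            # Level 2: ranks within the group split the batch's inner loop.
            partial = sum((b * 1000 + i) * 1e-6 for i in range(grank, 1000, gsize))
            total = group_comm.allreduce(partial, op=MPI.SUM)
            if grank == 0:
                print(f"group {color}: batch {b} contribution {total:.3f}")

        group_comm.Free()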

  19. Description of the programs and facilities of the Physics Division

    SciTech Connect

    Not Available

    1992-10-01

    The major emphasis of our experimental nuclear physics research is in Heavy-Ion Physics, centered at the recently completed ATLAS facility. ATLAS is a designated National User Facility and is based on superconducting radio-frequency technology developed in the Physics Division. In addition, the Division has strong programs in Medium-Energy Physics and in Weak-Interaction Physics as well as in accelerator development. Our nuclear theory research spans a wide range of interests including nuclear dynamics with subnucleonic degrees of freedom, dynamics of many-nucleon systems, nuclear structure, and heavy-ion interactions. This research makes contact with experimental research programs in intermediate-energy and heavy-ion physics, both within the Division and on the national scale. The Atomic Physics program, the largest part of which is accelerator-based, primarily uses ATLAS, a 5-MV Dynamitron accelerator, and a highly stable 150-kV accelerator. A synchrotron-based atomic physics program has recently been initiated, with current research at the National Synchrotron Light Source in preparation for a program at the Advanced Photon Source at Argonne. The principal interests of the Atomic Physics program are in the interactions of fast atomic and molecular ions with solids and gases and in the laser spectroscopy of exotic species. The program is currently being expanded to take advantage of the unique research opportunities in synchrotron-based research that will present themselves when the Advanced Photon Source comes on line at Argonne. These topics are discussed briefly in this report.

  20. ARC3 is a stromal Z-ring accessory protein essential for plastid division

    PubMed Central

    Maple, Jodi; Vojta, Lea; Soll, Jurgen; Møller, Simon G

    2007-01-01

    In plants, chloroplast division is an integral part of development, and these vital organelles arise by binary fission from pre-existing cytosolic plastids. Chloroplasts arose by endosymbiosis and although they have retained elements of the bacterial cell division machinery to execute plastid division, they have evolved to require two functionally distinct forms of the FtsZ protein and have lost elements of the Min machinery required for Z-ring placement. Here, we analyse the plastid division component accumulation and replication of chloroplasts 3 (ARC3) and show that ARC3 forms part of the stromal plastid division machinery. ARC3 interacts specifically with AtFtsZ1, acting as a Z-ring accessory protein and defining a unique function for this family of FtsZ proteins. ARC3 is involved in division site placement, suggesting that it might functionally replace MinC, representing an important advance in our understanding of the mechanism of chloroplast division and the evolution of the chloroplast division machinery. PMID:17304239

  1. 1. Oblique view of 215 Division Street, looking southwest, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Oblique view of 215 Division Street, looking southwest, showing front (east) facade and north side, 213 Division Street is visible at left and 217 Division Street appears at right - 215 Division Street (House), Rome, Floyd County, GA

  2. Instrumentation and Controls Division Overview: Sensors Development for Harsh Environments at Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Zeller, Mary V.; Lei, Jih-Fen

    2002-01-01

    The Instrumentation and Controls Division is responsible for planning, conducting and directing basic and applied research on advanced instrumentation and controls technologies for aerospace propulsion and power applications. The Division's advanced research in harsh environment sensors, high temperature high power electronics, MEMS (microelectromechanical systems), nanotechnology, high data rate optical instrumentation, active and intelligent controls, and health monitoring and management will enable self-feeling, self-thinking, self-reconfiguring and self-healing Aerospace Propulsion Systems. These research areas address Agency challenges to deliver aerospace systems with reduced size and weight, and increased functionality and intelligence for future NASA missions in advanced aeronautics, economical space transportation, and pioneering space exploration. The Division also actively supports educational and technology transfer activities aimed at benefiting all humankind.

  3. Massively parallel simulation with DOE's ASCI supercomputers : an overview of the Los Alamos Crestone project

    SciTech Connect

    Weaver, R. P.; Gittings, M. L.

    2004-01-01

    The Los Alamos Crestone Project is part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative, or ASCI Program. The main goal of this software development project is to investigate the use of continuous adaptive mesh refinement (CAMR) techniques for application to problems of interest to the Laboratory. There are many code development efforts in the Crestone Project, both unclassified and classified codes. In this overview I will discuss the unclassified SAGE and the RAGE codes. The SAGE (SAIC adaptive grid Eulerian) code is a one-, two-, and three-dimensional multimaterial Eulerian massively parallel hydrodynamics code for use in solving a variety of high-deformation flow problems. The RAGE CAMR code is built from the SAGE code by adding various radiation packages, improved setup utilities, and graphics packages, and is used for problems in which radiation transport of energy is important. The goal of these massively-parallel versions of the codes is to run extremely large problems in a reasonable amount of calendar time. Our target is scalable performance to ~10,000 processors on a 1 billion CAMR computational cell problem that requires hundreds of variables per cell, multiple physics packages (e.g. radiation and hydrodynamics), and implicit matrix solves for each cycle. A general description of the RAGE code has been published in [1], [2], [3], and [4]. Currently, the largest simulations we do are three-dimensional, using around 500 million computation cells and running for literally months of calendar time using ~2,000 processors. Current ASCI platforms range from several 3-teraOPS supercomputers to one 12-teraOPS machine at Lawrence Livermore National Laboratory, the White machine, and one 20-teraOPS machine installed at Los Alamos, the Q machine. Each machine is a system comprised of many component parts that must perform in unity for the successful run of these simulations. Key features of any massively parallel system
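
    The essence of continuous adaptive mesh refinement is deciding, cell by cell, where extra resolution is needed. The toy sketch below flags cells of a 1-D profile for refinement where the local gradient is steep; it is only a schematic illustration of that flagging step, not SAGE or RAGE, and the tolerances and test profile are arbitrary.

        # Toy cell-flagging step behind adaptive mesh refinement: refine where the
        # solution gradient is steep, allow coarsening where it is smooth.
        import numpy as np

        def flag_cells(u, dx, refine_tol=0.5, coarsen_tol=0.05):
            """Return per-cell flags: +1 refine, 0 keep, -1 eligible to coarsen."""
            grad = np.abs(np.gradient(u, dx))
            flags = np.zeros(u.size, dtype=int)
            flags[grad > refine_tol] = 1
            flags[grad < coarsen_tol] = -1
            return flags

        x = np.linspace(0.0, 1.0, 200)
        u = np.tanh((x - 0.5) / 0.02)            # sharp front at x = 0.5
        flags = flag_cells(u, dx=x[1] - x[0])
        print("cells flagged for refinement:", int((flags == 1).sum()))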

  4. Structures Division 1994 Annual Report

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The NASA Lewis Research Center Structures Division is an international leader and pioneer in developing new structural analysis, life prediction, and failure analysis methods related to rotating machinery and, more specifically, to hot-section components in air-breathing aircraft engines and spacecraft propulsion systems. The research employs both deterministic and probabilistic methodologies. Studies include, but are not limited to, high-cycle and low-cycle fatigue as well as material creep. Studies of structural failure are at both the micro- and macrolevels. Nondestructive evaluation methods related to structural reliability are developed, applied, and evaluated. The materials from which structural components are made, studied, and tested include monolithics and metal-matrix, polymer-matrix, and ceramic-matrix composites. Aeroelastic models are developed and used to determine the cyclic loading and life of fan and turbine blades. Life models are developed and tested for bearings, seals, and other mechanical components, such as magnetic suspensions. Results of these studies are published in NASA technical papers and reference publications as well as in technical society journal articles. The results of the work of the Structures Division and the bibliography of its publications for calendar year 1994 are presented.

  5. NEN Division Funding Gap Analysis

    SciTech Connect

    Esch, Ernst I.; Goettee, Jeffrey D.; Desimone, David J.; Lakis, Rollin E.; Miko, David K.

    2012-09-05

    The work in NEN Division revolves around proliferation detection. The sponsor funding model seems to have shifted over the last few decades. For the past three lustra, sponsors have mainly been interested in funding ideas and detection systems that are already at technology readiness level 6 (TRL 6 -- one step below an industrial prototype) or higher. Once this level is reached, the sponsoring agency is willing to fund the commercialization, implementation, and training for the systems (TRL 8, 9). These sponsors are looking for fast-turnaround (1-2 year) technology development efforts to implement technology. To support the critical national and international needs for nonproliferation solutions, we have to maintain a continuous stream of subject matter expertise from the fundamental principles of radiation detection through prototype development all the way to the implementation and training of others. NEN Division has large funding gaps in the Valley of Death region. In the current competitive climate for nuclear nonproliferation projects, it is imperative that we increase our lead in this field.

  6. Engineering Technology Division Long-Range Plan, 1991--1995

    SciTech Connect

    Not Available

    1995-01-01

    This Engineering Technology Division Long-Range Plan is a departure from planning processes of the past. About a year ago we decided to approach our strategic planning in a very different way. With this plan we complete the first phase of a comprehensive process that has involved most of the Division staff. Through a series of "brainstorming" meetings, we have accumulated a wealth of ideas. By this process, we have been able to identify our perceived strengths and weaknesses and to propose very challenging goals for the future. Early on in our planning, we selected two distinct areas where we desire changes. First, we want to pursue program development in a much more structured and dynamic manner: deciding what we want to do, developing plans, and providing the resources to follow through. Second, we want to change the way that we do business by developing more effective ways to work together within the Division and with the important groups that we interact with throughout Energy Systems. These initiatives are reflected in the plan and in related actions that the Division is implementing. The ETD mission is to perform research, development, conceptual design, analysis, fabrication, testing, and system demonstration of technology essential for (1) nuclear reactor systems and related technologies, (2) space and defense systems, (3) advanced systems for energy conversion and utilization, and (4) water and waste management systems, and to foster a vigorous program of technology transfer using the best available techniques of technical infusion into the marketplace. In meeting this mission, the Division will institute a documented pollution prevention program, ensure that environmental impact statements are prepared for the supporting program, and adhere to all environmental, safety, and health requirements. 4 figs., 2 tabs.

  7. News | Division of Cancer Prevention

    Cancer.gov

    News about scientific advances in cancer prevention, program activities, and new projects is included here in NCI press releases and fact sheets, articles from the NCI Cancer Bulletin, and Clinical Trial News from the NCI website.

  8. Chemical Technology Division annual technical report, 1992

    SciTech Connect

    Battles, J.E.; Myles, K.M.; Laidler, J.J.; Green, D.W.

    1993-06-01

    In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion and coal-fired magnetohydrodynamics; (3) methods for treatment of hazardous waste, mixed hazardous/radioactive waste, and municipal solid waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams, treating water contaminated with volatile organics, and concentrating radioactive waste streams; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) the physical chemistry of selected materials (corium; Fe-U-Zr; tritium in LiAlO2) in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources and novel ceramic precursors; materials chemistry of superconducting oxides, electrified metal/solution interfaces, and molecular sieve structures; and the geochemical processes involved in water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  9. Kinetochore-microtubule interactions during cell division.

    PubMed

    Maiato, Helder; Sunkel, Claudio E

    2004-01-01

    Proper segregation of chromosomes during cell division is essential for the maintenance of genetic stability. During this process chromosomes must establish stable functional interactions with microtubules through the kinetochore, a specialized protein structure located on the surface of the centromeric heterochromatin. Stable attachment of kinetochores to a number of microtubules results in the formation of a kinetochore fibre that mediates chromosome movement. How the kinetochore fibre is formed and how chromosome motion is produced and regulated remain major questions in cell biology. Here we look at some of the history of research devoted to the study of kinetochore-microtubule interaction and attempt to identify significant advances in the knowledge of the basic processes. Ultrastructural work has provided substantial insights into the structure of the kinetochore and associated microtubules during different stages of mitosis. Also, recent in-vivo studies have probed deep into the dynamics of kinetochore-attached microtubules suggesting possible models for the way in which kinetochores harness the capacity of microtubules to do work and turn it into chromosome motion. Much of the research in recent years suggests that indeed multiple mechanisms are involved in both formation of the k-fibre and chromosome motion. Thus, rather than moving to a unified theory, it has become apparent that most cell types have the capacity to build the spindle using multiple and probably redundant mechanisms.

  10. Divisions Panel Discussion: Astronomy for Development

    NASA Astrophysics Data System (ADS)

    Govender, Kevin; Hemenway, Mary Kay; Wolter, Anna; Haghighipour, Nader; Yan, Yihua; van Dishoeck, E. F.; Silva, David; Guinan, Edward

    2016-10-01

    The main purpose of this panel discussion was to encourage conversation around potential collaborations between the IAU Office of Astronomy for Development (OAD) and IAU Divisions. The discussion was facilitated by the OAD and the conversation revolved mainly around two questions: (i) What should the OAD be doing to enhance the work of the Divisions? (ii) What could the Divisions (both members and respective scientific discipline in general) contribute towards the implementation of the IAU strategic plan?

  11. Major Programs | Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention supports major scientific collaborations, research networks, investigator-initiated grants, postdoctoral training, and specialized resources across the United States.

  12. Energy and Environmental Systems Division 1981 research review

    SciTech Connect

    Not Available

    1982-04-01

    To effectively manage the nation's energy and natural resources, government and industry leaders need accurate information regarding the performance and economics of advanced energy systems and the costs and benefits of public-sector initiatives. The Energy and Environmental Systems Division (EES) of Argonne National Laboratory conducts applied research and development programs that provide such information through systems analysis, geophysical field research, and engineering studies. During 1981, the division: analyzed the production economics of specific energy resources, such as biomass and tight sands gas; developed and transferred to industry economically efficient techniques for addressing energy-related resource management and environmental protection problems, such as the reclamation of strip-mined land; determined the engineering performance and cost of advanced energy-supply and pollution-control systems; analyzed future markets for district heating systems and other emerging energy technologies; determined, in strategic planning studies, the availability of resources needed for new energy technologies, such as the imported metals used in advanced electric-vehicle batteries; evaluated the effectiveness of strategies for reducing scarce-fuel consumption in the transportation sector; identified the costs and benefits of measures designed to stabilize the financial condition of US electric utilities; estimated the costs of nuclear reactor shutdowns and evaluated geologic conditions at potential sites for permanent underground storage of nuclear waste; evaluated the cost-effectiveness of environmental regulations, particularly those affecting coal combustion; and identified the environmental effects of energy technologies and transportation systems.

  13. Analytical Chemistry Division annual progress report for period ending December 31, 1988

    SciTech Connect

    Not Available

    1988-05-01

    The Analytical Chemistry Division of Oak Ridge National Laboratory (ORNL) is a large and diversified organization. As such, it serves a multitude of functions for a clientele that exists both in and outside of ORNL. These functions fall into the following general categories: (1) Analytical Research, Development, and Implementation. The division maintains a program to conceptualize, investigate, develop, assess, improve, and implement advanced technology for chemical and physicochemical measurements. Emphasis is on problems and needs identified with ORNL and Department of Energy (DOE) programs; however, attention is also given to advancing the analytical sciences themselves. (2) Programmatic Research, Development, and Utilization. The division carries out a wide variety of chemical work that typically involves analytical research and/or development plus the utilization of analytical capabilities to expedite programmatic interests. (3) Technical Support. The division performs chemical and physicochemical analyses of virtually all types. The Analytical Chemistry Division is organized into four major sections, each of which may carry out any of the three types of work mentioned above. Chapters 1 through 4 of this report highlight progress within the four sections during the period January 1 to December 31, 1988. A brief discussion of the division's role in an especially important environmental program is given in Chapter 5. Information about quality assurance, safety, and training programs is presented in Chapter 6, along with a tabulation of analyses rendered. Publications, oral presentations, professional activities, educational programs, and seminars are cited in Chapters 7 and 8.

  14. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements at all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D to within 5% accuracy, and of Overflow to within 10%.

  15. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations that take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations over a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new software package, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface (MPI) is used to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper.
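
    The general pattern addressed by mpiWrapper (many independent runs of a serial program inside one MPI job) can be sketched very simply. The snippet below is not mpiWrapper itself: it uses a static round-robin assignment with no management thread and no resubmission on node failure, and the tool name and input files are hypothetical placeholders.

        # Static round-robin dispatch of a serial tool across MPI ranks (sketch only).
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        inputs = [f"sequence_{i:04d}.fasta" for i in range(100)]   # hypothetical inputs

        failures = []
        for infile in inputs[rank::size]:                          # this rank's share
            try:
                # Hypothetical serial tool; replace with the real command line.
                result = subprocess.run(["analyze_one", infile],
                                        capture_output=True, text=True)
                ok = (result.returncode == 0)
            except FileNotFoundError:                              # placeholder not installed
                ok = False
            if not ok:
                failures.append(infile)

        all_failures = comm.gather(failures, root=0)
        if rank == 0:
            failed = [f for per_rank in all_failures for f in per_rank]
            print(f"{len(inputs) - len(failed)} subtasks succeeded, {len(failed)} failed")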

  16. Top500 list's twice-yearly snapshots of world's fastest supercomputers develop into big picture of changing technology

    SciTech Connect

    Strohmaier, Erich

    2003-07-03

    Now in its 10th year, the twice-yearly TOP500 list of supercomputers serves as a "Who's Who" in the field of High Performance Computing (HPC). The TOP500 list was started in 1993 as a project to compile and publish twice a year a list of the most powerful supercomputers in the world. But it is more than just a ranking system: it serves as a major source of information for analyzing trends in HPC. The list of manufacturers active in this market segment has changed continuously and quite dramatically during the 10-year history of this project. And while the architectures of the systems in the list have also seen constant change, it turns out that the overall increase in the performance levels recorded is rather smooth and predictable. HPC performance levels grow exponentially. The most important single factor in this growth is, of course, the increase in processor performance described by Moore's Law. However, the TOP500 list clearly illustrates that HPC performance has actually outpaced Moore's Law, due to the increasing processor counts in HPC systems. On the other hand, changes in computer architecture make it more and more of a challenge to achieve high performance efficiencies in the Linpack benchmark used to rank the 500 systems. With knowledge and effort, the Linpack benchmark can still be implemented in very efficient ways, as recently demonstrated by a new implementation developed at the U.S. Department of Energy's National Energy Research Scientific Computing Center for its 6,656-processor IBM SP system.
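
    The growth argument above is easy to make concrete with a back-of-the-envelope calculation. The doubling times in the snippet below are illustrative round numbers, not fitted TOP500 values; the point is only that a modestly shorter doubling time compounds into a much larger factor over a decade.

        # Illustrative compounding of performance growth over ten years.
        def growth_factor(years, doubling_time_years):
            return 2.0 ** (years / doubling_time_years)

        years = 10
        chip_only = growth_factor(years, doubling_time_years=2.0)   # Moore's-law-like pace
        observed = growth_factor(years, doubling_time_years=1.1)    # faster, illustrative

        print(f"processor-only growth over {years} years: ~{chip_only:,.0f}x")
        print(f"aggregate HPC growth (illustrative):      ~{observed:,.0f}x")
        # The gap reflects rising processor counts per system, not just faster chips.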

  17. Accelerator and Fusion Research Division: summary of activities, 1983

    SciTech Connect

    Not Available

    1984-08-01

    The activities described in this summary of the Accelerator and Fusion Research Division are diverse, yet united by a common theme: it is our purpose to explore technologically advanced techniques for the production, acceleration, or transport of high-energy beams. These beams may be the heavy ions of interest in nuclear science, medical research, and heavy-ion inertial-confinement fusion; they may be beams of deuterium and hydrogen atoms, used to heat and confine plasmas in magnetic fusion experiments; they may be ultrahigh-energy protons for the next high-energy hadron collider; or they may be high-brilliance, highly coherent, picosecond pulses of synchrotron radiation.

  18. Isotope and Nuclear Chemistry Division annual report, FY 1983

    SciTech Connect

    Heiken, J.H.; Lindberg, H.A.

    1984-05-01

    This report describes progress in the major research and development programs carried out in FY 1983 by the Isotope and Nuclear Chemistry Division. It covers radiochemical diagnostics of weapons tests; weapons radiochemical diagnostics research and development; other unclassified weapons research; stable and radioactive isotope production, separation, and applications (including biomedical applications); element and isotope transport and fixation; actinide and transition metal chemistry; structural chemistry, spectroscopy, and applications; nuclear structure and reactions; irradiation facilities; advanced analytical techniques; development and applications; atmospheric chemistry and transport; and earth and planetary processes.

  19. 77 FR 37422 - National Center for Advancing Translational Sciences; Notice of Closed Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-21

    ... Panel; Division of Comparative Medicine Peer Review Meeting; Office of Research Infrastructure Programs... Center for Advancing Translational Sciences, National Institutes of Health, 6701 Democracy Blvd., Dem....

  20. Implementation of (omega)-k synthetic aperture radar imaging algorithm on a massively parallel supercomputer

    NASA Astrophysics Data System (ADS)

    Yerkes, Christopher R.; Webster, Eric D.

    1994-06-01

    Advanced algorithms for synthetic aperture radar (SAR) imaging have in the past required computing capabilities only available from high-performance, special-purpose hardware. Such architectures have tended to have short life cycles relative to their development expense. Current-generation massively parallel processors (MPPs) offer the high-performance capabilities necessary for such applications, with both a scalable architecture and a longer projected life cycle. In this paper we explore issues associated with implementing a SAR imaging algorithm on a mesh-configured MPP architecture.
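
    As context for the algorithm named above, the skeleton below lists the canonical omega-k processing steps (2-D FFT, reference-function multiply, Stolt interpolation, inverse 2-D FFT) on a single node with NumPy. It is a heavily simplified schematic, not the implementation discussed in the paper: the wavenumber axes, reference phase, and input data are placeholders, and the MPP decomposition is not shown.

        # Schematic omega-k (range-migration) skeleton; all constants are placeholders.
        import numpy as np

        def omega_k_skeleton(raw, kr, kx, r_ref=1000.0):
            """raw: (n_azimuth, n_range) complex data; kr, kx: wavenumber axes."""
            data = np.fft.fft2(raw)                        # to the 2-D wavenumber domain
            KX, KR = kx[:, None], kr[None, :]
            ky = np.sqrt(np.maximum(KR**2 - KX**2, 0.0))   # range-migration mapping
            data = data * np.exp(1j * r_ref * (ky - KR))   # placeholder bulk compression
            # Stolt interpolation: resample each azimuth-wavenumber row onto a uniform
            # ky grid so that the final step is a plain inverse FFT.
            ky_uniform = np.linspace(ky.min(), ky.max(), kr.size)
            stolt = np.empty_like(data)
            for i in range(kx.size):
                stolt[i] = (np.interp(ky_uniform, ky[i], data[i].real)
                            + 1j * np.interp(ky_uniform, ky[i], data[i].imag))
            return np.fft.ifft2(stolt)                     # focused image (schematic)

        rng = np.random.default_rng(0)
        raw = rng.standard_normal((128, 256)) + 1j * rng.standard_normal((128, 256))
        kr = np.linspace(80.0, 120.0, 256)                 # placeholder range wavenumbers
        kx = np.linspace(-20.0, 20.0, 128)                 # placeholder azimuth wavenumbers
        print(omega_k_skeleton(raw, kr, kx).shape)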

  1. Advanced Information System Research Project.

    DTIC Science & Technology

    1980-06-01

    realistic near-term achievements. The research program objectives are to develop, manage, and coordinate activities relating to the following: ... development; development and demonstration of tools, techniques, procedures, and advanced design concepts applicable to future management ... management is consolidated under the Division Property Book Officer. Property book accountability is maintained under the provisions of AR 735-35, and

  2. Materials Sciences Division 1990 annual report

    SciTech Connect

    Not Available

    1990-12-31

    This report is the Materials Sciences Division's annual report. It contains abstracts describing materials research at the National Center for Electron Microscopy, and for research groups in metallurgy, solid-state physics, materials chemistry, electrochemical energy storage, electronic materials, surface science and catalysis, ceramic science, high-Tc superconductivity, polymers, composites, and high performance metals.

  3. Cognitive and Neural Sciences Division 1990 Programs.

    ERIC Educational Resources Information Center

    Vaughan, Willard S., Jr., Ed.

    Research and development efforts carried out under sponsorship of the Cognitive and Neural Sciences Division of the Office of Naval Research during fiscal year 1990 are described in this compilation of project description summaries. The Division's research is organized in three types of programs: (1) Cognitive Science (the human learner--cognitive…

  4. 25 CFR 227.19 - Division orders.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true Division orders. 227.19 Section 227.19 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND RIVER INDIAN RESERVATION, WYOMING, FOR OIL AND GAS MINING Rents and Royalties § 227.19 Division orders....

  5. 25 CFR 227.19 - Division orders.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Division orders. 227.19 Section 227.19 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND RIVER INDIAN RESERVATION, WYOMING, FOR OIL AND GAS MINING Rents and Royalties § 227.19 Division orders....

  6. 25 CFR 227.19 - Division orders.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false Division orders. 227.19 Section 227.19 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND RIVER INDIAN RESERVATION, WYOMING, FOR OIL AND GAS MINING Rents and Royalties § 227.19 Division orders....

  7. Research Networks Map | Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention supports major scientific collaborations and research networks at more than 100 sites across the United States. Five Major Programs' sites are shown on this map. | The Division of Cancer Prevention supports major scientific collaborations and research networks at more than 100 sites across the United States.

  8. New Study Designs | Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention is expanding clinical research beyond standard trial designs to find interventions that may play a role in more than one prevalent disease. | The Division of Cancer Prevention is expanding clinical research beyond standard trial designs to find interventions that may play a role in more than one prevalent disease.

  9. Understanding Division of Fractions: An Alternative View

    ERIC Educational Resources Information Center

    Fredua-Kwarteng, E.; Ahia, Francis

    2006-01-01

    The purpose of this paper is to offer three alternatives to the patterns or visualizations typically used to justify the division-of-fractions algorithm "invert and multiply". The three main approaches, which teachers could use to justify the standard algorithm for division of fractions, are historical, similar denominators, and algebraic. The historical approach uses…
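    As a one-line illustration of the contrast being drawn (an example added here, not taken from the paper), the similar-denominators route and the standard invert-and-multiply rule give the same result:

      \[
      \frac{3}{4} \div \frac{2}{3} \;=\; \frac{9}{12} \div \frac{8}{12} \;=\; \frac{9}{8},
      \qquad\text{and}\qquad
      \frac{3}{4} \div \frac{2}{3} \;=\; \frac{3}{4} \times \frac{3}{2} \;=\; \frac{9}{8}.
      \]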

  10. 25 CFR 227.19 - Division orders.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false Division orders. 227.19 Section 227.19 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND RIVER INDIAN RESERVATION, WYOMING, FOR OIL AND GAS MINING Rents and Royalties § 227.19 Division orders....

  11. 25 CFR 227.19 - Division orders.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false Division orders. 227.19 Section 227.19 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND RIVER INDIAN RESERVATION, WYOMING, FOR OIL AND GAS MINING Rents and Royalties § 227.19 Division orders....

  12. Polarized Cell Division of Chlamydia trachomatis.

    PubMed

    Abdelrahman, Yasser; Ouellette, Scot P; Belland, Robert J; Cox, John V

    2016-08-01

    Bacterial cell division predominantly occurs by a highly conserved process, termed binary fission, that requires the bacterial homologue of tubulin, FtsZ. Other mechanisms of bacterial cell division that are independent of FtsZ are rare. Although the obligate intracellular human pathogen Chlamydia trachomatis, the leading bacterial cause of sexually transmitted infections and trachoma, lacks FtsZ, it has been assumed to divide by binary fission. We show here that Chlamydia divides by a polarized cell division process similar to the budding process of a subset of the Planctomycetes that also lack FtsZ. Prior to cell division, the major outer-membrane protein of Chlamydia is restricted to one pole of the cell, and the nascent daughter cell emerges from this pole by an asymmetric expansion of the membrane. Components of the chlamydial cell division machinery accumulate at the site of polar growth prior to the initiation of asymmetric membrane expansion, and inhibitors that disrupt the polarity of C. trachomatis prevent cell division. The polarized cell division of C. trachomatis is the result of the unipolar growth and FtsZ-independent fission of this coccoid organism. This mechanism of cell division has not been documented in other human bacterial pathogens, suggesting the potential for developing Chlamydia-specific therapeutic treatments.

  13. Cognitive and Neural Sciences Division, 1991 Programs.

    ERIC Educational Resources Information Center

    Vaughan, Willard S., Ed.

    This report documents research and development performed under the sponsorship of the Cognitive and Neural Sciences Division of the Office of Naval Research in fiscal year 1991. It provides abstracts (title, principal investigator, project code, objective, approach, progress, and related reports) of projects of three program divisions (cognitive…

  14. The Division of Labor as Social Interaction

    ERIC Educational Resources Information Center

    Freidson, Eliot

    1976-01-01

    Three different principles and ideologies by which the division of labor can be organized are sketched, along with their consequences for variation in structure and content. It is noted that the reality of the division of labor lies in the social interaction of its participants. (Author/AM)

  15. Teaching Cell Division: Basics and Recommendations.

    ERIC Educational Resources Information Center

    Smith, Mike U.; Kindfield, Ann C. H.

    1999-01-01

    Presents a concise overview of cell division that includes only the essential concepts necessary for understanding genetics and evolution. Makes recommendations based on published research and teaching experiences that can be used to judge the merits of potential activities and materials for teaching cell division. Makes suggestions regarding the…

  16. The Changing Nature of Division III Athletics

    ERIC Educational Resources Information Center

    Beaver, William

    2014-01-01

    Non-selective Division III institutions often face challenges in meeting their enrollment goals. To ensure their continued viability, these schools recruit large numbers of student athletes. As a result, when compared to FBS (Football Bowl Subdivision) institutions, these schools have a much higher percentage of student athletes on campus and a…

  17. "American Gothic" and the Division of Labor.

    ERIC Educational Resources Information Center

    Saunders, Robert J.

    1987-01-01

    Provides historical review of gender-based division of labor. Argues that gender-based division of labor served a purpose in survival of tribal communities but has lost meaning today and may be a handicap to full use of human talent and ability in the arts. There is nothing in various art forms which make them more appropriate for males or…

  18. The physiology of bacterial cell division.

    PubMed

    Egan, Alexander J F; Vollmer, Waldemar

    2013-01-01

    Bacterial cell division is facilitated by the divisome, a dynamic multiprotein assembly localizing at mid-cell to synthesize the stress-bearing peptidoglycan and to constrict all cell envelope layers. Divisome assembly occurs in two steps and involves multiple interactions between more than 20 essential and accessory cell division proteins. Well before constriction and while the cell is still elongating, the tubulin-like FtsZ and early cell division proteins form a ring-like structure at mid-cell. Cell division starts once certain peptidoglycan enzymes and their activators have moved to the FtsZ-ring. Gram-negative bacteria like Escherichia coli simultaneously synthesize and cleave the septum peptidoglycan during division leading to a constriction. The outer membrane constricts together with the peptidoglycan layer with the help of the transenvelope spanning Tol-Pal system.

  19. Gravity and the orientation of cell division

    NASA Technical Reports Server (NTRS)

    Helmstetter, C. E.

    1997-01-01

    A novel culture system for mammalian cells was used to investigate division orientations in populations of Chinese hamster ovary cells and the influence of gravity on the positioning of division axes. The cells were tethered to adhesive sites, smaller in diameter than a newborn cell, distributed over a nonadhesive substrate positioned vertically. The cells grew and divided while attached to the sites, and the angles and directions of elongation during anaphase, projected in the vertical plane, were found to be random with respect to gravity. However, consecutive divisions of individual cells were generally along the same axis or at 90 degrees to the previous division, with equal probability. Thus, successive divisions were restricted to orthogonal planes, but the choice of plane appeared to be random, unlike the ordered sequence of cleavage orientations seen during early embryo development.

  20. Advanced stellarators

    NASA Astrophysics Data System (ADS)

    Schlüter, Arnulf

    1983-03-01

    Toroidal confinement of a plasma by an external magnetic field is not compatible with axisymmetry, in contrast to confinement by the pinch effect of induced electric currents as in a tokamak or by the reversed field pinch configuration. The existence of magnetic surfaces throughout the region in which grad p ≠ 0 is therefore not guaranteed in such configurations, though it is necessary for MHD equilibrium when the lines of force possess a finite twist (or "rotational transform"). These twisted equilibria are called stellarators. The other type of external confinement requires all lines of force to be closed upon themselves and p to be a function only of the well-defined quantity Q = ∮ dl/B. The resulting "bumpy" tori are sometimes also referred to as being M + S like. By discussing specific examples it is shown that stellarator configurations exist which retain as much as possible the properties of M + S like configurations, combine these with a magnetic well, and approximate the isodynamic requirement of D. Palumbo. These so-called Advanced Stellarators show an improvement in predicted particle confinement and beta limit compared to classical stellarators. They can also be viewed as forming a system of linked stabilized mirrors of small mirror ratio. These fields can be produced by modular coils. A prototype of such a configuration is being designed by the stellarator division of IPP under the name Wendelstein VII-AS. Expected physical data and technical details of W VII-AS are given.

  1. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    SciTech Connect

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations that
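    As a hedged sketch of the load-balancing idea described above (illustrative numbers only, not the authors' code or measurements), grid cells can be split between the GPU and the CPU cores in proportion to their measured per-cell throughput, so that the two device types finish a collaborative step at roughly the same time:

      # Hedged sketch of static CPU/GPU load partitioning for a collaborative CFD step.
      # Throughputs and cell counts are made-up numbers, not measurements from HOSTA.

      def split_cells(n_cells, gpu_rate, cpu_rate):
          """Give the GPU a share of the cells proportional to its relative throughput."""
          frac_gpu = gpu_rate / (gpu_rate + cpu_rate)
          n_gpu = int(round(n_cells * frac_gpu))
          return n_gpu, n_cells - n_gpu

      def step_time(n_gpu, n_cpu, gpu_rate, cpu_rate):
          """With CPU and GPU working concurrently, the step takes as long as the slower partner."""
          return max(n_gpu / gpu_rate, n_cpu / cpu_rate)

      n_cells = 6_000_000                   # cells handled by one node (illustrative)
      gpu_rate, cpu_rate = 9.0e6, 7.0e6     # cells processed per second on each device type (illustrative)

      n_gpu, n_cpu = split_cells(n_cells, gpu_rate, cpu_rate)
      t_collab = step_time(n_gpu, n_cpu, gpu_rate, cpu_rate)
      t_gpu_only = n_cells / gpu_rate
      print(f"GPU-only step: {t_gpu_only:.3f} s, collaborative step: {t_collab:.3f} s "
            f"({t_gpu_only / t_collab:.2f}x faster)")

    In the real code the split must also respect the GPU's smaller memory and the cost of PCI-e transfers for ghost and singularity data, which is what the gather/scatter optimization and the computation/communication overlap described above address.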

  2. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    NASA Astrophysics Data System (ADS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU-GPU collaborative simulations that

  3. 49 CFR 1242.03 - Made by accounting divisions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 9 2012-10-01 2012-10-01 false Made by accounting divisions. 1242.03 Section 1242... accounting divisions. The separation shall be made by accounting divisions, where such divisions are maintained, and the aggregate of the accounting divisions reported for the quarter and for the year....

  4. 49 CFR 1242.03 - Made by accounting divisions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Made by accounting divisions. 1242.03 Section 1242... accounting divisions. The separation shall be made by accounting divisions, where such divisions are maintained, and the aggregate of the accounting divisions reported for the quarter and for the year....

  5. Life Sciences Division progress report for CYs 1997-1998 [Oak Ridge National Laboratory

    SciTech Connect

    Mann, Reinhold C.

    1999-06-01

    The mission of the division is to advance science and technology to understand complex biological systems and their relationship with human health and the environment.

  6. Geolocation of LTE Subscriber Stations Based on the Timing Advance Ranging Parameter

    DTIC Science & Technology

    2010-12-01

    Figure excerpts: timing advance range rings (from [8]); functional commonality between SC-FDMA and OFDMA signal chains (from [14]). Acronyms: SC-FDMA, Single Carrier Frequency Division Multiple Access; SDU, Service Data Unit; SS, Subscriber Station; SVD, Singular Value Decomposition; TA, Timing Advance. ... uplink and downlink. They are single-carrier frequency-division multiple access (SC-FDMA) and orthogonal frequency-division multiple access (OFDMA

  7. Water Reactor Safety Research Division. Quarterly progress report, April 1-June 30, 1980

    SciTech Connect

    Abuaf, N.; Levine, M.M.; Saha, P.; van Rooyen, D.

    1980-08-01

    The Water Reactor Safety Research Programs quarterly report describes current activities and technical progress in the programs at Brookhaven National Laboratory sponsored by the USNRC Division of Reactor Safety Research. The projects reported each quarter are the following: LWR Thermal Hydraulic Development, Advanced Code Evaluation, TRAC Code Assessment, and Stress Corrosion Cracking of PWR Steam Generator Tubing.

  8. 24 CFR 4.34 - Review of Inspector General's report by the Ethics Law Division.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Review of Inspector General's report by the Ethics Law Division. 4.34 Section 4.34 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development HUD REFORM ACT Prohibition of Advance Disclosure...

  9. 24 CFR 4.36 - Action by the Ethics Law Division.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Action by the Ethics Law Division. 4.36 Section 4.36 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development HUD REFORM ACT Prohibition of Advance Disclosure of Funding Decisions § 4.36 Action...

  10. Alaska climate divisions based on objective methods

    NASA Astrophysics Data System (ADS)

    Angeloff, H.; Bieniek, P. A.; Bhatt, U. S.; Thoman, R.; Walsh, J. E.; Daly, C.; Shulski, M.

    2010-12-01

    Alaska is vast geographically, is located at high latitudes, is surrounded on three sides by oceans and has complex topography, encompassing several climate regions. While climate zones exist, there has not been an objective analysis to identify regions of homogeneous climate. In this study we use cluster analysis on a robust set of weather observation stations in Alaska to develop climate divisions for the state. Similar procedures have been employed in the contiguous United States and other parts of the world. Our analysis, based on temperature and precipitation, yielded a set of 10 preliminary climate divisions. These divisions include an eastern and western Arctic (bounded by the Brooks Range to the south), a west coast region along the Bering Sea, and eastern and western Interior regions (bounded to the south by the Alaska Range). South of the Alaska Range there were the following divisions: an area around Cook Inlet (also including Valdez), coastal and inland areas along Bristol Bay including Kodiak and Lake Iliamna, the Aleutians, and Southeast Alaska. To validate the climate divisions based on relatively sparse station data, additional sensitivity analysis was performed. Additional clustering analysis utilizing the gridded North American Regional Reanalysis (NARR) was also conducted. In addition, the divisions were evaluated using correlation analysis. These sensitivity tests support the climate divisions based on cluster analysis.
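    A hedged sketch of this kind of station clustering (the exact method, variables, and data used by the authors are not reproduced here): each station is represented by a vector of monthly temperature and precipitation climatologies, the vectors are standardized, and an agglomerative clustering is cut at a chosen number of divisions.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.stats import zscore

      # Illustrative only: random stand-ins for station climatologies.
      # Rows = stations; columns = 12 monthly mean temperatures + 12 monthly precipitation totals.
      rng = np.random.default_rng(0)
      station_climatology = rng.normal(size=(60, 24))

      features = zscore(station_climatology, axis=0)          # put temperature and precipitation on equal footing
      tree = linkage(features, method="ward")                 # agglomerative clustering of stations
      divisions = fcluster(tree, t=10, criterion="maxclust")  # cut the tree into 10 candidate climate divisions
      print(np.bincount(divisions)[1:])                       # number of stations per division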

  11. Physics Division annual report, April 1, 1995--March 31, 1996

    SciTech Connect

    Thayer, K.J.

    1996-11-01

    The past year has seen several major advances in the Division's research programs. In heavy-ion physics these include experiments with radioactive beams of interest to nuclear astrophysics, a first exploration of the structure of nuclei situated beyond the proton drip line, the discovery of new proton emitters--the heaviest known, the first unambiguous detection of discrete linking transitions between superdeformed and normal deformed states, and the impact of the APEX results, which were the first to report, conclusively, no sign of the previously reported sharp electron-positron sum lines. The medium-energy nuclear physics program of the Division has led the first round of experiments at the CEBAF accelerator at the Thomas Jefferson National Accelerator Facility and the study of color transparency in rho meson propagation at the HERMES experiment at DESY, and it has established nuclear polarization in a laser-driven polarized hydrogen target. In atomic physics, the non-dipolar contribution to photoionization has been quantitatively established for the first time, the atomic physics beamline at the Argonne 7 GeV Advanced Photon Source was constructed and, by now, first experiments have been successfully performed. The theory program has pushed exact many-body calculations with fully realistic interactions (the Argonne v18 potential) to the seven-nucleon system, and interesting results have been obtained for the structure of deformed nuclei through mean-field calculations and for the structure of baryons with QCD calculations based on the Dyson-Schwinger approach. Brief summaries are given of the individual research programs.

  12. 622-Mbps Orthogonal Frequency Division Multiplexing Modulator Developed

    NASA Technical Reports Server (NTRS)

    Nguyen, Na T.

    1999-01-01

    The Communications Technology Division at the NASA Lewis Research Center is developing advanced electronic technologies for the space communications and remote sensing systems of tomorrow. As part of the continuing effort to advance the state of the art in satellite communications and remote sensing systems, Lewis is developing a programmable Orthogonal Frequency Division Multiplexing (OFDM) modulator card for high-data-rate communication links. The OFDM modulator is particularly suited to high-data-rate downlinks to ground terminals or direct data downlinks from near-Earth science platforms. It can support data rates up to 622 megabits per second (Mbps) and high-order modulation schemes such as 16-ary quadrature amplitude modulation (16-ary QAM) or 8-phase shift keying (8PSK). High-order modulations achieve greater bandwidth efficiency than the traditional binary phase shift keying (BPSK) or quadrature phase shift keying (QPSK) modulation schemes. The OFDM modulator architecture can also be precompensated for channel disturbances to alleviate amplitude degradations caused by nonlinear transponder characteristics.
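    To make the signal chain concrete, here is a minimal baseband sketch (illustrative subcarrier count and cyclic-prefix length; not the flight-hardware design): bits are mapped to 16-ary QAM symbols, the symbols are placed on orthogonal subcarriers with an inverse FFT, and a cyclic prefix is prepended.

      import numpy as np

      N_SC, CP = 64, 16                        # subcarriers and cyclic-prefix length (illustrative)
      LEVELS = np.array([-3.0, -1.0, 3.0, 1.0])  # amplitude levels for each 2-bit pair (one I rail, one Q rail)

      def qam16_map(bits):
          """Map groups of 4 bits to one 16-QAM symbol (power normalization omitted)."""
          b = bits.reshape(-1, 4)
          i = LEVELS[b[:, 0] * 2 + b[:, 1]]
          q = LEVELS[b[:, 2] * 2 + b[:, 3]]
          return i + 1j * q

      def ofdm_symbol(bits):
          """One OFDM symbol: 16-QAM mapping, IFFT across subcarriers, cyclic prefix."""
          syms = qam16_map(bits)               # expects 4 * N_SC bits
          time = np.fft.ifft(syms, n=N_SC)
          return np.concatenate([time[-CP:], time])

      rng = np.random.default_rng(1)
      tx = ofdm_symbol(rng.integers(0, 2, size=4 * N_SC))
      print(tx.shape)                          # (80,) complex baseband samples per OFDM symbol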

  13. Cognitive and Neural Sciences Division 1990 Programs

    DTIC Science & Technology

    1990-08-01

    Cognitive and Neural Sciences Division 1990 Programs. Office of Naval Research, 800 North Quincy Street, Arlington... Edited by Willard S. Vaughan. This is a compilation of abstracts representing R&D sponsored by the ONR Cognitive and Neural Sciences Division.

  14. Overview of the Applied Aerodynamics Division

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A major reorganization of the Aeronautics Directorate of the Langley Research Center occurred in early 1989. As a result of this reorganization, the scope of research in the Applied Aeronautics Division is now quite different than that in the past. An overview of the current organization, mission, and facilities of this division is presented. A summary of current research programs and sample highlights of recent research are also presented. This is intended to provide a general view of the scope and capabilities of the division.

  15. History of the Fluids Engineering Division

    DOE PAGES

    Cooper, Paul; Martin, C. Samuel; O'Hern, Timothy J.

    2016-08-03

    The 90th Anniversary of the Fluids Engineering Division (FED) of ASME will be celebrated on July 10–14, 2016 in Washington, DC. The venue is ASME's Summer Heat Transfer Conference (SHTC), Fluids Engineering Division Summer Meeting (FEDSM), and International Conference on Nanochannels and Microchannels (ICNMM). The occasion is an opportune time to celebrate and reflect on the origin of FED and its predecessor—the Hydraulic Division (HYD), which existed from 1926–1963. Furthermore, the FED Executive Committee decided that it would be appropriate to publish concurrently a history of the HYD/FED.

  16. Biology and Medicine Division: Annual report 1986

    SciTech Connect

    Not Available

    1987-04-01

    The Biology and Medicine Division continues to make important contributions in scientific areas in which it has a long-established leadership role. For 50 years the Division has pioneered in the application of radioisotopes and charged particles to biology and medicine. There is a growing emphasis on cellular and molecular applications in the work of all the Division's research groups. The powerful tools of genetic engineering, the use of recombinant products, the analytical application of DNA probes, and the use of restriction fragment length polymorphic DNA are described and proposed for increasing use in the future.

  17. Earth Sciences Division collected abstracts: 1979

    SciTech Connect

    Henry, A.L.; Schwartz, L.L.

    1980-04-30

    This report is a compilation of abstracts of papers, internal reports, and talks presented during 1979 at national and international meetings by members of the Earth Sciences Division, Lawrence Livermore Laboratory. The arrangement is alphabetical (by author). For a given report, a bibliographic reference appears under the name of each coauthor, but the abstract itself is given only under the name of the first author or the first Earth Sciences Division author. A topical index at the end of the report provides useful cross references, while indicating major areas of research interest in the Earth Sciences Division.

  18. Asymmetric stem cell division: lessons from Drosophila.

    PubMed

    Wu, Pao-Shu; Egger, Boris; Brand, Andrea H

    2008-06-01

    Asymmetric cell division is an important and conserved strategy in the generation of cellular diversity during animal development. Many of our insights into the underlying mechanisms of asymmetric cell division have been gained from Drosophila, including the establishment of polarity, orientation of mitotic spindles, and segregation of cell fate determinants. Recent studies are also beginning to reveal the connection between the misregulation of asymmetric cell division and cancer. What we are learning from Drosophila as a model system has implications both for stem cell biology and for cancer research.

  19. Chemical and Laser Sciences Division annual report 1989

    SciTech Connect

    Haines, N.

    1990-06-01

    The Chemical and Laser Sciences Division Annual Report includes articles describing representative research and development activities within the Division, as well as major programs to which the Division makes significant contributions.

  20. 6. Contextual view of Fairbanks Company, looking south along Division ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. Contextual view of Fairbanks Company, looking south along Division Street, showing relationship of factory to surrounding area, 213, 215, & 217 Division Street appear on right side of street - Fairbanks Company, 202 Division Street, Rome, Floyd County, GA

  1. 3. Oblique view of 215 Division Street, looking southeast, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Oblique view of 215 Division Street, looking southeast, showing rear (west) facade and north side, Fairbanks Company appears at left and 215 Division Street is visible at right - 215 Division Street (House), Rome, Floyd County, GA

  2. 2. Oblique view of 215 Division Street, looking northeast, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. Oblique view of 215 Division Street, looking northeast, showing rear (west) facade and south side, 217 Division Street is visible at left and Fairbanks Company appears at right - 215 Division Street (House), Rome, Floyd County, GA

  3. 3. Oblique view of 213 Division Street, looking northeast, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Oblique view of 213 Division Street, looking northeast, showing rear (west) facade and south side, 215 Division Street is visible at left and Fairbanks Company appears at right - 213 Division Street (House), Rome, Floyd County, GA

  4. Energy Division annual progress report for period ending September 30, 1991

    SciTech Connect

    Stone, J.N.

    1992-04-01

    The Energy Division is one of 17 research divisions at Oak Ridge National Laboratory. Its goals and accomplishments are described in this annual progress report for FY 1991. The division's total expenditures in FY 1991 were $39.1 million. The work is supported by the US Department of Energy, US Department of Defense, many other federal agencies, and some private organizations. Disciplines of the 124 technical staff members include engineering, social sciences, physical and life sciences, and mathematics and statistics. The Energy Division's programmatic activities focus on three major areas: (1) analysis and assessment, (2) energy conservation technologies, and (3) military transportation systems. Analysis and assessment activities cover energy and resource analysis, the preparation of environmental assessments and impact statements, research on waste management, analysis of emergency preparedness for natural and technological disasters, analysis of the energy and environmental needs of developing countries, technology transfer, and analysis of civilian transportation. Energy conservation technologies include electric power systems, building equipment (thermally activated heat pumps, advanced refrigeration systems, novel cycles), building envelopes (walls, foundations, roofs, attics, and materials), and technical issues for improving energy efficiency in existing buildings. Military transportation systems concentrate on research for sponsors within the US military on improving the efficiency of military deployment, scheduling, and transportation coordination.

  5. WESTERN ECOLOGY DIVISION - GENERAL INFORMATION SHEET

    EPA Science Inventory

    The Western Ecology Division (WED), part of EPA's National Health and Environmental Effects Research Laboratory, provides information to EPA offices and regions nationwide to improve understanding of how human activities affect estuarine,...

  6. Chemical Sciences Division: Annual report 1992

    SciTech Connect

    Not Available

    1993-10-01

    The Chemical Sciences Division (CSD) is one of twelve research Divisions of the Lawrence Berkeley Laboratory, a Department of Energy National Laboratory. The CSD is composed of individual groups and research programs that are organized into five scientific areas: Chemical Physics, Inorganic/Organometallic Chemistry, Actinide Chemistry, Atomic Physics, and Physical Chemistry. This report describes progress by the CSD for 1992. Also included are remarks by the Division Director, a description of work for others (United States Office of Naval Research), and appendices of the Division personnel and an index of investigators. Research reports are grouped as Fundamental Interactions (Photochemical and Radiation Sciences, Chemical Physics, Atomic Physics) or Processes and Techniques (Chemical Energy, Heavy-Element Chemistry, and Chemical Engineering Sciences).

  7. About DCP | Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention (DCP) is the primary unit of the National Cancer Institute devoted to cancer prevention research. DCP provides funding and administrative support to clinical and laboratory researchers, community and multidisciplinary teams, and collaborative scientific networks. |

  8. Nanoengineering: Super symmetry in cell division

    NASA Astrophysics Data System (ADS)

    Huang, Kerwyn Casey

    2015-08-01

    Bacterial cells can be sculpted into different shapes using nanofabricated chambers and then used to explore the spatial adaptation of protein oscillations that play an important role in cell division.

  9. [Diagnosticum of abnormalities of plant meiotic division].

    PubMed

    Shamina, N V

    2006-01-01

    Abnormalities of plant meiotic division leading to abnormal meiotic products are summarized schematically in the paper. Causes of formation of monads, abnormal diads, triads, pentads, polyads, etc. have been observed in meiosis with both successive and simultaneous cytokinesis.

  10. 3. Perspective view of Express Building looking northeast, with Division ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Perspective view of Express Building looking northeast, with Division Street in foreground - American Railway Express Company Freight Building, 1060 Northeast Division Street, Bend, Deschutes County, OR

  11. 15 CFR 950.8 - Satellite Data Services Division (SDSD).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...., weather forecasting) have been satisfied. The division also provides photographs collected during NASA's... to: Satellite Data Services Division, World Weather Building, Room 606, Washington, DC 20233,...

  12. 15 CFR 950.8 - Satellite Data Services Division (SDSD).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...., weather forecasting) have been satisfied. The division also provides photographs collected during NASA's... to: Satellite Data Services Division, World Weather Building, Room 606, Washington, DC 20233,...

  13. 15 CFR 950.8 - Satellite Data Services Division (SDSD).

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...., weather forecasting) have been satisfied. The division also provides photographs collected during NASA's... to: Satellite Data Services Division, World Weather Building, Room 606, Washington, DC 20233,...

  14. 15 CFR 950.8 - Satellite Data Services Division (SDSD).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...., weather forecasting) have been satisfied. The division also provides photographs collected during NASA's... to: Satellite Data Services Division, World Weather Building, Room 606, Washington, DC 20233,...

  15. 15 CFR 950.8 - Satellite Data Services Division (SDSD).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...., weather forecasting) have been satisfied. The division also provides photographs collected during NASA's... to: Satellite Data Services Division, World Weather Building, Room 606, Washington, DC 20233,...

  16. Contacts in the Office of Pesticide Programs, Registration Division

    EPA Pesticide Factsheets

    The Registration Division (RD) is responsible for product registrations, amendments, tolerances, experimental use permits, and emergency exemptions for conventional chemical pesticides. Find contacts in this division.

  17. Plastid Division: Evolution, Mechanism and Complexity

    PubMed Central

    Maple, Jodi; Møller, Simon Geir

    2007-01-01

    Background The continuity of chloroplasts is maintained by division of pre-existing chloroplasts. Chloroplasts originated as bacterial endosymbionts; however, the majority of bacterial division factors are absent from chloroplasts and the eukaryotic host has added several new components. For example, the ftsZ gene has been duplicated and modified, and the Min system has retained MinE and MinD but lost MinC, acquiring at least one new component ARC3. Further, the mechanism has evolved to include two members of the dynamin protein family, ARC5 and FZL, and plastid-dividing (PD) rings were most probably added by the eukaryotic host. Scope Deciphering how the division of plastids is coordinated and controlled by nuclear-encoded factors is key to our understanding of this important biological process. Through a number of molecular-genetic and biochemical approaches, it is evident that FtsZ initiates plastid division where the coordinated action of MinD and MinE ensures correct FtsZ (Z)-ring placement. Although the classical FtsZ antagonist MinC does not exist in plants, ARC3 may fulfil this role. Together with other prokaryotic-derived proteins such as ARC6 and GC1 and key eukaryotic-derived proteins such as ARC5 and FZL, these proteins make up a sophisticated division machinery. The regulation of plastid division in a cellular context is largely unknown; however, recent microarray data shed light on this. Here the current understanding of the mechanism of chloroplast division in higher plants is reviewed with an emphasis on how recent findings are beginning to shape our understanding of the function and evolution of the components. Conclusions Extrapolation from the mechanism of bacterial cell division provides valuable clues as to how the chloroplast division process is achieved in plant cells. However, it is becoming increasingly clear that the highly regulated mechanism of plastid division within the host cell has led to the evolution of features unique to the

  18. Nuclear Science Division: 1993 Annual report

    SciTech Connect

    Myers, W.D.

    1994-06-01

    This report describes the activities of the Nuclear Science Division for the 1993 calendar year. This was another significant year in the history of the Division with many interesting and important accomplishments. Activities for the following programs are covered here: (1) nuclear structure and reactions program; (2) the Institute for Nuclear and Particle Astrophysics; (3) relativistic nuclear collisions program; (4) nuclear theory program; (5) nuclear data evaluation program, isotope project; and (6) 88-inch cyclotron operations.

  19. Earth Sciences Division collected abstracts: 1980

    SciTech Connect

    Henry, A.L.; Hornady, B.F.

    1981-10-15

    This report is a compilation of abstracts of papers, reports, and talks presented during 1980 at national and international meetings by members of the Earth Sciences Division, Lawrence Livermore National Laboratory. The arrangement is alphabetical (by author). For a given report, a bibliographic reference appears under the name of each coauthor, but the abstract itself is given only under the name of the first author (indicated in capital letters) or the first Earth Sciences Division author.

  20. Earth Sciences Division annual report 1989

    SciTech Connect

    Not Available

    1990-06-01

    This Annual Report presents summaries of selected representative research activities from Lawrence Berkeley Laboratory grouped according to the principal disciplines of the Earth Sciences Division: Reservoir Engineering and Hydrology, Geology and Geochemistry, and Geophysics and Geomechanics. We are proud to be able to bring you this report, which we hope will convey not only a description of the Division's scientific activities but also a sense of the enthusiasm and excitement present today in the Earth Sciences.

  1. Friday's Agenda | Division of Cancer Prevention

    Cancer.gov

    8:00 am - 8:10 am: Welcome and Opening Remarks. Leslie Ford, MD, Associate Director for Clinical Research, Division of Cancer Prevention, NCI; Eva Szabo, MD, Chief, Lung and Upper Aerodigestive Cancer Research Group, Division of Cancer Prevention, NCI.
    8:10 am - 8:40 am: Clinical Trials Statistical Concepts for Non-Statisticians. Kevin Dodd, PhD |

  2. The fencing problem and Coleochaete cell division.

    PubMed

    Wang, Yuandi; Dou, Mingya; Zhou, Zhigang

    2015-03-01

    The findings of this study suggest that the solution of a boundary value problem for a differential equation system can be used to analyze both the fencing problem in mathematics and cell division in Coleochaete, a green alga. This differential equation model, in parametric form, is used to simulate two kinds of cell division process: the usual case and the case with a "dead" daughter cell.

  3. Weapons Experiments Division Explosives Operations Overview

    SciTech Connect

    Laintz, Kenneth E.

    2012-06-19

    This presentation covers WX Division programmatic operations with a focus on JOWOG-9 interests. A brief look at DARHT is followed by a high-level overview of explosives research activities currently being conducted within the experimental groups of WX Division. The presentation places particular emphasis on activities and facilities at TA-9, as these efforts have traditionally been more closely aligned with the ongoing collaborative explosives exchanges covered under JOWOG-9.

  4. Division II: Commission 10: Solar Activity

    NASA Astrophysics Data System (ADS)

    van Driel-Gesztelyi, Lidia; Scrijver, Karel J.; Klimchuk, James A.; Charbonneau, Paul; Fletcher, Lyndsay; Hasan, S. Sirajul; Hudson, Hugh S.; Kusano, Kanya; Mandrini, Cristina H.; Peter, Hardi; Vršnak, Bojan; Yan, Yihua

    2015-08-01

    The Business Meeting of Commission 10 was held as part of the Business Meeting of Division II (Sun and Heliosphere), chaired by Valentin Martínez-Pillet, the President of the Division. The President of Commission 10 (C10; Solar activity), Lidia van Driel-Gesztelyi, took the chair for the business meeting of C10. She summarised the activities of C10 over the triennium and the election of the incoming OC.

  5. Peroxisome division and proliferation in plants.

    PubMed

    Aung, Kyaw; Zhang, Xinchun; Hu, Jianping

    2010-06-01

    Peroxisomes are eukaryotic organelles with crucial functions in development. Plant peroxisomes participate in various metabolic processes, some of which are co-operated by peroxisomes and other organelles, such as mitochondria and chloroplasts. Defining the complete picture of how these essential organelles divide and proliferate will be instrumental in understanding how the dynamics of peroxisome abundance contribute to changes in plant physiology and development. Research in Arabidopsis thaliana has identified several evolutionarily conserved major components of the peroxisome division machinery, including five isoforms of PEROXIN11 proteins (PEX11), two dynamin-related proteins (DRP3A and DRP3B) and two FISSION1 proteins (FIS1A/BIGYIN and FIS1B). Recent studies in our laboratory have also begun to uncover plant-specific factors. DRP5B is a dual-localized protein that is involved in the division of both chloroplasts and peroxisomes, representing an invention of the plant/algal lineage in organelle division. In addition, PMD1 (peroxisomal and mitochondrial division 1) is a plant-specific protein tail anchored to the outer surface of peroxisomes and mitochondria, mediating the division and/or positioning of these organelles. Lastly, light induces peroxisome proliferation in dark-grown Arabidopsis seedlings, at least in part, through activating the PEX11b gene. The far-red light receptor phyA (phytochrome A) and the transcription factor HYH (HY5 homologue) are key components in this signalling pathway. In summary, pathways for the division and proliferation of plant peroxisomes are composed of conserved and plant-specific factors. The sharing of division proteins by peroxisomes, mitochondria and chloroplasts is also suggesting possible co-ordination in the division of these metabolically associated plant organelles.

  6. Medical Sciences Division report for 1993

    SciTech Connect

    Not Available

    1993-12-31

    This year's Medical Sciences Division (MSD) Report is organized to show how programs in our division contribute to the core competencies of the Oak Ridge Institute for Science and Education (ORISE). ORISE's core competencies in education and training, environmental and safety evaluation and analysis, occupational and environmental health, and enabling research support the overall mission of the US Department of Energy (DOE).

  7. Applications of supercomputers in engineering: Fluid flow and stress analysis applications; Proceedings of the First International Conference, University of Southampton, England, Sept. 5-7, 1989. Volume 2

    NASA Astrophysics Data System (ADS)

    Brebbia, Carlos Alberto; Peters, Alexander

    The application of different numerical techniques to the solution of a large variety of engineering applications on supercomputers is examined. Problems related to the impact of the novel architectures on the performance of computer programs are addressed. Future trends in the development of new software tools for the solution of large industrial problems are discussed. Attention is focused on the use of supercomputers for solving the Navier-Stokes equations, computational aerodynamics on highly parallel processors, parallel numerical simulation of supersonic flows, and the vectorization of a boundary element code. Structural and stress-analysis applications are also considered, with emphasis on large-scale vectorized three-dimensional geometrically nonlinear shell analysis, vector and parallel numerical integration schemes for structural dynamics, large-scale structural design optimization, and dynamic analysis on parallel computers using a fast parallel algorithm.

  8. Structure, function and controls in microbial division.

    PubMed

    Vicente, M; Errington, J

    1996-04-01

    Several crucial genes required for bacterial division lie close together in a region called the dcw cluster. Within the cluster, gene expression is subject to complex transcriptional regulation, which serves to adjust the cell cycle in response to growth rate. The pivotally important FtsZ protein, which is needed to initiate division, is now known to interact with many other components of the division machinery in Escherichia coli. Some biochemical properties of FtsZ, and of another division protein called FtsA, suggest that they are similar to the eukaryotic proteins tubulin and actin respectively. Cell division needs to be closely co-ordinated with chromosome partitioning. The mechanism of partitioning is poorly understood, though several genes involved in this process, including several muk genes, have been identified. The min genes may participate in both septum positioning and chromosome partitioning. Coupled transcription and translation of membrane-associated proteins might also be important for partitioning. In the event of a failure in the normal partitioning process, Bacillus subtilis, at least, has a mechanism for removing a bisected nucleoid from the division septum.

  9. National Security Report: Background and Perspective on Important National Security and Defense Policy Issues. Volume 2, Issue 2, April 1998. Sales or Security? Supercomputers and Export Controls

    DTIC Science & Technology

    1998-04-01

    From the Chairman: Sales or Security? Supercomputers and Export Controls (Chairman, House National Security Committee, April 1998). ... the Administration's relaxation of export controls ... supercomputers were inappropriately shipped without the required export licenses for military purposes in Russia, China, and other countries of proliferation concern. This shifted the burden of ... Under the relaxed policy the Administration did

  10. Distributed Processing of PIV images with a low power cluster supercomputer

    NASA Astrophysics Data System (ADS)

    Smith, Barton; Horne, Kyle; Hauser, Thomas

    2007-11-01

    Recent advances in digital photography and solid-state lasers make it possible to acquire images at up to 3000 frames per second. However, as the ability to acquire large samples very quickly has been realized, processing speed has not kept pace. A 2-D Particle Image Velocimetry (PIV) acquisition computer would require over five hours to process the data that can be acquired in one second with a Time-resolved Stereo PIV (TRSPIV) system. To decrease the computational time, parallel processing using a Beowulf cluster has been applied. At USU we have developed a low-power Beowulf cluster integrated with the data acquisition system of a TRSPIV system. This approach of integrating the PIV system and the Beowulf cluster eliminates the communication time, thus speeding up the process. In addition to improving the practicality of TRSPIV, this system will also be useful to researchers performing any PIV measurement where a large number of samples are required. Our presentation will describe the hardware and software implementation of our approach.
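    A hedged sketch of the kind of work such a cluster distributes (mpi4py is assumed to be available; this is not the USU implementation): each image pair is reduced to a displacement field by FFT-based cross-correlation of interrogation windows, and the list of image pairs is divided round-robin across the cluster nodes.

      import numpy as np
      from mpi4py import MPI                   # assumed available on the cluster

      WIN = 32                                 # interrogation window size (illustrative)

      def window_displacement(a, b):
          """Peak location of the FFT-based cross-correlation of two interrogation windows."""
          corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # Wrap to signed displacements.
          return ((dy + WIN // 2) % WIN - WIN // 2, (dx + WIN // 2) % WIN - WIN // 2)

      def process_pair(img_a, img_b):
          """Displacement field for one image pair, one vector per non-overlapping window."""
          ny, nx = img_a.shape[0] // WIN, img_a.shape[1] // WIN
          field = np.zeros((ny, nx, 2))
          for j in range(ny):
              for i in range(nx):
                  sl = np.s_[j * WIN:(j + 1) * WIN, i * WIN:(i + 1) * WIN]
                  field[j, i] = window_displacement(img_a[sl], img_b[sl])
          return field

      comm = MPI.COMM_WORLD
      rng = np.random.default_rng(0)           # same seed everywhere so all ranks see the same stand-in data
      pairs = [(rng.random((256, 256)), rng.random((256, 256))) for _ in range(100)]
      my_fields = [process_pair(a, b) for k, (a, b) in enumerate(pairs) if k % comm.size == comm.rank]
      all_fields = comm.gather(my_fields, root=0)   # rank 0 collects the per-node results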

  11. Scheduling and routing algorithm for aggregating large data files from distributed databases to super-computers on lambda grid

    NASA Astrophysics Data System (ADS)

    Sun, Shen; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    While the traditional Internet cannot meet the requirements of data-intensive communications in large-scale e-science grid applications, optical networks, also referred to as lambda grids, provide a simple means of achieving guaranteed high bandwidth, guaranteed latency, and deterministic connections. Many e-science applications, such as e-VLBI and GTL, frequently require aggregating data files of several hundred gigabytes from distributed databases to supercomputers in real time, so minimizing the aggregation time can improve overall system performance. We consider the problem of aggregating large data files from distributed databases to distributed computational resources on a lambda grid. We modify the previously proposed Time-Path Scheduling Problem (TPSP) model, propose a new N-destination TPSP (NDTPSP) model, and prove that NDTPSP is NP-complete. We also propose a list scheduling algorithm and a modified list scheduling algorithm for this problem, and compare and analyze the performance of the different algorithms through simulations.
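    As an illustration of the list-scheduling idea (a hedged greedy sketch; the actual NDTPSP formulation and algorithms are given in the paper, not here), transfers can be ordered by decreasing file size and each assigned to the lightpath toward its destination that becomes available earliest:

      import heapq

      def list_schedule(transfers, paths_per_dest):
          """
          Greedy list-scheduling sketch. `transfers` is a list of (file_size_gb, destination);
          `paths_per_dest` maps each destination to its number of dedicated lightpaths
          (assumed identical, 10 Gb/s each). Returns the estimated aggregation time per destination.
          """
          rate_gb_per_s = 10.0 / 8.0                            # one 10 Gb/s wavelength, in GB/s
          paths = {d: [0.0] * n for d, n in paths_per_dest.items()}  # path-available times
          for d in paths:
              heapq.heapify(paths[d])
          finish = {d: 0.0 for d in paths}
          for size, dest in sorted(transfers, reverse=True):    # largest files first
              start = heapq.heappop(paths[dest])                # earliest-available path to that destination
              end = start + size / rate_gb_per_s
              heapq.heappush(paths[dest], end)
              finish[dest] = max(finish[dest], end)
          return finish

      # Illustrative numbers only: files of a few hundred GB bound for two supercomputers.
      jobs = [(300, "sc1"), (120, "sc1"), (200, "sc2"), (80, "sc2"), (150, "sc1")]
      print(list_schedule(jobs, {"sc1": 2, "sc2": 1}))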

  12. Scaling Time Warp-based Discrete Event Execution to 10^4 Processors on Blue Gene Supercomputer

    SciTech Connect

    Perumalla, Kalyan S

    2007-01-01

    Lately, important large-scale simulation applications, such as emergency/event planning and response, are emerging that are based on discrete event models. The applications are characterized by their scale (several millions of simulated entities), their fine-grained nature of computation (microseconds per event), and their highly dynamic inter-entity event interactions. The desired scale and speed together call for highly scalable parallel discrete event simulation (PDES) engines. However, few such parallel engines have been designed or tested on platforms with thousands of processors. Here an overview is given of a unique PDES engine that has been designed to support Time Warp-style optimistic parallel execution as well as a more generalized mixed, optimistic-conservative synchronization. The engine is designed to run on massively parallel architectures with minimal overheads. A performance study of the engine is presented, including the first results to date of PDES benchmarks demonstrating scalability to as many as 16,384 processors, on an IBM Blue Gene supercomputer. The results show, for the first time, the promise of effectively sustaining very large scale discrete event execution on up to 10^4 processors.
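    For readers unfamiliar with Time Warp, the following is a hedged, heavily simplified single-process sketch of the optimistic mechanism such an engine supports (the real engine also handles anti-messages, GVT computation, and fossil collection across thousands of processors): state is saved before each event is executed optimistically, and a straggler event with an earlier timestamp rolls the logical process back to the last state that precedes it.

      import heapq
      from copy import deepcopy

      class LogicalProcess:
          """Toy Time Warp logical process: optimistic execution with state saving and rollback."""
          def __init__(self):
              self.state = {"count": 0}
              self.lvt = 0.0                     # local virtual time
              self.pending = []                  # heap of (timestamp, payload)
              self.processed = []                # (timestamp, payload, saved_state) for possible rollback

          def schedule(self, ts, payload):
              if ts < self.lvt:                  # straggler: undo optimistic work past ts
                  self.rollback(ts)
              heapq.heappush(self.pending, (ts, payload))

          def rollback(self, ts):
              while self.processed and self.processed[-1][0] >= ts:
                  old_ts, payload, saved = self.processed.pop()
                  heapq.heappush(self.pending, (old_ts, payload))   # will be re-executed later
                  self.state = saved
              self.lvt = self.processed[-1][0] if self.processed else 0.0

          def run_available(self):
              while self.pending:
                  ts, payload = heapq.heappop(self.pending)
                  self.processed.append((ts, payload, deepcopy(self.state)))  # save state before executing
                  self.state["count"] += payload                              # the "event handler"
                  self.lvt = ts

      lp = LogicalProcess()
      for ts in (1.0, 2.0, 4.0):
          lp.schedule(ts, 1)
      lp.run_available()
      lp.schedule(3.0, 10)          # straggler arrives: rolls back the event at ts = 4.0
      lp.run_available()
      print(lp.lvt, lp.state)       # 4.0 {'count': 13}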

  13. Madden–Julian Oscillation prediction skill of a new-generation global model demonstrated using a supercomputer

    PubMed Central

    Miyakawa, Tomoki; Satoh, Masaki; Miura, Hiroaki; Tomita, Hirofumi; Yashiro, Hisashi; Noda, Akira T.; Yamada, Yohei; Kodama, Chihiro; Kimoto, Masahide; Yoneyama, Kunio

    2014-01-01

    Global cloud/cloud system-resolving models are perceived to perform well in the prediction of the Madden–Julian Oscillation (MJO), a huge eastward-propagating atmospheric pulse that dominates intraseasonal variation of the tropics and affects the entire globe. However, owing to model complexity, detailed analysis is limited by computational power. Here we carry out a simulation series using a recently developed supercomputer, which enables the statistical evaluation of the MJO prediction skill of a costly new-generation model in a manner similar to operational forecast models. We estimate the current MJO predictability of the model as 27 days by conducting simulations including all winter MJO cases identified during 2003–2012. The simulated precipitation patterns associated with different MJO phases compare well with observations. An MJO case captured in a recent intensive observation is also well reproduced. Our results reveal that the global cloud-resolving approach is effective in understanding the MJO and in providing month-long tropical forecasts. PMID:24801254

  14. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    SciTech Connect

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.; MILLER,MARC M.; ADAMS,ROGER L.; CHEN,HELEN Y.; BRANDT,JAMES M.; WYCKOFF,PETER S.

    2000-07-24

    This document highlights the DISCOM^2 distance computing and communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore, and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM^2 project. The DISCOM^2 communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  15. Waveform inversion for 3-D earth structure using the Direct Solution Method implemented on vector-parallel supercomputer

    NASA Astrophysics Data System (ADS)

    Hara, Tatsuhiko

    2004-08-01

    We implement the Direct Solution Method (DSM) on a vector-parallel supercomputer and show that it is possible to significantly improve its computational efficiency through parallel computing. We apply the parallel DSM calculation to waveform inversion of long period (250-500 s) surface wave data for three-dimensional (3-D) S-wave velocity structure in the upper and uppermost lower mantle. We use a spherical harmonic expansion to represent lateral variation with the maximum angular degree 16. We find significant low velocities under south Pacific hot spots in the transition zone. This is consistent with other seismological studies conducted in the Superplume project, which suggests deep roots of these hot spots. We also perform simultaneous waveform inversion for 3-D S-wave velocity and Q structure. Since resolution for Q is not good, we develop a new technique in which power spectra are used as data for inversion. We find good correlation between long wavelength patterns of Vs and Q in the transition zone such as high Vs and high Q under the western Pacific.
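    In a hedged schematic form (generic symbols, not copied from the paper), the lateral parameterization described above expands the S-wave velocity perturbation at each depth in spherical harmonics up to angular degree 16:

      \[
      \frac{\delta V_S}{V_S}(r,\theta,\phi) \;=\; \sum_{l=0}^{16}\,\sum_{m=-l}^{l} c_{lm}(r)\, Y_{lm}(\theta,\phi),
      \]

    where the depth-dependent coefficients c_lm(r) are the model parameters estimated from the long-period waveform data.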

  16. Solid State Division progress report for period ending September 30, 1993

    SciTech Connect

    Green, P.H.; Hinton, L.W.

    1994-08-01

    This report covers research progress in the Solid State Division from April 1, 1992, to September 30, 1993. During this period, the division conducted a broad, interdisciplinary materials research program with emphasis on theoretical solid state physics, neutron scattering, synthesis and characterization of materials, ion beam and laser processing, and the structure of solids and surfaces. This research effort was enhanced by new capabilities in atomic-scale materials characterization, new emphasis on the synthesis and processing of materials, and increased partnering with industry and universities. The theoretical effort included a broad range of analytical studies, as well as a new emphasis on numerical simulation stimulated by advances in high-performance computing and by strong interest in related division experimental programs. Superconductivity research continued to advance on a broad front from fundamental mechanisms of high-temperature superconductivity to the development of new materials and processing techniques. The Neutron Scattering Program was characterized by a strong scientific user program and growing diversity represented by new initiatives in complex fluids and residual stress. The national emphasis on materials synthesis and processing was mirrored in division research programs in thin-film processing, surface modification, and crystal growth. Research on advanced processing techniques such as laser ablation, ion implantation, and plasma processing was complemented by strong programs in the characterization of materials and surfaces including ultrahigh resolution scanning transmission electron microscopy, atomic-resolution chemical analysis, synchrotron x-ray research, and scanning tunneling microscopy.

  17. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    NASA Astrophysics Data System (ADS)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution, where temporal transformations of the optical field waveform are strongly coupled to intricate beam dynamics and ultrafast field-induced ionization processes. At laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3 + 1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of the medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
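
    The multiple-filamentation regime mentioned above sets in when the peak power exceeds the critical power of self-focusing by orders of magnitude; a widely used Marburger-type estimate is P_cr ≈ 3.77 λ² / (8π n0 n2), with the expected number of noise-seeded filaments of order P/P_cr. The sketch below plugs in typical textbook values for air at 800 nm; the wavelength, nonlinear index, and peak power are illustrative assumptions, not parameters from the review.

        # Rough estimate with assumed textbook parameters for air (not from the paper):
        # critical power for self-focusing and the expected number of filaments.
        import math

        wavelength = 800e-9   # m, typical Ti:sapphire wavelength (assumption)
        n0 = 1.0              # linear refractive index of air
        n2 = 3.2e-23          # m^2/W, Kerr index of air (order-of-magnitude value)
        peak_power = 1e12     # W, an "orders of magnitude above critical" example

        p_cr = 3.77 * wavelength**2 / (8.0 * math.pi * n0 * n2)  # Marburger-type estimate
        print(f"P_cr ~ {p_cr / 1e9:.1f} GW, expected filaments ~ {peak_power / p_cr:.0f}")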

  18. Environmental Sciences Division. Annual progress report for period ending September 30, 1980. [Lead abstract]

    SciTech Connect

    Auerbach, S.I.; Reichle, D.E.

    1981-03-01

    Research conducted in the Environmental Sciences Division for the Fiscal Year 1980 included studies carried out in the following Division programs and sections: (1) Advanced Fossil Energy Program, (2) Nuclear Program, (3) Environmental Impact Program, (4) Ecosystem Studies Program, (5) Low-Level Waste Research and Development Program, (6) National Low-Level Waste Program, (7) Aquatic Ecology Section, (8) Environmental Resources Section, (9) Earth Sciences Section, and (10) Terrestrial Ecology Section. In addition, Educational Activities and the dedication of the Oak Ridge National Environmental Research Park are reported. Separate abstracts were prepared for the 10 sections of this report.

  19. Chemistry Division annual progress report for period ending April 30, 1993

    SciTech Connect

    Poutsma, M.L.; Ferris, L.M.; Mesmer, R.E.

    1993-08-01

    The Chemistry Division conducts basic and applied chemical research on projects important to DOE's missions in sciences, energy technologies, advanced materials, and waste management/environmental restoration; it also conducts complementary research for other sponsors. The research is arranged according to: coal chemistry, aqueous chemistry at high temperatures and pressures, geochemistry, chemistry of advanced inorganic materials, structure and dynamics of advanced polymeric materials, chemistry of transuranium elements and compounds, chemical and structural principles in solvent extraction, surface science related to heterogeneous catalysis, photolytic transformations of hazardous organics, DNA sequencing and mapping, and special topics.

  20. The History of Metals and Ceramics Division

    SciTech Connect

    Craig, D.F.

    1999-01-01

    The division was formed in 1946 at the suggestion of Dr. Eugene P. Wigner to attack the problem of the distortion of graphite in the early reactors due to exposure to reactor neutrons, and the consequent radiation damage. It was called the Metallurgy Division and assembled the metallurgical and solid state physics activities of the time that were not directly related to nuclear weapons production. William A. Johnson, a Westinghouse employee, was named Division Director in 1946. In 1949 he was replaced by John H. Frye, Jr., when the Division consisted of 45 people. Frye was director during most of what is called the Reactor Project Years, until his retirement in 1973. During this period the Division evolved into three organizational areas: basic research, applied research in nuclear reactor materials, and reactor programs directly related to a specific reactor or reactors being designed or built. The Division (Metals and Ceramics) consisted of 204 staff members in 1973 when James R. Weir, Jr., became Director. This was the period of the oil embargo, the formation of the Energy Research and Development Administration (ERDA) by combining the Atomic Energy Commission (AEC) with the Office of Coal Research, and the subsequent formation of the Department of Energy (DOE). The diversification process continued when James O. Stiegler became Director in 1984, partially as a result of the pressure of legislation encouraging the national laboratories to work with U.S. industries on their problems. During that time the Division staff grew from 265 to 330. Douglas F. Craig became Director in 1992.

  1. Energy Technology Division research summary - 1999.

    SciTech Connect

    1999-03-31

    The Energy Technology Division provides materials and engineering technology support to a wide range of programs important to the US Department of Energy. As shown on the preceding page, the Division is organized into ten sections, five with concentrations in the materials area and five in engineering technology. Materials expertise includes fabrication, mechanical properties, corrosion, friction and lubrication, and irradiation effects. Our major engineering strengths are in heat and mass flow, sensors and instrumentation, nondestructive testing, transportation, and electromechanics and superconductivity applications. The Division Safety Coordinator, Environmental Compliance Officers, Quality Assurance Representative, Financial Administrator, and Communication Coordinator report directly to the Division Director. The Division Director is personally responsible for cultural diversity and is a member of the Laboratory-wide Cultural Diversity Advisory Committee. The Division's capabilities are generally applied to issues associated with energy production, transportation, utilization, or conservation, or with environmental issues linked to energy. As shown in the organization chart on the next page, the Division reports administratively to the Associate Laboratory Director (ALD) for Energy and Environmental Science and Technology (EEST) through the General Manager for Environmental and Industrial Technologies. While most of our programs are under the purview of the EEST ALD, we also have had programs funded under every one of the ALDs. Some of our research in superconductivity is funded through the Physical Research Program ALD. We also continue to work on a number of nuclear-energy-related programs under the ALD for Engineering Research. Detailed descriptions of our programs on a section-by-section basis are provided in the remainder of this book.

  2. Life Sciences Division Spaceflight Hardware

    NASA Technical Reports Server (NTRS)

    Yost, B.

    1999-01-01

    The Ames Research Center (ARC) is responsible for the development, integration, and operation of non-human life sciences payloads in support of NASA's Gravitational Biology and Ecology (GB&E) program. To help stimulate discussion and interest in the development and application of novel technologies for incorporation within non-human life sciences experiment systems, three hardware system models will be displayed with associated graphics/text explanations. First, an Animal Enclosure Model (AEM) will be shown to communicate the nature and types of constraints physiological researchers must deal with during manned space flight experiments using rodent specimens. Second, a model of the Modular Cultivation System (MCS) under development by ESA will be presented to highlight technologies that may benefit cell-based research, including advanced imaging technologies. Finally, subsystems of the Cell Culture Unit (CCU) in development by ARC will also be shown. A discussion will be provided on candidate technology requirements in the areas of specimen environmental control, biotelemetry, telescience and telerobotics, and in situ analytical techniques and imaging. In addition, an overview of the Center for Gravitational Biology Research facilities will be provided.

  3. Physics Division progress report, January 1, 1984-September 30, 1986

    SciTech Connect

    Keller, W.E.

    1987-10-01

    This report provides brief accounts of significant progress in development activities and research results achieved by Physics Division personnel during the period January 1, 1984, through September 30, 1986. These efforts are representative of the three main areas of experimental research and development in which the Physics Division serves Los Alamos National Laboratory's and the Nation's needs in defense and basic sciences: (1) defense physics, including the development of diagnostic methods for weapons tests, weapon-related high-energy-density physics, and programs supporting the Strategic Defense Initiative; (2) laser physics and applications, especially to high-density plasmas; and (3) fundamental research in nuclear and particle physics, condensed-matter physics, and biophysics. Throughout the report, emphasis is placed on the design, construction, and application of a variety of advanced, often unique, instruments and instrument systems that maintain the Division's position at the leading edge of research and development in the specific fields germane to its mission. A sampling of experimental systems of particular interest includes the relativistic electron-beam accelerator and its applications to high-energy-density plasmas; pulsed-power facilities; directed-energy weapon devices such as free-electron lasers and neutral-particle-beam accelerators; high-intensity ultraviolet and x-ray beam lines at the National Synchrotron Light Source (at Brookhaven National Laboratory); the Aurora KrF ultraviolet laser system for projected use as an inertial fusion driver; the antiproton physics facility at CERN; and several beam developments at the Los Alamos Meson Physics Facility for studying nuclear, condensed-matter, and biological physics, highlighted by progress in establishing the Los Alamos Neutron Scattering Center.

  4. Bacterial cell division proteins as antibiotic targets.

    PubMed

    den Blaauwen, Tanneke; Andreu, José M; Monasterio, Octavio

    2014-08-01

    Proteins involved in bacterial cell division often do not have a counterpart in eukaryotic cells, and they are essential for the survival of the bacteria. The genetic accessibility of many bacterial species, in combination with the Green Fluorescent Protein revolution for studying protein localization and the availability of crystal structures, has increased our knowledge of bacterial cell division considerably in this century. Consequently, bacterial cell division proteins are more and more recognized as potential new antibiotic targets. An international effort to find small molecules that inhibit the cell division initiating protein FtsZ has yielded many compounds, of which some are promising as leads for preclinical use. The essential transglycosylase activity of peptidoglycan synthases has recently become accessible to inhibitor screening. Enzymatic assays for, and structural information on, essential integral membrane proteins such as MraY and FtsW involved in lipid II (the peptidoglycan building block precursor) biosynthesis have put these proteins on the list of potential new targets. This review summarises and discusses the results and approaches to the development of lead compounds that inhibit bacterial cell division.

  5. Energy Technology Division research summary -- 1994

    SciTech Connect

    Not Available

    1994-09-01

    Research funded primarily by the NRC is directed toward assessing the roles of cyclic fatigue, intergranular stress corrosion cracking, and irradiation-assisted stress corrosion cracking in failures of light water reactor (LWR) piping systems, pressure vessels, and various core components. In support of the fast reactor program, the Division has responsibility for fuel-performance modeling and irradiation testing. The Division has major responsibilities in several design areas of the proposed International Thermonuclear Experimental Reactor (ITER). The Division supports the DOE in ensuring safe shipment of nuclear materials by providing extensive review of the Safety Analysis Reports for Packaging (SARPs). Finally, in the nuclear area, it is investigating the safe disposal of spent fuel and waste. In work funded by DOE's Energy Efficiency and Renewable Energy program, the high-temperature superconductivity program continues to be a major focal point for industrial interactions. Coatings and lubricants developed in the Division's Tribology Section are intended for use in transportation systems of the future. Continuous-fiber ceramic composites are being developed for high-performance heat engines. Nondestructive testing techniques are being developed to evaluate fiber distribution and to detect flaws. A wide variety of coatings for corrosion protection of metal alloys are being studied; these can increase lifetimes significantly in a wide variety of coal combustion and gasification environments.

  6. Section III, Division 5 - Development And Future Directions

    SciTech Connect

    Morton, Dana K.; Jetter, Robert I; Nestell, James E.; Burchell, Timothy D; Sham, Sam

    2012-01-01

    This paper provides commentary on a new division under Section III of the ASME Boiler and Pressure Vessel (BPV) Code. This new Division 5 has an issuance date of November 1, 2011 and is part of the 2011 Addenda to the 2010 Edition of the BPV Code. The new Division covers the rules for the design, fabrication, inspection and testing of components for high temperature nuclear reactors. Information is provided on the scope and need for Division 5, the structure of Division 5, where the rules originated, the various changes made in finalizing Division 5, and the future near-term and long-term expectations for Division 5 development.

  7. Experimental systems to explore life origin: perspectives for understanding primitive mechanisms of cell division.

    PubMed

    Adamala, Katarzyna; Luisi, Pier Luigi

    2011-01-01

    Compartmentalization is a necessary element for the development of any cell cycle and the origin of speciation. Changes in the shape and size of compartments might have been the first manifestation of the development of so-called cell cycles. Cell growth and division, processes guided by biological reactions in modern cells, might have originated as purely physicochemical processes. Modern cells use enzymes to initiate and control all stages of the cell cycle. Protocells, in the absence of advanced enzymatic machinery, might have needed to rely on the physical properties of the membrane. As the division process could not have been controlled by the cell's metabolism, the first protocells probably did not undergo regular cell cycles as we know them in modern cells. More likely, protocells divided either when triggered by some inorganic catalyzing factor, such as a porous surface, or when the encapsulated contents reached some critical concentration.

  8. Defense Science Board Report on Advanced Computing

    DTIC Science & Technology

    2009-03-01

    complex computational issues are pursued, and that several vendors remain at the leading edge of supercomputing capability in the U.S. In... pursuing the ASC program to help assure that HPC advances are available to the broad national security community. As in the past, many...apply HPC to technical problems related to weapons physics, but that are entirely unclassified. Examples include explosive astrophysical

  9. Biology Division progress report, October 1, 1991--September 30, 1993

    SciTech Connect

    Hartman, F.C.; Cook, J.S.

    1993-10-01

    This Progress Report summarizes the research endeavors of the Biology Division of the Oak Ridge National Laboratory during the period October 1, 1991, through September 30, 1993. The report is structured to provide descriptions of current activities and accomplishments in each of the Division's major organizational units. Lists of information to convey the entire scope of the Division's activities are compiled at the end of the report.

  10. Division V: Commission 42: Close Binaries

    NASA Astrophysics Data System (ADS)

    Ribas, Ignasi; Richards, Mercedes T.; Rucinski, Slavek; Bradstreet, David H.; Harmanec, Petr; Kaluzny, Janusz; Mikolajewska, Joanna; Munari, Ulisse; Niarchos, Panagiotis; Olah, Katalin; Pribulla, Theodor; Scarfe, Colin D.; Torres, Guillermo

    2015-08-01

    Commission 42 (C42) co-organized, together with Commission 27 (C27) and Division V (Div V) as a whole, a full day of science and business sessions that were held on 24 August 2012. The program included time slots for discussion of business matters related to Div V, C27 and C42, and two sessions of 2 hours each devoted to science talks of interest to both C42 and C27. In addition, we had a joint session between Div IV and Div V motivated by the proposal to reformulate the division structure of the IAU and the possible merger of the two divisions into a new Div G. The current report gives an account of the matters discussed during the business session of C42.

  11. Cell Division and Evolution of Biological Tissues

    NASA Astrophysics Data System (ADS)

    Rivier, Nicolas; Arcenegui-Siemens, Xavier; Schliecker, Gudrun

    A tissue is a geometrical, space-filling, random cellular network; it remains in this steady state while individual cells divide. Cell division (fragmentation) is a local, elementary topological transformation which establishes statistical equilibrium of the structure. Statistical equilibrium is characterized by observable relations (Lewis, Aboav) between cell shapes and sizes and those of their neighbours, obtained through maximum entropy and topological correlation extending to nearest neighbours only, i.e. maximal randomness. For a two-dimensional tissue (epithelium), the distribution of cell shapes and that of mother and daughter cells can be obtained from elementary geometrical and physical arguments, except for an exponential factor favouring division of larger cells, and exponential and combinatorial factors encouraging a most symmetric division. The resulting distributions are very narrow, and stationarity severely restricts the range of an adjustable structural parameter.
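
    The Lewis relation cited above (the mean area of an n-sided cell growing roughly linearly with n) can be probed numerically on a random planar tessellation. The sketch below is only an illustrative stand-in for a real epithelium: it builds a Voronoi tessellation of random points and tabulates mean cell area against side number; the point count and edge margins are arbitrary choices, not the authors' model.

        # Illustrative check of a Lewis-type relation on a random Voronoi "tissue"
        # (a toy stand-in for an epithelium, not the authors' model).
        import numpy as np
        from scipy.spatial import Voronoi

        def polygon_area(verts):
            """Shoelace formula for a convex 2-D polygon; vertices sorted by angle first."""
            c = verts.mean(axis=0)
            order = np.argsort(np.arctan2(verts[:, 1] - c[1], verts[:, 0] - c[0]))
            x, y = verts[order, 0], verts[order, 1]
            return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

        rng = np.random.default_rng(2)
        vor = Voronoi(rng.random((4000, 2)))

        areas_by_sides = {}
        for region_idx in vor.point_region:
            region = vor.regions[region_idx]
            if -1 in region or len(region) < 3:           # skip unbounded cells
                continue
            verts = vor.vertices[region]
            if verts.min() < 0.05 or verts.max() > 0.95:  # stay away from the box edge
                continue
            areas_by_sides.setdefault(len(region), []).append(polygon_area(verts))

        for n in sorted(areas_by_sides):
            print(f"n = {n:2d} sides: mean cell area = {np.mean(areas_by_sides[n]):.5f}")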

  12. The centrosome and asymmetric cell division

    PubMed Central

    2009-01-01

    Asymmetric stem cell division is a mechanism widely employed by the cell to maintain tissue homeostasis, resulting in the production of one stem cell and one differentiating cell. However, asymmetric cell division is not limited to stem cells and is widely observed even in unicellular organisms as well as in cells that make up highly complex tissues. In asymmetric cell division, cells must organize their intracellular components along the axis of asymmetry (sometimes in the context of extracellular architecture). Recent studies have described cell asymmetry in many cell types and in many cases such asymmetry involves the centrosome (or spindle pole body in yeast) as the center of cytoskeleton organization. In this review, I summarize recent discoveries in cellular polarity that lead to an asymmetric outcome, with a focus on centrosome function. PMID:19458491

  13. Family division in China's transitional economy.

    PubMed

    Chen, Feinian

    2009-03-01

    Using a longitudinal dataset (the China Health and Nutrition Survey), we explored the effect of various economic factors, including household wealth, employment sector, and involvement in a household business, on the division of extended families in China's transitional economy. Results from event history analyses suggest that these economic factors act as either a dividing or a unifying force on the extended family. Household wealth reduces the risk of family division, but the effect is weaker for families in which parents have upper secondary education. In addition, an extended family is more likely to divide when married children work in the state sector. Further, the probability of family division is higher in families where daughters-in-law work in the state sector than in those with sons in this sector. Finally, involvement in a household business by married children increases family stability.

  14. The Astrophysics Science Division Annual Report 2008

    NASA Technical Reports Server (NTRS)

    Oegerle, William; Reddy, Francis; Tyler, Pat

    2009-01-01

    The Astrophysics Science Division (ASD) at Goddard Space Flight Center (GSFC) is one of the largest and most diverse astrophysical organizations in the world, with activities spanning a broad range of topics in theory, observation, and mission and technology development. Scientific research is carried out over the entire electromagnetic spectrum, from gamma rays to radio wavelengths, as well as in particle physics and gravitational radiation. Members of ASD also provide the scientific operations for three orbiting astrophysics missions (WMAP, RXTE, and Swift) as well as the Science Support Center for the Fermi Gamma-ray Space Telescope. A number of key technologies for future missions are also under development in the Division, including X-ray mirrors and new detectors operating at gamma-ray, X-ray, ultraviolet, infrared, and radio wavelengths. This report covers the Division's activities during 2008.

  15. Large-scale Particle Simulations for Debris Flows using Dynamic Load Balance on a GPU-rich Supercomputer

    NASA Astrophysics Data System (ADS)

    Tsuzuki, Satori; Aoki, Takayuki

    2016-04-01

    Numerical simulation of debris flows that include countless objects is an important topic in fluid dynamics and many engineering applications. Particle-based methods are a promising approach to simulating flows interacting with objects. In this paper, we propose an efficient method to realize a large-scale simulation of fluid-structure interaction by combining the SPH (Smoothed Particle Hydrodynamics) method for the fluid with the DEM (Discrete Element Method) for the objects on a multi-GPU system. By applying space-filling curves to the decomposition of the computational domain, we are able to keep the same number of particles in each subdomain. In our implementation, several techniques for particle counting and data movement have been introduced. Fragmentation of the memory used for particles occurs during time integration, and the frequency of de-fragmentation is examined by taking into account the computational load balance and the communication cost between CPU and GPU. A linked-list technique for particle interactions is introduced to reduce memory use drastically. It is found that sorting the particle data for the neighbor list with the linked-list method, repeated at a certain interval, greatly improves memory access. Weak and strong scalability for an SPH simulation using 111 million particles were measured from 4 GPUs to 512 GPUs for three types of space-filling curves. A large-scale debris-flow simulation of a tsunami carrying 10,368 pieces of floating rubble, using 117 million particles, was successfully carried out with 256 GPUs on the TSUBAME 2.5 supercomputer at Tokyo Institute of Technology.
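
    The equal-particle-count decomposition described above relies on ordering particles along a space-filling curve. The sketch below is a simplified host-side illustration, not the TSUBAME multi-GPU implementation: it assigns 3-D Morton (Z-order) keys by bit interleaving, sorts particles by key, and splits the sorted list into equally sized chunks, one per GPU; the grid resolution and GPU count are arbitrary example values.

        # Simplified sketch of space-filling-curve (Morton/Z-order) domain decomposition;
        # not the actual multi-GPU implementation described in the abstract.
        import numpy as np

        def morton_key_3d(ix, iy, iz, bits=10):
            """Interleave the low `bits` bits of integer grid coordinates into a Z-order key."""
            key = np.zeros_like(ix, dtype=np.uint64)
            for b in range(bits):
                key |= ((ix >> b) & 1).astype(np.uint64) << np.uint64(3 * b)
                key |= ((iy >> b) & 1).astype(np.uint64) << np.uint64(3 * b + 1)
                key |= ((iz >> b) & 1).astype(np.uint64) << np.uint64(3 * b + 2)
            return key

        rng = np.random.default_rng(3)
        positions = rng.random((1_000_000, 3))                         # particles in a unit box
        grid = np.minimum((positions * 1024).astype(np.uint64), 1023)  # 10-bit grid coordinates
        keys = morton_key_3d(grid[:, 0], grid[:, 1], grid[:, 2])

        order = np.argsort(keys)             # particle indices ordered along the curve
        chunks = np.array_split(order, 8)    # equal particle counts for 8 GPUs
        print([len(c) for c in chunks])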

  16. Moonlet Wakes in Saturn's Cassini Division

    NASA Astrophysics Data System (ADS)

    Spilker, L. J.; Showalter, M. R.

    1997-07-01

    We have detected several features with wavelike characteristics in the Voyager Radio Science (RSS) earth occultation data and Voyager photopolarimeter (PPS) stellar occultation data of Saturn's Cassini Division. We identified these structures using a non-linear autoregressive power spectral algorithm called Burg. This method is powerful for detecting short sections of quasiperiodic structure. We successfully used this same technique to identify six previously unseen Pan wakes in the Voyager PPS and Voyager RSS occultation data (Horn, Showalter, and Russell, 1996, Icarus 124, 663). Applying the Burg technique to the RSS data, we find a number of wavelike structures in the Cassini Division. We see three distinct features that look like moonlet wakes. Two are Cassini Division features detected by Marouf and Tyler (1986, Nature 323, 120) in the Voyager RSS data. Flynn and Cuzzi (1989, Icarus 82, 180) determined that these features were azimuthally symmetric in the Voyager images and were most likely not moonlet wakes. The third wavelike structure resembles an outer moonlet wake. If it is a wake, it may correspond to a previously undetected moonlet located in a Cassini Division gap between 118,929 km and 118,966 km. We see at least one wavelike feature in the PPS data. This feature falls close to the outer edge of the Huygens gap in the Cassini Division and is consistent with an outer moonlet wake. If it is a wake, it may correspond to a previously undetected moonlet inside the Huygens gap. Several other wavelike features in the Cassini Division resemble moonlet wakes. We plan to pursue these structures further in the future. This work was performed at JPL/Caltech under contract with NASA.
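
    Burg's maximum-entropy autoregressive estimator, used above to pick short quasiperiodic wake signatures out of occultation profiles, can be written compactly. The sketch below is a generic textbook implementation applied to a synthetic noisy sinusoid; the AR order, record length, and test frequency are arbitrary illustrative choices, and none of this is the authors' analysis code.

        # Generic Burg autoregressive spectral estimate (textbook recursion), applied to a
        # synthetic quasiperiodic record; not the authors' analysis pipeline.
        import numpy as np

        def burg_ar(x, order):
            """Return AR coefficients a (with a[0] = 1) and the prediction-error power."""
            x = np.asarray(x, dtype=float)
            f, b = x.copy(), x.copy()          # forward / backward prediction errors
            a = np.array([1.0])
            err = np.dot(x, x) / len(x)
            for _ in range(order):
                k = -2.0 * np.dot(f[1:], b[:-1]) / (np.dot(f[1:], f[1:]) + np.dot(b[:-1], b[:-1]))
                f, b = f[1:] + k * b[:-1], b[:-1] + k * f[1:]
                a = np.concatenate([a, [0.0]])
                a = a + k * a[::-1]            # Levinson-type coefficient update
                err *= 1.0 - k * k
            return a, err

        # Short, noisy quasiperiodic record loosely mimicking a wake-like oscillation.
        t = np.arange(200)
        x = np.sin(2 * np.pi * 0.12 * t) + 0.5 * np.random.default_rng(4).standard_normal(200)

        a, err = burg_ar(x, order=12)
        freqs = np.linspace(0.0, 0.5, 512)
        psd = err / np.abs(np.polyval(a[::-1], np.exp(-2j * np.pi * freqs)))**2
        print(f"spectral peak at ~{freqs[np.argmax(psd)]:.3f} cycles/sample (true value 0.120)")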

  17. Asymmetric cell division in T lymphocyte fate diversification

    PubMed Central

    Arsenio, Janilyn; Metz, Patrick J.

    2015-01-01

    Immunological protection against microbial pathogens is dependent on robust generation of functionally diverse T lymphocyte subsets. Upon microbial infection, naïve CD4+ or CD8+ T lymphocytes can give rise to effector- and memory-fated progeny that together mediate a potent immune response. Recent advances in single-cell immunological and genomic profiling technologies have helped elucidate early and late diversification mechanisms that enable the generation of heterogeneity from single T lymphocytes. We discuss these findings here and argue that one such mechanism, asymmetric cell division, creates an early divergence in T lymphocyte fates by giving rise to daughter cells with a propensity towards the terminally differentiated effector or self-renewing memory lineages, with cell-intrinsic and -extrinsic cues from the microenvironment driving the final maturation steps. PMID:26474675

  18. Experimental Facilities Division progress report 1996--97

    SciTech Connect

    1997-04-01

    This progress report summarizes the activities of the Experimental Facilities Division (XFD) in support of the users of the Advanced Photon Source (APS), primarily focusing on the past year of operations. In September 1996, the APS began operations as a national user facility serving the US community of x-ray researchers from private industry, academic institutions, and other research organizations. The start of operations was about three months ahead of the baseline date established in 1988. This report is divided into the following sections: (1) overview; (2) user operations; (3) user administration and technical support; (4) R and D in support of user operations; (5) collaborative research; and (6) long-term strategic plans for XFD.

  19. Physics Division annual progress report, January 1-December 31, 1983

    SciTech Connect

    Trela, W.J.

    1984-12-01

    The Physics Division is organized into three major research areas: Weapons Physics, Inertial Fusion Physics, and Basic Research. In Weapons Physics, new strategic defensive research initiatives were developed in response to President Reagan's speech in May 1983. Significant advances have been made in high-speed diagnostics, including electro-optic techniques, fiber-optic systems, and imaging. In Inertial Fusion, the 40-kJ Antares CO2 laser facility was completed, and the 1- by 1- by 2-m-long large-aperture module amplifier (LAM) was constructed and operated. In Basic Research, our main emphasis was on development of the Weapons Neutron Research (WNR) facility as a world-class pulsed neutron research facility.

  20. GSFC Heliophysics Science Division FY2010 Annual Report

    NASA Technical Reports Server (NTRS)

    Gilbert, Holly R.; Strong, Keith T.; Saba, Julia L. R.; Clark, Judith B.; Kilgore, Robert W.; Strong, Yvonne M.

    2010-01-01

    This report is intended to record and communicate to our colleagues, stakeholders, and the public at large about heliophysics scientific and flight program achievements and milestones for 2010, for which NASA Goddard Space Flight Center's Heliophysics Science Division (HSD) made important contributions. HSD comprises approximately 323 scientists, technologists, and administrative personnel dedicated to the goal of advancing our knowledge and understanding of the Sun and the wide variety of domains that its variability influences. Our activities include: Leading science investigations involving flight hardware, theory, and data analysis and modeling that will answer the strategic questions posed in the Heliophysics Roadmap; Leading the development of new solar and space physics mission concepts and support their implementation as Project Scientists; Providing access to measurements from the Heliophysics Great Observatory through our Science Information Systems; and Communicating science results to the public and inspiring the next generation of scientists and explorers.