Sample records for Advanced Supercomputing Division

  1. NASA Advanced Supercomputing Facility Expansion

    NASA Technical Reports Server (NTRS)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  2. NASA Advanced Supercomputing (NAS) User Services Group

    NASA Technical Reports Server (NTRS)

    Pandori, John; Hamilton, Chris; Niggley, C. E.; Parks, John W. (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides an overview of NAS (NASA Advanced Supercomputing), its goals, and its mainframe computer assets. Also covered are its functions, including systems monitoring and technical support.

  3. Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas C.

    2013-03-01

    The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.

  4. Scaling of data communications for an advanced supercomputer network

    NASA Technical Reports Server (NTRS)

    Levin, E.; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of NASA's Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations and by remote communication to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. The implications of a projected 20-fold increase in processing power on the data communications requirements are described.

  5. NASA's supercomputing experience

    NASA Technical Reports Server (NTRS)

    Bailey, F. Ron

    1990-01-01

    A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamic Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.

  6. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy

    2014-02-06

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  7. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    ScienceCinema

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2018-01-16

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  8. Floating point arithmetic in future supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
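The 64-bit format recommended in this record (one sign bit, 11 exponent bits, 52 mantissa bits) can be inspected directly. A minimal Python sketch, not from the source, that unpacks a double into those three fields:

```python
import struct

def decode_double(x: float):
    """Split a 64-bit IEEE 754 double into its sign, exponent, and mantissa fields."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63                  # 1 sign bit
    exponent = (bits >> 52) & 0x7FF    # 11 exponent bits (biased by 1023)
    mantissa = bits & ((1 << 52) - 1)  # 52 mantissa (fraction) bits
    return sign, exponent, mantissa

# 1.0 encodes as sign 0, biased exponent 1023, fraction 0
sign, exp, frac = decode_double(1.0)
```

The three field widths sum to the 64-bit word size discussed in the record; the bias of 1023 follows from the 11-bit exponent.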

  9. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e., only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
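The throughput arithmetic quoted in this abstract can be checked in a few lines. A back-of-the-envelope Python sketch; the ~200,000 target-star count and the implied core count are illustrative assumptions, not figures from the source:

```python
# Figures from the abstract: ~16 injections per core-hour, ~2000 injections
# per target star, 16% of all Kepler target stars in about 200 wall-clock hours.
injections_per_core_hour = 16
injections_per_star = 2000
core_hours_per_star = injections_per_star / injections_per_core_hour  # 125

target_stars = 200_000   # assumed total Kepler target-star count (not in the abstract)
fraction = 0.16
wall_clock_hours = 200

stars = target_stars * fraction                     # ~32,000 stars
total_core_hours = stars * core_hours_per_star      # ~4 million core-hours
cores_needed = total_core_hours / wall_clock_hours  # ~20,000 cores in parallel
```

Under these assumptions the quoted 200-hour figure implies on the order of 20,000 cores working concurrently, a plausible fraction of Pleiades.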

  10. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing and Modernization Program (HPCMP) and NASA Advanced Supercomputing Division (NAS) a study is conducted to assess the role of supercomputers on computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  11. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilk, Todd

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms online: Trinity, with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing, and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL, and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL and the Green500 benchmark, and offers notes on our experience meeting the Green500's reporting requirements.

  12. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE PAGES

    Yilk, Todd

    2018-02-17

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms online: Trinity, with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing, and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL, and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL and the Green500 benchmark, and offers notes on our experience meeting the Green500's reporting requirements.

  13. Automated Help System For A Supercomputer

    NASA Technical Reports Server (NTRS)

    Callas, George P.; Schulbach, Catherine H.; Younkin, Michael

    1994-01-01

    Expert-system software was developed to provide an automated system of user-help displays for the supercomputer system at the Ames Research Center Advanced Computer Facility. Users are located at remote computer terminals connected to the supercomputer and to each other via gateway computers, local-area networks, telephone lines, and satellite links. The automated help system answers routine user inquiries about how to use the services of the computer system. It is available 24 hours per day and reduces the burden on human experts, freeing them to concentrate on helping users with complicated problems.

  14. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.

  15. Japanese supercomputer technology.

    PubMed

    Buzbee, B L; Ewald, R H; Worlton, W J

    1982-12-17

    Under the auspices of the Ministry of International Trade and Industry, the Japanese have launched a National Superspeed Computer Project intended to produce high-performance computers for scientific computation and a Fifth-Generation Computer Project intended to incorporate and exploit concepts of artificial intelligence. If these projects are successful, which appears likely, advanced economic and military research in the United States may become dependent on access to supercomputers of foreign manufacture.

  16. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  17. National Test Facility civilian agency use of supercomputers not feasible

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-12-01

    Based on interviews with the civilian agencies cited in the House report (DOE, DoEd, HHS, FEMA, NOAA), none would be able to make effective use of NTF's excess supercomputing capabilities. These agencies stated they could not use the resources primarily because (1) NTF's supercomputers are older machines whose performance and costs cannot match those of more advanced computers available from other sources and (2) some agencies have not yet developed applications requiring supercomputer capabilities or do not have funding to support such activities. In addition, future support for the hardware and software at NTF is uncertain, making any investment by an outside user risky.

  18. Advances and issues from the simulation of planetary magnetospheres with recent supercomputer systems

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2016-12-01

    Planetary magnetospheres are very large, while phenomena within them occur on meso- and micro-scales, ranging from tens of planetary radii down to kilometers. To understand dynamics in these multi-scale systems, numerical simulations have been performed on supercomputer systems. We have long studied the magnetospheres of Earth, Jupiter, and Saturn using 3-dimensional magnetohydrodynamic (MHD) simulations; however, we have not captured phenomena near the limits of the MHD approximation, in particular the meso-scale phenomena that MHD can in principle address. Recently we performed an MHD simulation of Earth's magnetosphere on the K computer, the first 10-petaflops supercomputer, and obtained multi-scale flow vorticity for both northward and southward IMF. Furthermore, we have access to supercomputer systems with Xeon, SPARC64, and vector-type CPUs, which lets us compare simulation results across the different systems. Finally, we have compared the results of our parameter survey of the magnetosphere with observations from the HISAKI spacecraft. We have encountered a number of difficulties in using the latest supercomputer systems effectively. First, the size of simulation output has increased greatly: a simulation group now produces over 1 PB of output, and storing and analyzing this much data is difficult. The traditional way to analyze simulation results is to move them to the investigator's home computer, which takes over three months on an end-to-end 10 Gbps network; in practice, problems at some nodes, such as firewalls, can increase the transfer time to over one year. Another issue is post-processing: handling even a few TB of simulation output is hard due to the memory limitations of a post-processing computer.
    To overcome these issues, we have developed and introduced parallel network storage, a highly efficient network protocol, and CUI-based visualization tools. In this study, we

  19. Supercomputer applications in molecular modeling.

    PubMed

    Gund, T M

    1988-01-01

    An overview of the functions performed by molecular modeling is given. Molecular modeling techniques benefiting from supercomputing are described, namely conformational search, deriving bioactive conformations, pharmacophoric pattern searching, receptor mapping, and calculation of electrostatic properties. The use of supercomputers for problems that are computationally intensive, such as protein structure prediction, protein dynamics and reactivity, protein conformations, and energetics of binding, is also examined. The current status of supercomputing and supercomputer resources is discussed.

  20. Supercomputing Sheds Light on the Dark Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Heitmann, Katrin

    2012-11-15

    At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named a finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.

  1. Computer Electromagnetics and Supercomputer Architecture

    NASA Technical Reports Server (NTRS)

    Cwik, Tom

    1993-01-01

    The dramatic increase in performance over the last decade for microprocessor computations is compared with that for supercomputer computations. This performance, the projected performance, and a number of other issues such as cost and the inherent physical limitations of current supercomputer technology have naturally led to parallel supercomputers and ensembles of interconnected microprocessors.

  2. Demonstration of NICT Space Weather Cloud --Integration of Supercomputer into Analysis and Visualization Environment--

    NASA Astrophysics Data System (ADS)

    Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.

    2010-12-01

    In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations keeps increasing because of the tremendous advancement of supercomputers. A further advance is Grid computing, which integrates distributed computational resources to provide scalable computing capacity. In simulation research, it is effective for researchers to design their own physical models, perform calculations on a supercomputer, and analyze and visualize the results with familiar methods. A supercomputer, however, is usually far from the analysis and visualization environment. In general, researchers analyze and visualize on a workstation (WS) managed at hand, because installing and operating software on a WS is easy; the data must therefore be copied from the supercomputer to the WS manually, and the time needed for data transfer over a long-delay network in practice hinders high-accuracy simulations. In terms of usefulness, it is important to integrate a supercomputer and an analysis and visualization environment seamlessly, using a researcher's familiar methods. NICT has been developing a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs for data analysis and visualization, connected to JGN2plus, a high-speed network for research and development. Distributed virtual high-capacity storage is also constructed with Grid Datafarm (Gfarm v2). Huge data output from the supercomputer is transferred to the virtual storage through JGN2plus, so a researcher can concentrate on research using familiar methods, regardless of the distance between the supercomputer and the analysis and visualization environment. Currently, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University.
    They are connected on

  3. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership-class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of sub-block jobs in Cobalt, the IBM Blue Gene/Q resource manager, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and invoked from a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.

  4. Multi-petascale highly efficient parallel supercomputer

    DOEpatents

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power, and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
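On the five-dimensional torus interconnect described in this record, each node has two nearest neighbors per dimension, ten in total, with wraparound links closing each ring. A hypothetical Python sketch with made-up dimension extents, not the actual machine's partition shape:

```python
def torus_neighbors(coord, dims):
    """Nearest neighbors of a node on a multi-dimensional torus.

    Each node has two neighbors per dimension (+1 and -1 steps, with
    wraparound), so a 5-D torus gives every node 10 neighbor links.
    `dims` holds the torus extent in each dimension; the values used
    below are illustrative only.
    """
    neighbors = []
    for axis in range(len(dims)):
        for step in (+1, -1):
            n = list(coord)
            n[axis] = (n[axis] + step) % dims[axis]  # wraparound link
            neighbors.append(tuple(n))
    return neighbors

# every node on this illustrative 4x4x4x4x4 torus has 10 distinct neighbors
nbrs = torus_neighbors((0, 0, 0, 0, 0), (4, 4, 4, 4, 4))
```

The wraparound keeps the maximum hop count low in every dimension, which is what lets the torus "optimally maximize" nearest-neighbor packet throughput.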

  5. Data-intensive computing on numerically-insensitive supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, James P; Fasel, Patricia K; Habib, Salman

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  6. TOP500 Supercomputers for June 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  7. Optimization of Supercomputer Use on EADS II System

    NASA Technical Reports Server (NTRS)

    Ahmed, Ardsher

    1998-01-01

    The main objective of this research was to optimize supercomputer use to achieve better throughput and utilization of supercomputers and to help facilitate the movement of non-supercomputing (inappropriate for supercomputer) codes to mid-range systems for better use of Government resources at Marshall Space Flight Center (MSFC). This work involved the survey of architectures available on EADS II and monitoring customer (user) applications running on a CRAY T90 system.

  8. Distributed user services for supercomputers

    NASA Technical Reports Server (NTRS)

    Sowizral, Henry A.

    1989-01-01

    User-service operations at supercomputer facilities are examined. The question is whether a single, possibly distributed, user-services organization could be shared by NASA's supercomputer sites in support of a diverse, geographically dispersed, user community. A possible structure for such an organization is identified as well as some of the technologies needed in operating such an organization.

  9. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren: a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, coupled with the ever-increasing performance gap between the PC and the cluster supercomputer, provide the motivation for a 'green' desktop supercomputer: a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, we present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than our reference SMP platform.
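The closing performance-power figure is easy to verify. A short Python check using only the numbers quoted in the abstract:

```python
# Performance-power ratio for the 12-node desktop system quoted in the abstract.
linpack_gflops = 14.0
power_watts_at_load = 185.0

mflops_per_watt = linpack_gflops * 1000 / power_watts_at_load
# ~75.7 Mflops/W; the abstract reports this as over 300% better than
# the authors' reference SMP platform (whose figure is not given here).
```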

  10. Advances in the NASA Earth Science Division Applied Science Program

    NASA Astrophysics Data System (ADS)

    Friedl, L.; Bonniksen, C. K.; Escobar, V. M.

    2016-12-01

    The NASA Earth Science Division's Applied Science Program advances the understanding of, and the ability to use, remote sensing data in support of socio-economic needs. The integration of socio-economic considerations into NASA Earth Science projects has advanced significantly. The large variety of acquisition methods used has required innovative implementation options. The integration of application themes and the implementation of application science activities in flight projects continue to evolve. The creation of the recently released Earth Science Division Directive on the Project Applications Program, and the addition of an application science requirement in the recent EVM-2 solicitation, document NASA's current intent. Continuing improvements in the Earth Science Applied Science Program are expected in the areas of thematic integration, Project Applications Program tailoring for Class D missions, and transfer of knowledge between scientists and projects.

  11. TOP500 Supercomputers for November 2003

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops", or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  12. Supercomputing in Aerospace

    NASA Technical Reports Server (NTRS)

    Kutler, Paul; Yee, Helen

    1987-01-01

    Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics.

  13. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  14. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  15. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  16. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  17. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  18. Supercomputer networking for space science applications

    NASA Technical Reports Server (NTRS)

    Edelson, B. I.

    1992-01-01

    The initial design of a supercomputer network topology including the design of the communications nodes along with the communications interface hardware and software is covered. Several space science applications that are proposed experiments by GSFC and JPL for a supercomputer network using the NASA ACTS satellite are also reported.

  19. Input/output behavior of supercomputing applications

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
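
    The write-behind idea lends itself to a toy model; a minimal sketch (not the paper's trace-driven simulator - the buffer size and burst pattern are invented for illustration): bursty writes are absorbed into fast buffer memory and drained to slow storage later, so the CPU only stalls when the buffer fills.

    ```python
    # Toy write-behind buffer: absorbs bursty writes in fast memory and
    # drains them to slow storage in the background, so the application
    # stalls only when the buffer is full.
    class WriteBehindBuffer:
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.pending = []              # blocks waiting to reach disk

        def write(self, block):
            """Return True if the write was absorbed without stalling."""
            if len(self.pending) < self.capacity:
                self.pending.append(block)
                return True
            return False                   # buffer full: application stalls

        def drain(self, n_blocks):
            """Simulate the disk draining n blocks per time step."""
            del self.pending[:n_blocks]

    buf = WriteBehindBuffer(capacity_blocks=4)
    # A burst of 6 writes into a 4-block buffer with no draining: 2 stalls.
    stalls = sum(0 if buf.write(b) else 1 for b in range(6))
    ```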

  20. OpenMP Performance on the Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs (as of 10/20/04). Columbia was conceived, designed, built, and deployed in just 120 days: a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors, and provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2,048 processors). With 88% efficiency, it tops the scalar systems on the Top500 list.

  1. Improved Access to Supercomputers Boosts Chemical Applications.

    ERIC Educational Resources Information Center

    Borman, Stu

    1989-01-01

    Supercomputing is described in terms of computing power and abilities. The increase in availability of supercomputers for use in chemical calculations and modeling are reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)

  2. NASA's Pleiades Supercomputer Crunches Data For Groundbreaking Analysis and Visualizations

    NASA Image and Video Library

    2016-11-23

    The Pleiades supercomputer at NASA's Ames Research Center, recently named the 13th fastest computer in the world, provides scientists and researchers high-fidelity numerical modeling of complex systems and processes. By using detailed analyses and visualizations of large-scale data, Pleiades is helping to advance human knowledge and technology, from designing the next generation of aircraft and spacecraft to understanding the Earth's climate and the mysteries of our galaxy.

  3. Seismic signal processing on heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in the recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient, however they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is design of a prototype for such library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. 
This poses new computational problems that
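
    As a toy illustration of the core operation behind ambient noise interferometry (a sketch only - the pipeline described above works on massive continuous recordings, with preprocessing, stacking, and GPU offload), cross-correlating two noise traces recovers the time shift between them:

    ```python
    # Naive O(n * max_lag) cross-correlation of two equal-length traces;
    # the lag of the correlation peak estimates the travel-time shift
    # between two stations recording the same wavefield.
    def cross_correlate(a, b, max_lag):
        n = len(a)
        return [sum(a[i] * b[i + lag] for i in range(n - lag))
                for lag in range(max_lag + 1)]

    trace_a = [0.0, 1.0, 0.0, 0.0, 0.0]
    trace_b = [0.0, 0.0, 1.0, 0.0, 0.0]   # same pulse, delayed one sample
    corr = cross_correlate(trace_a, trace_b, max_lag=3)
    best_lag = max(range(len(corr)), key=corr.__getitem__)
    ```

    Production codes replace the quadratic inner loop with FFT-based correlation, which is what makes GPU accelerators attractive for this workload.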

  4. Desktop supercomputer: what can it do?

    NASA Astrophysics Data System (ADS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  5. Energy Efficient Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antypas, Katie

    2014-10-17

    Katie Antypas, head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy-efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8, 2014, in Oakland, California.

  6. Energy Efficient Supercomputing

    ScienceCinema

    Antypas, Katie

    2018-05-07

    Katie Antypas, head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy-efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8, 2014, in Oakland, California.

  7. Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide

    DOE PAGES

    Tang, William; Wang, Bei; Ethier, Stephane; ...

    2016-11-01

    The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code, including (i) implementation of one-sided communication from the MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.

  8. The role of graphics super-workstations in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Levin, E.

    1989-01-01

    A new class of very powerful workstations has recently become available, integrating near-supercomputer computational performance with very powerful, high-quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding and communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).

  9. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2018-02-13

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  10. Mira: Argonne's 10-petaflops supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papka, Michael; Coghlan, Susan; Isaacs, Eric

    2013-07-03

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  11. Payment of Advanced Placement Exam Fees by Virginia Public School Divisions and Its Impact on Advanced Placement Enrollment and Scores

    ERIC Educational Resources Information Center

    Cirillo, Mary Grupe

    2010-01-01

    The purpose of this study was to determine the impact of Virginia school divisions' policy of paying the fee for students to take Advanced Placement exams on Advanced Placement course enrollment, the number of Advanced Placement exams taken by students, the average scores earned and the percent of students earning qualifying scores of 3, 4, or 5…

  12. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  13. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  14. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  15. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  16. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  17. Towards Efficient Supercomputing: Searching for the Right Efficiency Metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, Chung-Hsing; Kuehn, Jeffery A; Poole, Stephen W

    2012-01-01

    The efficiency of supercomputing has traditionally been measured by execution time. In the early 2000s, the concept of total cost of ownership was re-introduced, broadening the efficiency measure to include aspects such as energy and space. Yet the supercomputing community has never agreed upon a metric that can cover these aspects altogether and also provide a fair basis for comparison. This paper examines the metrics that have been proposed in the past decade, and proposes a vector-valued metric for efficient supercomputing. Using this metric, the paper presents a study of where the supercomputing industry has been and how it stands today with respect to efficient supercomputing.
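
    The idea of a vector-valued metric can be sketched briefly (an illustration only - the paper's actual metric components are not reproduced here): each system is scored on several axes at once, and systems are compared by dominance rather than by a single scalar.

    ```python
    # Illustrative vector-valued efficiency score: performance, energy
    # efficiency, and space efficiency as separate axes. The axis names
    # and values are invented for this sketch.
    from collections import namedtuple

    Efficiency = namedtuple(
        "Efficiency", ["gflops_per_sec", "gflops_per_watt", "gflops_per_m2"])

    def dominates(a, b):
        """a dominates b if it is no worse on every axis and better on one."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    sys_a = Efficiency(100.0, 0.5, 20.0)
    sys_b = Efficiency(80.0, 0.5, 15.0)   # worse or equal on every axis
    ```

    Unlike a scalar metric, two systems can be incomparable under dominance, which is exactly the ambiguity the paper argues a fair efficiency measure must confront.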

  18. TOP500 Supercomputers for June 2003

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops, or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  19. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  20. NSF Commits to Supercomputers.

    ERIC Educational Resources Information Center

    Waldrop, M. Mitchell

    1985-01-01

    The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)

  1. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    NASA Astrophysics Data System (ADS)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, slows scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach analyzes computing resource utilization statistics, which allows us to identify typical classes of programs, explore the structure of the supercomputer job flow, and track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
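
    The third approach can be sketched in a few lines; this is an illustration only - the z-score test and the 2.0 threshold below are invented assumptions, not the detection method actually used at the MSU Supercomputer Center:

    ```python
    # Minimal abnormal-job detector: flag jobs whose monitored metric
    # deviates strongly (in standard deviations) from the job-flow mean.
    # Threshold and metric choice are illustrative assumptions.
    def abnormal_jobs(metric, z_threshold=2.0):
        n = len(metric)
        mean = sum(metric) / n
        std = (sum((x - mean) ** 2 for x in metric) / n) ** 0.5 or 1.0
        return [i for i, x in enumerate(metric)
                if abs(x - mean) / std > z_threshold]

    cpu_loads = [0.82, 0.79, 0.85, 0.80, 0.03, 0.81]  # job 4 sits nearly idle
    flagged = abnormal_jobs(cpu_loads)
    ```

    A production system would of course combine many metrics (CPU load, memory bandwidth, I/O rate) and track them over a job's lifetime rather than a single snapshot.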

  2. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.

  3. Most Social Scientists Shun Free Use of Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1998-01-01

    Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…

  4. Intelligent supercomputers: the Japanese computer sputnik

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, G.

    1983-11-01

    Japan's government-supported fifth-generation computer project has had a pronounced effect on the American computer and information systems industry. The US firms are intensifying their research on and production of intelligent supercomputers, a combination of computer architecture and artificial intelligence software programs. While the present generation of computers is built for the processing of numbers, the new supercomputers will be designed specifically for the solution of symbolic problems and the use of artificial intelligence software. This article discusses new and exciting developments that will increase computer capabilities in the 1990s. 4 references.

  5. Green Supercomputing at Argonne

    ScienceCinema

    Pete Beckman

    2017-12-09

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently.

  6. Supercomputer Provides Molecular Insight into Cellulose (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2011-02-01

    Groundbreaking research at the National Renewable Energy Laboratory (NREL) has used supercomputing simulations to calculate the work that enzymes must do to deconstruct cellulose, which is a fundamental step in biomass conversion technologies for biofuels production. NREL used the new high-performance supercomputer Red Mesa to conduct several million central processing unit (CPU) hours of simulation.

  7. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. 
This implementation was tested with a variety of Monte-Carlo workloads

  8. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Maeno, T

    Abstract The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a

  9. Color graphics, interactive processing, and the supercomputer

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, Rudeen

    1987-01-01

    The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.

  10. Ice Storm Supercomputer

    ScienceCinema

    None

    2018-05-01

    A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed "Ice Storm," this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen. For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.

  11. Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers

    PubMed Central

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-01-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementation of the multiple time-stepping (MTS) algorithm improved on the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow, which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of the performance characteristics of supercomputers with advanced computational algorithms, which offer an optimal trade-off to achieve enhanced computational performance, serves to demonstrate that such simulations are feasible with currently available HPC resources. PMID:27570250

  12. Automatic discovery of the communication network topology for building a supercomputer model

    NASA Astrophysics Data System (ADS)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
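
    One common heuristic behind this kind of Ethernet discovery (the Octotron suite's exact method is not detailed in the abstract, so the sketch below is an assumption) is to read each switch's MAC forwarding table and treat a port that has learned exactly one MAC address as a direct host attachment, while ports that see many MACs are trunks to other switches:

```python
# Each switch maps port -> set of MAC addresses learned on that port
# (as would be read from its Ethernet forwarding table, e.g. via SNMP).
tables = {
    "sw1": {1: {"aa"}, 2: {"bb"}, 3: {"cc", "dd"}},   # port 3 uplinks to sw2
    "sw2": {1: {"cc"}, 2: {"dd"}, 3: {"aa", "bb"}},   # port 3 uplinks to sw1
}

def attach_points(tables):
    """Infer direct host attachments: a host hangs off the port where its
    MAC is the only address learned; multi-MAC ports are trunk links."""
    attached = {}
    for sw, ports in tables.items():
        for port, macs in ports.items():
            if len(macs) == 1:
                (mac,) = macs
                attached[mac] = (sw, port)
    return attached

print(attach_points(tables))
# {'aa': ('sw1', 1), 'bb': ('sw1', 2), 'cc': ('sw2', 1), 'dd': ('sw2', 2)}
```

    The inferred attachments, plus the trunk ports left over, are exactly the nodes and edges a graph-based model like Octotron's needs.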

  13. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  14. Kriging for Spatial-Temporal Data on the Bridges Supercomputer

    NASA Astrophysics Data System (ADS)

    Hodgess, E. M.

    2017-12-01

    Currently, kriging of spatial-temporal data is slow and limited to relatively small vector sizes. We have developed a method on the Bridges supercomputer at the Pittsburgh Supercomputing Center that combines R, Fortran, the Message Passing Interface (MPI), OpenACC, and specialized R packages for big data. This combination of tools now permits us to complete tasks that previously could not be completed at all, or took hours to finish. We ran simulation studies comparing a laptop against the supercomputer, and also examined "real world" data sets such as the Irish wind data and other weather data. We compare the timings and note that they are surprisingly good.
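
    As a rough illustration of the serial kernel being parallelized, ordinary kriging reduces to solving one small linear system per prediction point. The sketch below uses plain NumPy rather than the R/Fortran/MPI stack described in the abstract, with an illustrative (not fitted) exponential variogram; note that with a valid variogram the predictor reproduces the data exactly at observed locations.

```python
import numpy as np

def ordinary_krige(xy, z, xy0, sill=1.0, rnge=2.0):
    """Ordinary kriging predictor at xy0 using an exponential variogram
    (sill and range here are illustrative assumptions, not fitted)."""
    gamma = lambda h: sill * (1.0 - np.exp(-h / rnge))
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)                # variogram matrix between data points
    A[n, n] = 0.0                       # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)[:n]       # kriging weights
    return w @ z

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
print(ordinary_krige(pts, vals, np.array([0.0, 0.0])))  # ~1.0: exact at a data point
```

    At scale, the cost is dominated by building and solving these systems for large n, which is what motivates offloading the linear algebra to MPI ranks and accelerators.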

  15. Technology transfer in the NASA Ames Advanced Life Support Division

    NASA Technical Reports Server (NTRS)

    Connell, Kathleen; Schlater, Nelson; Bilardo, Vincent; Masson, Paul

    1992-01-01

    This paper summarizes a representative set of technology transfer activities which are currently underway in the Advanced Life Support Division of the Ames Research Center. Five specific NASA-funded research or technology development projects are synopsized that are resulting in transfer of technology in one or more of four main 'arenas:' (1) intra-NASA, (2) intra-Federal, (3) NASA - aerospace industry, and (4) aerospace industry - broader economy. Each project is summarized as a case history, specific issues are identified, and recommendations are formulated based on the lessons learned as a result of each project.

  16. Predicting Hurricanes with Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-01-01

    Hurricane Emily, formed in the Atlantic Ocean on July 10, 2005, was the strongest hurricane ever to form before August. By checking computer models against the actual path of the storm, researchers can improve hurricane prediction. In 2010, NOAA researchers were awarded 25 million processor-hours on Argonne's BlueGene/P supercomputer for the project. Read more at http://go.usa.gov/OLh

  17. Supercomputers Of The Future

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speeds greater than 10^18 floating-point operations per second (FLOPS) and memory capacities greater than 10^15 words. Also predicts that new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.

  18. Supercomputing Drives Innovation - Continuum Magazine | NREL

    Science.gov Websites

    For years, NREL scientists have used supercomputers to simulate 3D models of the primary enzymes and to model wind plant aerodynamics, showing low-velocity wakes and their impact on

  19. Prospects for Boiling of Subcooled Dielectric Liquids for Supercomputer Cooling

    NASA Astrophysics Data System (ADS)

    Zeigarnik, Yu. A.; Vasil'ev, N. V.; Druzhinin, E. A.; Kalmykov, I. V.; Kosoi, A. S.; Khodakov, K. A.

    2018-02-01

    It is shown experimentally that forced-convection boiling of the dielectric coolant Novec 649, subcooled relative to its saturation temperature, makes it possible to remove heat fluxes of up to 100 W/cm2 from a modern supercomputer chip interface. This creates prerequisites for the application of dielectric liquids in the cooling systems of modern supercomputers with increased operating-reliability requirements.

  20. Introducing Mira, Argonne's Next-Generation Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-03-19

    Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.

  1. Discover Supercomputer 5

    NASA Image and Video Library

    2017-12-08

    Two rows of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS) contain more than 4,000 computer processors. Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  2. Discover Supercomputer 3

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  3. Discover Supercomputer 2

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  4. Discover Supercomputer 4

    NASA Image and Video Library

    2017-12-08

    This close-up view highlights one row—approximately 2,000 computer processors—of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS). Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  5. Discover Supercomputer 1

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  6. Roadrunner Supercomputer Breaks the Petaflop Barrier

    ScienceCinema

    Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin

    2017-12-09

    At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. The sustained performance makes Roadrunner more than twice as fast as the current number 1

  7. Supercomputer Issues from a University Perspective.

    ERIC Educational Resources Information Center

    Beering, Steven C.

    1984-01-01

    Discusses issues related to the access of and training of university researchers in using supercomputers, considering National Science Foundation's (NSF) role in this area, microcomputers on campuses, and the limited use of existing telecommunication networks. Includes examples of potential scientific projects (by subject area) utilizing…

  8. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M

    2009-01-01

    This work presents a detailed implementation of a double-precision, non-preconditioned conjugate gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common conjugate gradient algorithm on a variety of systems to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, the SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and an AMD Opteron-only system. In all hybrid implementations wall-clock time is measured, including all transfer overhead and compute timings.
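
    For reference, the textbook non-preconditioned conjugate gradient iteration being benchmarked looks like the following (a generic NumPy sketch, not the Cell or FPGA implementations the paper measures; the test matrix is an arbitrary SPD example):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain (non-preconditioned) CG for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # conjugate direction update
        rs = rs_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)        # SPD test matrix
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))     # residual norm, effectively zero
```

    The dominant cost per iteration is the matrix-vector product `A @ p`, which is precisely the kernel the paper offloads to the Cell and FPGA accelerators.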

  9. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M

    2009-03-10

    This work presents a detailed implementation of a double-precision, non-preconditioned conjugate gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common conjugate gradient algorithm on a variety of systems to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, the SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and an AMD Opteron-only system. In all hybrid implementations wall-clock time is measured, including all transfer overhead and compute timings.

  10. Finite element methods on supercomputers - The scatter-problem

    NASA Technical Reports Server (NTRS)

    Loehner, R.; Morgan, K.

    1985-01-01

    Certain problems arise in connection with the use of supercomputers for the implementation of finite-element methods. These problems are related to the desirability of utilizing the power of the supercomputer as fully as possible for the rapid execution of the required computations, taking into account the gain in speed possible with the aid of pipelining operations. For the finite-element method, the time-consuming operations may be divided into three categories. The first two present no problems, while the third type of operation can be a reason for the inefficient performance of finite-element programs. Two possibilities for overcoming certain difficulties are proposed, giving attention to a scatter-process.
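
    The troublesome third category is the scatter (assembly) step, in which many elements add contributions into shared global entries; repeated indices defeat naive vectorized writes, which is exactly the hazard on pipelined vector hardware. A minimal NumPy illustration of the problem and of a correct scatter-add (the toy connectivity and values are illustrative):

```python
import numpy as np

# Element-to-node connectivity: adjacent elements share global nodes,
# so a vectorized scatter must handle repeated indices correctly.
conn = np.array([[0, 1], [1, 2], [2, 3]])        # three two-node elements
elem_vals = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])

# Hazardous: with repeated indices, fancy-indexed assignment keeps only
# the last write instead of accumulating contributions.
wrong = np.zeros(4)
wrong[conn.ravel()] = elem_vals.ravel()

# Correct scatter-add: accumulates every contribution per global node.
right = np.zeros(4)
np.add.at(right, conn.ravel(), elem_vals.ravel())

print(wrong)   # [1. 2. 3. 3.]  -- nodes 1 and 2 each lost a contribution
print(right)   # [1. 3. 5. 3.]
```

    Vectorizing this safely is what motivates the coloring and reordering schemes the paper proposes: elements are grouped so that no two elements in a group touch the same global entry, and each group can then be processed in a fully pipelined sweep.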

  11. A mass storage system for supercomputers based on Unix

    NASA Technical Reports Server (NTRS)

    Richards, J.; Kummell, T.; Zarlengo, D. G.

    1988-01-01

    The authors present the design, implementation, and utilization of a large mass storage subsystem (MSS) for the numerical aerodynamics simulation. The MSS supports a large networked, multivendor Unix-based supercomputing facility. The MSS at Ames Research Center provides all processors on the numerical aerodynamics system processing network, from workstations to supercomputers, the ability to store large amounts of data in a highly accessible, long-term repository. The MSS uses Unix System V and is capable of storing hundreds of thousands of files ranging from a few bytes to 2 Gb in size.

  12. Advanced Architectures for Astrophysical Supercomputing

    NASA Astrophysics Data System (ADS)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  13. Multiple DNA and protein sequence alignment on a workstation and a supercomputer.

    PubMed

    Tajima, K

    1988-11-01

    This paper describes a multiple alignment method using a workstation and a supercomputer. The method is based on aligning a set of already-aligned sequences with a new sequence, applied recursively. The alignment executes in reasonable computation time on platforms ranging from a workstation to a supercomputer, in terms of both alignment quality and the computational speed gained by parallel processing. The application of the algorithm is illustrated by several examples of multiple alignment of 12 amino acid and DNA sequences of HIV (human immunodeficiency virus) env genes. Colour graphics programs on a workstation and parallel processing on a supercomputer are discussed.
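
    The pairwise core that such a recursive (progressive) scheme applies repeatedly is global alignment by dynamic programming. Below is a minimal, score-only Needleman-Wunsch sketch; the match/mismatch/gap weights are illustrative assumptions, since the paper's actual parameters are not given in the abstract:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score, computed row by row --
    the pairwise kernel that progressive multiple alignment reuses."""
    prev = [j * gap for j in range(len(b) + 1)]   # first DP row: all gaps
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            cur.append(max(diag, prev[j] + gap, cur[j - 1] + gap))
        prev = cur
    return prev[-1]

print(nw_score("GATTACA", "GATCA"))   # 3: five matches minus two gaps
```

    A full aligner would also keep a traceback matrix to recover the aligned strings; the recursive step then aligns the new sequence against the column profile of the existing alignment rather than a single sequence.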

  14. Library Services in a Supercomputer Center.

    ERIC Educational Resources Information Center

    Layman, Mary

    1991-01-01

    Describes library services that are offered at the San Diego Supercomputer Center (SDSC), which is located at the University of California at San Diego. Topics discussed include the user population; online searching; microcomputer use; electronic networks; current awareness programs; library catalogs; and the slide collection. A sidebar outlines…

  15. Extracting the Textual and Temporal Structure of Supercomputing Logs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, S; Singh, I; Chandra, A

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
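
    A simplified stand-in for the syntactic-structure step: mask the variable fields (numbers, hexadecimal identifiers) so that messages sharing a template fall into the same cluster. The regexes and sample log lines below are illustrative assumptions, not the paper's actual clustering algorithm:

```python
import re

def template(msg):
    """Collapse variable fields so messages with the same syntactic
    structure map to one cluster key (hex first, then decimal runs)."""
    msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)
    return re.sub(r"\d+", "<NUM>", msg)

logs = [
    "node n042: ECC error at 0x7f3a12",
    "node n117: ECC error at 0x001bee",
    "job 8812 started on 128 nodes",
]
groups = {}
for line in logs:
    groups.setdefault(template(line), []).append(line)   # online grouping

for key, members in groups.items():
    print(len(members), key)   # two ECC-error messages share one template
```

    The temporal step would then look for templates whose occurrence times cluster together, flagging them as correlated events.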

  16. Green Supercomputing at Argonne

    ScienceCinema

    Beckman, Pete

    2018-02-07

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/

  17. Advancing research and practice: the revised APA Division 30 definition of hypnosis.

    PubMed

    Elkins, Gary R; Barabasz, Arreed F; Council, James R; Spiegel, David

    2015-01-01

    This article describes the history, rationale, and guidelines for developing a new definition of hypnosis by the Society of Psychological Hypnosis, Division 30 of the American Psychological Association. The definition was developed with the aim of being concise, heuristic, and allowing for alternative theories of the mechanisms (to be determined in empirical scientific study). The definition of hypnosis is presented as well as definitions of the following related terms: hypnotic induction, hypnotizability, and hypnotherapy. The implications for advancing research and practice are discussed. The definitions are presented within the article.

  18. Advancing Research and Practice: The Revised APA Division 30 Definition of Hypnosis.

    PubMed

    Elkins, Gary R; Barabasz, Arreed F; Council, James R; Spiegel, David

    2015-04-01

    This article describes the history, rationale, and guidelines for developing a new definition of hypnosis by the Society of Psychological Hypnosis, Division 30 of the American Psychological Association. The definition was developed with the aim of being concise, being heuristic, and allowing for alternative theories of the mechanisms (to be determined in empirical scientific study). The definition of hypnosis is presented as well as definitions of the following related terms: hypnotic induction, hypnotizability, and hypnotherapy. The implications for advancing research and practice are discussed. The definitions are presented within the article.

  19. A fault tolerant spacecraft supercomputer to enable a new class of scientific discovery

    NASA Technical Reports Server (NTRS)

    Katz, D. S.; McVittie, T. I.; Silliman, A. G., Jr.

    2000-01-01

    The goal of the Remote Exploration and Experimentation (REE) Project is to move supercomputing into space in a cost-effective manner and to allow the use of inexpensive, state-of-the-art, commercial-off-the-shelf components and subsystems in these space-based supercomputers.

  20. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly evolving field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we draw on the unique experience gained at the NAS facility at NASA Ames Research Center, where over the last five years massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputing center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  1. The Pawsey Supercomputer geothermal cooling project

    NASA Astrophysics Data System (ADS)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  2. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarje, Abhinav; Jacobsen, Douglas W.; Williams, Samuel W.

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.
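
    The hybrid decomposition described here (distributed-memory ranks over subdomains, threads over blocks of cells within a rank) can be caricatured in a few lines. The sketch below uses Python threads over disjoint array blocks purely to show the structure; it is not MPAS-Ocean's actual OpenMP threading, and the field update is a toy stand-in for per-cell model work:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

# Toy stand-in for per-block ocean-model work: each thread updates an
# independent block of cells, mirroring threads-over-blocks within a rank.
field = np.arange(1_000_000, dtype=np.float64)

def update_block(block):
    start, stop = block
    field[start:stop] *= 0.5          # disjoint slices: no write conflicts
    return field[start:stop].sum()    # per-block partial reduction

blocks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(update_block, blocks))

total = sum(partial_sums)             # rank-local reduction over blocks
print(total)                          # equals 0.5 * sum(0..999999)
```

    The design point mirrored here is that blocks are disjoint, so threads need no locks on the field itself and synchronize only at the final reduction.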

  3. QCD on the BlueGene/L Supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  4. Development of seismic tomography software for hybrid supercomputers

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
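
    The forward-modelling step of the scheme above (computing first-arrival times from an assumed velocity model) can be approximated by shortest paths on a grid graph. This Dijkstra-based sketch is a crude stand-in for a real eikonal-equation solver and is meant only to illustrate the structure of the computation; the grid, spacing, and slowness values are illustrative:

```python
import heapq

def first_arrivals(slowness, h, src):
    """First-arrival travel times on a grid via Dijkstra over 4-neighbour
    links -- a graph approximation of an eikonal solver."""
    ny, nx = len(slowness), len(slowness[0])
    INF = float("inf")
    t = [[INF] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    while heap:
        tt, (i, j) = heapq.heappop(heap)
        if tt > t[i][j]:
            continue                      # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                # edge cost: spacing times the mean slowness of both cells
                cand = tt + h * 0.5 * (slowness[i][j] + slowness[ni][nj])
                if cand < t[ni][nj]:
                    t[ni][nj] = cand
                    heapq.heappush(heap, (cand, (ni, nj)))
    return t

s = [[1.0] * 5 for _ in range(5)]         # uniform slowness of 1 s/km
t = first_arrivals(s, h=1.0, src=(0, 0))
print(t[0][4], t[4][4])                   # 4.0 8.0 in a uniform medium
```

    In the tomography loop, these computed times are compared with observed first arrivals, and the residuals drive the regularized linear solve that adjusts the velocity model.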

  5. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, an Onyx VTX from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment was also purchased, including a desktop personal computer (a PC-486 DX2 with a built-in 10-BaseT Ethernet card), a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith. A reading room was converted into a research computing lab by adding furniture and an air-conditioning unit to provide an appropriate working environment for researchers and the purchased equipment. All of the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  6. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    NASA Astrophysics Data System (ADS)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations that arise from structural uncertainty in numerical treatments of convection, such as convective storm systems. This research describes the effort to port the SAM (System for Atmosphere Modeling) cloud resolving model to heterogeneous supercomputers using GPUs (graphics processing units). We have isolated a standalone configuration of SAM that is targeted for integration into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access, and loop refactoring to a higher abstraction level. We will present early performance results and lessons learned, as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, each with 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with a cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME and to explore its full potential to scientifically and computationally advance climate simulation and prediction.

  7. A Long History of Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  8. Introducing Argonne’s Theta Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Theta, the Argonne Leadership Computing Facility’s (ALCF) new Intel-Cray supercomputer, is officially open to the research community. Theta’s massively parallel, many-core architecture puts the ALCF on the path to Aurora, the facility’s future Intel-Cray system. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.

  9. NSF Establishes First Four National Supercomputer Centers.

    ERIC Educational Resources Information Center

    Lepkowski, Wil

    1985-01-01

    The National Science Foundation (NSF) has awarded support for supercomputer centers at Cornell University, Princeton University, University of California (San Diego), and University of Illinois. These centers are to be the nucleus of a national academic network for use by scientists and engineers throughout the United States. (DH)

  10. Computing at the speed limit (supercomputers)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhard, R.

    1982-07-01

    The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration (VLSI) and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers, about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.

  11. Supercomputing in the Age of Discovering Superearths, Earths and Exoplanet Systems

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.

    2015-01-01

    NASA's Kepler Mission was launched in March 2009 as NASA's first mission capable of finding Earth-size planets orbiting in the habitable zone of Sun-like stars, that range of distances for which liquid water would pool on the surface of a rocky planet. Kepler has discovered over 1000 planets and over 4600 candidates, many of them as small as the Earth. Today, Kepler's amazing success seems to be a fait accompli to those unfamiliar with her history. But twenty years ago, there were no planets known outside our solar system, and few people believed it was possible to detect tiny Earth-size planets orbiting other stars. Motivating NASA to select Kepler for launch required a confluence of the right detector technology, advances in signal processing and algorithms, and the power of supercomputing.

  12. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  13. Interfaces for Advanced Computing.

    ERIC Educational Resources Information Center

    Foley, James D.

    1987-01-01

    Discusses the coming generation of supercomputers that will have the power to make elaborate "artificial realities" that facilitate user-computer communication. Illustrates these technological advancements with examples of the use of head-mounted monitors which are connected to position and orientation sensors, and gloves that track finger and…

  14. Spatiotemporal modeling of node temperatures in supercomputers

    DOE PAGES

    Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; ...

    2016-06-10

    Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus a generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers as well.

  15. A Long History of Supercomputing

    ScienceCinema

    Grider, Gary

    2018-06-13

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  16. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY - T3D. The splitting algorithm combined with variable time step and an explicit method of integration provide reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium and dynamics and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.

  17. A Layered Solution for Supercomputing Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  18. Japanese project aims at supercomputer that executes 10 gflops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burskey, D.

    1984-05-03

    Dubbed supercom by its multicompany design team, the decade-long project's goal is an engineering supercomputer that can execute 10 billion floating-point operations/s, about 20 times faster than today's supercomputers. The project, guided by Japan's Ministry of International Trade and Industry (MITI) and the Agency of Industrial Science and Technology, encompasses three parallel research programs, all aimed at some aspect of the supercomputer. One program should lead to superfast logic and memory circuits, another to a system architecture that will afford the best performance, and the last to the software that will ultimately control the computer. The work on logic and memory chips is based on: GaAs circuits; Josephson junction devices; and high-electron-mobility transistor structures. The architecture will involve parallel processing.

  19. Advanced Code-Division Multiplexers for Superconducting Detector Arrays

    NASA Astrophysics Data System (ADS)

    Irwin, K. D.; Cho, H. M.; Doriese, W. B.; Fowler, J. W.; Hilton, G. C.; Niemack, M. D.; Reintsema, C. D.; Schmidt, D. R.; Ullom, J. N.; Vale, L. R.

    2012-06-01

    Multiplexers based on the modulation of superconducting quantum interference devices are now regularly used in multi-kilopixel arrays of superconducting detectors for astrophysics, cosmology, and materials analysis. Over the next decade, much larger arrays will be needed. These larger arrays require new modulation techniques and compact multiplexer elements that fit within each pixel. We present a new in-focal-plane code-division multiplexer that provides multiplexing elements with the required scalability. This code-division multiplexer uses compact lithographic modulation elements that simultaneously multiplex both signal outputs and superconducting transition-edge sensor (TES) detector bias voltages. It eliminates the shunt resistor used to voltage bias TES detectors, greatly reduces power dissipation, allows different dc bias voltages for each TES, and makes all elements sufficiently compact to fit inside the detector pixel area. These in-focal plane code-division multiplexers can be combined with multi-GHz readout based on superconducting microresonators to scale to even larger arrays.

  20. Supercomputer analysis of sedimentary basins.

    PubMed

    Bethke, C M; Altaner, S P; Harrison, W J; Upson, C

    1988-01-15

    Geological processes of fluid transport and chemical reaction in sedimentary basins have formed many of the earth's energy and mineral resources. These processes can be analyzed on natural time and distance scales with the use of supercomputers. Numerical experiments are presented that give insights to the factors controlling subsurface pressures, temperatures, and reactions; the origin of ores; and the distribution and quality of hydrocarbon reservoirs. The results show that numerical analysis combined with stratigraphic, sea level, and plate tectonic histories provides a powerful tool for studying the evolution of sedimentary basins over geologic time.

  1. Probing the cosmic causes of errors in supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Cosmic rays from outer space are causing errors in supercomputers. The neutrons that pass through the CPU may be causing binary data to flip leading to incorrect calculations. Los Alamos National Laboratory has developed detectors to determine how much data is being corrupted by these cosmic particles.

  2. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Klimentov, A

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. …

  3. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  4. Access to Supercomputers. Higher Education Panel Report 69.

    ERIC Educational Resources Information Center

    Holmstrom, Engin Inel

    This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…

  5. A Layered Solution for Supercomputing Storage

    ScienceCinema

    Grider, Gary

    2018-06-13

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  6. Supercomputer algorithms for efficient linear octree encoding of three-dimensional brain images.

    PubMed

    Berger, S B; Reis, D J

    1995-02-01

    We designed and implemented algorithms for three-dimensional (3-D) reconstruction of brain images from serial sections using two important supercomputer architectures, vector and parallel. These architectures were represented by the Cray YMP and Connection Machine CM-2, respectively. The programs operated on linear octree representations of the brain data sets, and achieved 500-800 times acceleration when compared with a conventional laboratory workstation. As the need for higher-resolution data sets increases, supercomputer algorithms may offer a means of performing 3-D reconstruction well above current experimental limits.

  7. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  8. Designing a connectionist network supercomputer.

    PubMed

    Asanović, K; Beck, J; Feldman, J; Morgan, N; Wawrzynek, J

    1993-12-01

    This paper describes an effort at UC Berkeley and the International Computer Science Institute to develop a supercomputer for artificial neural network applications. Our perspective has been strongly influenced by earlier experiences with the construction and use of a simpler machine. In particular, we have observed Amdahl's Law in action in our designs and those of others. These observations inspire attention to many factors beyond fast multiply-accumulate arithmetic. We describe a number of these factors along with rough expressions for their influence and then give the applications targets, machine goals and the system architecture for the machine we are currently designing.

  9. Building black holes: supercomputer cinema.

    PubMed

    Shapiro, S L; Teukolsky, S A

    1988-07-22

    A new computer code can solve Einstein's equations of general relativity for the dynamical evolution of a relativistic star cluster. The cluster may contain a large number of stars that move in a strong gravitational field at speeds approaching the speed of light. Unstable star clusters undergo catastrophic collapse to black holes. The collapse of an unstable cluster to a supermassive black hole at the center of a galaxy may explain the origin of quasars and active galactic nuclei. By means of a supercomputer simulation and color graphics, the whole process can be viewed in real time on a movie screen.

  10. Next Generation Security for the 10,240 Processor Columbia System

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas; Kolano, Paul; Shaw, Derek; Keller, Chris; Tweton, Dave; Welch, Todd; Liu, Wen (Betty)

    2005-01-01

    This presentation includes a discussion of the Columbia 10,240-processor system located at the NASA Advanced Supercomputing (NAS) division at the NASA Ames Research Center which supports each of NASA's four missions: science, exploration systems, aeronautics, and space operations. It is comprised of 20 Silicon Graphics nodes, each consisting of 512 Itanium II processors. A 64 processor Columbia front-end system supports users as they prepare their jobs and then submits them to the PBS system. Columbia nodes and front-end systems use the Linux OS. Prior to SC04, the Columbia system was used to attain a processing speed of 51.87 TeraFlops, which made it number two on the Top 500 list of the world's supercomputers and the world's fastest "operational" supercomputer since it was fully engaged in supporting NASA users.

  11. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU and memory intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer work-stations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector Supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 Supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  12. Impact of the Columbia Supercomputer on NASA Space and Exploration Mission

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kwak, Dochan; Kiris, Cetin; Lawrence, Scott

    2006-01-01

    NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold, and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions, including space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprised of massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing HEC challenges that must be overcome to solve space-related science problems in the future.

  13. Data communication requirements for the advanced NAS network

    NASA Technical Reports Server (NTRS)

    Levin, Eugene; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of the Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations, and by remote communications to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. In the 1987/1988 time period it is anticipated that a computer with 4 times the processing speed of a Cray 2 will be obtained and by 1990 an additional supercomputer with 16 times the speed of the Cray 2. The implications of this 20-fold increase in processing power on the data communications requirements are described. The analysis was based on models of the projected workload and system architecture. The results are presented together with the estimates of their sensitivity to assumptions inherent in the models.

  14. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the…

  15. The Sky's the Limit When Super Students Meet Supercomputers.

    ERIC Educational Resources Information Center

    Trotter, Andrew

    1991-01-01

    In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…

  16. Supercomputer use in orthopaedic biomechanics research: focus on functional adaptation of bone.

    PubMed

    Hart, R T; Thongpreda, N; Van Buskirk, W C

    1988-01-01

    The authors describe two biomechanical analyses carried out using numerical methods. One is an analysis of the stress and strain in a human mandible, and the other analysis involves modeling the adaptive response of a sheep bone to mechanical loading. The computing environment required for the two types of analyses is discussed. It is shown that a simple stress analysis of a geometrically complex mandible can be accomplished using a minicomputer. However, more sophisticated analyses of the same model with dynamic loading or nonlinear materials would require supercomputer capabilities. A supercomputer is also required for modeling the adaptive response of living bone, even when simple geometric and material models are used.

  17. NOAA announces significant investment in next generation of supercomputers

    Science.gov Websites

    Today, NOAA announced the next phase in the agency's efforts to increase supercomputing capacity to provide more timely, accurate, and reliable weather forecasts. …

  18. Group-based variant calling leveraging next-generation supercomputing for large-scale whole-genome sequencing studies.

    PubMed

    Standish, Kristopher A; Carland, Tristan M; Lockwood, Glenn K; Pfeiffer, Wayne; Tatineni, Mahidhar; Huang, C Chris; Lamberth, Sarah; Cherkas, Yauheniya; Brodmerkel, Carrie; Jaeger, Ed; Smith, Lance; Rajagopal, Gunaretnam; Curran, Mark E; Schork, Nicholas J

    2015-09-22

    Next-generation sequencing (NGS) technologies have become much more efficient, allowing whole human genomes to be sequenced faster and cheaper than ever before. However, processing the raw sequence reads associated with NGS technologies requires care and sophistication in order to draw compelling inferences about phenotypic consequences of variation in human genomes. It has been shown that different approaches to variant calling from NGS data can lead to different conclusions. Ensuring appropriate accuracy and quality in variant calling can come at a computational cost. We describe our experience implementing and evaluating a group-based approach to calling variants on large numbers of whole human genomes. We explore the influence of many factors that may impact the accuracy and efficiency of group-based variant calling, including group size, the biogeographical backgrounds of the individuals who have been sequenced, and the computing environment used. We make efficient use of the Gordon supercomputer cluster at the San Diego Supercomputer Center by incorporating job-packing and parallelization considerations into our workflow while calling variants on 437 whole human genomes generated as part of a large association study. We ultimately find that our workflow resulted in high-quality variant calls in a computationally efficient manner. We argue that studies like ours should motivate further investigations combining hardware-oriented advances in computing systems with algorithmic developments to tackle emerging 'big data' problems in biomedical research brought on by the expansion of NGS technologies.

  19. Sequence search on a supercomputer.

    PubMed

    Gotoh, O; Tagashira, Y

    1986-01-10

    A set of programs was developed for searching nucleic acid and protein sequence data bases for sequences similar to a given sequence. The programs, written in FORTRAN 77, were optimized for vector processing on a Hitachi S810-20 supercomputer. A search of a 500-residue protein sequence against the entire PIR data base Ver. 1.0 (1) (0.5 M residues) is carried out in a CPU time of 45 sec. About 4 min is required for an exhaustive search of a 1500-base nucleotide sequence against all mammalian sequences (1.2M bases) in Genbank Ver. 29.0. The CPU time is reduced to about a quarter with a faster version.

  20. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  1. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware must satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. Results are presented for payload image-processing algorithms that determine in real time which data snapshot to gather and transfer to the ground, according to the needs of the mission, the processing time, and the power consumed.

  2. STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.

    PubMed

    Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X

    2009-08-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.

  3. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    NASA Astrophysics Data System (ADS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-09-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.

  4. Optical clock distribution in supercomputers using polyimide-based waveguides

    NASA Astrophysics Data System (ADS)

    Bihari, Bipin; Gan, Jianhua; Wu, Linghui; Liu, Yujie; Tang, Suning; Chen, Ray T.

    1999-04-01

    Guided-wave optics is a promising way to deliver high-speed clock signals in supercomputers with minimized clock skew. Si-CMOS compatible polymer-based waveguides for optoelectronic interconnects and packaging have been fabricated and characterized. A 1-to-48 fanout optoelectronic interconnection layer (OIL) structure based on Ultradel 9120/9020 for high-speed massive clock signal distribution on a Cray T-90 supercomputer board has been constructed. The OIL employs multimode polymeric channel waveguides in conjunction with surface-normal waveguide output couplers and 1-to-2 splitters. Surface-normal couplers can couple the optical clock signals into and out of the H-tree polyimide waveguides surface-normally, which facilitates the integration of photodetectors to convert optical signals to electrical signals. A 45-degree surface-normal coupler has been integrated at each output end. The measured output coupling efficiency is nearly 100 percent. The output profile from the 45-degree surface-normal coupler was calculated using the Fresnel approximation; the theoretical result is in good agreement with the experimental result. A total insertion loss of 7.98 dB at 850 nm was measured experimentally.

  5. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  6. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
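The sorted k-mer lists mentioned in the record are worth a small illustration: a sorted list lets seed lookups be done by binary search in a compact, cache- and distribution-friendly layout instead of a hash table. This sketch is assumed behavior, not code from the BG/P implementation; genome strings and `k=5` are toy values.

```python
# Illustrative sketch: a genome's k-mers kept as one sorted list, so a
# seed lookup is a binary search (O(log n), no hash-table overhead).
import bisect

def sorted_kmers(genome, k=5):
    """Build the sorted list of all overlapping k-mers of a genome."""
    return sorted(genome[i:i + k] for i in range(len(genome) - k + 1))

def has_seed(kmers, seed):
    """Binary-search a sorted k-mer list for an exact seed match."""
    i = bisect.bisect_left(kmers, seed)
    return i < len(kmers) and kmers[i] == seed

g1 = sorted_kmers("ACGTACGGTTACG")   # toy "genome"
```

In a memory-bounded setting the sorted list can also be stored as packed 2-bit codes, which is part of how the footprint reduction described above becomes possible.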

  7. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps.
The solution architecture includes the following sub-systems: (1) data acquisition responsible for
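The computational kernel behind the method described above, cross-correlating noise records between station pairs, can be sketched compactly with the FFT. This is a minimal illustration with a synthetic delayed trace, not code from the Piz Daint service; the trace length, seed, and 7-sample delay are fabricated.

```python
# Minimal sketch of ambient-noise cross-correlation via the FFT:
# the correlation peak between two stations recovers the travel-time lag.
import numpy as np

def cross_correlate(a, b):
    """Frequency-domain cross-correlation of two equal-length traces."""
    n = len(a)
    A = np.fft.rfft(a, 2 * n)                 # zero-pad to avoid wraparound
    B = np.fft.rfft(b, 2 * n)
    cc = np.fft.irfft(A * np.conj(B), 2 * n)
    return np.concatenate((cc[-(n - 1):], cc[:n]))   # lags -(n-1) .. (n-1)

rng = np.random.default_rng(0)
noise = rng.standard_normal(512)              # synthetic noise at station A
delayed = np.roll(noise, 7)                   # station B sees it 7 samples later
cc = cross_correlate(delayed, noise)
lag = int(np.argmax(cc)) - (len(noise) - 1)   # peak position gives the delay
```

The production problem is this kernel applied to every station pair and every time window, which is what makes GPU acceleration attractive.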

  8. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  9. Particle simulation on heterogeneous distributed supercomputers

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Dagum, Leonardo

    1993-01-01

    We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.

  10. The TESS science processing operations center

    NASA Astrophysics Data System (ADS)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; Smith, Jeffrey C.; Caldwell, Douglas A.; Chacon, A. D.; Henze, Christopher; Heiges, Cory; Latham, David W.; Morgan, Edward; Swade, Daryl; Rinehart, Stephen; Vanderspek, Roland

    2016-08-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover 1,000 small planets with Rp < 4 R⊕ and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).

  11. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  12. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Pope, Adrian; Finkel, Hal

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the ‘Dark Universe’, dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC’s design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  13. A report documenting the completion of the Los Alamos National Laboratory portion of the ASC level II milestone ""Visualization on the supercomputing platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, James P; Patchett, John M; Lo, Li - Ta

    2011-01-24

    This report provides documentation for the completion of the Los Alamos portion of the ASC Level II 'Visualization on the Supercomputing Platform' milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. The milestone text is shown in Figure 1 with the Los Alamos portions highlighted in boldface text. Visualization and analysis of petascale data are limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone evaluates the visualization and rendering performance of current and next-generation supercomputers in contrast to GPU-based visualization clusters, and evaluates the performance of common analysis libraries, coupled with the simulation, that analyze and write data to disk during a running simulation. This milestone explores, evaluates, and advances the maturity level of these technologies and their applicability to problems of interest to the ASC program. In conclusion, we improved CPU-based rendering performance by a factor of 2-10 times in our tests. In addition, we evaluated CPU- and GPU-based rendering performance. We encourage production visualization experts to consider

  14. Supercomputers Join the Fight against Cancer – U.S. Department of Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The Department of Energy has some of the best supercomputers in the world. Now, they’re joining the fight against cancer. Learn about our new partnership with the National Cancer Institute and GlaxoSmithKline Pharmaceuticals.

  15. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can extend to a few thousand parameters, which must be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can run in a fraction of the time, thereby allowing cost-effective calibration of building models.
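The surrogate ("agent") idea described above can be shown in miniature: fit a cheap model to a batch of expensive simulation samples, then evaluate the cheap model thereafter. This sketch substitutes a least-squares fit for the record's machine-learning agents, and the "building parameters" and simulator are fabricated toy stand-ins, not EnergyPlus.

```python
# Sketch of the surrogate/agent pattern: expensive simulations generate
# training data once; the fitted surrogate answers later queries cheaply.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 3))            # 3 toy "building parameters"

def expensive_sim(x):
    """Stand-in for one EnergyPlus run (here just a known linear law)."""
    return 5.0 + 2.0 * x[0] - 1.5 * x[1] + 0.5 * x[2]

y = np.array([expensive_sim(x) for x in X])  # the "supercomputer" batch

A = np.hstack([np.ones((len(X), 1)), X])     # linear surrogate: y ≈ c + w·x
coef, *_ = np.linalg.lstsq(A, y, rcond=None) # recovers [5.0, 2.0, -1.5, 0.5]
pred = A @ coef                              # near-instant re-evaluation
```

Real agents are nonlinear (the record's point is exactly that millions of samples are needed to train them), but the train-once, query-cheaply economics is the same.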

  16. Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Nitzberg, Bill

    1999-01-01

    The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet, utilization was affected little. In particular these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
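The FIFO-versus-backfilling gap reported above can be reproduced on a toy schedule. This is an illustration of the scheduling concepts only (an idealized first-fit simulator on 4 processors), not NAS's actual scheduler; the job mix is fabricated.

```python
# Toy illustration of why backfilling beats naive FIFO first-fit:
# on 4 processors, FIFO leaves a hole in front of a wide job, while
# EASY-style backfilling slips a short narrow job into that hole
# without delaying the wide job's reserved start time.

def makespan_fifo(jobs, procs):
    """Run jobs strictly in submission order; each waits for enough procs."""
    free = [0.0] * procs                 # time each processor becomes free
    end = 0.0
    for width, runtime in jobs:
        free.sort()
        start = free[width - 1]          # wait until `width` procs are free
        for p in range(width):
            free[p] = start + runtime
        end = max(end, start + runtime)
    return end

jobs = [(2, 10.0), (4, 10.0), (1, 10.0)]   # (procs needed, runtime)
fifo = makespan_fifo(jobs, 4)               # narrow job runs last: 30.0
# Backfilling would start the (1, 10.0) job at t=0 beside the (2, 10.0) job,
# since finishing at t=10 cannot delay the 4-wide job's reservation:
# makespan 20.0, i.e. utilization rises from 70/120 ≈ 58% to 70/80 ≈ 88%.
```

The roughly 15-percentage-point gain the record measures over a decade of production workloads is the real-world version of this toy effect.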

  17. Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1988-01-01

    A considerable volume of large computational codes was developed for NASA over the past twenty-five years. These codes represent algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software. This opportunity arises primarily from architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for conversion to the Cray X-MP vector supercomputer is also described.

  18. LLMapReduce: Multi-Lingual Map-Reduce for Supercomputing Environments

    DTIC Science & Technology

    2015-11-20

    Popularized by Google [36] and Apache Hadoop [37], map-reduce has become a staple technology of the ever-growing big data community… The map-reduce parallel programming model has become extremely popular in the big data community. Many big data … to big data users running on a supercomputer. LLMapReduce dramatically simplifies map-reduce programming by providing simple parallel programming
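The map-reduce model named in this record reduces to two user-supplied functions: a mapper that emits key-value pairs and a reducer that folds the values collected per key. This pure-Python sketch shows the pattern itself; it is not LLMapReduce code, which wraps the same pattern for supercomputer schedulers.

```python
# The essence of map-reduce: map emits (key, value) pairs, a shuffle
# groups values by key, and reduce folds each group to one result.
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    groups = defaultdict(list)
    for rec in records:                     # map phase
        for key, value in mapper(rec):
            groups[key].append(value)       # shuffle: group by key
    return {k: reducer(vs) for k, vs in groups.items()}  # reduce phase

lines = ["big data big", "data"]
counts = map_reduce(lines,
                    mapper=lambda line: [(w, 1) for w in line.split()],
                    reducer=sum)            # classic word count
```

In a distributed setting the map calls and the per-key reductions run on different nodes; the single-process version above is the semantics that the parallel runtimes preserve.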

  19. Urethral advancement in hypospadias with a distal division of the corpus spongiosum: outcome in 158 cases.

    PubMed

    Thiry, S; Gorduza, D; Mouriquand, P

    2014-06-01

    We report the outcome of urethral mobilization and advancement (the Koff procedure) in hypospadias with a distal division of the corpus spongiosum, and in redo cases with distal urethral failure. From January 1999 to November 2012, 158 children with a distal hypospadias (115 primary cases and 43 redo cases) underwent surgical repair using the Koff technique, with a median age at surgery of 21 months (range, 12-217 months). Mean follow-up was 19 months (median, 14 months). Thirty patients (19%) presented with a complication (13.9% in primary cases and 32.5% in redo surgery), mostly at the beginning of our experience. Meatal stenosis was the most common complication (3.5% in primary cases, 6% overall). Ventral curvature (>10°), which is considered a possible long-term iatrogenic complication of the Koff procedure, was not found in patients with a fully grown penis, except in one redo patient who, retrospectively, had an inadequate indication for this type of repair. Of 158 patients, 33 reached the age of puberty (>14 years old) with a mean follow-up of 34 months; only one presented with a significant ventral curvature. Urethral mobilization and advancement is a reasonable alternative for anterior hypospadias and distal fistula repair in selected cases. It has two major advantages compared with other techniques: it avoids any urethroplasty with non-urethral tissue, and it eliminates dysplastic tissues located beyond the division of the corpus spongiosum, which may not grow at the same pace as the rest of the penis. Significant iatrogenic curvature in the fully grown penis is not supported by this series. Copyright © 2013 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  20. The BlueGene/L supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanota, Gyan; Chen, Dong; Gara, Alan; Vranas, Pavlos

    2003-05-01

    The architecture of the BlueGene/L massively parallel supercomputer is described. Each computing node consists of a single compute ASIC plus 256 MB of external memory. The compute ASIC integrates two 700 MHz PowerPC 440 integer CPU cores, two 2.8 Gflops floating point units, 4 MB of embedded DRAM as cache, a memory controller for external memory, six 1.4 Gbit/s bi-directional ports for a 3-dimensional torus network connection, three 2.8 Gbit/s bi-directional ports for connecting to a global tree network, and a Gigabit Ethernet for I/O. 65,536 such nodes are connected into a 3-d torus with a geometry of 32×32×64. The total peak performance of the system is 360 Teraflops and the total amount of memory is 16 TeraBytes.
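The record's figures can be sanity-checked with simple arithmetic, and the torus topology reduces to modular coordinate wraparound. The sketch below is an illustration of those two points only (65,536 nodes × two 2.8-Gflops FPUs ≈ 367 Tflops raw, close to the quoted 360 Tflops peak); it is obviously not BlueGene/L system software.

```python
# Sanity-check of the BlueGene/L numbers plus the 3-D torus neighbor rule:
# on a torus every coordinate wraps around modulo the grid dimension.

def torus_neighbors(x, y, z, dims=(32, 32, 64)):
    """The six nearest neighbors of a node on a 3-D torus (with wraparound)."""
    X, Y, Z = dims
    return [((x + dx) % X, (y + dy) % Y, (z + dz) % Z)
            for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)]]

nodes = 32 * 32 * 64                       # 65,536 nodes in the 32x32x64 torus
peak_tflops = nodes * 2 * 2.8 / 1000       # two 2.8-Gflops FPUs per node
corner = torus_neighbors(0, 0, 0)          # wraparound reaches (31,0,0), (0,0,63)
```

The wraparound is what keeps every node's neighbor distance uniform, so nearest-neighbor communication patterns need no special cases at the grid edges.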

  1. NSF Says It Will Support Supercomputer Centers in California and Illinois.

    ERIC Educational Resources Information Center

    Strosnider, Kim; Young, Jeffrey R.

    1997-01-01

    The National Science Foundation will increase support for supercomputer centers at the University of California, San Diego and the University of Illinois, Urbana-Champaign, while leaving unclear the status of the program at Cornell University (New York) and a cooperative Carnegie-Mellon University (Pennsylvania) and University of Pittsburgh…

  2. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    NASA Astrophysics Data System (ADS)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might strongly vary in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver explains previously unresolvable focused fluid flow as a natural outcome of the porosity setup. 
In both cases
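The "iterative finite difference scheme with preconditioning of the residuals" described above can be illustrated on the simplest possible case: a 1-D Poisson problem solved by damped pseudo-transient iteration. The damping factor, pseudo-time step, and grid size below are illustrative choices for a serial sketch; the production solvers apply the same local-update structure across GPUs with point-to-point MPI halo exchanges.

```python
# Minimal sketch of a pseudo-transient iterative FD solver:
# solve u'' = f on [0,1] with u(0)=u(1)=0 by repeatedly adding the
# damped residual, so only nearest-neighbor stencil data is ever needed.
import numpy as np

n, L = 64, 1.0
dx = L / (n - 1)
x = np.linspace(0, L, n)
f = -np.pi**2 * np.sin(np.pi * x)    # chosen so the exact solution is sin(pi x)

u = np.zeros(n)                      # unknowns (boundaries stay zero)
du = np.zeros(n)                     # damped residual ("pseudo-velocity")
damp, dtau = 0.9, 0.4 * dx**2        # damping factor and pseudo-time step

for _ in range(20000):
    res = np.zeros(n)
    res[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2 - f[1:-1]
    du = damp * du + res             # residual memory accelerates convergence
    u += dtau * du

err = np.max(np.abs(u - np.sin(np.pi * x)))   # left with discretization error
```

Because each update touches only a point and its two neighbors, a domain decomposition needs to exchange just one layer of boundary values per iteration, which is why the method scales linearly by construction.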

  3. Supercomputer Simulations Help Develop New Approach to Fight Antibiotic Resistance

    ScienceCinema

    Zgurskaya, Helen; Smith, Jeremy

    2018-06-13

    ORNL leveraged powerful supercomputing to support research led by University of Oklahoma scientists to identify chemicals that seek out and disrupt bacterial proteins called efflux pumps, known to be a major cause of antibiotic resistance. By running simulations on Titan, the team selected molecules most likely to target and potentially disable the assembly of efflux pumps found in E. coli bacteria cells.

  4. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  5. Design applications for supercomputers

    NASA Technical Reports Server (NTRS)

    Studerus, C. J.

    1987-01-01

    The complexity of codes for the solution of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed, and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes solution of these complex flows more practical, permits the introduction of the codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming so. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows over hypersonic vehicle forebodies and engine inlets.

  6. Multi-petascale highly efficient parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100-petaflop scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. The node design integrates a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality, allowing for parallel-processing message passing.
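    On a torus network, addressing wraps around in every dimension, so each node has exactly two neighbors per dimension (ten in 5-D). A minimal sketch of that neighbor computation, with purely hypothetical dimension sizes rather than the actual machine geometry:

```python
# Sketch of neighbor enumeration on a 5-D torus interconnect.
# DIMS is illustrative only, not the real machine's dimensions.
DIMS = (4, 4, 4, 4, 2)

def torus_neighbors(coord, dims=DIMS):
    """Return the 10 neighbors (2 per dimension) of a node, with wraparound."""
    neighbors = []
    for d in range(len(dims)):
        for step in (-1, +1):
            n = list(coord)
            n[d] = (n[d] + step) % dims[d]  # torus wraparound
            neighbors.append(tuple(n))
    return neighbors
```

    The wraparound is what keeps the worst-case hop count low: no node is ever more than half a dimension's extent away in any direction.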

  7. Hurricane Research Division of AOML/NOAA

    Science.gov Websites

    The mission of NOAA's Hurricane Research Division (HRD) is to advance the understanding and prediction of hurricanes.

  8. AICD -- Advanced Industrial Concepts Division Biological and Chemical Technologies Research Program. 1993 Annual summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petersen, G.; Bair, K.; Ross, J.

    1994-03-01

    The annual summary report presents the fiscal year (FY) 1993 research activities and accomplishments for the United States Department of Energy (DOE) Biological and Chemical Technologies Research (BCTR) Program of the Advanced Industrial Concepts Division (AICD). This AICD program resides within the Office of Industrial Technologies (OIT) of the Office of Energy Efficiency and Renewable Energy (EE). The annual summary report for 1993 (ASR 93) contains the following: a program description (including the BCTR program mission statement, historical background, relevance, goals and objectives), program structure and organization, selected technical and programmatic highlights for 1993, detailed descriptions of individual projects, and a listing of program output, including a bibliography of published work, patents, and awards arising from work supported by BCTR.

  9. The impact of the U.S. supercomputing initiative will be global

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Dona

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment, called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

  10. Supercomputing 2002: NAS Demo Abstracts

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor)

    2002-01-01

    The hyperwall is a new concept in visual supercomputing, conceived and developed by the NAS Exploratory Computing Group. The hyperwall will allow simultaneous and coordinated visualization of, and interaction with, an array of processes, such as the computations of a parameter study or the parallel evolutions of a genetic algorithm population. Making over 65 million pixels available to the user, the hyperwall will enable and elicit qualitatively new ways of leveraging computers to accomplish science. It is currently still unclear whether we will be able to transport the hyperwall to SC02. The crucial display frame still has not been completed by the metal fabrication shop, although they promised an August delivery. Also, we are still working on the fragile-node issue, which may require transplantation of the compute nodes from the present 2U cases into 3U cases. This modification will increase the present 3-rack configuration to 5 racks.

  11. Developments in the simulation of compressible inviscid and viscous flow on supercomputers

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Buning, P. G.

    1985-01-01

    In anticipation of future supercomputers, finite difference codes are rapidly being extended to simulate three-dimensional compressible flow about complex configurations. Some of these developments are reviewed. The importance of computational flow visualization and diagnostic methods to three-dimensional flow simulation is also briefly discussed.

  12. The TESS Science Processing Operations Center

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; hide

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with R(sub p) less than 4 Earth radii, and to measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).

  13. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We show the Cray shared-memory vector-architecture, and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
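    The vectorization question described above comes down to whether loop iterations are independent. A minimal illustration (in Python as a stand-in for the original Fortran, not the grassland model itself): the first loop has no cross-iteration dependence and maps naturally onto vector hardware, while the second carries a recurrence that a straightforward vectorizer cannot handle.

```python
# Independent iterations: each output element depends only on inputs,
# so a vectorizing compiler can issue these as vector operations.
def saxpy(alpha, x, y):
    return [alpha * xi + yi for xi, yi in zip(x, y)]

# Loop-carried dependence: element i needs element i-1, a recurrence
# that defeats straightforward vectorization on a vector machine.
def prefix_sum(x):
    s, total = [], 0.0
    for xi in x:
        total += xi
        s.append(total)
    return s
```

    Restructuring a model so that its inner loops look like the first form rather than the second is a large part of the porting effort the abstract describes.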

  14. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    PubMed

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
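    The abstract does not spell out the load-balancing strategy, so as a hypothetical illustration in the same low-cost spirit, a longest-processing-time-first heuristic assigns each document (largest first) to the currently least-loaded worker:

```python
import heapq

def balance(doc_sizes, n_workers):
    """Greedy LPT: assign each document, largest first, to the least-loaded worker."""
    heap = [(0, w) for w in range(n_workers)]       # (current load, worker id)
    assignment = {w: [] for w in range(n_workers)}
    for size in sorted(doc_sizes, reverse=True):
        load, w = heapq.heappop(heap)               # least-loaded worker
        assignment[w].append(size)
        heapq.heappush(heap, (load + size, w))
    return assignment
```

    Because text-mining runtimes correlate strongly with document length, even such a simple size-based heuristic keeps per-node workloads close to uniform.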

  15. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    NASA Technical Reports Server (NTRS)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  16. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    NASA Astrophysics Data System (ADS)

    Landgrebe, Anton J.

    1987-03-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  17. Parallel-vector solution of large-scale structural analysis problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1989-01-01

    A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
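    The Choleski-based approach factors the symmetric positive-definite system matrix once and then solves by forward and backward substitution. A serial pure-Python sketch of that structure (not the paper's Force implementation, which parallelizes and vectorizes these loops):

```python
import math

def cholesky(A):
    """Factor a symmetric positive-definite matrix A into L with A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve(A, b):
    """Solve A x = b via forward, then backward, substitution on the factor."""
    L, n = cholesky(A), len(b)
    y = [0.0] * n
    for i in range(n):                  # forward: L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):        # backward: L^T x = y
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

    The inner dot products are exactly the operations a parallel-vector solver distributes across processors and pipelines through vector units.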

  18. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
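    As a baseline for the parallel/vector variants the survey discusses, the unpreconditioned conjugate-gradient iteration can be sketched as follows (pure Python with a dense matrix for illustration; practical codes use sparse storage and the preconditionings discussed above):

```python
def cg(A, b, tol=1e-10, max_iter=100):
    """Unpreconditioned conjugate gradient for symmetric positive-definite A."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                              # residual b - A x, with x = 0
    p = r[:]                              # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

    Note that every step is a matrix-vector product, dot product, or vector update, which is why the method vectorizes well; it is the triangular solves added by incomplete-factorization preconditioning that introduce the serial bottleneck described above.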

  19. A special purpose silicon compiler for designing supercomputing VLSI systems

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over the silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  20. Website for the Space Science Division

    NASA Technical Reports Server (NTRS)

    Schilling, James; DeVincenzi, Donald (Technical Monitor)

    2002-01-01

    The Space Science Division at NASA Ames Research Center is dedicated to research in astrophysics, exobiology, advanced life support technologies, and planetary science. These research programs are structured around Astrobiology (the study of life in the universe and the chemical and physical forces and adaptations that influence life's origin, evolution, and destiny), and address some of the most fundamental questions pursued by science. These questions examine the origin of life and our place in the universe. Ames is recognized as a world leader in Astrobiology. In pursuing our mission in Astrobiology, Space Science Division scientists perform pioneering basic research and technology development.

  1. Instrumentation and Controls Division Overview: Sensors Development for Harsh Environments at Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Zeller, Mary V.; Lei, Jih-Fen

    2002-01-01

    The Instrumentation and Controls Division is responsible for planning, conducting and directing basic and applied research on advanced instrumentation and controls technologies for aerospace propulsion and power applications. The Division's advanced research in harsh environment sensors, high temperature high power electronics, MEMS (microelectromechanical systems), nanotechnology, high data rate optical instrumentation, active and intelligent controls, and health monitoring and management will enable self-feeling, self-thinking, self-reconfiguring and self-healing Aerospace Propulsion Systems. These research areas address Agency challenges to deliver aerospace systems with reduced size and weight, and increased functionality and intelligence for future NASA missions in advanced aeronautics, economical space transportation, and pioneering space exploration. The Division also actively supports educational and technology transfer activities aimed at benefiting all humankind.

  2. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis workload management system, was developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences, such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed, and made available to bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to a distributed computing environment powered by PanDA. To run the pipeline, we split input files into chunks that are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates data. We dramatically decreased the total wall time through automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce payload execution time for mammoth DNA samples from weeks to days.
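    The chunk-process-merge pattern described in the abstract can be sketched generically; the `process` callable below stands in for a PALEOMIX invocation on one node and is a deliberate simplification of the actual PanDA-driven workflow:

```python
def split(records, n_chunks):
    """Split a list of input records into n roughly equal chunks."""
    k, m = divmod(len(records), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        end = start + k + (1 if i < m else 0)  # spread the remainder
        chunks.append(records[start:end])
        start = end
    return chunks

def run_pipeline(chunks, process):
    """Run each chunk independently (in reality, as separate PanDA jobs),
    then merge the per-chunk outputs into one result."""
    outputs = [process(c) for c in chunks]          # embarrassingly parallel
    return [rec for out in outputs for rec in out]  # merge step
```

    The scatter step is what lets a broker resubmit only the failed chunks rather than the whole month-long run, which is where most of the wall-time reduction comes from.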

  3. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of the multiple processors available on current supercomputers (computers with a theoretical peak performance of 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, turnaround improves when multiple processors are used efficiently to execute an algorithm. The concept of multiple instruction, multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing single-processor code. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface, and the existing code is mapped without the need to develop a new algorithm. The procedure for building a code with this approach is automated with the Unix stream editor.
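    The multitasking idea, i.e., the same code executing concurrently on different slices of the data, can be illustrated with a toy Python analogue (threads standing in for the supercomputer's processors; the quadrature problem is invented for the example, not taken from the paper):

```python
from multiprocessing.dummy import Pool  # thread-backed pool, for illustration

def integrate_strip(args):
    """Each worker integrates f(x) = x*x over its own strip of [0, 1]
    by the midpoint rule; strips are independent, so they run concurrently."""
    lo, hi, steps = args
    h = (hi - lo) / steps
    return sum((lo + (i + 0.5) * h) ** 2 * h for i in range(steps))

def parallel_integrate(n_workers=4, steps_per_worker=1000):
    strips = [(w / n_workers, (w + 1) / n_workers, steps_per_worker)
              for w in range(n_workers)]
    with Pool(n_workers) as pool:
        return sum(pool.map(integrate_strip, strips))  # reduce partial sums
```

    The result approaches 1/3; the same partition-compute-reduce shape underlies the multitasked aerodynamics codes, with memory regions mapped to processors instead of strips to threads.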

  4. The Q continuum simulation: Harnessing the power of GPU accelerated supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris

    2015-08-01

    Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume cosmological simulations. We report the first results from the "Q Continuum" cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)^3 and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≈ 1.5 × 10^8 M_⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching, in a large cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan's GPU accelerators.

  5. 75 FR 45158 - Holcim (US) Inc. Corporate Division Including On-Site Leased Workers From Manpower, Office Team...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-02

    .... Corporate Division Including On-Site Leased Workers From Manpower, Office Team and Advance Temporary... from Manpower and Office Team, Dundee, Michigan. The notice was published in the Federal Register on... Holcim (US) Inc., Corporate Division, including on-site leased workers from Manpower, Office Team Advance...

  6. Integration of PanDA workload management system with Titan supercomputer at OLCF

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  7. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission-critical Sandia applications and an emerging class of more data-intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  8. High performance computing for advanced modeling and simulation of materials

    NASA Astrophysics Data System (ADS)

    Wang, Jue; Gao, Fei; Vazquez-Poletti, Jose Luis; Li, Jianjiang

    2017-02-01

    The First International Workshop on High Performance Computing for Advanced Modeling and Simulation of Materials (HPCMS2015) was held in Austin, Texas, USA, Nov. 18, 2015. HPCMS 2015 was organized by Computer Network Information Center (Chinese Academy of Sciences), University of Michigan, Universidad Complutense de Madrid, University of Science and Technology Beijing, Pittsburgh Supercomputing Center, China Institute of Atomic Energy, and Ames Laboratory.

  9. The ChemViz Project: Using a Supercomputer To Illustrate Abstract Concepts in Chemistry.

    ERIC Educational Resources Information Center

    Beckwith, E. Kenneth; Nelson, Christopher

    1998-01-01

    Describes the Chemistry Visualization (ChemViz) Project, a Web venture maintained by the University of Illinois National Center for Supercomputing Applications (NCSA) that enables high school students to use computational chemistry as a technique for understanding abstract concepts. Discusses the evolution of computational chemistry and provides a…

  10. Will Your Next Supercomputer Come from Costco?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farber, Rob

    2007-04-15

    A fun topic for April, one that is not an April fool's joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronics vendor. Yes, it's true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.

  11. Advances in nickel hydrogen technology at Yardney Battery Division

    NASA Technical Reports Server (NTRS)

    Bentley, J. G.; Hall, A. M.

    1987-01-01

    The current major activities in nickel hydrogen technology being addressed at Yardney Battery Division are outlined. Five basic topics are covered: an update on life-cycle testing of ManTech 50 AH NiH2 cells in the LEO regime; an overview of the Air Force/industry briefing; nickel electrode process upgrading; 4.5-inch cell development; and bipolar NiH2 battery development.

  12. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  13. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding when solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainty further increase the challenges of modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than those in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available to scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, computational performance was measured for three simulations with multi-million-grid models, including a simulation of the dissolution-diffusion-convection process, which requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurement confirmed that both simulators exhibit excellent

  14. Solving global shallow water equations on heterogeneous supercomputers

    PubMed Central

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Array) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement in performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation of the programming paradigms of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture of both the potential performance benefits and the programming efforts involved. PMID:28282428
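    A generalized partition scheme of this kind boils down to sizing each device's share of the problem domain in proportion to its throughput, so that CPU and accelerator finish each step at roughly the same time. A minimal sketch (the speed ratios are hypothetical, not measurements from Tianhe):

```python
def partition(n_cells, speeds):
    """Split n_cells among devices proportionally to their relative speeds,
    e.g. speeds = [cpu_rate, gpu_rate, ...] in cells per second."""
    total = sum(speeds)
    sizes = [int(n_cells * s / total) for s in speeds]
    sizes[-1] += n_cells - sum(sizes)  # assign rounding remainder to last device
    return sizes
```

    With a balanced split, per-step time is governed by the slowest device's share rather than by the slowest device outright, which is the premise behind the reported hybrid-node speedups.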

  15. RESEARCH AND TECHNOLOGY DIVISION REPORT FOR 1966.

    ERIC Educational Resources Information Center

    BAUM, C.

The work of the Research and Technology Division of System Development Corporation during 1966 is reported. The studies and activities whose progress is discussed include advanced programming, information processing research, programming systems, data base systems, language processing and retrieval, and behavioral gaming and simulation…

  16. Opportunities for leveraging OS virtualization in high-end supercomputing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke

    2010-11-01

    This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.

  17. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of five leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  18. Supercomputer modeling of hydrogen combustion in rocket engines

    NASA Astrophysics Data System (ADS)

    Betelin, V. B.; Nikitin, V. F.; Altukhov, D. I.; Dushin, V. R.; Koo, Jaye

    2013-08-01

Hydrogen, an ecologically clean fuel, is now very attractive to rocket engine designers. However, peculiarities of hydrogen combustion kinetics, such as zones of inverse dependence of the reaction rate on pressure, prevent hydrogen engines from being used in all stages without the support of other engine types, which often negates the ecological gains of using hydrogen. Computer-aided design of new, effective, and clean hydrogen engines requires mathematical tools for supercomputer modeling of hydrogen-oxygen component mixing and combustion in rocket engines. The paper presents the results of developing, verifying, and validating a mathematical model that makes it possible to simulate unsteady processes of ignition and combustion in rocket engines.

  19. Advanced Computing for Manufacturing.

    ERIC Educational Resources Information Center

    Erisman, Albert M.; Neves, Kenneth W.

    1987-01-01

    Discusses ways that supercomputers are being used in the manufacturing industry, including the design and production of airplanes and automobiles. Describes problems that need to be solved in the next few years for supercomputers to assume a major role in industry. (TW)

  20. The PMS project: Poor man's supercomputer

    NASA Astrophysics Data System (ADS)

    Csikor, F.; Fodor, Z.; Hegedüs, P.; Horváth, V. K.; Katz, S. D.; Piróth, A.

    2001-02-01

We briefly describe the Poor Man's Supercomputer (PMS) project carried out at Eötvös University, Budapest. The goal was to construct a cost-effective, scalable, fast parallel computer to perform numerical calculations of physical problems that can be implemented on a lattice with nearest-neighbour interactions. To this end we developed the PMS architecture using PC components and designed special, low-cost communication hardware and the driver software for Linux OS. Our first implementation of PMS includes 32 nodes (PMS1). The performance of PMS1 was tested by Lattice Gauge Theory simulations. Using pure SU(3) gauge theory or the bosonic part of the minimal supersymmetric extension of the standard model (MSSM) on PMS1, we obtained price-to-sustained-performance ratios of $3/Mflops and $0.60/Mflops for double and single precision operations, respectively. The design of the special hardware and the communication driver are freely available upon request for non-profit organizations.

  1. Collaborative Supercomputing for Global Change Science

    NASA Astrophysics Data System (ADS)

    Nemani, R.; Votava, P.; Michaelis, A.; Melton, F.; Milesi, C.

    2011-03-01

    There is increasing pressure on the science community not only to understand how recent and projected changes in climate will affect Earth's global environment and the natural resources on which society depends but also to design solutions to mitigate or cope with the likely impacts. Responding to this multidimensional challenge requires new tools and research frameworks that assist scientists in collaborating to rapidly investigate complex interdisciplinary science questions of critical societal importance. One such collaborative research framework, within the NASA Earth sciences program, is the NASA Earth Exchange (NEX). NEX combines state-of-the-art supercomputing, Earth system modeling, remote sensing data from NASA and other agencies, and a scientific social networking platform to deliver a complete work environment. In this platform, users can explore and analyze large Earth science data sets, run modeling codes, collaborate on new or existing projects, and share results within or among communities (see Figure S1 in the online supplement to this Eos issue (http://www.agu.org/eos_elec)).

  2. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    PubMed

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing already plays an active part in big data processing with the help of big data frameworks such as Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm and avoids idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved satisfactory performance on Tianhe-2 with very few modifications to existing applications implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  3. Advanced Thermal Batteries.

    DTIC Science & Technology

    1981-06-01

Advanced Thermal Batteries. National Union Electric Corporation, Advance Science Division, 1201 E. Bell Street, Bloomington, Illinois 61701. June 1981. Earlier results reported in "Advanced Thermal Batteries," AFAPL-TR-78-114 (December 1978) and "Advanced Thermal Batteries," AFAPL-TR-80-2017 (March 1980), Air Force Aero Propulsion Laboratory, Air Force Wright Aeronautical Laboratories.

  4. Cell cycles and cell division in the archaea.

    PubMed

    Samson, Rachel Y; Bell, Stephen D

    2011-06-01

    Until recently little was known about the cell cycle parameters and division mechanisms of archaeal organisms. Although this is still the case for the majority of archaea, significant advances have been made in some model species. The information that has been gleaned thus far points to a remarkable degree of diversity within the archaeal domain of life. More specifically, members of distinct phyla have very different chromosome copy numbers, replication control systems and even employ distinct machineries for cell division. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Progress and supercomputing in computational fluid dynamics; Proceedings of U.S.-Israel Workshop, Jerusalem, Israel, December 1984

    NASA Technical Reports Server (NTRS)

    Murman, E. M. (Editor); Abarbanel, S. S. (Editor)

    1985-01-01

    Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.

  6. An orthogonal wavelet division multiple-access processor architecture for LTE-advanced wireless/radio-over-fiber systems over heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Mahapatra, Chinmaya; Leung, Victor CM; Stouraitis, Thanos

    2014-12-01

The increase in internet traffic, the number of users, and the availability of mobile devices poses a challenge to wireless technologies. In the long-term evolution (LTE) advanced system, heterogeneous networks (HetNet) using centralized coordinated multipoint (CoMP) transmission of radio over optical fibers (LTE A-ROF) have provided a feasible way of satisfying user demands. In this paper, an orthogonal wavelet division multiple-access (OWDMA) processor architecture is proposed, which is shown to be better suited to LTE advanced systems than the orthogonal frequency division multiple access (OFDMA) used in LTE 3GPP rel. 8 systems (3GPP, http://www.3gpp.org/DynaReport/36300.htm). ROF systems are a viable alternative for satisfying large data demands; hence, performance in ROF systems is also evaluated. To validate the architecture, the circuit is designed and synthesized on a Xilinx Virtex-6 field-programmable gate array (FPGA). The synthesis results show that the circuit performs with a clock period as short as 7.036 ns (i.e., a maximum clock frequency of 142.13 MHz) for a transform size of 512. A pipelined version of the architecture reduces power consumption by approximately 89%. We compare our architecture with similar available architectures for resource utilization and timing, and provide a performance comparison with OFDMA systems for various quality metrics of communication systems. The OWDMA architecture is found to outperform OFDMA in bit error rate (BER) versus signal-to-noise ratio (SNR) in wireless channels as well as in ROF media. It also gives higher throughput and mitigates the adverse effect of a high peak-to-average power ratio (PAPR).
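
The orthogonality property that OWDMA-style schemes rely on can be illustrated with the simplest orthogonal wavelet, the one-level Haar transform. This sketch shows orthogonality and perfect reconstruction only; it is not the paper's 512-point FPGA design.

```python
# Minimal one-level orthogonal Haar wavelet transform (illustrative only).
import math

def haar_forward(x):
    """One level of the orthogonal Haar transform; len(x) must be even."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert haar_forward exactly (the transform is orthogonal)."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append(s * (a + d))
        x.append(s * (a - d))
    return x

signal = [4.0, 2.0, 5.0, 5.0]
a, d = haar_forward(signal)
rec = haar_inverse(a, d)   # recovers the original signal
```

Because the transform is orthogonal, it preserves signal energy and is inverted by its transpose, which is what allows the multiple-access channels to be separated without interference.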

  7. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    NASA Astrophysics Data System (ADS)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3  +  1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.

  8. Advanced Aerospace Materials by Design

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Djomehri, Jahed; Wei, Chen-Yu

    2004-01-01

The advances in the emerging field of nanophase thermal and structural composite materials; materials with embedded sensors and actuators for morphing structures; light-weight composite materials for energy and power storage; and large-surface-area materials for in-situ resource generation and waste recycling are expected to revolutionize the capabilities of virtually every system comprising future robotic and human Moon and Mars exploration missions. A high-performance multiscale simulation platform, including the computational capabilities and resources of Columbia, the new supercomputer, is being developed to discover, validate, and prototype the next generation of such advanced materials. This exhibit will describe the porting and scaling of multiscale physics-based core computer simulation codes for discovering and designing carbon nanotube-polymer composite materials for light-weight load-bearing structural and thermal protection applications.

  9. Solid State Division progress report for period ending September 30, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, P.H.; Hinton, L.W.

    1994-08-01

This report covers research progress in the Solid State Division from April 1, 1992, to September 30, 1993. During this period, the division conducted a broad, interdisciplinary materials research program with emphasis on theoretical solid state physics, neutron scattering, synthesis and characterization of materials, ion beam and laser processing, and the structure of solids and surfaces. This research effort was enhanced by new capabilities in atomic-scale materials characterization, new emphasis on the synthesis and processing of materials, and increased partnering with industry and universities. The theoretical effort included a broad range of analytical studies, as well as a new emphasis on numerical simulation stimulated by advances in high-performance computing and by strong interest in related division experimental programs. Superconductivity research continued to advance on a broad front, from fundamental mechanisms of high-temperature superconductivity to the development of new materials and processing techniques. The Neutron Scattering Program was characterized by a strong scientific user program and growing diversity, represented by new initiatives in complex fluids and residual stress. The national emphasis on materials synthesis and processing was mirrored in division research programs in thin-film processing, surface modification, and crystal growth. Research on advanced processing techniques such as laser ablation, ion implantation, and plasma processing was complemented by strong programs in the characterization of materials and surfaces, including ultrahigh-resolution scanning transmission electron microscopy, atomic-resolution chemical analysis, synchrotron x-ray research, and scanning tunneling microscopy.

  10. The Age of the Supercomputer Gives Way to the Age of the Super Infrastructure.

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    1997-01-01

    In October 1997, the National Science Foundation will discontinue financial support for two university-based supercomputer facilities to concentrate resources on partnerships led by facilities at the University of California, San Diego and the University of Illinois, Urbana-Champaign. The reconfigured program will develop more user-friendly and…

  11. Optimal wavelength-space crossbar switches for supercomputer optical interconnects.

    PubMed

    Roudas, Ioannis; Hemenway, B Roe; Grzybowski, Richard R; Karinou, Fotini

    2012-08-27

    We propose a most economical design of the Optical Shared MemOry Supercomputer Interconnect System (OSMOSIS) all-optical, wavelength-space crossbar switch fabric. It is shown, by analysis and simulation, that the total number of on-off gates required for the proposed N × N switch fabric can scale asymptotically as N ln N if the number of input/output ports N can be factored into a product of small primes. This is of the same order of magnitude as Shannon's lower bound for switch complexity, according to which the minimum number of two-state switches required for the construction of a N × N permutation switch is log2 (N!).
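
The claimed N ln N gate count can be compared numerically with Shannon's lower bound log2(N!); this is an illustrative check, not code from the paper.

```python
# Compare the asymptotic gate count N*ln(N) with Shannon's lower bound
# log2(N!) for an N x N permutation switch (illustrative sketch).
import math

def shannon_bound(n):
    """Minimum number of two-state switches for an n x n permutation
    switch: log2(n!)."""
    return math.log2(math.factorial(n))

def nlogn_gates(n):
    """Asymptotic on-off gate count claimed for the proposed fabric."""
    return n * math.log(n)

# By Stirling's approximation, log2(n!) ~ n*log2(n), so the ratio of the
# two counts stays bounded near 1 as n grows.
for n in (16, 64, 256):
    print(n, nlogn_gates(n) / shannon_bound(n))
```

The ratio stays of order one, which is the sense in which the proposed fabric matches the order of magnitude of Shannon's bound.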

  12. Argonne Discovery Yields Self-Healing Diamond-Like Carbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunningham, Greg; Jones, Katie Elyce

    We report that large-scale reactive molecular dynamics simulations carried out on the US Department of Energy’s IBM Blue Gene/Q Mira supercomputer at the Argonne Leadership Computing Facility, along with experiments conducted by researchers in Argonne’s Energy Systems Division, enabled the design of a “self-healing” anti-wear coating that drastically reduces friction and related degradation in engines and moving machinery. Now, the computational work advanced for this purpose is being used to identify the friction-fighting potential of other catalysts.

  14. Commercialization of Advanced Communications Technology Satellite (ACTS) technology

    NASA Astrophysics Data System (ADS)

    Plecity, Mark S.; Strickler, Walter M.; Bauer, Robert A.

    1996-03-01

In an ongoing effort to maintain United States leadership in communication satellite technology, the National Aeronautics and Space Administration (NASA) led the development of the Advanced Communications Technology Satellite (ACTS). NASA's ACTS program provides industry, academia, and government agencies the opportunity to perform both technology and telecommunication service experiments with a leading-edge communication satellite system. Over 80 organizations are using ACTS as a multiserver test bed to establish the communication technologies and services of the future. ACTS was designed to provide demand-assigned multiple access (DAMA) digital communications with a minimum switchable circuit bandwidth of 64 Kbps and a maximum channel bandwidth of 900 MHz. It can, therefore, serve thin routes as well as connect fiber backbones in supercomputer networks, span oceans, or restore full communications in the event of a natural or man-made disaster. Service can also be provided to terrestrial and airborne mobile users. Commercial applications of ACTS technologies include telemedicine, distance education, Department of Defense operations, mobile communications, aeronautical applications, terrestrial applications, and disaster recovery. This paper briefly describes the ACTS system and the enabling technologies employed by ACTS, including Ka-band hopping spot beams, on-board routing and switching, and rain fade compensation. When used in conjunction with a time division multiple access (TDMA) architecture, these technologies provide a higher-capacity, lower-cost satellite system. Furthermore, examples of completed user experiments, future experiments, and the plans of organizations to commercialize ACTS technology in their own future offerings are discussed.

  15. Cell division cycle 45 promotes papillary thyroid cancer progression via regulating cell cycle.

    PubMed

    Sun, Jing; Shi, Run; Zhao, Sha; Li, Xiaona; Lu, Shan; Bu, Hemei; Ma, Xianghua

    2017-05-01

Cell division cycle 45 has been reported to be overexpressed in some cancer-derived cell lines and was predicted to be a candidate oncogene in cervical cancer. However, the clinical and biological significance of cell division cycle 45 in papillary thyroid cancer had never been investigated. We determined the expression level and clinical significance of cell division cycle 45 using The Cancer Genome Atlas, quantitative real-time polymerase chain reaction, and immunohistochemistry. A marked upregulation of cell division cycle 45 was observed in papillary thyroid cancer tissues compared with adjacent normal tissues. Furthermore, overexpression of cell division cycle 45 positively correlates with more advanced clinical characteristics. Silencing of cell division cycle 45 suppressed the proliferation of papillary thyroid cancer cells via G1-phase arrest and the induction of apoptosis. The oncogenic activity of cell division cycle 45 was also confirmed in vivo. In conclusion, cell division cycle 45 may serve as a novel biomarker and a potential therapeutic target for papillary thyroid cancer.

  16. Use of high performance networks and supercomputers for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  17. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

An important problem in scientific computing is finding a few eigenvalues and corresponding eigenvectors of a very large, sparse matrix. The most popular methods for solving these problems are based on projection techniques onto appropriate subspaces. The main attraction of these methods is that they require the matrix only in the form of matrix-vector multiplications. The supercomputer implementations of two such methods for symmetric matrices, namely Lanczos's method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and disadvantages of each method are compared, and implementation aspects are discussed. Numerical experiments on a one-processor CRAY-2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
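
The key property named above, that the matrix is needed only through matrix-vector products, can be seen in a bare-bones Lanczos iteration. The small dense test matrix is illustrative; in practice `matvec` would be a sparse product distributed across the machine.

```python
# Pure-Python sketch of the Lanczos iteration: the matrix A is touched only
# through matvec(), which is what makes the method attractive on
# supercomputers. The 4x4 symmetric matrix is an illustrative stand-in.
import math

A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0],
     [0.0, 0.0, 1.0, 1.0]]

def matvec(v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lanczos(m):
    """Run m Lanczos steps; return the orthonormal basis vectors and the
    diagonal (alpha) and off-diagonal (beta) entries of the tridiagonal
    projection of A."""
    n = len(A)
    v = [1.0 / math.sqrt(n)] * n        # normalized start vector
    v_prev = [0.0] * n
    beta = 0.0
    basis, alphas, betas = [], [], []
    for _ in range(m):
        w = matvec(v)
        alpha = dot(w, v)
        # Three-term recurrence: orthogonalize against the two latest vectors.
        w = [w[i] - alpha * v[i] - beta * v_prev[i] for i in range(n)]
        basis.append(v)
        alphas.append(alpha)
        beta = math.sqrt(dot(w, w))
        betas.append(beta)
        v_prev, v = v, [x / beta for x in w]
    return basis, alphas, betas

basis, alphas, betas = lanczos(3)
```

In exact arithmetic the three-term recurrence keeps all basis vectors mutually orthogonal, and the eigenvalues of the small tridiagonal matrix (alphas, betas) approximate extreme eigenvalues of A.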

  18. Vectorized program architectures for supercomputer-aided circuit design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzoli, V.; Ferlito, M.; Neri, A.

    1986-01-01

Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size, such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the "semantic" vectorization of microwave circuit design software. Speed-up factors on the order of 50 can be obtained on a typical vector processor (Cray X-MP) with respect to the most powerful scalar computers (CDC 7600), with cost reductions of more than one order of magnitude. This could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.

  19. Divisions of geologic time-major chronostratigraphic and geochronologic units

    USGS Publications Warehouse

    ,

    2010-01-01

    Effective communication in the geosciences requires consistent uses of stratigraphic nomenclature, especially divisions of geologic time. A geologic time scale is composed of standard stratigraphic divisions based on rock sequences and is calibrated in years. Over the years, the development of new dating methods and the refinement of previous methods have stimulated revisions to geologic time scales. Advances in stratigraphy and geochronology require that any time scale be periodically updated. Therefore, Divisions of Geologic Time, which shows the major chronostratigraphic (position) and geochronologic (time) units, is intended to be a dynamic resource that will be modified to include accepted changes of unit names and boundary age estimates. This fact sheet is a modification of USGS Fact Sheet 2007-3015 by the U.S. Geological Survey Geologic Names Committee.

  20. Analytical Chemistry Division annual progress report for period ending December 31, 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

The Analytical Chemistry Division of Oak Ridge National Laboratory (ORNL) is a large and diversified organization. As such, it serves a multitude of functions for a clientele that exists both in and outside of ORNL. These functions fall into the following general categories: (1) Analytical Research, Development, and Implementation. The division maintains a program to conceptualize, investigate, develop, assess, improve, and implement advanced technology for chemical and physicochemical measurements. Emphasis is on problems and needs identified with ORNL and Department of Energy (DOE) programs; however, attention is also given to advancing the analytical sciences themselves. (2) Programmatic Research, Development, and Utilization. The division carries out a wide variety of chemical work that typically involves analytical research and/or development plus the utilization of analytical capabilities to expedite programmatic interests. (3) Technical Support. The division performs chemical and physicochemical analyses of virtually all types. The Analytical Chemistry Division is organized into four major sections, each of which may carry out any of the three types of work mentioned above. Chapters 1 through 4 of this report highlight progress within the four sections during the period January 1 to December 31, 1988. A brief discussion of the division's role in an especially important environmental program is given in Chapter 5. Information about quality assurance, safety, and training programs is presented in Chapter 6, along with a tabulation of analyses rendered. Publications, oral presentations, professional activities, educational programs, and seminars are cited in Chapters 7 and 8.

  1. Code Optimization and Parallelization on the Origins: Looking from Users' Perspective

    NASA Technical Reports Server (NTRS)

    Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)

    2002-01-01

    Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.

  2. Calculation of Free Energy Landscape in Multi-Dimensions with Hamiltonian-Exchange Umbrella Sampling on Petascale Supercomputer.

    PubMed

    Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît

    2012-11-13

An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multiple dimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows, distributed to cover the space of order parameters, with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternately along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of a parallel/parallel multiple copy protocol at the MPI level, and therefore can fully exploit the computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative to the crystal structure, was calculated using the US/H-REMD method. The results were then used to estimate the absolute binding free energy of the calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
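
The exchange step between neighboring umbrella windows can be sketched with a standard Metropolis acceptance test on the change in total bias energy. The harmonic restraint form, the force constant, and beta are hypothetical stand-ins, not parameters from the paper.

```python
# Illustrative Metropolis acceptance test for a Hamiltonian exchange between
# two umbrella-sampling windows (hypothetical 1D harmonic restraints).
import math
import random

def bias(center, k, x):
    """Harmonic umbrella restraint U(x) = 0.5*k*(x - center)^2."""
    return 0.5 * k * (x - center) ** 2

def exchange_accept(x_i, x_j, c_i, c_j, k=10.0, beta=1.0, rng=random.random):
    """Accept a swap of configurations x_i, x_j between windows centered at
    c_i, c_j with probability min(1, exp(-beta*Delta)), where Delta is the
    change in total bias energy caused by the exchange."""
    delta = (bias(c_i, k, x_j) + bias(c_j, k, x_i)
             - bias(c_i, k, x_i) - bias(c_j, k, x_j))
    return delta <= 0.0 or rng() < math.exp(-beta * delta)
```

Swaps that move each configuration closer to the other window's restraint center lower the total bias energy and are always accepted, which is what drives the enhanced mixing across the 2D grid of windows.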

  3. Supercomputer analysis of purine and pyrimidine metabolism leading to DNA synthesis.

    PubMed

    Heinmets, F

    1989-06-01

    A model system is established to analyze purine and pyrimidine metabolism leading to DNA synthesis. The principal aim is to explore the flow and regulation of terminal deoxynucleoside triphosphates (dNTPs) under various input and parametric conditions. A series of flow equations is established and subsequently converted to differential equations. These are programmed (in Fortran) and analyzed on a Cray X-MP/48 supercomputer. The pool concentrations are presented as a function of time under conditions in which various pertinent parameters of the system are modified. The system is formulated by 100 differential equations.
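The workflow described (flow equations converted to differential equations and integrated over time) can be illustrated with a toy two-pool model; the pools and rate constants below are hypothetical, not taken from the 100-equation system:

```python
def step(pools, rates, dt):
    """One forward-Euler step of a miniature metabolic flow model
    (hypothetical): source -> dNDP pool -> dNTP pool -> DNA, with
    first-order rate constants. Each flow equation becomes one ODE term."""
    dndp, dntp = pools
    k_in, k_phos, k_poly = rates
    d_dndp = k_in - k_phos * dndp            # synthesis in, phosphorylation out
    d_dntp = k_phos * dndp - k_poly * dntp   # phosphorylation in, polymerization out
    return (dndp + dt * d_dndp, dntp + dt * d_dntp)

def integrate(pools, rates, dt, n_steps):
    """Advance the pool concentrations through n_steps time steps."""
    for _ in range(n_steps):
        pools = step(pools, rates, dt)
    return pools
```

With rates (1.0, 0.5, 0.25) the pools approach the steady state (2, 4), the kind of time-course output the abstract describes plotting under varied parameters.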

  4. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray X-MP.

  5. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
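The volume-bucket idea behind VPIN can be sketched as below; this is a simplified reading in which trade sides are given directly, whereas practical VPIN implementations classify buy/sell volume statistically (bulk volume classification):

```python
def vpin(trades, bucket_volume, window):
    """Sketch of Volume-Synchronized Probability of Informed Trading.

    trades: iterable of (volume, side) with side +1 for buy, -1 for sell
    (an assumption for illustration; real implementations infer sides).
    Trades are grouped into buckets of equal volume; VPIN is the average
    order-flow imbalance |buy - sell| per bucket over the last `window`
    buckets, normalized by the bucket volume.
    """
    imbalances = []
    buy = sell = filled = 0.0
    for vol, side in trades:
        while vol > 0:
            take = min(vol, bucket_volume - filled)   # spill across buckets
            if side > 0:
                buy += take
            else:
                sell += take
            filled += take
            vol -= take
            if filled >= bucket_volume:
                imbalances.append(abs(buy - sell))
                buy = sell = filled = 0.0
    recent = imbalances[-window:]
    return sum(recent) / (len(recent) * bucket_volume) if recent else 0.0
```

A one-sided order flow drives the metric toward 1, balanced flow toward 0, which is why a rising VPIN can serve as the kind of early-warning signal the paper tests.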

  6. Internal fluid mechanics research on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.

    1988-01-01

    The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.

  7. AIC Computations Using Navier-Stokes Equations on Single Image Supercomputers For Design Optimization

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru

    2004-01-01

    A procedure to accurately generate aerodynamic influence coefficients (AIC) using a Navier-Stokes solver, including grid deformation, is presented. Preliminary results show good agreement between experimental and computed flutter boundaries for a rectangular wing. A full wing-body configuration of an orbital space plane is selected for demonstration on a large number of processors. In the final paper the AIC of the full wing-body configuration will be computed, and the scalability of the procedure on supercomputers will be demonstrated.

  8. Achieving supercomputer performance for neural net simulation with an array of digital signal processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, U.A.; Baumle, B.; Kohler, P.

    1992-10-01

    Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.

  9. Fractions division knowledge of elementary school student: The case of Lala

    NASA Astrophysics Data System (ADS)

    Purnomo, Yoppy Wahyu; Widowati, Chairunnisa; Aziz, Tian Abdul; Pramudiani, Puri

    2017-08-01

    Division of fractions is often taught as a mysterious rule that is not grounded in conceptual knowledge. The purpose of the study was to explore an elementary school student's knowledge of fraction division. For this purpose, a case study was conducted. The participant of the study was Lala (a pseudonym), who was enrolled at an elementary school in East Jakarta. The data were collected by administering a written test and a semi-structured interview, respectively. The findings of the study indicated that Lala was able to flexibly describe a strategy for fraction division as the inverse of repeated addition. She also had a basic understanding of the fraction division concept as equal sharing, but when she was challenged with more advanced problems, she performed poorly. Lala also encountered difficulty when dealing with a fraction-divided-by-fraction problem, which she interpreted as a subtraction problem. In this case, her procedural knowledge was likely more salient than her conceptual knowledge.
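The two interpretations at play, the procedural invert-and-multiply rule versus division as repeated subtraction (the measurement reading, inverse of repeated addition), can be contrasted in a short sketch:

```python
from fractions import Fraction

def divide_invert_multiply(a, b):
    """The procedural 'invert and multiply' rule: a / b = a * (1/b)."""
    return a * Fraction(b.denominator, b.numerator)

def divide_by_repeated_subtraction(a, b):
    """Measurement interpretation: how many times does b fit into a?
    Returns (whole number of fits, leftover expressed as a fraction of b)."""
    count = 0
    while a >= b:
        a -= b
        count += 1
    return count, a / b
```

For example, 3/4 divided by 1/8 is 6 under both readings: one eighth fits into three quarters exactly six times, which is the conceptual grounding the rule hides.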

  10. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems, making the process impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.

  11. Overview of NASA Glenn Research Center's Communications and Intelligent Systems Division

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.

    2016-01-01

    The Communications and Intelligent Systems Division provides expertise, plans, conducts and directs research and engineering development in the competency fields of advanced communications and intelligent systems technologies for application in current and future aeronautics and space systems.

  12. A performance comparison of scalar, vector, and concurrent vector computers including supercomputers for modeling transport of reactive contaminants in groundwater

    NASA Astrophysics Data System (ADS)

    Tripathi, Vijay S.; Yeh, G. T.

    1993-06-01

    Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.

  13. 49 CFR 177.841 - Division 6.1 and Division 2.3 materials.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Division 6.1 and Division 2.3 materials. 177.841... PUBLIC HIGHWAY Loading and Unloading § 177.841 Division 6.1 and Division 2.3 materials. (See also § 177...) or Division 6.1 (poisonous) materials. The transportation of a Division 2.3 (poisonous gas) or...

  14. Supercomputer requirements for selected disciplines important to aerospace

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1989-01-01

    Speed and memory requirements placed on supercomputers by five different disciplines important to aerospace are discussed and compared with the capabilities of various existing computers and those projected to be available before the end of this century. The disciplines chosen for consideration are turbulence physics, aerodynamics, aerothermodynamics, chemistry, and human vision modeling. Example results for problems illustrative of those currently being solved in each of the disciplines are presented and discussed. Limitations imposed on physical modeling and geometrical complexity by the need to obtain solutions in practical amounts of time are identified. Computational challenges for the future, for which either some or all of the current limitations are removed, are described. Meeting some of the challenges will require computer speeds in excess of exaflop/s (10^18 flop/s) and memories in excess of petawords (10^15 words).

  15. An analysis of file migration in a UNIX supercomputing environment

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1992-01-01

    The supercomputer center at the National Center for Atmospheric Research (NCAR) migrates large numbers of files to and from its mass storage system (MSS) because there is insufficient space to store them on the Cray supercomputer's local disks. This paper presents an analysis of file migration data collected over two years. The analysis shows that requests to the MSS are periodic, with one-day and one-week periods. Read requests to the MSS account for the majority of the periodicity, as write requests are relatively constant over the course of a week. Additionally, reads show a far greater fluctuation than writes over a day and a week, since reads are driven by human users while writes are machine-driven.
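Periodicity of the kind reported here can be detected with a plain autocorrelation at the candidate lags; the synthetic hourly series below is illustrative, not NCAR's trace data:

```python
import math

def autocorrelation(series, lag):
    """Sample autocorrelation of a series at a given lag; values near 1
    indicate a strong repeating pattern at that period."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    if var == 0:
        return 0.0
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

# Synthetic hourly request counts with a daily cycle (two weeks of data).
hourly = [100 + 50 * math.sin(2 * math.pi * h / 24) for h in range(24 * 14)]
```

For a series with a daily rhythm, the autocorrelation at lag 24 (hours) is close to 1, while half-period lags come out strongly negative; the same test at lag 168 would expose a weekly cycle.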

  16. Young Kim, PhD | Division of Cancer Prevention

    Cancer.gov

    Young S Kim, PhD, joined the Division of Cancer Prevention at the National Cancer Institute in 1998 as a Program Director who oversees and monitors NCI grants in the area of Nutrition and Cancer. She serves as an expert in nutrition, molecular biology, and genomics as they relate to cancer prevention. Dr. Kim assists with research initiatives that will advance nutritional

  17. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize the communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership-class supercomputer at Oak Ridge National Laboratory.
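A common way to score such mappings is a hop-bytes metric (message volume weighted by network distance); the toy line topology and mappings below are illustrative, not the paper's Titan experiments:

```python
def hop_bytes(comm, mapping, distance):
    """Total hop-bytes for a task-to-node mapping: the sum over
    communicating task pairs of message volume times the network distance
    between their assigned nodes. Lower is better."""
    total = 0
    for (t1, t2), vol in comm.items():
        total += vol * distance(mapping[t1], mapping[t2])
    return total

# Toy example: 4 tasks with nearest-neighbour communication, placed on a
# line of 4 nodes where distance = number of hops between node indices.
comm = {(0, 1): 100, (1, 2): 100, (2, 3): 100}
line_distance = lambda a, b: abs(a - b)
default = {0: 0, 1: 2, 2: 1, 3: 3}     # scheduler-assigned, non-contiguous
reordered = {0: 0, 1: 1, 2: 2, 3: 3}   # locality-aware reordering
```

Here the locality-aware reordering cuts hop-bytes from 500 to 300; reordering methods such as spectral bisection search for mappings that minimize exactly this kind of objective.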

  18. Chemistry Division annual progress report for period ending April 30, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poutsma, M.L.; Ferris, L.M.; Mesmer, R.E.

    1993-08-01

    The Chemistry Division conducts basic and applied chemical research on projects important to DOE's missions in sciences, energy technologies, advanced materials, and waste management/environmental restoration; it also conducts complementary research for other sponsors. The research is arranged according to: coal chemistry, aqueous chemistry at high temperatures and pressures, geochemistry, chemistry of advanced inorganic materials, structure and dynamics of advanced polymeric materials, chemistry of transuranium elements and compounds, chemical and structural principles in solvent extraction, surface science related to heterogeneous catalysis, photolytic transformations of hazardous organics, DNA sequencing and mapping, and special topics.

  19. Physics division annual report 2000.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thayer, K., ed.

    2001-10-04

    of matter impacts the structure of nuclei and extended the exquisite sensitivity of the Atom-Trap-Trace-Analysis technique to new species and applications. All of this progress was built on advances in nuclear theory, which the Division pursues at the quark, hadron, and nuclear collective degrees of freedom levels. These are just a few of the highlights in the Division's research program. The results reflect the talents and dedication of the Physics Division staff and the visitors, guests and students who bring so much to the research.

  20. UNIX security in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1989-01-01

    The author critiques some security mechanisms in most versions of the Unix operating system and suggests more effective tools that either have working prototypes or have been implemented, for example in secure Unix systems. Although no computer (not even a secure one) is impenetrable, breaking into systems with these alternate mechanisms will cost more, require more skill, and be more easily detected than penetrations of systems without these mechanisms. The mechanisms described fall into four classes (with considerable overlap). User authentication at the local host affirms the identity of the person using the computer. The principle of least privilege dictates that properly authenticated users should have rights precisely sufficient to perform their tasks, and system administration functions should be compartmentalized; to this end, access control lists or capabilities should either replace or augment the default Unix protection system, and mandatory access controls implementing multilevel security models and integrity mechanisms should be available. Since most users access supercomputing environments using networks, the third class of mechanisms augments authentication (where feasible). As no security is perfect, the fourth class of mechanism logs events that may indicate possible security violations; this will allow the reconstruction of a successful penetration (if discovered), or possibly the detection of an attempted penetration.

  1. Conceptual Assessment Tool for Advanced Undergraduate Electrodynamics

    ERIC Educational Resources Information Center

    Baily, Charles; Ryan, Qing X.; Astolfi, Cecilia; Pollock, Steven J.

    2017-01-01

    As part of ongoing investigations into student learning in advanced undergraduate courses, we have developed a conceptual assessment tool for upper-division electrodynamics (E&M II): the Colorado UppeR-division ElectrodyNamics Test (CURrENT). This is a free response, postinstruction diagnostic with 6 multipart questions, an optional 3-question…

  2. Supercomputing resources empowering superstack with interactive and integrated systems

    NASA Astrophysics Data System (ADS)

    Rückemann, Claus-Peter

    2012-09-01

    This paper presents the results from the development and implementation of Superstack algorithms to be used dynamically with integrated systems and supercomputing resources. Processing of geophysical data, termed geoprocessing, is an essential part of the analysis of geoscientific data. The theory of Superstack algorithms and their practical application on modern computing architectures were inspired by developments in seismic data processing, introduced on mainframes and leading in recent years to high-end scientific computing applications. Several stacking algorithms are known, but for seismic data with a low signal-to-noise ratio, iterative algorithms like the Superstack can support analysis and interpretation. The new Superstack algorithms are in use with wave theory and optical phenomena on highly performant computing resources, for huge data sets as well as for sophisticated application scenarios in geosciences and archaeology.
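One plausible form of iterative stacking, re-weighting each trace by its agreement with the current stack, can be sketched as follows; this is an assumed reading of the Superstack idea, not the author's published algorithm:

```python
def correlate(a, b):
    """Normalized correlation between two traces."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def superstack(traces, n_iter=5):
    """Iterative stack (hypothetical sketch): start from the plain mean,
    then re-weight each trace by its (positive) correlation with the
    current stack and re-stack. Incoherent or anti-correlated traces are
    progressively down-weighted, raising the signal-to-noise ratio."""
    n = len(traces[0])
    stack = [sum(t[i] for t in traces) / len(traces) for i in range(n)]
    for _ in range(n_iter):
        weights = [max(correlate(t, stack), 0.0) for t in traces]
        wsum = sum(weights) or 1.0
        stack = [sum(w * t[i] for w, t in zip(weights, traces)) / wsum
                 for i in range(n)]
    return stack
```

With three coherent traces and one anti-correlated trace, the iteration drives the bad trace's weight to zero and recovers the clean signal, where a plain mean would only attenuate it.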

  3. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne platforms, etc.), observational facilities (meteorological, eddy covariance, etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling, and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies, like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, specifically, for large-scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
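The serial kernel that the parallel MSTC implementation distributes is standard k-means; a minimal sketch (pure Python, illustrative data, not the paper's code):

```python
import random

def kmeans(points, k, n_iter=20, seed=0):
    """Plain k-means: alternate assigning each point to its nearest center
    and recomputing each center as the mean of its members. The parallel
    MSTC approach distributes the assignment and reduction steps across
    MPI ranks and accelerators."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:   # keep the old center if a cluster empties out
                centers[i] = tuple(sum(d) / len(members) for d in zip(*members))
    return centers
```

The assignment loop is embarrassingly parallel and the center update is a reduction, which is what makes the algorithm a natural fit for hybrid MPI + GPU execution.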

  4. A high performance linear equation solver on the VPP500 parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi

    1994-12-31

    This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500: (1) scalability for an arbitrary number of processors up to 222 processors, (2) flexible data transfer among processors provided by a crossbar interconnection network, (3) vector processing capability on each processor, and (4) overlapped computation and transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS performance with 100 processors in the LINPACK Highly Parallel Computing benchmark.
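A right-looking blocked LU factorization, the structure behind the solver described above, can be sketched serially (no pivoting, illustrative only; the production solver distributes the panel and trailing update across processors):

```python
def lu_blocked(A, nb):
    """Right-looking blocked LU factorization without pivoting (sketch).
    Returns the factors packed in one matrix: the unit-lower L below the
    diagonal and U on and above it. For each block of nb columns: factor
    the panel, solve for the U block row, then apply a rank-nb update to
    the trailing submatrix (the step that dominates and parallelizes well)."""
    n = len(A)
    A = [row[:] for row in A]   # work on a copy
    for k0 in range(0, n, nb):
        k1 = min(k0 + nb, n)
        # 1) unblocked LU of the panel A[k0:n, k0:k1]
        for k in range(k0, k1):
            for i in range(k + 1, n):
                A[i][k] /= A[k][k]
                for j in range(k + 1, k1):
                    A[i][j] -= A[i][k] * A[k][j]
        # 2) U12 = L11^{-1} A12 (forward solve with the unit-lower block)
        for k in range(k0, k1):
            for i in range(k + 1, k1):
                for j in range(k1, n):
                    A[i][j] -= A[i][k] * A[k][j]
        # 3) trailing update A22 -= L21 @ U12 (the rank-nb matrix multiply)
        for i in range(k1, n):
            for k in range(k0, k1):
                for j in range(k1, n):
                    A[i][j] -= A[i][k] * A[k][j]
    return A
```

Casting most of the work as the step-3 matrix multiply is what lets vector pipelines and overlapped communication reach a high fraction of peak.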

  5. Performance of the Widely-Used CFD Code OVERFLOW on the Pleiades Supercomputer

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2017-01-01

    Computational performance studies were made for NASA's widely used Computational Fluid Dynamics code OVERFLOW on the Pleiades Supercomputer. Two test cases were considered: a full launch vehicle with a grid of 286 million points and a full rotorcraft model with a grid of 614 million points. Computations using up to 8000 cores were run on Sandy Bridge and Ivy Bridge nodes. Performance was monitored using times reported in the day files from the Portable Batch System utility. Results for two grid topologies are presented and compared in detail. Observations and suggestions for future work are made.

  6. Activities of the Structures Division, Lewis Research Center

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The purpose of the NASA Lewis Research Center, Structures Division's 1990 Annual Report is to give a brief, but comprehensive, review of the technical accomplishments of the Division during the past calendar year. The report is organized topically to match the Center's Strategic Plan. Over the years, the Structures Division has developed the technology base necessary for improving the future of aeronautical and space propulsion systems. In the future, propulsion systems will need to be lighter, to operate at higher temperatures and to be more reliable in order to achieve higher performance. Achieving these goals is complex and challenging. Our approach has been to work cooperatively with both industry and universities to develop the technology necessary for state-of-the-art advancement in aeronautical and space propulsion systems. The Structures Division consists of four branches: Structural Mechanics, Fatigue and Fracture, Structural Dynamics, and Structural Integrity. This publication describes the work of the four branches by three topic areas of Research: (1) Basic Discipline; (2) Aeropropulsion; and (3) Space Propulsion. Each topic area is further divided into the following: (1) Materials; (2) Structural Mechanics; (3) Life Prediction; (4) Instruments, Controls, and Testing Techniques; and (5) Mechanisms. The publication covers 78 separate topics with a bibliography containing 159 citations. We hope you will find the publication interesting as well as useful.

  7. PREFACE: HITES 2012: 'Horizons of Innovative Theories, Experiments, and Supercomputing in Nuclear Physics'

    NASA Astrophysics Data System (ADS)

    Hecht, K. T.

    2012-12-01

    This volume contains the contributions of the speakers of an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at the departmental, university, national, and international levels. Signed: Ted Hecht, University of Michigan. Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame), Baha Balantekin (University of Wisconsin), Bruce Barrett (University of Arizona), Umit Catalyurek (Ohio State University), David Dean (Oak Ridge National Laboratory), Jutta Escher, Chair (Lawrence Livermore National Laboratory), Jorge Hirsch (UNAM, Mexico), David Rowe (University of Toronto), Brad Sherill (Michigan State University), Joel Tohline (Louisiana State University), Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University), Mark Caprio (University of Notre Dame), Tomas Dytrych (Louisiana State University), Ana Georgieva (INRNE, Bulgaria), Kristina Launey, Co-chair (Louisiana State University), Gabriella Popa (Ohio University Zanesville), James Vary, Co-chair (Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University), Charlie Rasco (Louisiana State University), Karen Richard, Coordinator (Louisiana State University).

  8. Cyberdyn supercomputer - a tool for imaging geodynamic processes

    NASA Astrophysics Data System (ADS)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes that develop within the deep interior of our planet, yet have a significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high performance computing facilities. Nowadays, an increasing number of research centers worldwide decide to make use of such powerful and fast computers for simulating complex phenomena involving fluid dynamics and for gaining deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible has been jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted for about three years, ending in October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a QLogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, by employing only a fraction of the computing power (20%). After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the East Carpathians tectonic and geodynamic evolution, including the Neogene magmatic activity, and the intriguing
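The quoted figures allow a quick consistency check: a 11.2 TFlops design peak over 1344 cores implies roughly 8.3 GFLOPS per core. The helper below is a back-of-envelope sketch; the clock-rate and FLOPs-per-cycle breakdown is hardware-specific and not given in the abstract:

```python
def peak_flops(cores, per_core_flops):
    """Theoretical peak of a homogeneous cluster: cores x per-core peak.
    (Per-core peak itself would be clock rate x FLOPs issued per cycle.)"""
    return cores * per_core_flops

# Figures quoted in the abstract; the per-core value is derived, not published.
design_peak = 11.2e12          # 11.2 TFlops
cores = 1344
per_core = design_peak / cores # ~8.33 GFLOPS per core implied by the peak
```

The same arithmetic says the 20% fraction used in the speed test corresponds to roughly 270 cores.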

  9. Development of the general interpolants method for the CYBER 200 series of supercomputers

    NASA Technical Reports Server (NTRS)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  10. Massively parallel algorithm and implementation of RI-MP2 energy calculation for peta-scale many-core supercomputers.

    PubMed

    Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito

    2016-11-15

    A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing peta-flop-class many-core supercomputers are presented. Some improvements from the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been performed: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.
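A dual-level hierarchical parallelization can be pictured as splitting the world communicator into groups (cf. MPI_Comm_split): the upper level distributes work batches across groups while the lower level parallelizes within each group. The sketch below shows only the rank arithmetic, with the group size as an assumed parameter:

```python
def split_ranks(world_size, group_size):
    """Dual-level hierarchy sketch: map each global MPI rank to a
    (group id, local rank) pair. The group id indexes the upper
    parallelization level, the local rank the lower one."""
    assert world_size % group_size == 0, "world size must divide evenly"
    return [(rank // group_size, rank % group_size)
            for rank in range(world_size)]
```

With mpi4py this mapping would correspond to `comm.Split(color=rank // group_size, key=rank % group_size)`; keeping most traffic inside a group is what reduces the network communication overhead at 10,000+ processes.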

  11. Integration of Titan supercomputer at OLCF with ATLAS Production System

    NASA Astrophysics Data System (ADS)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single node workloads in parallel on Titan's multi-core worker nodes. It provides for running of standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to

  12. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    NASA Astrophysics Data System (ADS)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors, starting with NVIDIA GPUs and, more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
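The staggered conjugate gradient mentioned above is, at its core, the standard CG iteration applied to the fermion matrix. A generic dense-matrix CG in Python illustrates the iteration that optimized kernels such as QPhiX and QUDA implement (illustrative only; MILC's production solver operates on lattice fields, not dense matrices):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Textbook CG for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # conjugate direction update
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

The performance work in the paper concerns how the matrix-vector product at the heart of this loop is vectorized and threaded on KNL and GPUs.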

  13. Lipid Cell Biology: A Focus on Lipids in Cell Division.

    PubMed

    Storck, Elisabeth M; Özbalci, Cagakan; Eggert, Ulrike S

    2018-06-20

    Cells depend on hugely diverse lipidomes for many functions. The actions and structural integrity of the plasma membrane and most organelles also critically depend on membranes and their lipid components. Despite the biological importance of lipids, our understanding of lipid engagement, especially the roles of lipid hydrophobic alkyl side chains, in key cellular processes is still developing. Emerging research has begun to dissect the importance of lipids in intricate events such as cell division. This review discusses how these structurally diverse biomolecules are spatially and temporally regulated during cell division, with a focus on cytokinesis. We analyze how lipids facilitate changes in cellular morphology during division and how they participate in key signaling events. We identify which cytokinesis proteins are associated with membranes, suggesting lipid interactions. More broadly, we highlight key unaddressed questions in lipid cell biology and techniques, including mass spectrometry, advanced imaging, and chemical biology, which will help us gain insights into the functional roles of lipids.

  14. Requirements and Usage of NVM in Advanced Onboard Data Processing Systems

    NASA Technical Reports Server (NTRS)

    Some, R.

    2001-01-01

    This viewgraph presentation gives an overview of the requirements and uses of non-volatile memory (NVM) in advanced onboard data processing systems. Supercomputing in space presents the only viable approach to the bandwidth problem (can't get data down to Earth), controlling constellations of cooperating satellites, reducing mission operating costs, and real-time intelligent decision making and science data gathering. Details are given on the REE vision and impact on NASA and Department of Defense missions, objectives of REE, baseline architecture, and issues. NVM uses and requirements are listed.

  15. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE PAGES

    Wang, Bei; Ethier, Stephane; Tang, William; ...

    2017-06-29

The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.
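The inter-node 2D domain decomposition described above can be sketched as a mapping from a process rank to a rectangular subdomain of the grid. The even-block splitting rule below is a generic illustration, not GTC-P's actual implementation:

```python
def decompose_2d(n_radial, n_toroidal, rank, nprocs_r, nprocs_t):
    """Map a rank in an nprocs_r x nprocs_t process grid onto the
    half-open index ranges [lo, hi) of the subdomain it owns."""
    pr, pt = rank // nprocs_t, rank % nprocs_t  # 2D process coordinates

    def span(n, p, nprocs):
        # contiguous block per process, remainder spread over early ranks
        base, rem = divmod(n, nprocs)
        lo = p * base + min(p, rem)
        return lo, lo + base + (1 if p < rem else 0)

    return {"radial": span(n_radial, pr, nprocs_r),
            "toroidal": span(n_toroidal, pt, nprocs_t)}

# rank 5 in a 2 x 4 process grid over a 100 x 64 (radial x toroidal) grid
sub = decompose_2d(100, 64, 5, 2, 4)
```

On top of this grid split, a particle decomposition further divides the particles within each subdomain among additional ranks.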

  16. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.

  17. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates.

  18. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates.

  19. The Office of the Materials Division

    NASA Technical Reports Server (NTRS)

Ramsey, Amanda J.

    2004-01-01

I was assigned to the Materials Division, which consists of the following branches: the Advanced Metallics Branch/5120-RMM, Ceramics Branch/5130-RMC, Polymers Branch/5150-RMP, and the Durability and Protective Coatings Branch/5160-RMD. Mrs. Pamela Spinosi is my assigned mentor. She was assisted by Ms. Raysa Rodriguez/5100-RM and Mrs. Denise Prestien/5100-RM, who are both employed by InDyne, Inc. My primary assignment this past summer was working directly with Ms. Rodriguez, assisting her with setting up the Integrated Financial Management Program (IFMP) 5130-RMC/Branch procedures and logs. These duties consisted of creating various spreadsheets for each individual branch member, which were updated daily. It was not hard to familiarize myself with these duties, since this is my second summer working with Ms. Rodriguez at NASA Glenn Research Center. I assisted RMC in ordering laboratory supplies and equipment for the Basic Materials Laboratory (Building 106) using the IFMP/Purchase Card (P-card), a NASA-wide software program. I entered new Travel Authorizations for the 5130-RMC Civil Servant Branch Members into the IFMP/Travel and Requisitions System. I also entered and completed Travel Vouchers for the 5130-RMC Ceramics Branch. I assisted the Division Office in creating a new Emergency Contact list for the Materials Division. I worked with Dr. Hugh Gray, the Division Chief, and Dr. Ajay Misra, the 5130-RMC Branch Chief, on priority action items, with a close deadline, for a large NASA proposal. Another project was working closely with Ms. Rodriguez in organizing and preparing for Dr. Ajay K. Misra's SESCDP (two-year detail). This consisted of organizing files, file folders, and personal information, recording all data onto CDs, and printing all presentations for display in binders. I attended numerous branch meetings and observed many changes in the branch management organization.

  20. Advanced planetary studies

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Results of planetary advanced studies and planning support provided by Science Applications, Inc. staff members to Earth and Planetary Exploration Division, OSSA/NASA, for the period 1 February 1981 to 30 April 1982 are summarized. The scope of analyses includes cost estimation, planetary missions performance, solar system exploration committee support, Mars program planning, Galilean satellite mission concepts, and advanced propulsion data base. The work covers 80 man-months of research. Study reports and related publications are included in a bibliography section.

  1. Supercomputer description of human lung morphology for imaging analysis.

    PubMed

    Martonen, T B; Hwang, D; Guan, X; Fleming, J S

    1998-04-01

    A supercomputer code that describes the three-dimensional branching structure of the human lung has been developed. The algorithm was written for the Cray C94. In our simulations, the human lung was divided into a matrix containing discrete volumes (voxels) so as to be compatible with analyses of SPECT images. The matrix has 3840 voxels. The matrix can be segmented into transverse, sagittal and coronal layers analogous to human subject examinations. The compositions of individual voxels were identified by the type and respective number of airways present. The code provides a mapping of the spatial positions of the almost 17 million airways in human lungs and unambiguously assigns each airway to a voxel. Thus, the clinician and research scientist in the medical arena have a powerful new tool to be used in imaging analyses. The code was designed to be integrated into diverse applications, including the interpretation of SPECT images, the design of inhalation exposure experiments and the targeted delivery of inhaled pharmacologic drugs.
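The voxel assignment described in this abstract amounts to binning airway coordinates into a fixed 3D matrix so that each airway maps to exactly one voxel. A toy sketch, assuming a 16 x 16 x 15 factorization of the 3840 voxels and illustrative lung extents in millimeters (neither the factorization nor the extents is specified in the abstract):

```python
def airway_to_voxel(x, y, z, dims=(16, 16, 15), extent=(240.0, 240.0, 300.0)):
    """Assign an airway midpoint (x, y, z in mm, lung-frame coordinates)
    to a unique voxel index (i, j, k). Dimensions and extents here are
    hypothetical placeholders, chosen only so that the voxel count is
    16 * 16 * 15 = 3840 as in the abstract."""
    nx, ny, nz = dims
    ex, ey, ez = extent
    i = min(int(x / ex * nx), nx - 1)  # clamp points on the far boundary
    j = min(int(y / ey * ny), ny - 1)
    k = min(int(z / ez * nz), nz - 1)
    return i, j, k
```

Because the binning is deterministic, every one of the ~17 million airways lands in exactly one voxel, which is what makes the mapping unambiguous.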

  2. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    NASA Astrophysics Data System (ADS)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

  3. The Advanced Software Development and Commercialization Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallopoulos, E.; Canfield, T.R.; Minkoff, M.

    1990-09-01

This is the first of a series of reports pertaining to progress in the Advanced Software Development and Commercialization Project, a joint collaborative effort between the Center for Supercomputing Research and Development of the University of Illinois and the Computing and Telecommunications Division of Argonne National Laboratory. The purpose of this work is to apply techniques of parallel computing that were pioneered by University of Illinois researchers to mature computational fluid dynamics (CFD) and structural dynamics (SD) computer codes developed at Argonne. The collaboration in this project will bring this unique combination of expertise to bear, for the first time, on industrially important problems. By so doing, it will expose the strengths and weaknesses of existing techniques for parallelizing programs and will identify those problems that need to be solved in order to enable widespread production use of parallel computers. Secondly, the increased efficiency of the CFD and SD codes themselves will enable the simulation of larger, more accurate engineering models that involve fluid and structural dynamics. In order to realize the above two goals, we are considering two production codes that have been developed at ANL and are widely used by both industry and universities. These are COMMIX and WHAMS-3D. The first is a computational fluid dynamics code that is used for both nuclear reactor design and safety and as a design tool for the casting industry. The second is a three-dimensional structural dynamics code used in nuclear reactor safety as well as crashworthiness studies. These codes are currently available for both sequential and vector computers only. Our main goal is to port and optimize these two codes on shared memory multiprocessors. In so doing, we shall establish a process that can be followed in optimizing other sequential or vector engineering codes for parallel processors.

  4. A History of High-Performance Computing

    NASA Technical Reports Server (NTRS)

    2006-01-01

Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.

  5. Counterair Operations in the Light Infantry Division.

    DTIC Science & Technology

    1985-12-02

Counterair Operations in the Light Infantry Division (U), Army Command and General Staff College, Fort Leavenworth, KS, School of Advanced... concealment will disrupt the enemy pilot's decision cycle, causing him to fail in his mission. The Finns used deception in the Russo-Finnish War, 1939... Operations (Draft) (Fort Leavenworth, KS: US Army Command and General Staff College, July 1985), p. 3-58. FM 44-1, p. 4-4. Charles A. Robinson, "Russo

  6. Engineering physics and mathematics division

    NASA Astrophysics Data System (ADS)

    Sincovec, R. F.

    1995-07-01

    This report provides a record of the research activities of the Engineering Physics and Mathematics Division for the period 1 Jan. 1993 - 31 Dec. 1994. This report is the final archival record of the EPM Division. On 1 Oct. 1994, ORELA was transferred to Physics Division and on 1 Jan. 1995, the Engineering Physics and Mathematics Division and the Computer Applications Division reorganized to form the Computer Science and Mathematics Division and the Computational Physics and Engineering Division. Earlier reports in this series are identified on the previous pages, along with the progress reports describing ORNL's research in the mathematical sciences prior to 1984 when those activities moved into the Engineering Physics and Mathematics Division.

  7. Technology advances and market forces: Their impact on high performance architectures

    NASA Technical Reports Server (NTRS)

    Best, D. R.

    1978-01-01

    Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture and application with greater attention to testability, maintainability, reliability, and usability than supercomputer development programs of the past.

  8. Chemical Technology Division, Annual technical report, 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-03-01

Highlights of the Chemical Technology (CMT) Division's activities during 1991 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion and coal-fired magnetohydrodynamics; (3) methods for treatment of hazardous and mixed hazardous/radioactive waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources; chemistry of superconducting oxides and other materials of interest with technological application; interfacial processes of importance to corrosion science, catalysis, and high-temperature superconductivity; and the geochemical processes involved in water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  9. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; van Rosendale, John; Southard, Dale

    2010-12-01

Supercomputing Centers (SCs) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys does not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron?" "What sort of capabilities does it need to have?" Related questions concern the size of visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"

  10. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    USGS Publications Warehouse

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.

  11. 75 FR 16843 - Core Manufacturing, Multi-Plastics, Inc., Division, Sipco, Inc., Division, Including Leased...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-02

    ... Manufacturing, Multi-Plastics, Inc., Division, Sipco, Inc., Division, Including Leased Workers of M-Ploy... Manufacturing, Multi-Plastics, Inc., Division and Sipco, Inc., Division, including leased workers of M-Ploy... applicable to TA-W-70,457 is hereby issued as follows: ``All workers of Core Manufacturing, Multi-Plastics...

  12. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    NASA Astrophysics Data System (ADS)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and
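The logarithmic amplitude ratio measurement mentioned above compares the energy in the causal and acausal branches of a noise cross-correlation, a simple proxy for source asymmetry along a station pair. A simplified sketch (illustrative only, not the production CSCS code):

```python
import numpy as np

def log_amplitude_ratio(correlation, dt):
    """Logarithmic ratio of causal vs. acausal branch energy in a
    noise cross-correlation sampled at interval dt. The correlation
    is assumed to be an odd-length array centered on zero lag."""
    n = len(correlation) // 2
    acausal = correlation[:n][::-1]   # negative lags, time-reversed
    causal = correlation[n + 1:]      # positive lags
    e_causal = np.sum(causal ** 2) * dt
    e_acausal = np.sum(acausal ** 2) * dt
    return np.log(e_causal / e_acausal)
```

A perfectly symmetric correlation yields a ratio of zero; a stronger causal branch (more noise energy arriving from one side of the pair) yields a positive value, and these values feed the sensitivity-kernel-based source maps.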

  13. Synthesis, hydrolysis rates, supercomputer modeling, and antibacterial activity of bicyclic tetrahydropyridazinones.

    PubMed

    Jungheim, L N; Boyd, D B; Indelicato, J M; Pasini, C E; Preston, D A; Alborn, W E

    1991-05-01

Bicyclic tetrahydropyridazinones, such as 13, where X are strongly electron-withdrawing groups, were synthesized to investigate their antibacterial activity. These delta-lactams are homologues of bicyclic pyrazolidinones 15, which were the first non-beta-lactam-containing compounds reported to bind to penicillin-binding proteins (PBPs). The delta-lactam compounds exhibit poor antibacterial activity despite having reactivity comparable to the gamma-lactams. Molecular modeling based on semiempirical molecular orbital calculations on a Cray X-MP supercomputer predicted that the reason for the inactivity is steric bulk hindering high affinity of the compounds to PBPs, as well as high conformational flexibility of the tetrahydropyridazinone ring hampering effective alignment of the molecule in the active site. Subsequent PBP binding experiments confirmed that this class of compound does not bind to PBPs.

  14. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  15. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.

  16. Cell Division Synchronization

    DTIC Science & Technology

    The report summarizes the progress in the design and construction of automatic equipment for synchronizing cell division in culture by periodic... Concurrent experiments in hypothermic synchronization of algal cell division are reported.

  17. Symmetric vs. Asymmetric Stem Cell Divisions: An Adaptation against Cancer?

    PubMed Central

    Shahriyari, Leili; Komarova, Natalia L.

    2013-01-01

    Traditionally, it has been held that a central characteristic of stem cells is their ability to divide asymmetrically. Recent advances in inducible genetic labeling provided ample evidence that symmetric stem cell divisions play an important role in adult mammalian homeostasis. It is well understood that the two types of cell divisions differ in terms of the stem cells' flexibility to expand when needed. On the contrary, the implications of symmetric and asymmetric divisions for mutation accumulation are still poorly understood. In this paper we study a stochastic model of a renewing tissue, and address the optimization problem of tissue architecture in the context of mutant production. Specifically, we study the process of tumor suppressor gene inactivation which usually takes place as a consequence of two “hits”, and which is one of the most common patterns in carcinogenesis. We compare and contrast symmetric and asymmetric (and mixed) stem cell divisions, and focus on the rate at which double-hit mutants are generated. It turns out that symmetrically-dividing cells generate such mutants at a rate which is significantly lower than that of asymmetrically-dividing cells. This result holds whether single-hit (intermediate) mutants are disadvantageous, neutral, or advantageous. It is also independent of whether the carcinogenic double-hit mutants are produced only among the stem cells or also among more specialized cells. We argue that symmetric stem cell divisions in mammals could be an adaptation which helps delay the onset of cancers. We further investigate the question of the optimal fraction of stem cells in the tissue, and quantify the contribution of non-stem cells in mutant production. Our work provides a hypothesis to explain the observation that in mammalian cells, symmetric patterns of stem cell division seem to be very common. PMID:24204602
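    A toy stochastic caricature of the comparison above can make the symmetric/asymmetric distinction concrete. This is not the paper's model; the parameters, the displacement rule, and the function names are all invented for illustration:

```python
import random

def simulate(n_stem, n_divisions, p_sym, mu, seed=0):
    """Return the division index at which a double-hit mutant first
    appears in the stem pool, or None if none appears.

    Each stem cell carries 0, 1, or 2 hits; each daughter acquires a
    new hit with probability mu. With probability p_sym a division is
    symmetric (both daughters stay in the pool and a randomly chosen
    cell is displaced to keep the pool size constant); otherwise it is
    asymmetric (one daughter stays, the other differentiates away).
    """
    rng = random.Random(seed)
    hits = [0] * n_stem
    for t in range(n_divisions):
        i = rng.randrange(n_stem)
        d1 = min(2, hits[i] + (rng.random() < mu))
        d2 = min(2, hits[i] + (rng.random() < mu))
        if rng.random() < p_sym:
            j = rng.randrange(n_stem)   # cell displaced by self-renewal
            hits[i], hits[j] = d1, d2
        else:
            hits[i] = d1                # the other daughter differentiates
        if max(hits) == 2:
            return t
    return None
```

    Averaging the first-appearance time over many seeds for p_sym = 0 versus p_sym = 1 is the kind of comparison the paper carries out analytically.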

  18. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPs or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instruction, multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single-processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.
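    The mapping strategy above, one grid per processor with boundary exchange between overlapped grids, can be sketched in miniature. In this hedged sketch, threads stand in for Cray processors and a 1-D Laplace problem stands in for the Navier-Stokes solver; all names and sizes are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def jacobi_sweep(grid):
    """One relaxation sweep on the interior points of one subgrid."""
    new = grid[:]
    for i in range(1, len(grid) - 1):
        new[i] = 0.5 * (grid[i - 1] + grid[i + 1])
    return new

def solve_mpmg(n_grids=4, n_points=8, sweeps=2000):
    """Solve u'' = 0, u(0) = 0, u(1) = 1 on n_grids subgrids that
    overlap by two points: one task per grid per sweep, with ghost
    values exchanged between neighbours after every parallel sweep."""
    total = n_grids * (n_points - 2) + 2
    full = [0.0] * total
    full[-1] = 1.0
    grids = [full[g * (n_points - 2): g * (n_points - 2) + n_points]
             for g in range(n_grids)]
    with ThreadPoolExecutor(max_workers=n_grids) as pool:
        for _ in range(sweeps):
            grids = list(pool.map(jacobi_sweep, grids))
            for g in range(n_grids - 1):        # ghost-cell exchange
                grids[g][-1] = grids[g + 1][1]
                grids[g + 1][0] = grids[g][-2]
    # stitch the distinct points back into one solution vector
    sol = grids[0][:-1]
    for g in range(1, n_grids):
        sol += grids[g][1:-1]
    sol.append(grids[-1][-1])
    return sol
```

    Because each interior point is updated in exactly one subgrid and ghosts hold previous-sweep neighbour values, the decomposed iteration converges to the same linear solution as a single-grid sweep.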

  19. Simulating functional magnetic materials on supercomputers.

    PubMed

    Gruner, Markus Ernst; Entel, Peter

    2009-07-22

    The recent passing of the petaflop per second landmark by the Roadrunner project at the Los Alamos National Laboratory marks a preliminary peak of an impressive world-wide development in the high-performance scientific computing sector. Also, purely academic state-of-the-art supercomputers such as the IBM Blue Gene/P at Forschungszentrum Jülich allow us nowadays to investigate large systems of the order of 10(3) spin polarized transition metal atoms by means of density functional theory. Three applications will be presented where large-scale ab initio calculations contribute to the understanding of key properties emerging from a close interrelation between structure and magnetism. The first two examples discuss the size dependent evolution of equilibrium structural motifs in elementary iron and binary Fe-Pt and Co-Pt transition metal nanoparticles, which are currently discussed as promising candidates for ultra-high-density magnetic data storage media. However, the preference for multiply twinned morphologies at smaller cluster sizes counteracts the formation of a single-crystalline L1(0) phase, which alone provides the required hard magnetic properties. The third application is concerned with the magnetic shape memory effect in the Ni-Mn-Ga Heusler alloy, which is a technologically relevant candidate for magnetomechanical actuators and sensors. In this material strains of up to 10% can be induced by external magnetic fields due to the field induced shifting of martensitic twin boundaries, requiring an extremely high mobility of the martensitic twin boundaries, but also the selection of the appropriate martensitic structure from the rich phase diagram.

  20. The complexity of divisibility.

    PubMed

    Bausch, Johannes; Cubitt, Toby

    2016-09-01

    We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
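    The paper's explicit polynomial-time algorithm handles general finite distribution divisibility; the narrow special case of deciding whether a distribution is the two-fold convolution of some distribution with itself already shows the coefficient-by-coefficient idea. The sketch below assumes p[0] > 0 and is an illustration of that idea, not the paper's algorithm:

```python
def square_root_distribution(p, tol=1e-9):
    """If p (a distribution on {0, ..., 2m} with p[0] > 0) equals the
    convolution q * q for some distribution q, return q; else None.
    Coefficients of q are recovered one at a time in polynomial time."""
    m2 = len(p) - 1
    if m2 % 2 or p[0] <= 0:
        return None
    m = m2 // 2
    q = [0.0] * (m + 1)
    q[0] = p[0] ** 0.5
    for k in range(1, m + 1):
        s = sum(q[i] * q[k - i] for i in range(1, k))
        q[k] = (p[k] - s) / (2 * q[0])
    if any(x < -tol for x in q) or abs(sum(q) - 1) > tol:
        return None
    conv = [sum(q[i] * q[k - i] for i in range(max(0, k - m), min(k, m) + 1))
            for k in range(m2 + 1)]
    return q if all(abs(a - b) <= tol for a, b in zip(conv, p)) else None
```

    For example, the two-coin sum distribution [0.25, 0.5, 0.25] factors into a fair coin, while [0.5, 0, 0.5] admits no such factor.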

  1. Supercomputers ready for use as discovery machines for neuroscience.

    PubMed

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10(8) neurons and 10(12) synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.
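    The role of a memory-consumption model in such work can be illustrated with a toy: per-node memory as fixed overhead plus the node's share of neuron and synapse objects, inverted to find the largest network that fits a budget. The constants and functional form below are invented for illustration and are not NEST's actual model:

```python
def memory_per_node(n_neurons, n_synapses, n_nodes,
                    b_neuron=1000.0, b_synapse=24.0, b_overhead=2.0e9):
    """Illustrative per-node memory (bytes): fixed overhead plus this
    node's share of neuron and synapse objects. Constants are made up."""
    return (b_overhead + n_neurons * b_neuron / n_nodes
            + n_synapses * b_synapse / n_nodes)

def max_network(nodes, budget, syn_per_neuron=10_000, **kw):
    """Largest neuron count whose modeled memory fits the per-node
    budget, found by binary search (synapses scale with neurons)."""
    lo, hi = 0, 10**12
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if memory_per_node(mid, mid * syn_per_neuron, nodes, **kw) <= budget:
            lo = mid
        else:
            hi = mid - 1
    return lo
```

    In this toy, doubling the node count roughly doubles the reachable network size once the per-node overhead is covered, which is the kind of prediction that guides scaling work.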

  3. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    PubMed

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  4. The transition of a real-time single-rotor helicopter simulation program to a supercomputer

    NASA Technical Reports Server (NTRS)

    Martinez, Debbie

    1995-01-01

    This report presents the conversion effort and results of a real-time flight simulation application transition to a CONVEX supercomputer. Enclosed is a detailed description of the conversion process and a brief description of the Langley Research Center's (LaRC) flight simulation application program structure. Currently, this simulation program may be configured to represent a Sikorsky S-61 helicopter (a five-blade, single-rotor, commercial passenger-type helicopter) or an Army Cobra helicopter (either the AH-1G or AH-1S model). This report refers to the Sikorsky S-61 simulation program since it is the most frequently used configuration.

  5. Deconstructing Calculation Methods, Part 4: Division

    ERIC Educational Resources Information Center

    Thompson, Ian

    2008-01-01

    In the final article of a series of four, the author deconstructs the primary national strategy's approach to written division. The approach to division is divided into five stages: (1) mental division using partition; (2) short division of TU / U; (3) "expanded" method for HTU / U; (4) short division of HTU / U; and (5) long division.…
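    The staged methods above can be made concrete in code. The sketch below implements the "expanded" chunking method (stage 3): repeatedly subtract multiple-of-ten chunks of the divisor and record them. The function name and return shape are invented for illustration:

```python
def expanded_division(dividend, divisor):
    """'Expanded' written division: subtract chunks of the divisor
    (largest multiples of ten first) and return
    (quotient, remainder, list_of_chunks)."""
    chunks = []
    remaining = dividend
    place = 1
    while divisor * place * 10 <= remaining:   # find the largest place value
        place *= 10
    while place >= 1:
        count = remaining // (divisor * place)
        if count:
            chunks.append(count * place)       # e.g. subtract 90 lots of 3
            remaining -= count * place * divisor
        place //= 10
    return sum(chunks), remaining, chunks
```

    For HTU / U, e.g. 291 / 3, the method records the chunks 90 and 7, giving quotient 97; short division compresses the same steps into carried digits.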

  6. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    NASA Astrophysics Data System (ADS)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  7. [Construction and application of bioinformatic analysis platform for aquatic pathogen based on the MilkyWay-2 supercomputer].

    PubMed

    Fang, Xiang; Li, Ning-qiu; Fu, Xiao-zhe; Li, Kai-bin; Lin, Qiang; Liu, Li-hui; Shi, Cun-bin; Wu, Shu-qin

    2015-07-01

    As a key component of life science, bioinformatics has been widely applied in genomics, transcriptomics, and proteomics. However, the requirement of high-performance computers rather than common personal computers for constructing a bioinformatics platform has significantly limited the application of bioinformatics in aquatic science. In this study, we constructed a bioinformatic analysis platform for aquatic pathogens based on the MilkyWay-2 supercomputer. The platform consisted of three functional modules: genomic and transcriptomic sequencing data analysis, protein structure prediction, and molecular dynamics simulations. To validate the practicability of the platform, we performed bioinformatic analysis on aquatic pathogenic organisms. For example, genes of Flavobacterium johnsoniae M168 were identified and annotated via Blast searches, GO and InterPro annotations. Protein structural models for five small segments of grass carp reovirus HZ-08 were constructed by homology modeling. Molecular dynamics simulations were performed on outer membrane protein A of Aeromonas hydrophila, and the changes in system temperature, total energy, root-mean-square deviation, and loop conformation during equilibration were observed. These results showed that the bioinformatic analysis platform for aquatic pathogens has been successfully built on the MilkyWay-2 supercomputer. This study will provide insights into the construction of bioinformatic analysis platforms for other subjects.
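    The equilibration monitoring mentioned above typically tracks the root-mean-square deviation (RMSD) between conformations. A minimal sketch follows, without the structural superposition a production analysis would apply first; coordinates are invented:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two conformations given as
    equal-length lists of (x, y, z) atom positions. No superposition
    is performed; production tools align the structures first."""
    n = len(coords_a)
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                         for (ax, ay, az), (bx, by, bz)
                         in zip(coords_a, coords_b)) / n)
```

    Plotting this value per frame against the starting structure is the standard check that a simulation has equilibrated.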

  8. Preparing for in situ processing on upcoming leading-edge supercomputers

    DOE PAGES

    Kress, James; Churchill, Randy Michael; Klasky, Scott; ...

    2016-10-01

    High performance computing applications are producing increasingly large amounts of data and placing enormous stress on current capabilities for traditional post-hoc visualization techniques. Because of the growing compute and I/O imbalance, data reductions, including in situ visualization, are required. These reduced data are used for analysis and visualization in a variety of different ways. Many of the visualization and analysis requirements are known a priori, but when they are not, scientists are dependent on the reduced data to accurately represent the simulation in post-hoc analysis. The contribution of this paper is a description of the directions we are pursuing to assist a large-scale fusion simulation code succeed on the next generation of supercomputers. These directions include the role of in situ processing for performing data reductions, as well as the tradeoffs between data size and data integrity within the context of complex operations in a typical scientific workflow.
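    The size-versus-integrity tradeoff above can be illustrated with a deliberately crude reduction, strided subsampling, and its worst-case reconstruction error. This is a toy sketch, not any method from the paper:

```python
def reduce_stride(data, stride):
    """Crude in situ style reduction: keep every stride-th sample."""
    return data[::stride]

def reconstruction_error(data, stride):
    """Worst-case error when post hoc analysis reconstructs each sample
    from the nearest earlier kept sample -- the integrity cost of the
    size reduction above."""
    kept = reduce_stride(data, stride)
    return max(abs(x - kept[min(i // stride, len(kept) - 1)])
               for i, x in enumerate(data))

signal = [i * i for i in range(10)]   # stand-in for simulation output
```

    Larger strides shrink the stored data but grow the worst-case error, which is exactly the tradeoff a workflow designer must quantify.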

  9. Adventures in supercomputing: An innovative program for high school teachers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G.

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper describes the AiS program and its effects on teachers and students, primarily at Wartburg Central High School in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  10. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  11. Advanced Numerical Techniques of Performance Evaluation. Volume 1

    DTIC Science & Technology

    1990-06-01

    ...system scheduling thread. The scheduling thread then runs any other ready thread that can be found. A thread can only sleep or switch out on itself... C.D. Polychronopoulos and D.J. Kuck. Guided Self-Scheduling: A Practical Scheduling Scheme for Parallel Supercomputers. IEEE Transactions on Computers C...

  12. ER-mitochondria contacts couple mtDNA synthesis with mitochondrial division in human cells.

    PubMed

    Lewis, Samantha C; Uchiyama, Lauren F; Nunnari, Jodi

    2016-07-15

    Mitochondrial DNA (mtDNA) encodes RNAs and proteins critical for cell function. In human cells, hundreds to thousands of mtDNA copies are replicated asynchronously, packaged into protein-DNA nucleoids, and distributed within a dynamic mitochondrial network. The mechanisms that govern how nucleoids are chosen for replication and distribution are not understood. Mitochondrial distribution depends on division, which occurs at endoplasmic reticulum (ER)-mitochondria contact sites. These sites were spatially linked to a subset of nucleoids selectively marked by mtDNA polymerase and engaged in mtDNA synthesis--events that occurred upstream of mitochondrial constriction and division machine assembly. Our data suggest that ER tubules proximal to nucleoids are necessary but not sufficient for mtDNA synthesis. Thus, ER-mitochondria contacts coordinate licensing of mtDNA synthesis with division to distribute newly replicated nucleoids to daughter mitochondria. Copyright © 2016, American Association for the Advancement of Science.

  13. A novel VLSI processor architecture for supercomputing arrays

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.

    1993-01-01

    Design of the processor element for general-purpose massively parallel supercomputing arrays is highly complex and cost-ineffective. To overcome this, the architecture and organization of the functional units of the processor element should be such as to suit the diverse computational structures and simplify mapping of the complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP, IP, OP, CM, R, SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are such as to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced as it is designed on high-performance mask-programmable PAcube arrays.

  14. Gear systems for advanced turboprops

    NASA Technical Reports Server (NTRS)

    Wagner, Douglas A.

    1987-01-01

    A new generation of transport aircraft will be powered by efficient, advanced turboprop propulsion systems. Systems that develop 5,000 to 15,000 horsepower have been studied. Reduction gearing for these advanced propulsion systems is discussed. Allison Gas Turbine Division's experience with the 5,000 horsepower reduction gearing for the T56 engine is reviewed and the impact of that experience on advanced gear systems is considered. The reliability needs for component design and development are also considered. Allison's experience and their research serve as a basis on which to characterize future gear systems that emphasize low cost and high reliability.

  15. Instrumentation and Controls Division Progress Report for the Period of July 1, 1994 to December 31, 1997: Publications, Presentations, Activities, and Awards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, D.W.

    This report contains a record of publishing and other activities in the Oak Ridge National Laboratory (ORNL) Instrumentation and Controls (I&C) Division for the period of July 1, 1994, to December 31, 1997. It is a companion volume to Working Together on New Horizons: Instrumentation and Controls Division Progress Report for the Period of July 1, 1994, to December 31, 1997 (OR.NLA4-6530). Working Together on New Horizons contains illustrated summaries of some of the projects under way in I&C Division. Both books can be obtained by contacting C. R. Brittain (brittain@ornl.gov), P.O. Box 2008, Oak Ridge, TN 37831-6005. I&C Division Mission and Vision: I&C Division develops and maintains techniques, instruments, and systems that lead to a better understanding of nature and harnessing of natural phenomena for the benefit of humankind. We have dedicated ourselves to accelerating the advancement of science and the transfer of those advancements into products and processes that benefit U.S. industry and enhance the security of our citizens.

  16. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    PubMed

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. akiyama@cs.titech.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
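    The scoring at the heart of FFT-based docking can be sketched on toy 1-D grids. FFT codes such as MEGADOCK evaluate all translational shifts at once via the convolution theorem; the direct sum below computes the same correlation scores explicitly, with invented toy data:

```python
def docking_scores(receptor, ligand):
    """Score every translational placement of a ligand grid against a
    receptor grid by their correlation. This direct O(n*m) scan yields
    the same scores an FFT evaluates in O(n log n)."""
    n, m = len(receptor), len(ligand)
    return [sum(receptor[s + i] * ligand[i] for i in range(m))
            for s in range(n - m + 1)]

# Toy 1-D 'grids': a receptor pocket and a complementary ligand shape.
receptor = [0, 1, 3, 1, 0, 0]
ligand = [1, 3, 1]
scores = docking_scores(receptor, ligand)
best_shift = scores.index(max(scores))   # placement with best surface match
```

    Real docking grids are 3-D and also encode ligand rotations, which is what makes interactome-scale screens demand supercomputer resources.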

  17. ICON-MIC: Implementing a CPU/MIC Collaboration Parallel Framework for ICON on Tianhe-2 Supercomputer.

    PubMed

    Wang, Zihao; Chen, Yu; Zhang, Jingrong; Li, Lun; Wan, Xiaohua; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2018-03-01

    Electron tomography (ET) is an important technique for studying three-dimensional biological ultrastructure. Recently, ET has reached sub-nanometer resolution for investigating the native conformational dynamics of macromolecular complexes when combined with the sub-tomogram averaging approach. Due to the limited sampling angles, ET reconstruction typically suffers from the "missing wedge" problem. Using a validation procedure, iterative compressed-sensing optimized nonuniform fast Fourier transform (NUFFT) reconstruction (ICON) demonstrates its power in restoring validated missing information for low-signal-to-noise-ratio biological ET datasets. However, the huge computational demand has become a bottleneck for the application of ICON. In this work, we implemented ICON-MIC, a parallel acceleration of ICON on many-integrated-core (MIC) Xeon Phi cards, to address the huge computational demand of ICON. In this step, we parallelize the element-wise matrix operations and use efficient matrix summation to reduce the cost of matrix computation. We also developed parallel versions of NUFFT on MIC to achieve a high acceleration of ICON through more efficient fast Fourier transform (FFT) calculation. We then proposed a hybrid task allocation strategy (two-level load balancing) to improve the overall performance of ICON-MIC by making full use of the idle resources on the Tianhe-2 supercomputer. Experimental results using two different datasets show that ICON-MIC has high accuracy on biological specimens under different noise levels and a significant acceleration, up to 13.3×, compared with the CPU version. Further, ICON-MIC has good scalability efficiency and overall performance on the Tianhe-2 supercomputer.
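    The first (static) level of such a two-level load balancing scheme can be sketched as a throughput-weighted split of tasks across heterogeneous workers. This is an illustrative sketch under invented assumptions, not ICON-MIC's actual strategy:

```python
def weighted_allocation(n_tasks, speeds):
    """First-level (static) allocation: split n_tasks across workers in
    proportion to their measured throughput, handing leftover tasks to
    the fastest workers. A second, dynamic level would then let idle
    workers take tasks from this initial assignment at run time."""
    total = sum(speeds)
    alloc = [int(n_tasks * s / total) for s in speeds]
    leftover = n_tasks - sum(alloc)
    order = sorted(range(len(speeds)), key=lambda i: -speeds[i])
    for i in range(leftover):
        alloc[order[i % len(order)]] += 1
    return alloc
```

    With a CPU of relative speed 1 and a coprocessor of speed 2, for example, the coprocessor receives roughly twice the work, keeping both busy for about the same wall time.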

  18. 78 FR 54255 - HRSA's Bureau of Health Professions Advanced Education Nursing Traineeship Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-03

    ... of Health Professions Advanced Education Nursing Traineeship Program AGENCY: Health Resources and... announcing a change to its Advanced Education Nursing Traineeship (AENT) program. Effective fiscal year (FY... Wasserman, DrPH, RN, Advanced Nursing Education Branch Chief, Division of Nursing, Bureau of Health...

  19. Introduction to the special issue: 50th anniversary of APA Division 28: The past, present, and future of psychopharmacology and substance abuse.

    PubMed

    Stoops, William W; Sigmon, Stacey C; Evans, Suzette M

    2016-08-01

    This is an introduction to the special issue "50th Anniversary of APA Division 28: The Past, Present, and Future of Psychopharmacology and Substance Abuse." Taken together, the scholarly contributions included in this special issue serve as a testament to the important work conducted by our colleagues over the past five decades. Division 28 and its members have advanced and disseminated knowledge on the behavioral effects of drugs, informed efforts to prevent and treat substance abuse, and influenced education and policy issues more generally. As past and current leaders of the division, we are excited to celebrate 50 years of Division 28 and look forward to many more successful decades for our division and its members. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    PubMed

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.

  1. High Resolution Aerospace Applications using the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha

    2005-01-01

    This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.

  2. Advances in petascale kinetic plasma simulation with VPIC and Roadrunner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, Kevin J; Albright, Brian J; Yin, Lin

    2009-01-01

    VPIC, a first-principles 3D electromagnetic charge-conserving relativistic kinetic particle-in-cell (PIC) code, was recently adapted to run on Los Alamos's Roadrunner, the first supercomputer to break a petaflop (10^15 floating point operations per second) in the TOP500 supercomputer performance rankings. They give a brief overview of the modeling capabilities and optimization techniques used in VPIC and the computational characteristics of petascale supercomputers like Roadrunner. They then discuss three applications enabled by VPIC's unprecedented performance on Roadrunner: modeling laser plasma interaction in upcoming inertial confinement fusion experiments at the National Ignition Facility (NIF), modeling short pulse laser GeV ion acceleration, and modeling reconnection in magnetic confinement fusion experiments.
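    The particle push at the heart of any PIC code like VPIC can be shown with a textbook sketch. The following is a minimal, non-relativistic 1D leapfrog step in a periodic box; VPIC's production kernel is relativistic, 3D, and heavily vectorized, so this is only a schematic of the idea, and all names here are illustrative.

    ```python
    # Minimal non-relativistic 1D leapfrog particle push, the kernel at the
    # core of a particle-in-cell (PIC) code. Not VPIC's actual kernel.

    def push(x, v, E, qm, dt, L):
        """Advance one particle's position/velocity one step in a periodic box [0, L)."""
        v_new = v + qm * E * dt       # kick: dv/dt = (q/m) E
        x_new = (x + v_new * dt) % L  # drift, with periodic wraparound
        return x_new, v_new

    if __name__ == "__main__":
        x, v = push(0.95, 1.0, E=0.5, qm=-1.0, dt=0.1, L=1.0)
        print(x, v)  # the particle wraps through the periodic boundary
    ```

    A full PIC step would also gather fields from the grid to each particle and scatter the updated currents back, but the kick-drift structure above is the common core.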

  3. Biorepositories | Division of Cancer Prevention

    Cancer.gov

    Carefully collected and controlled high-quality human biospecimens, annotated with clinical data and properly consented for investigational use, are available through the Division of Cancer Prevention Biorepositories listed in the charts below.

  4. Berkeley Lab - Materials Sciences Division

    Science.gov Websites

    The Berkeley Lab Materials Sciences Division website provides research highlights, awards, a publications database, an events calendar, a newsletter archive, staff and investigator directories, and an outline of the Division's organization.

  5. NASA. Lewis Research Center Advanced Modulation and Coding Project: Introduction and overview

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1992-01-01

    The Advanced Modulation and Coding Project at LeRC is sponsored by the Office of Space Science and Applications, Communications Division, Code EC, at NASA Headquarters and conducted by the Digital Systems Technology Branch of the Space Electronics Division. Advanced Modulation and Coding is one of three focused technology development projects within the branch's overall Processing and Switching Program. The program consists of industry contracts for developing proof-of-concept (POC) and demonstration model hardware, university grants for analyzing advanced techniques, and in-house integration and testing of performance verification and systems evaluation. The Advanced Modulation and Coding Project is broken into five elements: (1) bandwidth- and power-efficient modems; (2) high-speed codecs; (3) digital modems; (4) multichannel demodulators; and (5) very high-data-rate modems. At least one contract and one grant were awarded for each element.

  6. Imaging mycobacterial growth and division with a fluorogenic probe.

    PubMed

    Hodges, Heather L; Brown, Robert A; Crooks, John A; Weibel, Douglas B; Kiessling, Laura L

    2018-05-15

    Control and manipulation of bacterial populations requires an understanding of the factors that govern growth, division, and antibiotic action. Fluorescent and chemically reactive small molecule probes of cell envelope components can visualize these processes and advance our knowledge of cell envelope biosynthesis (e.g., peptidoglycan production). Still, fundamental gaps remain in our understanding of the spatial and temporal dynamics of cell envelope assembly. Previously described reporters require steps that limit their use to static imaging. Probes that can be used for real-time imaging would advance our understanding of cell envelope construction. To this end, we synthesized a fluorogenic probe that enables continuous live cell imaging in mycobacteria and related genera. This probe reports on the mycolyltransferases that assemble the mycolic acid membrane. This peptidoglycan-anchored bilayer-like assembly functions to protect these cells from antibiotics and host defenses. Our probe, quencher-trehalose-fluorophore (QTF), is an analog of the natural mycolyltransferase substrate. Mycolyltransferases process QTF by diverting their normal transesterification activity to hydrolysis, a process that unleashes fluorescence. QTF enables high contrast continuous imaging and the visualization of mycolyltransferase activity in cells. QTF revealed that mycolyltransferase activity is augmented before cell division and localized to the septa and cell poles, especially at the old pole. This observed localization suggests that mycolyltransferases are components of extracellular cell envelope assemblies, in analogy to the intracellular divisomes and polar elongation complexes. We anticipate QTF can be exploited to detect and monitor mycobacteria in physiologically relevant environments.

  7. A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.

    PubMed

    Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming

    2017-06-16

    Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot work on large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborated parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, with which the performance of GROMACS is improved significantly, especially on the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on both the CPU and MIC at the same time. In addition, we accelerate multi-node GROMACS so that it can be used in practice. Benchmarking on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.

  8. ALCF Data Science Program: Productive Data-centric Supercomputing

    NASA Astrophysics Data System (ADS)

    Romero, Nichols; Vishwanath, Venkatram

    The ALCF Data Science Program (ADSP) is targeted at big data science problems that require leadership computing resources. The goal of the program is to explore and improve a variety of computational methods that will enable data-driven discoveries across all scientific disciplines. The projects will focus on data science techniques covering a wide area of discovery including but not limited to uncertainty quantification, statistics, machine learning, deep learning, databases, pattern recognition, image processing, graph analytics, data mining, real-time data analysis, and complex and interactive workflows. Project teams will be among the first to access Theta, the ALCF's forthcoming 8.5-petaflops Intel/Cray system. The program will transition to the 200-petaflops Aurora supercomputing system when it becomes available. In 2016, four projects were selected to kick off the ADSP. The selected projects span experimental and computational sciences and range from modeling the brain to discovering new materials for solar-powered windows to simulating collision events at the Large Hadron Collider (LHC). The program will have a regular call for proposals, with the next call expected in Spring 2017. See http://www.alcf.anl.gov/alcf-data-science-program. This research used resources of the ALCF, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

  9. 49 CFR 175.630 - Special requirements for Division 6.1 (poisonous) material and Division 6.2 (infectious...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Special requirements for Division 6.1 (poisonous) material and Division 6.2 (infectious substances) materials. 175.630 Section 175.630 Transportation Other... Classification of Material § 175.630 Special requirements for Division 6.1 (poisonous) material and Division 6.2...

  10. Division Quilts: A Measurement Model

    ERIC Educational Resources Information Center

    Pratt, Sarah S.; Lupton, Tina M.; Richardson, Kerri

    2015-01-01

    As teachers seek activities to assist students in understanding division as more than just the algorithm, they find many examples of division as fair sharing. However, teachers have few activities to engage students in a quotative (measurement) model of division. Efraim Fischbein and his colleagues (1985) defined two types of whole-number…
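    Quotative (measurement) division, the model the quilts activity targets, asks how many groups of a given size fit into a total, which maps naturally onto repeated subtraction; partitive (fair-sharing) division instead asks how large each of a known number of shares is. A small illustrative sketch of the quotative model:

    ```python
    # Quotative (measurement) division as repeated subtraction: "how many
    # lengths of 4 fit into 12?" This contrasts with partitive (fair-sharing)
    # division, "how big is each of 4 equal shares of 12?"

    def quotative_divide(total, group_size):
        count = 0
        while total >= group_size:
            total -= group_size
            count += 1
        return count, total  # (number of groups, remainder)

    print(quotative_divide(12, 4))  # (3, 0): three groups of four, nothing left
    print(quotative_divide(14, 4))  # (3, 2): three groups, remainder 2
    ```

    The remainder falls out naturally in this model, which is one reason measurement activities help students see division as more than the algorithm.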

  11. NASA Planetary Science Division's Instrument Development Programs, PICASSO and MatISSE

    NASA Technical Reports Server (NTRS)

    Gaier, James R.

    2016-01-01

    The Planetary Science Division (PSD) has combined several legacy instrument development programs into just two. The Planetary Instrument Concepts Advancing Solar System Observations (PICASSO) program funds the development of low TRL instruments and components. The Maturation of Instruments for Solar System Observations (MatISSE) program funds the development of instruments in the mid-TRL range. The strategy of PSD instrument development is to develop instruments from PICASSO to MatISSE to proposing for mission development.

  12. Recent advances and future prospects for Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B

    2010-01-01

    The history of Monte Carlo methods is closely linked to that of computers: The first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.
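    The hierarchical parallelism described above (threaded tallies within a node, message passing between nodes) can be mimicked in miniature. The sketch below estimates pi by Monte Carlo with independent per-worker tallies followed by a reduction; the worker count and seeds are illustrative, and Python threads stand in for what would be OpenMP threads plus MPI in a production code.

    ```python
    # Hierarchical-parallel Monte Carlo in miniature: independent tallies per
    # worker, then a single reduction. Estimates pi by sampling the unit square.

    import random
    from concurrent.futures import ThreadPoolExecutor

    def tally(n_samples, seed):
        rng = random.Random(seed)  # independent stream per worker
        return sum(1 for _ in range(n_samples)
                   if rng.random() ** 2 + rng.random() ** 2 < 1.0)

    if __name__ == "__main__":
        n_workers, n_per = 4, 50_000
        with ThreadPoolExecutor(n_workers) as pool:
            partial = list(pool.map(tally, [n_per] * n_workers, range(n_workers)))
        pi_est = 4.0 * sum(partial) / (n_workers * n_per)  # the "reduce" step
        print(round(pi_est, 2))  # ~3.14
    ```

    Because each tally only needs its own random stream, the pattern scales in exactly the way the abstract describes: more nodes mean more independent tallies and one final combine.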

  13. 78 FR 46621 - Status of the Office of New Reactors' Implementation of Electronic Distribution of Advanced...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-01

    ... of Electronic Distribution of Advanced Reactor Correspondence AGENCY: Nuclear Regulatory Commission. ACTION: Implementation of electronic distribution of advanced reactor correspondence; issuance. SUMMARY... public that, in the future, publicly available correspondence originating from the Division of Advanced...

  14. Physics division annual report 2005.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glover, J.; Physics

    2007-03-12

    isotopes were trapped in an atom trap for the first time, a major milestone in an innovative search for the violation of time-reversal symmetry. New results from HERMES establish that strange quarks carry little of the spin of the proton and precise results have been obtained at JLAB on the changes in quark distributions in light nuclei. New theoretical results reveal the nature of the surfaces of strange quark stars. Green's function Monte Carlo techniques have been extended to scattering problems and show great promise for the accurate calculation, from first principles, of important astrophysical reactions. Flame propagation in type Ia supernovae has been simulated, a numerical process that requires considering length scales that vary by eight to twelve orders of magnitude. Argonne continues to lead in the development and exploitation of the new technical concepts that will truly make an advanced exotic beam facility, in the words of NSAC, 'the world-leading facility for research in nuclear structure and nuclear astrophysics'. Our science and our technology continue to point the way to this major advance. It is a tremendously exciting time in science, for these new capabilities hold the keys to unlocking important secrets of nature. The great progress that has been made in meeting the exciting intellectual challenges of modern nuclear physics reflects the talents and dedication of the Physics Division staff and the visitors, guests and students who bring so much to the research.

  15. Comparison of Concussion Rates Between NCAA Division I and Division III Men's and Women's Ice Hockey Players.

    PubMed

    Rosene, John M; Raksnis, Bryan; Silva, Brie; Woefel, Tyler; Visich, Paul S; Dompier, Thomas P; Kerr, Zachary Y

    2017-09-01

    Examinations related to divisional differences in the incidence of sports-related concussions (SRC) in collegiate ice hockey are limited. To compare the epidemiologic patterns of concussion in National Collegiate Athletic Association (NCAA) ice hockey by sex and division. Descriptive epidemiology study. A convenience sample of men's and women's ice hockey teams in Divisions I and III provided SRC data via the NCAA Injury Surveillance Program during the 2009-2010 to 2014-2015 academic years. Concussion counts, rates, and distributions were examined by factors including injury activity and position. Injury rate ratios (IRRs) and injury proportion ratios (IPRs) with 95% confidence intervals (CIs) were used to compare concussion rates and distributions, respectively. Overall, 415 concussions were reported for men's and women's ice hockey combined. The highest concussion rate was found in Division I men (0.83 per 1000 athlete-exposures [AEs]), followed by Division III women (0.78/1000 AEs), Division I women (0.65/1000 AEs), and Division III men (0.64/1000 AEs). However, the only significant IRR was that the concussion rate was higher in Division I men than Division III men (IRR = 1.29; 95% CI, 1.02-1.65). The proportion of concussions from checking was higher in men than women (28.5% vs 9.4%; IPR = 3.02; 95% CI, 1.63-5.59); however, this proportion was higher in Division I women than Division III women (18.4% vs 1.8%; IPR = 10.47; 95% CI, 1.37-79.75). The proportion of concussions sustained by goalkeepers was higher in women than men (14.2% vs 2.9%; IPR = 4.86; 95% CI, 2.19-10.77), with findings consistent within each division. Concussion rates did not vary by sex but differed by division among men. Checking-related concussions were less common in women than men overall but more common in Division I women than Division III women. Findings highlight the need to better understand the reasons underlying divisional differences within men's and women's ice hockey and the
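    The injury rate ratios quoted above are ratios of two incidence rates, with a 95% confidence interval computed on the log scale. A minimal sketch, using hypothetical event counts chosen only to match the reported rates of 0.83 and 0.64 per 1000 AEs (the study's actual counts and exposures are not reproduced here):

    ```python
    # Injury rate ratio (IRR) with a log-scale 95% CI, the standard
    # surveillance-epidemiology calculation. Counts below are hypothetical.

    import math

    def irr_ci(events_a, exposure_a, events_b, exposure_b, z=1.96):
        irr = (events_a / exposure_a) / (events_b / exposure_b)
        se = math.sqrt(1 / events_a + 1 / events_b)  # SE of ln(IRR)
        lo, hi = (irr * math.exp(s * se) for s in (-z, z))
        return irr, lo, hi

    # Hypothetical counts consistent with 0.83 and 0.64 concussions per 1000 AEs:
    irr, lo, hi = irr_ci(83, 100_000, 64, 100_000)
    print(round(irr, 2), round(lo, 2), round(hi, 2))
    ```

    An IRR whose CI excludes 1 is the significance criterion the abstract applies; with these toy counts the ratio is about 1.30, close to the study's reported 1.29 from unrounded rates.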

  16. Stem cell divisions, somatic mutations, cancer etiology, and cancer prevention.

    PubMed

    Tomasetti, Cristian; Li, Lu; Vogelstein, Bert

    2017-03-24

    Cancers are caused by mutations that may be inherited, induced by environmental factors, or result from DNA replication errors (R). We studied the relationship between the number of normal stem cell divisions and the risk of 17 cancer types in 69 countries throughout the world. The data revealed a strong correlation (median = 0.80) between cancer incidence and normal stem cell divisions in all countries, regardless of their environment. The major role of R mutations in cancer etiology was supported by an independent approach, based solely on cancer genome sequencing and epidemiological data, which suggested that R mutations are responsible for two-thirds of the mutations in human cancers. All of these results are consistent with epidemiological estimates of the fraction of cancers that can be prevented by changes in the environment. Moreover, they accentuate the importance of early detection and intervention to reduce deaths from the many cancers arising from unavoidable R mutations. Copyright © 2017, American Association for the Advancement of Science.
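    The reported correlations compare cancer incidence with estimated lifetime stem cell divisions, typically on log scales. A minimal pure-Python Pearson correlation on toy numbers (illustrative values, not the study's data):

    ```python
    # Pearson correlation between log-scaled stem cell divisions and cancer
    # incidence. The four data points are toy values for illustration only.

    import math

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    divisions = [math.log10(d) for d in (1e7, 1e9, 1e11, 1e12)]
    incidence = [math.log10(i) for i in (1e-5, 1e-4, 1e-2, 1e-1)]
    print(round(pearson(divisions, incidence), 2))  # 0.99 on this toy data
    ```

    The study's median of 0.80 across countries is a weaker but still strong association of the same kind.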

  17. Effects of Polyhydroxybutyrate Production on Cell Division

    NASA Technical Reports Server (NTRS)

    Miller, Kathleen; Rahman, Asif; Hadi, Masood Z.

    2015-01-01

    Synthetic biological engineering can be utilized to aid the advancement of improved long-term space flight. The potential to use synthetic biology as a platform to biomanufacture desired equipment on demand using the three dimensional (3D) printer on the International Space Station (ISS) gives long-term NASA missions the flexibility to produce materials as needed on site. Polyhydroxybutyrates (PHBs) are biodegradable, have properties similar to plastics, and can be produced in Escherichia coli using genetic engineering. Using PHBs during space flight could assist mission success by providing a valuable source of biomaterials that can have many potential applications, particularly through 3D printing. It is well documented that during PHB production E. coli cells can become significantly elongated. The elongation of cells reduces the ability of the cells to divide and thus to produce PHB. I aim to better understand cell division during PHB production, through the design, building, and testing of synthetic biological circuits, and identify how to potentially increase PHB yields by overexpressing FtsZ, the gene responsible for cell division. Ultimately, an increase in the yield will allow more products to be created using the 3D printer on the ISS and beyond, thus aiding astronauts in their missions.

  18. New tools for investigating student learning in upper-division electrostatics

    NASA Astrophysics Data System (ADS)

    Wilcox, Bethany R.

    Student learning in upper-division physics courses is a growing area of research in the field of Physics Education. Developing effective new curricular materials and pedagogical techniques to improve student learning in upper-division courses requires knowledge of both what material students struggle with and what curricular approaches help to overcome these struggles. To facilitate the course transformation process for one specific content area --- upper-division electrostatics --- this thesis presents two new methodological tools: (1) an analytical framework designed to investigate students' struggles with the advanced physics content and mathematically sophisticated tools/techniques required at the junior and senior level, and (2) a new multiple-response conceptual assessment designed to measure student learning and assess the effectiveness of different curricular approaches. We first describe the development and theoretical grounding of a new analytical framework designed to characterize how students use mathematical tools and techniques during physics problem solving. We apply this framework to investigate student difficulties with three specific mathematical tools used in upper-division electrostatics: multivariable integration in the context of Coulomb's law, the Dirac delta function in the context of expressing volume charge densities, and separation of variables as a technique to solve Laplace's equation. We find a number of common themes in students' difficulties around these mathematical tools including: recognizing when a particular mathematical tool is appropriate for a given physics problem, mapping between the specific physical context and the formal mathematical structures, and reflecting spontaneously on the solution to a physics problem to gain physical insight or ensure consistency with expected results. We then describe the development of a novel, multiple-response version of an existing conceptual assessment in upper-division electrostatics

  19. Polarized Cell Division of Chlamydia trachomatis

    PubMed Central

    Abdelrahman, Yasser; Ouellette, Scot P.; Belland, Robert J.; Cox, John V.

    2016-01-01

    Bacterial cell division predominantly occurs by a highly conserved process, termed binary fission, that requires the bacterial homologue of tubulin, FtsZ. Other mechanisms of bacterial cell division that are independent of FtsZ are rare. Although the obligate intracellular human pathogen Chlamydia trachomatis, the leading bacterial cause of sexually transmitted infections and trachoma, lacks FtsZ, it has been assumed to divide by binary fission. We show here that Chlamydia divides by a polarized cell division process similar to the budding process of a subset of the Planctomycetes that also lack FtsZ. Prior to cell division, the major outer-membrane protein of Chlamydia is restricted to one pole of the cell, and the nascent daughter cell emerges from this pole by an asymmetric expansion of the membrane. Components of the chlamydial cell division machinery accumulate at the site of polar growth prior to the initiation of asymmetric membrane expansion and inhibitors that disrupt the polarity of C. trachomatis prevent cell division. The polarized cell division of C. trachomatis is the result of the unipolar growth and FtsZ-independent fission of this coccoid organism. This mechanism of cell division has not been documented in other human bacterial pathogens suggesting the potential for developing Chlamydia-specific therapeutic treatments. PMID:27505160

  20. Division of Environmental Health

    Science.gov Websites

    The Alaska Department of Environmental Conservation's Division of Environmental Health provides pesticide applicator certification and training, product registration, pesticide-use permits, and factsheets.

  1. Laboratory Astrophysics Division of The AAS (LAD)

    NASA Astrophysics Data System (ADS)

    Salama, Farid; Drake, R. P.; Federman, S. R.; Haxton, W. C.; Savin, D. W.

    2012-10-01

    The purpose of the Laboratory Astrophysics Division (LAD) is to advance our understanding of the Universe through the promotion of fundamental theoretical and experimental research into the underlying processes that drive the Cosmos. LAD represents all areas of astrophysics and planetary sciences. The first new AAS Division in more than 30 years, the LAD traces its history back to the recommendation from the scientific community via the White Paper from the 2006 NASA-sponsored Laboratory Astrophysics Workshop. This recommendation was endorsed by the Astronomy and Astrophysics Advisory Committee (AAAC), which advises the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the U.S. Department of Energy (DOE) on selected issues within the fields of astronomy and astrophysics that are of mutual interest and concern to the agencies. In January 2007, at the 209th AAS meeting, the AAS Council set up a Steering Committee to formulate Bylaws for a Working Group on Laboratory Astrophysics (WGLA). The AAS Council formally established the WGLA with a five-year mandate in May 2007, at the 210th AAS meeting. From 2008 through 2012, the WGLA annually sponsored Meetings in-a-Meeting at the AAS Summer Meetings. In May 2011, at the 218th AAS meeting, the AAS Council voted to convert the WGLA, at the end of its mandate, into a Division of the AAS and requested draft Bylaws from the Steering Committee. In January 2012, at the 219th AAS Meeting, the AAS Council formally approved the Bylaws and the creation of the LAD. The inaugural gathering and the first business meeting of the LAD were held at the 220th AAS meeting in Anchorage in June 2012. You can learn more about LAD by visiting its website at http://lad.aas.org/ and by subscribing to its mailing list.

  2. Laboratory Astrophysics Division of the AAS (LAD)

    NASA Technical Reports Server (NTRS)

    Salama, Farid; Drake, R. P.; Federman, S. R.; Haxton, W. C.; Savin, D. W.

    2012-01-01

    The purpose of the Laboratory Astrophysics Division (LAD) is to advance our understanding of the Universe through the promotion of fundamental theoretical and experimental research into the underlying processes that drive the Cosmos. LAD represents all areas of astrophysics and planetary sciences. The first new AAS Division in more than 30 years, the LAD traces its history back to the recommendation from the scientific community via the White Paper from the 2006 NASA-sponsored Laboratory Astrophysics Workshop. This recommendation was endorsed by the Astronomy and Astrophysics Advisory Committee (AAAC), which advises the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the U.S. Department of Energy (DOE) on selected issues within the fields of astronomy and astrophysics that are of mutual interest and concern to the agencies. In January 2007, at the 209th AAS meeting, the AAS Council set up a Steering Committee to formulate Bylaws for a Working Group on Laboratory Astrophysics (WGLA). The AAS Council formally established the WGLA with a five-year mandate in May 2007, at the 210th AAS meeting. From 2008 through 2012, the WGLA annually sponsored Meetings in-a-Meeting at the AAS Summer Meetings. In May 2011, at the 218th AAS meeting, the AAS Council voted to convert the WGLA, at the end of its mandate, into a Division of the AAS and requested draft Bylaws from the Steering Committee. In January 2012, at the 219th AAS Meeting, the AAS Council formally approved the Bylaws and the creation of the LAD. The inaugural gathering and the first business meeting of the LAD were held at the 220th AAS meeting in Anchorage in June 2012. You can learn more about LAD by visiting its website at http://lad.aas.org/ and by subscribing to its mailing list.

  3. Division of Forestry Information

    Science.gov Websites

    The Alaska Department of Natural Resources' Division of Forestry provides fire information, including a fire overview, burn permits, current fire information, the Alaskan Firewise Community program, and DNR media releases through the Public Information Center.

  4. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    NASA Astrophysics Data System (ADS)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems experience a disruptive moment with a variety of novel architectures and frameworks, without any clarity of which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The strategy proposed consists of representing the whole time-integration algorithm using only three basic algebraic operations: the sparse matrix-vector product, a linear combination of vectors, and the dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
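    The three kernels the model reduces everything to can be shown in miniature. The sketch below builds a toy explicit diffusion step from only an SpMV, an axpy-style linear combination, and a dot product; the 1D Laplacian and the time step are illustrative, not taken from the paper.

    ```python
    # The three algebraic kernels of the portable model: SpMV, linear
    # combination (axpy), and dot product. A toy explicit diffusion step
    # u <- u + dt * L u built from nothing else.

    def spmv(rows, x):
        """rows: one list of (col, val) pairs per matrix row (CSR-like)."""
        return [sum(v * x[j] for j, v in row) for row in rows]

    def axpy(a, x, y):
        return [a * xi + yi for xi, yi in zip(x, y)]

    def dot(x, y):
        return sum(xi * yi for xi, yi in zip(x, y))

    # 1D Laplacian with Dirichlet ends on 4 interior nodes (toy "mesh").
    L = [[(0, -2.0), (1, 1.0)],
         [(0, 1.0), (1, -2.0), (2, 1.0)],
         [(1, 1.0), (2, -2.0), (3, 1.0)],
         [(2, 1.0), (3, -2.0)]]

    u = [0.0, 1.0, 1.0, 0.0]
    for _ in range(10):
        u = axpy(0.1, spmv(L, u), u)  # u <- u + dt * L u, with dt = 0.1
    print(round(dot(u, u), 4))         # the solution "energy" decays
    ```

    Because the whole loop touches the mesh only through these three kernels, porting to a GPU reduces to reimplementing the kernels, which is the portability argument the paper makes.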

  5. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPU's. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  6. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer.

    PubMed

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-10-13

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ∼30k cores, producing ∼30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.
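    The reaction-field method replaces the long-range Coulomb tail with a dielectric-continuum correction that vanishes smoothly at the cutoff, which is what makes each interaction purely local and hence scalable. A sketch of the standard RF pair energy in reduced units (the 1/4πε₀ factor is folded into the charges; eps_rf = 78.5 is a typical water-like choice, not a value taken from this paper):

    ```python
    # Reaction-field (RF) pair energy in reduced units: bare Coulomb 1/r plus
    # a continuum correction k_rf * r^2 - c_rf chosen so the energy is exactly
    # zero at the cutoff r_cut. Constants follow the standard RF form.

    def rf_pair_energy(q1, q2, r, r_cut, eps_rf=78.5):
        k_rf = (eps_rf - 1.0) / ((2.0 * eps_rf + 1.0) * r_cut ** 3)
        c_rf = 1.0 / r_cut + k_rf * r_cut ** 2
        return q1 * q2 * (1.0 / r + k_rf * r ** 2 - c_rf)

    # The interaction goes to zero at the cutoff, so pairs beyond r_cut
    # contribute nothing and the computation stays local:
    print(abs(rf_pair_energy(1.0, -1.0, r=1.2, r_cut=1.2)) < 1e-12)  # True
    ```

    Particle Mesh Ewald, by contrast, needs global FFTs every step, which is the communication cost the RF approach avoids on tens of thousands of cores.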

  7. 49 CFR 173.244 - Bulk packaging for certain pyrophoric liquids (Division 4.2), dangerous when wet (Division 4.3...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 2 2014-10-01 2014-10-01 false Bulk packaging for certain pyrophoric liquids (Division 4.2), dangerous when wet (Division 4.3) materials, and poisonous liquids with inhalation hazards...), dangerous when wet (Division 4.3) materials, and poisonous liquids with inhalation hazards (Division 6.1...

  8. 49 CFR 173.244 - Bulk packaging for certain pyrophoric liquids (Division 4.2), dangerous when wet (Division 4.3...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 2 2011-10-01 2011-10-01 false Bulk packaging for certain pyrophoric liquids (Division 4.2), dangerous when wet (Division 4.3) materials, and poisonous liquids with inhalation hazards...), dangerous when wet (Division 4.3) materials, and poisonous liquids with inhalation hazards (Division 6.1...

  9. 49 CFR 173.244 - Bulk packaging for certain pyrophoric liquids (Division 4.2), dangerous when wet (Division 4.3...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 2 2013-10-01 2013-10-01 false Bulk packaging for certain pyrophoric liquids (Division 4.2), dangerous when wet (Division 4.3) materials, and poisonous liquids with inhalation hazards...), dangerous when wet (Division 4.3) materials, and poisonous liquids with inhalation hazards (Division 6.1...

  10. 49 CFR 173.244 - Bulk packaging for certain pyrophoric liquids (Division 4.2), dangerous when wet (Division 4.3...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 2 2012-10-01 2012-10-01 false Bulk packaging for certain pyrophoric liquids (Division 4.2), dangerous when wet (Division 4.3) materials, and poisonous liquids with inhalation hazards...), dangerous when wet (Division 4.3) materials, and poisonous liquids with inhalation hazards (Division 6.1...

  11. 49 CFR 173.244 - Bulk packaging for certain pyrophoric liquids (Division 4.2), dangerous when wet (Division 4.3...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Bulk packaging for certain pyrophoric liquids (Division 4.2), dangerous when wet (Division 4.3) materials, and poisonous liquids with inhalation hazards...), dangerous when wet (Division 4.3) materials, and poisonous liquids with inhalation hazards (Division 6.1...

  12. Computational Nanotechnology at NASA Ames Research Center, 1996

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) Simulate a hypothetical programmable molecular machine replicating itself and building other products. (2) Develop molecular manufacturing CAD (computer aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components. (3) Characterize nanotechnologically accessible materials of aerospace interest. Such materials may have excellent strength and thermal properties. (4) Collaborate with experimentalists. Current in-house activities include: (1) Development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes. Early work focuses on gears. (2) A design for high-density atomically precise memory. (3) Design of nanotechnology systems based on biology. (4) Characterization of diamondoid mechanosynthetic pathways. (5) Studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity. (6) Studies of entropic effects during self-assembly. (7) Characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) supercomputer division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996, held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at Caltech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.

  13. 49 CFR 1242.03 - Made by accounting divisions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Made by accounting divisions. 1242.03 Section 1242... accounting divisions. The separation shall be made by accounting divisions, where such divisions are maintained, and the aggregate of the accounting divisions reported for the quarter and for the year. ...

  14. A New Division of Labor: Meeting America’s Security Challenges Beyond Iraq

    DTIC Science & Technology

    2007-01-01

    on North Korea’s nuclear capabilities, see Graham and Kessler (2005). 20 A New Division of Labor most populous country, China has over the past... Seth Jones, Rollie Lal, Andrew Rathmell, and Anga Timilsina, America’s Role in Nation-Building: From Germany to Iraq, Santa Monica, Calif.: RAND...Graham, Bradley, and Glenn Kessler , “N. Korea Nuclear Advance is Cited,” Washington Post, April 29, 2005, p. 1. Bibliography 107 Headquarters

  15. 2017 T Division Lightning Talks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramsey, Marilyn Leann; Abeywardhana, Jayalath AMM; Adams, Colin Mackenzie

    All members of the T Division Community, students, staff members, group leaders, division management, and other interested individuals are invited to come and support the following student(s) as they present their Lightning Talks.

  16. Division: The Sleeping Dragon

    ERIC Educational Resources Information Center

    Watson, Anne

    2012-01-01

    Of the four mathematical operators, division seems not to sit easily with many learners. Division is often described as "the odd one out". Pupils develop coping strategies that enable them to "get away with it". So problems, misunderstandings, and misconceptions go unresolved, perhaps for a lifetime. Why is this? Is it a case of "out of sight out…

  17. Fostering Remainder Understanding in Fraction Division

    ERIC Educational Resources Information Center

    Zembat, Ismail O.

    2017-01-01

    Most students can follow this simple procedure for division of fractions: "Ours is not to reason why, just invert and multiply." But how many really understand what division of fractions means--especially fraction division with respect to the meaning of the remainder. The purpose of this article is to provide an instructional method as a…
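
    The distinction the article draws, between the whole-number quotient with its remainder and the "invert and multiply" answer, can be made concrete with Python's exact-fraction arithmetic. The serving-size scenario below is a made-up example, not taken from the article.

```python
from fractions import Fraction

# "How many 1/2-cup servings are in 2 3/4 cups, and how much is left over?"
# (A made-up example, not from the article.) divmod gives the whole-number
# quotient and the remainder, the two quantities the article distinguishes.
total = Fraction(11, 4)    # 2 3/4 cups
serving = Fraction(1, 2)

whole_servings, left_over = divmod(total, serving)
print(whole_servings, left_over)   # 5 1/4 -> 5 servings, 1/4 cup left over

# "Invert and multiply" instead answers a different question: how many
# servings in total, counting the fractional one. The 1/4 cup left over
# is *half* of a serving, which is where the "1/2" in 5 1/2 comes from.
print(total / serving)             # 11/2, i.e. 5 1/2 servings
```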

  18. Chemical Technology Division, Annual technical report, 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-03-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1991 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion and coal-fired magnetohydrodynamics; (3) methods for treatment of hazardous and mixed hazardous/radioactive waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources; chemistry of superconducting oxides and other materials of interest with technological application; interfacial processes of importance to corrosion science, catalysis, and high-temperature superconductivity; and the geochemical processes involved in water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  19. Podcast: The Electronic Crimes Division

    EPA Pesticide Factsheets

    Sept 26, 2016. Chris Lukas, the Special Agent in Charge of the Electronic Crimes Division within the OIG's Office of Investigations talks about computer forensics, cybercrime in the EPA and his division's role in criminal investigations.

  20. 2016 T Division Lightning Talks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramsey, Marilyn Leann; Adams, Luke Clyde; Ferre, Gregoire Robing

    These are the slides for all of the 2016 T Division lightning talks. There are 350 pages worth of slides from different presentations, all of which cover different topics within the theoretical division at Los Alamos National Laboratory (LANL).

  1. Positive Factors Influencing the Advancement of Women to the Role of Head Athletic Trainer in the National Collegiate Athletic Association Divisions II and III.

    PubMed

    Mazerolle, Stephanie M; Eason, Christianne M

    2016-07-01

    Research suggests that women do not pursue leadership positions in athletic training due to a variety of reasons, including family challenges, organizational constraints, and reluctance to hold the position. The literature has been focused on the National Collegiate Athletic Association Division I setting, limiting our full understanding. To examine factors that help women as they worked toward the position of head athletic trainer. Qualitative study. Divisions II and III. Seventy-seven women who were employed as head athletic trainers at the Division II or III level participated in our study. Participants were 38 ± 9 (range = 24-57) years old and had an average of 14 ± 8 (range = 1-33) years of athletic training experience. We conducted online interviews. Participants journaled their reflections to a series of open-ended questions pertaining to their experiences as head athletic trainers. Data were analyzed using a general inductive approach. Credibility was secured by peer review and researcher triangulation. Three organizational facilitators emerged from the data: workplace atmosphere, mentors, and past work experiences. These organizational factors were directly tied to aspects within the athletic trainer's employment setting that allowed her to enter the role. One individual-level facilitator was found: personal attributes that were described as helpful for women in transitioning to the role of the head athletic trainer. Participants discussed being leaders and persisting toward their career goals. Women working in Divisions II and III experience similar facilitators to assuming the role of head athletic trainer as those working in the Division I setting. Divisions II and III were viewed as more favorable for women seeking the role of head athletic trainer, but like those in the role in the Division I setting, women must have leadership skills.

  2. Bulletin of the Division of Electrical Engineering, 1987-1988, volume 3, number 2

    NASA Astrophysics Data System (ADS)

    1988-05-01

    A report is provided on the activities of the Division of Electrical Engineering of the National Research Council of Canada. The Division engages in the development of standards and test procedures, and undertakes applied research in support of Canadian industry, government departments, and universities. Technology transfer and collaborative research continue to grow in importance as focuses of Division activities. The Division comprises three sections: the Laboratory for Biomedical Engineering, the Laboratory for Electromagnetic and Power Engineering, and the Laboratory for Intelligent Systems. An agreement has been reached to commercially exploit the real-time multiprocessor operating system Harmony. The dielectrics group has made contract research agreements with industry from both Canada and the United States. The possibility of employing a new advanced laser vision camera, which can be mounted on a robot arm in a variety of industrial applications, is being explored. Potential short-term spinoffs related to intelligent wheelchairs are being sought as part of the new interlaboratory program which has as its long-term objective the development of a mobile robot for health care applications. A program in applied artificial intelligence has been established. Initiatives in collaboration with outside groups include proposals for major institutes in areas ranging from police and security research to rehabilitation research, programs to enhance Canadian industrial competence working with the Canadian Manufacturers' Association and other government departments, and approaches to the utilization of existing facilities which will make them more valuable without significant financial expenditures.

  3. Structures and Acoustics Division

    NASA Technical Reports Server (NTRS)

    Acquaviva, Cynthia S.

    1999-01-01

    The Structures and Acoustics Division of NASA Glenn Research Center is an international leader in rotating structures, mechanical components, fatigue and fracture, and structural aeroacoustics. Included are disciplines related to life prediction and reliability, nondestructive evaluation, and mechanical drive systems. Reported are a synopsis of the work and accomplishments reported by the Division during the 1996 calendar year. A bibliography containing 42 citations is provided.

  4. Lightning Talks 2015: Theoretical Division

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shlachter, Jack S.

    2015-11-25

    This document is a compilation of slides from a number of student presentations given to LANL Theoretical Division members. The subjects cover the range of activities of the Division, including plasma physics, environmental issues, materials research, bacterial resistance to antibiotics, and computational methods.

  5. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    NASA Astrophysics Data System (ADS)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high-degree of efficiency in the utilization of e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial in the understanding of the runtime behavior, to identify optimum model settings, and is an efficient way to distinguish potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but even more so important, when complex coupled component models are to be analysed. Here we want to present our experience from coupling, application tuning (e.g. 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  6. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes due to systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, new computer software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper.
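
    The workflow pattern the abstract describes (a manager farming out many independent invocations of a serial program to workers) can be sketched without MPI. The single-node, thread-pool analogue below is only an illustration of the pattern, not the mpiWrapper tool itself, and `run_subtask` is a hypothetical stand-in for launching one external program.

```python
# Minimal task-farm sketch: run many independent invocations of a serial
# analysis concurrently. mpiWrapper itself distributes subtasks across
# supercomputer nodes via MPI; this single-node, thread-pool analogue only
# illustrates the workflow pattern. Threads suffice when each subtask would
# shell out to an external program (e.g. with subprocess.run).
from concurrent.futures import ThreadPoolExecutor

def run_subtask(task_id):
    # Hypothetical stand-in for launching one non-parallel program on one input.
    return task_id, sum(i * i for i in range(1000 + task_id))

def task_farm(n_tasks, max_workers=4):
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() hands each task_id to an idle worker and yields results in order
        for task_id, value in pool.map(run_subtask, range(n_tasks)):
            results[task_id] = value
    return results

print(len(task_farm(8)))  # 8 subtasks completed
```

    A production version would also need the failure handling the paper mentions, e.g. resubmitting a subtask when its worker (node) dies.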

  7. Early Childhood Special Education and Early Intervention Personnel Preparation Standards of the Division for Early Childhood: Field Validation

    ERIC Educational Resources Information Center

    Cochran, Deborah C.; Gallagher, Peggy A.; Stayton, Vicki D.; Dinnebeil, Laurie A.; Lifter, Karin; Chandler, Lynette K.; Christensen, Kimberly A.

    2012-01-01

    Results of the field validation survey of the revised initial and new advanced Council for Exceptional Children (CEC) Division for Early Childhood (DEC) early childhood special education (ECSE)/early intervention (EI) personnel standards are presented. Personnel standards are used as part of educational accountability systems and in teacher…

  8. Best Practices Case Study: Pulte Homes and Communities of Del Webb, Las Vegas Division

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-10-01

    Case study of Pulte Homes' Las Vegas Division, which certified nearly 1,200 homes under the DOE Builders Challenge between 2008 and 2012. All of the homes by Las Vegas' biggest builder achieved HERS scores below 70, and many floor plans reached the mid-50s, with ducts located in sealed attics insulated along the roof line, advanced framing, and extra attention to air sealing.

  9. Phase division multiplexed EIT for enhanced temporal resolution.

    PubMed

    Dowrick, T; Holder, D

    2018-03-29

    The most commonly used EIT paradigm (time division multiplexing) limits the temporal resolution of impedance images due to the need to switch between injection electrodes. Advances have previously been made using frequency division multiplexing (FDM) to increase temporal resolution, but in cases where a fixed range of frequencies is available, such as imaging fast neural activity, an upper limit is placed on the total number of simultaneous injections. The use of phase division multiplexing (PDM), where multiple out-of-phase signals can be injected at each frequency, is investigated to increase temporal resolution. TDM, FDM, and PDM were compared in head tank experiments to assess transfer impedance measurements and spatial resolution across the three techniques. A resistor phantom paradigm was established to investigate the imaging of one-off impedance changes, of magnitude 1% and with durations as low as 500 µs (similar to those seen in nerve bundles), using both PDM and TDM approaches. In head tank experiments, a strong correlation (r > 0.85 and p < 0.001) was present between the three sets of measured transfer impedances, and no statistically significant difference was found in reconstructed image quality. PDM was able to image impedance changes down to 500 µs in the phantom experiments, while the minimum duration imaged using TDM was 5 ms. PDM offers a possible solution to the imaging of fast-moving impedance changes (such as in nerves), where the use of triggering or coherent averaging is not possible. The temporal resolution represents an order-of-magnitude improvement over the TDM approach, and the approach addresses the limited spatial resolution of FDM by increasing the number of simultaneous EIT injections.
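
    The core idea of PDM (two simultaneous injections at the same frequency, 90 degrees out of phase, separated afterwards by phase-sensitive demodulation) can be sketched numerically. The carrier frequency, amplitudes, and sample counts below are made-up values for illustration, not the paper's experimental parameters.

```python
import math

# Two simultaneous injections at the SAME frequency, 90 degrees out of
# phase (the PDM idea); frequency and amplitudes are made-up values.
f, n, dt = 1000.0, 20000, 1e-6          # 1 kHz carriers, 20 ms of samples
a_i, a_q = 0.7, 0.3                     # unknown amplitudes to recover

t = [k * dt for k in range(n)]
v = [a_i * math.sin(2*math.pi*f*tk) + a_q * math.cos(2*math.pi*f*tk) for tk in t]

# Phase-sensitive (lock-in) demodulation: multiply the measured signal by
# each reference carrier and average; the orthogonal carrier averages to
# zero over whole cycles, so the two injections separate cleanly.
rec_i = 2.0 * sum(vk * math.sin(2*math.pi*f*tk) for vk, tk in zip(v, t)) / n
rec_q = 2.0 * sum(vk * math.cos(2*math.pi*f*tk) for vk, tk in zip(v, t)) / n
print(round(rec_i, 3), round(rec_q, 3))  # recovers 0.7 and 0.3
```

    Because both carriers occupy one frequency bin, this doubles the number of simultaneous injections available within a fixed frequency band, which is the advantage over FDM the abstract describes.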

  10. 75 FR 15443 - Advancing the Development of Diagnostic Tests and Biomarkers for Tuberculosis; Public Workshop...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-29

    ...] Advancing the Development of Diagnostic Tests and Biomarkers for Tuberculosis; Public Workshop; Request for... workshop entitled ``Advancing the Development of Diagnostic Tests and Biomarkers for Tuberculosis (TB... Tuberculosis in the United States, Committee on the Elimination of Tuberculosis in the United States, Division...

  11. Division of energy biosciences: Annual report and summaries of FY 1995 activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-04-01

    The mission of the Division of Energy Biosciences is to support research that advances the fundamental knowledge necessary for the future development of biotechnologies related to the Department of Energy's mission. The departmental civilian objectives include effective and efficient energy production, energy conservation, environmental restoration, and waste management. The Energy Biosciences program emphasizes research in the microbiological and plant sciences, as these understudied areas offer numerous scientific opportunities to dramatically influence environmentally sensible energy production and conservation. The research supported is focused on the basic mechanisms affecting plant productivity, conversion of biomass and other organic materials into fuels and chemicals by microbial systems, and the ability of biological systems to replace energy-intensive or pollutant-producing processes. The Division also addresses the increasing number of new opportunities arising at the interface of biology with other basic energy-related sciences such as biosynthesis of novel materials and the influence of soil organisms on geological processes.

  12. Simulating the Dynamics of Earth's Core: Using NCCS Supercomputers Speeds Calculations

    NASA Technical Reports Server (NTRS)

    2002-01-01

    If one wanted to study Earth's core directly, one would have to drill through about 1,800 miles of solid rock to reach the liquid core, keeping the tunnel from collapsing under pressures of more than 1 million atmospheres, and then sink an instrument package to the bottom that could operate at 8,000 F with 10,000 tons of force crushing every square inch of its surface. Even then, several of these tunnels would probably be needed to obtain enough data. Faced with difficult or impossible tasks such as these, scientists use other available sources of information - such as seismology, mineralogy, geomagnetism, geodesy, and, above all, physical principles - to derive a model of the core and study it by running computer simulations. One NASA researcher is doing just that on NCCS computers. Physicist and applied mathematician Weijia Kuang, of the Space Geodesy Branch, and his collaborators at Goddard have what he calls the "second-ever" working, usable, self-consistent, fully dynamic, three-dimensional geodynamic model (see "The Geodynamic Theory"). Kuang runs his model simulations on the supercomputers at the NCCS. He and Jeremy Bloxham, of Harvard University, developed the original version, written in Fortran 77, in 1996.

  13. Hurricane Intensity Forecasts with a Global Mesoscale Model on the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Atlas, Robert

    2006-01-01

    It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. The increasing capabilities of high-end computers (e.g., the NASA Columbia Supercomputer) have changed this. In 2004, the finite-volume General Circulation Model at a 1/4 degree resolution, doubling the resolution used by most operational NWP centers at that time, was implemented and run to obtain promising landfall predictions for major hurricanes (e.g., Charley, Frances, Ivan, and Jeanne). In 2005, we successfully implemented the 1/8 degree version and demonstrated its performance on intensity forecasts with hurricane Katrina (2005). It is found that the 1/8 degree model is capable of simulating the radius of maximum wind and near-eye wind structure, thereby yielding promising intensity forecasts. In this study, we will further evaluate the model's performance on intensity forecasts of hurricanes Ivan, Jeanne, and Karl in 2004. Suggestions for further model development will be made at the end.

  14. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helland, B.; Summers, B.G.

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  15. Anticrossproducts and cross divisions.

    PubMed

    de Leva, Paolo

    2008-01-01

    This paper defines, in the context of conventional vector algebra, the concept of anticrossproduct and a family of simple operations called cross or vector divisions. It is impossible to solve for a or b the equation a × b = c, where a and b are three-dimensional space vectors, and a × b is their cross product. However, the problem becomes solvable if some "knowledge about the unknown" (a or b) is available, consisting of one of its components, or the angle it forms with the other operand of the cross product. Independently of the selected reference frame orientation, the known component of a may be parallel to b, or vice versa. The cross divisions provide a compact and insightful symbolic representation of a family of algorithms specifically designed to solve problems of such kind. A generalized algorithm was also defined, incorporating the rules for selecting the appropriate kind of cross division, based on the type of input data. Four examples of practical application were provided, including the computation of the point of application of a force and the angular velocity of a rigid body. The definition and geometrical interpretation of the cross divisions stemmed from the concept of anticrossproduct. The "anticrossproducts of a × b" were defined as the infinitely many vectors x(i) such that x(i) × b = a × b.
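
    The kind of problem the paper addresses can be illustrated numerically: a × b = c alone leaves b underdetermined, but adding one piece of "knowledge about the unknown" (here, b's scalar component along a) fixes it. The reconstruction below uses the standard identity b = (c × a)/|a|² + (b·â)â as an illustration; it is not necessarily the paper's own algorithm or notation, and the vectors are made-up values.

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# a x b = c cannot be solved for b on its own: adding any multiple of a to b
# leaves c unchanged. Knowing b's scalar component along a removes that
# ambiguity via the identity b = (c x a)/|a|^2 + (b . a_hat) a_hat, which
# works because c = a x b is perpendicular to a.
a = (1.0, 2.0, 2.0)
b = (3.0, -1.0, 0.5)                       # ground truth, to be recovered
c = cross(a, b)
b_par = dot(a, b) / math.sqrt(dot(a, a))   # the known component along a

a_hat = tuple(ai / math.sqrt(dot(a, a)) for ai in a)
b_rec = tuple(ca / dot(a, a) + b_par * ah for ca, ah in zip(cross(c, a), a_hat))
print(all(abs(x - y) < 1e-12 for x, y in zip(b_rec, b)))  # True
```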

  16. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas

    2009-01-01

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to 30k cores, producing 30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.

  17. Technological Education for the Rural Community (TERC) Project: Technical Mathematics for the Advanced Manufacturing Technician

    ERIC Educational Resources Information Center

    McCormack, Sherry L.; Zieman, Stuart

    2017-01-01

    Hopkinsville Community College's Technological Education for the Rural Community (TERC) project is funded through the National Science Foundation Advanced Technological Education (NSF ATE) division. It is advancing innovative educational pathways for technological education promoted at the community college level serving rural communities to fill…

  18. Structures and Acoustics Division

    NASA Technical Reports Server (NTRS)

    Acquaviva, Cynthia S.

    2001-01-01

    The Structures and Acoustics Division of the NASA Glenn Research Center is an international leader in rotating structures, mechanical components, fatigue and fracture, and structural aeroacoustics. Included in this report are disciplines related to life prediction and reliability, nondestructive evaluation, and mechanical drive systems. Reported is a synopsis of the work and accomplishments completed by the Division during the 1997, 1998, and 1999 calendar years. A bibliography containing 93 citations is provided.

  19. Conceptual assessment tool for advanced undergraduate electrodynamics

    NASA Astrophysics Data System (ADS)

    Baily, Charles; Ryan, Qing X.; Astolfi, Cecilia; Pollock, Steven J.

    2017-12-01

    As part of ongoing investigations into student learning in advanced undergraduate courses, we have developed a conceptual assessment tool for upper-division electrodynamics (E&M II): the Colorado UppeR-division ElectrodyNamics Test (CURrENT). This is a free response, postinstruction diagnostic with 6 multipart questions, an optional 3-question preinstruction test, and accompanying grading rubrics. The instrument's development was guided by faculty-consensus learning goals and research into common student difficulties. It can be used to gauge the effectiveness of transformed pedagogy, and to gain insights into student thinking in the covered topic areas. We present baseline data representing 500 students across 9 institutions, along with validity, reliability, and discrimination measures of the instrument and scoring rubric.

  20. Positive Factors Influencing the Advancement of Women to the Role of Head Athletic Trainer in the National Collegiate Athletic Association Divisions II and III

    PubMed Central

    Mazerolle, Stephanie M.; Eason, Christianne M.

    2016-01-01

    Context:  Research suggests that women do not pursue leadership positions in athletic training due to a variety of reasons, including family challenges, organizational constraints, and reluctance to hold the position. The literature has been focused on the National Collegiate Athletic Association Division I setting, limiting our full understanding. Objective:  To examine factors that help women as they worked toward the position of head athletic trainer. Design:  Qualitative study. Setting:  Divisions II and III. Patients or Other Participants:  Seventy-seven women who were employed as head athletic trainers at the Division II or III level participated in our study. Participants were 38 ± 9 (range = 24−57) years old and had an average of 14 ± 8 (range = 1−33) years of athletic training experience. Data Collection and Analysis:  We conducted online interviews. Participants journaled their reflections to a series of open-ended questions pertaining to their experiences as head athletic trainers. Data were analyzed using a general inductive approach. Credibility was secured by peer review and researcher triangulation. Results:  Three organizational facilitators emerged from the data: workplace atmosphere, mentors, and past work experiences. These organizational factors were directly tied to aspects within the athletic trainer's employment setting that allowed her to enter the role. One individual-level facilitator was found: personal attributes that were described as helpful for women in transitioning to the role of the head athletic trainer. Participants discussed being leaders and persisting toward their career goals. Conclusions:  Women working in Divisions II and III experience similar facilitators to assuming the role of head athletic trainer as those working in the Division I setting. Divisions II and III were viewed as more favorable for women seeking the role of head athletic trainer, but like those in the role in the Division I setting

  1. Physics division annual report 2006.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glover, J.; Physics

    2008-02-28

This report highlights the activities of the Physics Division of Argonne National Laboratory in 2006. The Division's programs include the operation as a national user facility of ATLAS, the Argonne Tandem Linear Accelerator System, research in nuclear structure and reactions, nuclear astrophysics, nuclear theory, investigations in medium-energy nuclear physics, as well as research and development in accelerator technology. The mission of nuclear physics is to understand the origin, evolution, and structure of baryonic matter in the universe--the core of matter, the fuel of stars, and the basic constituent of life itself. The Division's research focuses on innovative new ways to address this mission.

  2. Shugoshins function as a guardian for chromosomal stability in nuclear division.

    PubMed

    Yao, Yixin; Dai, Wei

    2012-07-15

    Accurate chromosome segregation during mitosis and meiosis is regulated and secured by several distinctly different yet intricately connected regulatory mechanisms. As chromosomal instability is a hallmark of a majority of tumors as well as a cause of infertility for germ cells, extensive research in the past has focused on the identification and characterization of molecular components that are crucial for faithful chromosome segregation during cell division. Shugoshins, including Sgo1 and Sgo2, are evolutionarily conserved proteins that function to protect sister chromatid cohesion, thus ensuring chromosomal stability during mitosis and meiosis in eukaryotes. Recent studies reveal that Shugoshins in higher animals play an essential role not only in protecting centromeric cohesion of sister chromatids and assisting bi-orientation attachment at the kinetochores, but also in safeguarding centriole cohesion/engagement during early mitosis. Many molecular components have been identified that play essential roles in modulating/mediating Sgo functions. This review primarily summarizes recent advances on the mechanisms of action of Shugoshins in suppressing chromosomal instability during nuclear division in eukaryotic organisms.

  3. The use of supercomputer modelling of high-temperature failure in pipe weldments to optimize weld and heat affected zone materials property selection

    NASA Astrophysics Data System (ADS)

    Wang, Z. P.; Hayhurst, D. R.

    1994-07-01

The creep deformation and damage evolution in a pipe weldment has been modeled using the finite-element continuum damage mechanics (CDM) method. The finite-element CDM computer program DAMAGE XX has been adapted to run with increased speed on a Cray XMP/416 supercomputer. Run times are sufficiently short (20 min) to permit many parametric studies to be carried out on vessel lifetimes for different weld and heat affected zone (HAZ) materials. Finite-element mesh sensitivity was studied first in order to select a mesh capable of correctly predicting experimentally observed results using the least possible computer time. A study was then made of the effect on the lifetime of a butt-welded vessel of each of the commonly measured material parameters for the weld and HAZ materials. Forty different ferritic steel welded vessels were analyzed for a constant internal pressure of 45.5 MPa at a temperature of 565 °C, each vessel having the same parent pipe material but different weld and HAZ materials. A lifetime improvement of 30% has been demonstrated over that obtained for the initial materials property data. A methodology for weldment design has been established which uses supercomputer-based CDM analysis techniques; it is quick to use, provides accurate results, and is a viable design tool.
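
    In CDM creep analyses of this kind, a scalar damage variable ω grows from 0 (undamaged) to 1 (rupture). A minimal sketch of the simplest Kachanov-type rate law, with purely illustrative constants (this is not the DAMAGE XX constitutive model, which couples damage to multiaxial stress redistribution):

    ```python
    def rupture_time(A, phi):
        # Analytic rupture time for d(omega)/dt = A / (1 - omega)**phi:
        # integrating gives (1 - omega)**(phi + 1) = 1 - (phi + 1)*A*t,
        # so omega reaches 1 at t_R = 1 / ((phi + 1) * A).
        return 1.0 / ((phi + 1) * A)

    def integrate_damage(A, phi, dt=1e-4, omega_stop=0.999):
        # Explicit Euler integration of the damage-rate law to near-rupture,
        # the kind of time-marching a finite-element CDM code performs per
        # integration point.
        omega, t = 0.0, 0.0
        while omega < omega_stop:
            omega += dt * A / (1.0 - omega) ** phi
            t += dt
        return t

    A, phi = 2.0, 3.0  # illustrative constants, not measured weld/HAZ data
    print(rupture_time(A, phi), round(integrate_damage(A, phi), 3))
    ```

    The numerical lifetime should closely match the closed-form rupture time, which is the basic check one runs before trusting the integrator on realistic multi-material meshes.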

  4. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
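
    A minimal 1-D sketch of the GP posterior mean that underlies such interpolation, using the naive O(N^3) linear solve (the record's contribution is an exact O(N^(3/2)) variant exploiting the sensor's grid structure); the kernel choice, lengthscale, and noise level are illustrative assumptions:

    ```python
    import numpy as np

    def rbf_kernel(x1, x2, ell=1.0, sigma_f=1.0):
        # Squared-exponential covariance between two sets of 1-D inputs.
        d = x1[:, None] - x2[None, :]
        return sigma_f**2 * np.exp(-0.5 * (d / ell) ** 2)

    def gp_interpolate(x_obs, y_obs, x_new, noise=0.1):
        # GP posterior mean: K_* (K + sigma_n^2 I)^-1 y.  The noise term
        # is what lets the same machinery denoise as it interpolates.
        K = rbf_kernel(x_obs, x_obs) + noise**2 * np.eye(len(x_obs))
        K_star = rbf_kernel(x_new, x_obs)
        return K_star @ np.linalg.solve(K, y_obs)

    # Recover a missing sample of a smooth signal (synthetic data):
    x_obs = np.linspace(0.0, 2.0 * np.pi, 16)
    y_obs = np.sin(x_obs)
    y_hat = gp_interpolate(x_obs, y_obs, np.array([np.pi / 2]))
    print(y_hat)  # close to sin(pi/2) = 1
    ```

    In the polarimeter setting the inputs are 2-D pixel coordinates and the observations are the samples behind each of the four filter orientations, but the posterior-mean formula is the same.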

  5. Fuel management in the Subtropical and Savanna divisions

    Treesearch

    Kenneth W. Outcalt

    2012-01-01

    The Subtropical Division (230) and Savanna Division (410), both based on Bailey’s (1996) ecoregions, are found in the Southern United States (http://www.na.fs.fed.us/fire/cwedocs/map%20new_divisions.pdf). The Subtropical Division occupies the southern Atlantic and Gulf coastal areas. It is characterized by a humid subtropical climate with hot humid summers (chapter 3...

  6. Concerted control of Escherichia coli cell division

    PubMed Central

    Osella, Matteo; Nugent, Eileen; Cosentino Lagomarsino, Marco

    2014-01-01

    The coordination of cell growth and division is a long-standing problem in biology. Focusing on Escherichia coli in steady growth, we quantify cell division control using a stochastic model, by inferring the division rate as a function of the observable parameters from large empirical datasets of dividing cells. We find that (i) cells have mechanisms to control their size, (ii) size control is effected by changes in the doubling time, rather than in the single-cell elongation rate, (iii) the division rate increases steeply with cell size for small cells, and saturates for larger cells. Importantly, (iv) the current size is not the only variable controlling cell division, but the time spent in the cell cycle appears to play a role, and (v) common tests of cell size control may fail when such concerted control is in place. Our analysis illustrates the mechanisms of cell division control in E. coli. The phenomenological framework presented is sufficiently general to be widely applicable and opens the way for rigorous tests of molecular cell-cycle models. PMID:24550446
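
    The inferred division-rate shape can be illustrated with a toy stochastic simulation: cells elongate exponentially and divide with a hazard that rises steeply with size and saturates. All constants below are illustrative assumptions, not the rates inferred from the paper's datasets:

    ```python
    import math, random

    def simulate_divisions(n=2000, dt=0.01, seed=1):
        # Hill-type division hazard: steep rise with size for small cells,
        # saturation for large ones (shape only; constants are made up).
        def hazard(s):
            return 5.0 * s**6 / (s**6 + 2.0**6)

        random.seed(seed)
        s, birth_sizes = 1.0, []
        for _ in range(n):
            while True:
                s *= math.exp(0.5 * dt)          # exponential single-cell growth
                if random.random() < hazard(s) * dt:
                    s /= 2.0                      # symmetric division
                    birth_sizes.append(s)
                    break
        return birth_sizes

    sizes = simulate_divisions()
    mean = sum(sizes) / len(sizes)
    cv = (sum((x - mean) ** 2 for x in sizes) / len(sizes)) ** 0.5 / mean
    print(round(mean, 2), round(cv, 2))
    ```

    The size-dependent hazard alone is enough to produce size homeostasis (a stable mean birth size with modest variability), which is the phenomenon the stochastic-model inference quantifies.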

  7. Berkeley Lab - Materials Sciences Division

    Science.gov Websites

In the Materials Sciences Division, our mission is to do world-class science in a safe environment. We proudly support a strong safety culture.

  8. An examination of the stretching practices of Division I and Division III college football programs in the midwestern United States.

    PubMed

    Judge, Lawrence W; Craig, Bruce; Baudendistal, Steve; Bodey, Kimberly J

    2009-07-01

Research supports the use of preactivity warm-up and stretching, and the purpose of this study was to determine whether college football programs follow these guidelines. Questionnaires designed to gather demographic, professional, and educational information, as well as specific pre- and postactivity practices, were distributed via e-mail to midwestern collegiate programs from NCAA Division I and III conferences. Twenty-three male coaches (12 from Division IA schools and 11 from Division III schools) participated in the study. Division I schools employed certified strength coaches (CSCS; 100%), whereas Division III schools used mainly strength coordinators (73%), with only 25% CSCS. All programs used preactivity warm-up, with the majority employing 2-5 minutes of sport-specific jogging/running drills. Prestretching (5-10 minutes) was performed in 19 programs (91%), with 2 (9%) performing no prestretching. Thirteen respondents used a combination of static/proprioceptive neuromuscular facilitation/ballistic and dynamic flexibility, 5 used only dynamic flexibility, and 1 used only static stretching. All 12 Division I coaches used stretching, whereas only 9 of the 11 Division III coaches did (p = 0.22). The results indicate that younger coaches did not use prestretching (p = 0.30). The majority of the coaches indicated that they did use poststretching, with 11 of the 12 Division I coaches using stretching, whereas only 5 of the 11 Division III coaches used stretching postactivity (p = 0.027). Divisional results show that the majority of Division I coaches use static-style stretching (p = 0.049). The results of this study indicate that divisional status, age, and certification may influence how well research guidelines are followed. Further research is needed to delineate how these factors affect coaching decisions.

  9. Overview of the Applied Aerodynamics Division

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A major reorganization of the Aeronautics Directorate of the Langley Research Center occurred in early 1989. As a result of this reorganization, the scope of research in the Applied Aeronautics Division is now quite different than that in the past. An overview of the current organization, mission, and facilities of this division is presented. A summary of current research programs and sample highlights of recent research are also presented. This is intended to provide a general view of the scope and capabilities of the division.

  10. Gravity and the orientation of cell division

    NASA Technical Reports Server (NTRS)

    Helmstetter, C. E.

    1997-01-01

    A novel culture system for mammalian cells was used to investigate division orientations in populations of Chinese hamster ovary cells and the influence of gravity on the positioning of division axes. The cells were tethered to adhesive sites, smaller in diameter than a newborn cell, distributed over a nonadhesive substrate positioned vertically. The cells grew and divided while attached to the sites, and the angles and directions of elongation during anaphase, projected in the vertical plane, were found to be random with respect to gravity. However, consecutive divisions of individual cells were generally along the same axis or at 90 degrees to the previous division, with equal probability. Thus, successive divisions were restricted to orthogonal planes, but the choice of plane appeared to be random, unlike the ordered sequence of cleavage orientations seen during early embryo development.
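
    Whether projected division angles are "random with respect to gravity" is a circular-statistics question. A minimal sketch of the Rayleigh uniformity test on synthetic angles (not the study's data; for axial data such as division axes one would first double the angles, which this directional sketch omits):

    ```python
    import math, random

    def rayleigh_p(angles_deg):
        # Rayleigh test for circular uniformity: a small p-value rejects
        # the hypothesis that the angles are uniformly distributed.
        n = len(angles_deg)
        c = sum(math.cos(math.radians(a)) for a in angles_deg)
        s = sum(math.sin(math.radians(a)) for a in angles_deg)
        r = math.sqrt(c * c + s * s) / n     # mean resultant length
        return math.exp(-n * r * r)          # first-order p approximation

    random.seed(0)
    uniform = [random.uniform(0, 360) for _ in range(200)]      # gravity-random
    aligned = [random.gauss(90, 10) % 360 for _ in range(200)]  # gravity-aligned
    print(rayleigh_p(uniform), rayleigh_p(aligned))
    ```

    A population aligned with gravity would give a vanishingly small p-value, while the random-orientation outcome reported above corresponds to a non-significant one.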

  11. Advanced Very High Resolution Radiometer - AVHRR - NOAA Satellite

    Science.gov Websites

NOAA Satellite Information System (NOAASIS), Office of Satellite and Product Operations. The Advanced Very High Resolution Radiometer (AVHRR); see the USGS AVHRR site for data access. Satellite Products and Services Division, Direct Services Branch.

  12. History of the Fluids Engineering Division

    DOE PAGES

    Cooper, Paul; Martin, C. Samuel; O'Hern, Timothy J.

    2016-08-03

    The 90th Anniversary of the Fluids Engineering Division (FED) of ASME will be celebrated on July 10–14, 2016 in Washington, DC. The venue is ASME's Summer Heat Transfer Conference (SHTC), Fluids Engineering Division Summer Meeting (FEDSM), and International Conference on Nanochannels and Microchannels (ICNMM). The occasion is an opportune time to celebrate and reflect on the origin of FED and its predecessor—the Hydraulic Division (HYD), which existed from 1926–1963. Furthermore, the FED Executive Committee decided that it would be appropriate to publish concurrently a history of the HYD/FED.

  14. About DCP | Division of Cancer Prevention

    Cancer.gov

The Division of Cancer Prevention (DCP) is the division of the National Cancer Institute (NCI) devoted to cancer prevention research. DCP provides funding and administrative support to clinical and laboratory researchers, community and multidisciplinary teams, and collaborative scientific networks.

15. Will Moore's law be sufficient?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik P.

    2004-07-01

It seems well understood that supercomputer simulation is an enabler for scientific discoveries, weapons, and other activities of value to society. It also seems widely believed that Moore's Law will make progressively more powerful supercomputers over time and thus enable more of these contributions. This paper seeks to add detail to these arguments, revealing them to be generally correct but not a smooth and effortless progression. This paper will review some key problems that can be solved with supercomputer simulation, showing that more powerful supercomputers will be useful up to a very high yet finite limit of around 10^21 FLOPS (1 Zettaflops). The review will also show the basic nature of these extreme problems. This paper will review work by others showing that the theoretical maximum supercomputer power is very high indeed, but will explain how a straightforward extrapolation of Moore's Law will lead to technological maturity in a few decades. The power of a supercomputer at the maturity of Moore's Law will be very high by today's standards at 10^16-10^19 FLOPS (100 Petaflops to 10 Exaflops), depending on architecture, but distinctly below the level required for the most ambitious applications. Having established that Moore's Law will not be the last word in supercomputing, this paper will explore the nearer-term issue of what a supercomputer will look like at the maturity of Moore's Law. Our approach will quantify the maximum performance as permitted by the laws of physics for extension of current technology and then find a design that approaches this limit closely. We study a 'multi-architecture' for supercomputers that combines a microprocessor with other 'advanced' concepts and find it can reach the limits as well. This approach should be quite viable in the future because the microprocessor would provide compatibility with existing codes and programming styles while the 'advanced' features would provide a boost to the limits of performance.
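
    The gap between a 10^21 FLOPS requirement and Moore's-Law growth is simple doubling-time arithmetic. A sketch, assuming a 1.5-year doubling period and a ~100 Tflop/s starting point (both illustrative choices, not figures from the paper):

    ```python
    import math

    def years_to_reach(target_flops, start_flops, doubling_years=1.5):
        # Number of doublings needed, times the assumed doubling period.
        return math.log2(target_flops / start_flops) * doubling_years

    # From ~100 Tflop/s (1e14) to the 1 Zettaflops (1e21) limit cited
    # for the most ambitious simulation problems:
    print(round(years_to_reach(1e21, 1e14), 1))  # about 35 years
    ```

    Seven orders of magnitude is roughly 23 doublings, so even uninterrupted exponential growth leaves the most ambitious applications decades away; the paper's point is that the growth will in fact flatten before then.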

  16. Chemical Technology Division annual technical report, 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-05-01

Highlights of the Chemical Technology (CMT) Division's activities during 1990 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for coal-fired magnetohydrodynamics and fluidized-bed combustion; (3) methods for recovery of energy from municipal waste and techniques for treatment of hazardous organic waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for a high-level waste repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams, concentrating plutonium solids in pyrochemical residues by aqueous biphase extraction, and treating natural and process waters contaminated by volatile organic compounds; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of fluid catalysis for converting small molecules to desired products; materials chemistry for superconducting oxides and associated and ordered solutions at high temperatures; interfacial processes of importance to corrosion science, high-temperature superconductivity, and catalysis; and the geochemical processes responsible for trace-element migration within the earth's crust. The Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the scientific and engineering programs at Argonne National Laboratory (ANL). 66 refs., 69 figs., 6 tabs.

  17. Molecular coordination of Staphylococcus aureus cell division

    PubMed Central

    Cotterell, Bryony E; Walther, Christa G; Fenn, Samuel J; Grein, Fabian; Wollman, Adam JM; Leake, Mark C; Olivier, Nicolas; Cadby, Ashley; Mesnage, Stéphane; Jones, Simon

    2018-01-01

    The bacterial cell wall is essential for viability, but despite its ability to withstand internal turgor must remain dynamic to permit growth and division. Peptidoglycan is the major cell wall structural polymer, whose synthesis requires multiple interacting components. The human pathogen Staphylococcus aureus is a prolate spheroid that divides in three orthogonal planes. Here, we have integrated cellular morphology during division with molecular level resolution imaging of peptidoglycan synthesis and the components responsible. Synthesis occurs across the developing septal surface in a diffuse pattern, a necessity of the observed septal geometry, that is matched by variegated division component distribution. Synthesis continues after septal annulus completion, where the core division component FtsZ remains. The novel molecular level information requires re-evaluation of the growth and division processes leading to a new conceptual model, whereby the cell cycle is expedited by a set of functionally connected but not regularly distributed components. PMID:29465397

  18. Physics Division progress report, January 1, 1984-September 30, 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, W.E.

    1987-10-01

This report provides brief accounts of significant progress in development activities and research results achieved by Physics Division personnel during the period January 1, 1984, through September 30, 1986. These efforts are representative of the three main areas of experimental research and development in which the Physics Division serves Los Alamos National Laboratory's and the Nation's needs in defense and basic sciences: (1) defense physics, including the development of diagnostic methods for weapons tests, weapon-related high-energy-density physics, and programs supporting the Strategic Defense Initiative; (2) laser physics and applications, especially to high-density plasmas; and (3) fundamental research in nuclear and particle physics, condensed-matter physics, and biophysics. Throughout the report, emphasis is placed on the design, construction, and application of a variety of advanced, often unique, instruments and instrument systems that maintain the Division's position at the leading edge of research and development in the specific fields germane to its mission. A sampling of experimental systems of particular interest would include the relativistic electron-beam accelerator and its applications to high-energy-density plasmas; pulsed-power facilities; directed energy weapon devices such as free-electron lasers and neutral-particle-beam accelerators; high-intensity ultraviolet and x-ray beam lines at the National Synchrotron Light Source (at Brookhaven National Laboratory); the Aurora KrF ultraviolet laser system for projected use as an inertial fusion driver; the antiproton physics facility at CERN; and several beam developments at the Los Alamos Meson Physics Facility for studying nuclear, condensed-matter, and biological physics, highlighted by progress in establishing the Los Alamos Neutron Scattering Center.

  19. GSFC Heliophysics Science Division 2008 Science Highlights

    NASA Technical Reports Server (NTRS)

    Gilbert, Holly R.; Strong, Keith T.; Saba, Julia L. R.; Firestone, Elaine R.

    2009-01-01

This report is intended to record and communicate to our colleagues, stakeholders, and the public at large the heliophysics scientific and flight program achievements and milestones for 2008 to which NASA Goddard Space Flight Center's Heliophysics Science Division (HSD) made important contributions. HSD comprises approximately 261 scientists, technologists, and administrative personnel dedicated to the goal of advancing our knowledge and understanding of the Sun and the wide variety of domains that its variability influences. Our activities include: leading science investigations involving flight hardware, theory, and data analysis and modeling that will answer the strategic questions posed in the Heliophysics Roadmap; leading the development of new solar and space physics mission concepts and supporting their implementation as Project Scientists; providing access to measurements from the Heliophysics Great Observatory through our Science Information Systems; and communicating science results to the public to inspire the next generation of scientists and explorers.

  20. GSFC Heliophysics Science Division 2009 Science Highlights

    NASA Technical Reports Server (NTRS)

    Strong, Keith T.; Saba, Julia L. R.; Strong, Yvonne M.

    2009-01-01

This report is intended to record and communicate to our colleagues, stakeholders, and the public at large about heliophysics scientific and flight program achievements and milestones for 2009, for which NASA Goddard Space Flight Center's Heliophysics Science Division (HSD) made important contributions. HSD comprises approximately 299 scientists, technologists, and administrative personnel dedicated to the goal of advancing our knowledge and understanding of the Sun and the wide variety of domains that its variability influences. Our activities include: leading science investigations involving flight hardware, theory, and data analysis and modeling that will answer the strategic questions posed in the Heliophysics Roadmap; leading the development of new solar and space physics mission concepts and supporting their implementation as Project Scientists; providing access to measurements from the Heliophysics Great Observatory through our Science Information Systems; and communicating science results to the public and inspiring the next generation of scientists and explorers.

  1. Accessing Wind Tunnels From NASA's Information Power Grid

    NASA Technical Reports Server (NTRS)

    Becker, Jeff; Biegel, Bryan (Technical Monitor)

    2002-01-01

The NASA Ames wind tunnel customers are among the first users of the Information Power Grid (IPG) storage system at the NASA Advanced Supercomputing Division. We wanted to be able to store their data on the IPG so that it could be accessed remotely in a secure but timely fashion. In addition, incorporation into the IPG allows future use of grid computational resources, e.g., for post-processing of data, or to do side-by-side CFD validation. In this paper, we describe the integration of grid data access mechanisms with the existing DARWIN web-based system that is used to access wind tunnel test data. We also show that the combined system has reasonable performance: wind tunnel data may be retrieved at 50 Mbit/s over a 100BASE-T network connected to the IPG storage server.
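
    The quoted 50 Mbit/s retrieval rate translates into transfer times by simple arithmetic; the 1 GB dataset size below is a hypothetical example, not a figure from the paper:

    ```python
    def transfer_seconds(size_bytes, rate_mbit_per_s=50.0):
        # Time to move a dataset at the measured 50 Mbit/s, ignoring
        # protocol overhead (bytes -> bits, then divide by bits/second).
        return size_bytes * 8 / (rate_mbit_per_s * 1e6)

    print(transfer_seconds(1e9))  # a 1 GB test dataset: 160.0 seconds
    ```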

  2. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC, HPCG, and four full-scale scientific and engineering applications. We also present a model to predict the performance of HPCG and Cart3D to within 5% accuracy, and Overflow to within 10%.

  3. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swaminarayan, Sriram; Germann, Timothy C; Kadau, Kai

    2008-01-01

The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for the initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlops/Watt at a price of approximately 3.69 MFlops/dollar. The authors demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
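
    The benchmark interaction is the standard Lennard-Jones pair potential. A minimal O(N^2) reference kernel in reduced units, as a readable stand-in sketch for the cell-list force kernels SPaSM runs on the Cell SPUs (not the SPaSM implementation itself):

    ```python
    import numpy as np

    def lj_forces(pos, eps=1.0, sigma=1.0, rcut=2.5):
        # Lennard-Jones energy U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)
        # with a simple cutoff; returns per-atom forces and total energy.
        n = len(pos)
        f = np.zeros_like(pos)
        pot = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[i] - pos[j]
                r2 = float(d @ d)
                if r2 > rcut * rcut:
                    continue
                sr6 = (sigma * sigma / r2) ** 3
                pot += 4 * eps * (sr6 * sr6 - sr6)
                # -dU/dr along d: F_ij = 24*eps*(2*sr12 - sr6)/r2 * d
                fij = 24 * eps * (2 * sr6 * sr6 - sr6) / r2 * d
                f[i] += fij
                f[j] -= fij
        return f, pot

    # Two atoms at the potential minimum r = 2**(1/6)*sigma: U = -eps, F ~ 0.
    pos = np.array([[0.0, 0.0, 0.0], [2 ** (1 / 6), 0.0, 0.0]])
    f, pot = lj_forces(pos)
    print(round(pot, 6), round(float(f[0, 0]), 6))
    ```

    Production codes replace the double loop with cell lists or neighbor lists to reach O(N), which is what makes petaflop-scale runs with billions of atoms feasible.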

  4. About Us | Alaska Division of Geological & Geophysical Surveys

    Science.gov Websites

Alaska Department of Natural Resources, Division of Geological & Geophysical Surveys (DGGS), 3354 College Road, Fairbanks, AK 99709. The Division also administers the 11-member Alaska Seismic Hazards Safety Commission.

  5. Cognitive and Neural Sciences Division 1991 Programs

    DTIC Science & Technology

    1991-08-01

Office of Naval Research, Cognitive and Neural Sciences Division (Code 1142); edited by Willard S. Vaughan. This is a compilation of abstracts representing R&D sponsored by the ONR Cognitive and Neural Sciences Division.

  6. Earth Sciences Division

    NASA Astrophysics Data System (ADS)

    1991-06-01

    This Annual Report presents summaries of selected representative research activities grouped according to the principal disciplines of the Earth Sciences Division: Reservoir Engineering and Hydrogeology, Geology and Geochemistry, and Geophysics and Geomechanics. Much of the Division's research deals with the physical and chemical properties and processes in the earth's crust, from the partially saturated, low-temperature near-surface environment to the high-temperature environments characteristic of regions where magmatic-hydrothermal processes are active. Strengths in laboratory and field instrumentation, numerical modeling, and in situ measurement allow study of the transport of mass and heat through geologic media -- studies that now include the appropriate chemical reactions and the hydraulic-mechanical complexities of fractured rock systems. Of particular note are three major Division efforts addressing problems in the discovery and recovery of petroleum, the application of isotope geochemistry to the study of geodynamic processes and earth history, and the development of borehole methods for high-resolution imaging of the subsurface using seismic and electromagnetic waves. In 1989, a major DOE-wide effort was launched in the areas of Environmental Restoration and Waste Management. Many of the methods previously developed for and applied to deeper regions of the earth will, in the coming years, be turned toward process definition and characterization of the very shallow subsurface, where man-induced contaminants now intrude and where remedial action is required.

  7. Teaching Cell Division: Basics and Recommendations.

    ERIC Educational Resources Information Center

    Smith, Mike U.; Kindfield, Ann C. H.

    1999-01-01

    Presents a concise overview of cell division that includes only the essential concepts necessary for understanding genetics and evolution. Makes recommendations based on published research and teaching experiences that can be used to judge the merits of potential activities and materials for teaching cell division. Makes suggestions regarding the…

  8. Investigation of Alien Wavelength Quality in Live Multi-Domain, Multi-Vendor Link Using Advanced Simulation Tool

    NASA Astrophysics Data System (ADS)

    Nordal Petersen, Martin; Nuijts, Roeland; Lange Bjørn, Lars

    2014-05-01

    This article presents an advanced optical model for simulation of alien wavelengths in multi-domain and multi-vendor dense wavelength-division multiplexing networks. The model aids optical network planners with a better understanding of the non-linear effects present in dense wavelength-division multiplexing systems and better utilization of alien wavelengths in future applications. The limiting physical effects for alien wavelengths are investigated in relation to power levels, channel spacing, and other factors. The simulation results are verified through experimental setup in live multi-domain dense wavelength-division multiplexing systems between two national research networks: SURFnet in Holland and NORDUnet in Denmark.
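
    Channel placement in such dense wavelength-division multiplexing systems conventionally follows the ITU-T G.694.1 frequency grid, anchored at 193.1 THz. A small sketch of the grid arithmetic (the 50 GHz spacing is one common choice; the specific channels below are illustrative):

    ```python
    def itu_channel_thz(n, spacing_ghz=50.0):
        # ITU-T G.694.1 DWDM grid: channel centers at 193.1 THz + n * spacing.
        return 193.1 + n * spacing_ghz / 1000.0

    chs = [round(itu_channel_thz(n), 2) for n in (-1, 0, 1)]
    print(chs)  # [193.05, 193.1, 193.15]
    ```

    Nonlinear crosstalk onto an alien wavelength depends on how close its neighbors sit on this grid and at what launch powers, which is exactly the parameter space the simulation model explores.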

  9. Reconciling Divisions in the Field of Authentic Education

    ERIC Educational Resources Information Center

    Sarid, Ariel

    2015-01-01

    The aim of this article is twofold: first, to identify and address three central divisions in the field of authentic education that introduce ambiguity and at times inconsistencies within the field of authentic education. These divisions concern a) the relationship between autonomy and authenticity; b) the division between the two basic attitudes…

  10. Advanced Pacemaker

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Synchrony, developed by St. Jude Medical's Cardiac Rhythm Management Division (formerly known as Pacesetter Systems, Inc.) is an advanced state-of-the-art implantable pacemaker that closely matches the natural rhythm of the heart. The companion element of the Synchrony Pacemaker System is the Programmer Analyzer APS-II which allows a doctor to reprogram and fine tune the pacemaker to each user's special requirements without surgery. The two-way communications capability that allows the physician to instruct and query the pacemaker is accomplished by bidirectional telemetry. APS-II features 28 pacing functions and thousands of programming combinations to accommodate diverse lifestyles. Microprocessor unit also records and stores pertinent patient data up to a year.

  11. Chromosome segregation drives division site selection in Streptococcus pneumoniae.

    PubMed

    van Raaphorst, Renske; Kjos, Morten; Veening, Jan-Willem

    2017-07-18

    Accurate spatial and temporal positioning of the tubulin-like protein FtsZ is key for proper bacterial cell division. Streptococcus pneumoniae (pneumococcus) is an oval-shaped, symmetrically dividing opportunistic human pathogen lacking the canonical systems for division site control (nucleoid occlusion and the Min-system). Recently, the early division protein MapZ was identified and implicated in pneumococcal division site selection. We show that MapZ is important for proper division plane selection; thus, the question remains as to what drives pneumococcal division site selection. By mapping the cell cycle in detail, we show that directly after replication both chromosomal origin regions localize to the future cell division sites, before FtsZ. Interestingly, Z-ring formation occurs coincidently with initiation of DNA replication. Perturbing the longitudinal chromosomal organization by mutating the condensin SMC, by CRISPR/Cas9-mediated chromosome cutting, or by poisoning DNA decatenation resulted in mistiming of MapZ and FtsZ positioning and subsequent cell elongation. Together, we demonstrate an intimate relationship between DNA replication, chromosome segregation, and division site selection in the pneumococcus, providing a simple way to ensure equally sized daughter cells.

  12. 1. Oblique view of 215 Division Street, looking southwest, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Oblique view of 215 Division Street, looking southwest, showing front (east) facade and north side, 213 Division Street is visible at left and 217 Division Street appears at right - 215 Division Street (House), Rome, Floyd County, GA

  13. Publications - AR 2010 | Alaska Division of Geological & Geophysical Surveys

    Science.gov Websites

    Alaska Division of Geological & Geophysical Surveys (DGGS), AR 2010 publication details. Title: Alaska Division of Geological & Geophysical Surveys Annual Report. Authors: DGGS Staff. Publication Date: Jan 2011. Publisher: Alaska Division of Geological & Geophysical Surveys.

  14. Atmospheric and Geophysical Sciences Division Program Report, 1988--1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-06-01

    In 1990, the Atmospheric and Geophysical Sciences Division begins its 17th year as a division. As the Division has grown over the years, its modeling capabilities have expanded to include a broad range of time and space scales ranging from hours to decades and from local to global. Our modeling is now reaching out from its atmospheric focus to treat linkages with the oceans and the land. In this report, we describe the Division's goal and organizational structure. We also provide tables and appendices describing the Division's budget, personnel, models, and publications. 2 figs., 1 tab.

  15. 25 CFR 227.19 - Division orders.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true Division orders. 227.19 Section 227.19 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND RIVER INDIAN RESERVATION, WYOMING, FOR OIL AND GAS MINING Rents and Royalties § 227.19 Division orders. (a...

  16. 25 CFR 227.19 - Division orders.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false Division orders. 227.19 Section 227.19 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND RIVER INDIAN RESERVATION, WYOMING, FOR OIL AND GAS MINING Rents and Royalties § 227.19 Division orders. (a...

  17. 25 CFR 227.19 - Division orders.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false Division orders. 227.19 Section 227.19 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND RIVER INDIAN RESERVATION, WYOMING, FOR OIL AND GAS MINING Rents and Royalties § 227.19 Division orders. (a...

  18. Criminal Division - Alaska Department of Law

    Science.gov Websites

    The Criminal Division works to assure that Alaskans live in safe and healthy communities by prosecuting criminal cases. Through its Environmental Crimes Unit, it also works to stop parties implicated in environmental crimes from further operations that damage the environment. The ECU is partially…

  19. 25 CFR 227.19 - Division orders.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Division orders. 227.19 Section 227.19 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND RIVER INDIAN RESERVATION, WYOMING, FOR OIL AND GAS MINING Rents and Royalties § 227.19 Division orders. (a...

  20. 25 CFR 227.19 - Division orders.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false Division orders. 227.19 Section 227.19 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND RIVER INDIAN RESERVATION, WYOMING, FOR OIL AND GAS MINING Rents and Royalties § 227.19 Division orders. (a...

  1. The stem cell division theory of cancer.

    PubMed

    López-Lázaro, Miguel

    2018-03-01

    All cancer registries constantly show striking differences in cancer incidence by age and among tissues. For example, lung cancer is diagnosed hundreds of times more often at age 70 than at age 20, and lung cancer in nonsmokers occurs thousands of times more frequently than heart cancer in smokers. An analysis of these differences using basic concepts in cell biology indicates that cancer is the end-result of the accumulation of cell divisions in stem cells. In other words, the main determinant of carcinogenesis is the number of cell divisions that the DNA of a stem cell has accumulated in any type of cell from the zygote. Cell division, the process by which a cell copies and separates its cellular components to finally split into two cells, is necessary to produce the large number of cells required for living. However, cell division can lead to a variety of cancer-promoting errors, such as mutations and epigenetic mistakes occurring during DNA replication, chromosome aberrations arising during mitosis, errors in the distribution of cell-fate determinants between the daughter cells, and failures to restore physical interactions with other tissue components. Some of these errors are spontaneous, others are promoted by endogenous DNA damage occurring during quiescence, and others are influenced by pathological and environmental factors. The cell divisions required for carcinogenesis are primarily caused by multiple local and systemic physiological signals rather than by errors in the DNA of the cells. As carcinogenesis progresses, the accumulation of DNA errors promotes cell division and eventually triggers cell division under permissive extracellular environments. The accumulation of cell divisions in stem cells drives not only the accumulation of the DNA alterations required for carcinogenesis, but also the formation and growth of the abnormal cell populations that characterize the disease. This model of carcinogenesis provides a new framework for understanding the…

  2. Couples' Attitudes, Childbirth, and the Division of Labor

    ERIC Educational Resources Information Center

    Jansen, Miranda; Liefbroer, Aart C.

    2006-01-01

    In this article, the authors examine effects of partners' attitudes on the timing of the birth of a first child, the division of domestic labor, the division of child care, and the division of paid labor of couples. They use data from the Panel Study of Social Integration in the Netherlands, which includes independent measures of both partners'…

  3. Genomic Advances to Improve Biomass for Biofuels (Genomics and Bioenergy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rokhsar, Daniel

    2008-02-11

    Lawrence Berkeley National Lab bioscientist Daniel Rokhsar discusses genomic advances to improve biomass for biofuels. He presented his talk Feb. 11, 2008 in Berkeley, California as part of Berkeley Lab's community lecture series. Rokhsar works with the U.S. Department of Energy's Joint Genome Institute and Berkeley Lab's Genomics Division.

  4. Biology Division progress report, October 1, 1991--September 30, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, F.C.; Cook, J.S.

    This Progress Report summarizes the research endeavors of the Biology Division of the Oak Ridge National Laboratory during the period October 1, 1991, through September 30, 1993. The report is structured to provide descriptions of current activities and accomplishments in each of the Division's major organizational units. Lists of information to convey the entire scope of the Division's activities are compiled at the end of the report.

  5. NAS Applications and Advanced Algorithms

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Biswas, Rupak; VanDerWijngaart, Rob; Kutler, Paul (Technical Monitor)

    1997-01-01

    This paper examines the applications most commonly run on the supercomputers at the Numerical Aerospace Simulation (NAS) facility. It analyzes the extent to which such applications are fundamentally oriented to vector computers, and whether or not they can be efficiently implemented on hierarchical memory machines, such as systems with cache memories and highly parallel, distributed memory systems.

  6. Parallel simulation of tsunami inundation on a large-scale supercomputer

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. 
In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the…
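The load-balancing scheme this abstract describes (CPUs allocated to each nested grid layer in proportion to its grid points, then 1-D domain decomposition within each layer) can be sketched as follows. This is an illustrative reconstruction, not code from the paper; all function names and numbers are invented.

```python
def allocate_cpus(grid_points_per_layer, total_cpus):
    """Assign CPUs to nested layers in proportion to their grid-point counts."""
    total_points = sum(grid_points_per_layer)
    # Proportional share, at least one CPU per layer.
    alloc = [max(1, round(total_cpus * p / total_points))
             for p in grid_points_per_layer]
    # Nudge the allocation so it sums exactly to total_cpus.
    while sum(alloc) > total_cpus:
        alloc[alloc.index(max(alloc))] -= 1
    while sum(alloc) < total_cpus:
        alloc[alloc.index(min(alloc))] += 1
    return alloc

def decompose_1d(n_rows, n_cpus):
    """Split a layer's rows into contiguous 1-D subdomains, one per CPU."""
    base, extra = divmod(n_rows, n_cpus)
    bounds, start = [], 0
    for rank in range(n_cpus):
        size = base + (1 if rank < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds
```

For example, `allocate_cpus([4_000_000, 1_000_000, 250_000], 64)` gives the finest layer roughly three quarters of the CPUs, after which each layer is cut into contiguous row blocks by `decompose_1d`.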

  7. The Application of Ground-Penetrating Radar to Transportation Engineering: Recent Advances and New Perspectives (GI Division Outstanding ECS Award Lecture)

    NASA Astrophysics Data System (ADS)

    Tosti, Fabio; Benedetto, Andrea; Pajewski, Lara; Alani, Amir M.

    2017-04-01

    This lecture aims at presenting the recent advances and the new perspectives in the application of GPR to transportation engineering. This study reports on new experimental-based and theoretical models for the assessment of the physical (i.e., clay and water content in subgrade soils, railway ballast fouling) and the mechanical (i.e., the Young's modulus of elasticity) properties that are critical in maintaining the structural stability and the bearing capacity of major transport infrastructures, such as highways, railways, and airfields. With regard to the physical parameters, the electromagnetic behaviour related to the clay content in the load-bearing layers of flexible pavements as well as in subgrade soils has been analysed and modelled in both dry and wet conditions. Furthermore, a new simulation-based methodology for the detection of the fouling content in railway ballast is discussed. Concerning the mechanical parameters, experimental-based methods are presented for the assessment of the strength and deformation properties of the soils and the top-bounded layers of flexible pavements. Furthermore, unique case studies of bridge and tunnel inspections are discussed in terms of the methodology proposed, the survey planning, and the site procedures in rather complex operations. Acknowledgements: The authors are grateful to the GI Division President Dr. Francesco Soldovieri and the relevant Award Committee in the context of the "GI Division Outstanding Early Career Scientists Award" of the European Geosciences Union. We also acknowledge the COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" for providing networking and discussion opportunities throughout its activity and operation, as well as facilitating prospects for publishing research outputs.

  8. Parallel optoelectronic trinary signed-digit division

    NASA Astrophysics Data System (ADS)

    Alam, Mohammad S.

    1999-03-01

    The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of any arbitrary length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of space bandwidth product of the spatial light modulators used in the optoelectronic implementation.
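The abstract does not spell out the division algorithm, but a scheme whose core step costs one subtraction and two multiplications matches the classical Newton-Raphson reciprocal iteration. The sketch below shows that iteration in ordinary binary floating point as a plausible analogue; it is not the trinary signed-digit arithmetic of the paper.

```python
def reciprocal(d, iterations=6):
    """Approximate 1/d for 0.5 <= d < 1 via x <- x * (2 - d * x).

    Each iteration costs exactly one subtraction and two multiplications,
    mirroring the operation count quoted in the abstract.
    """
    x = 2.8235 - 1.8823 * d          # standard linear initial estimate of 1/d
    for _ in range(iterations):
        x = x * (2.0 - d * x)        # one subtraction, two multiplications
    return x

def divide(a, d):
    """Compute a / d (for d > 0) by scaling d into [0.5, 1)."""
    while d >= 1.0:
        d /= 2.0
        a /= 2.0
    while d < 0.5:
        d *= 2.0
        a *= 2.0
    return a * reciprocal(d)
```

The iteration converges quadratically, so six steps are ample for double precision; a signed-digit realization would replace the floating-point operations with constant-time TSD adders and multipliers.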

  9. The representation of multiplication and division facts in memory.

    PubMed

    De Brauwer, Jolien; Fias, Wim

    2011-01-01

    Recently, using a training paradigm, Campbell and Agnew (2009) observed cross-operation response time savings with nonidentical elements (e.g., practice 3 + 2, test 5 - 2) for addition and subtraction, showing that a single memory representation underlies addition and subtraction performance. Evidence for cross-operation savings between multiplication and division have been described frequently (e.g., Campbell, Fuchs-Lacelle, & Phenix, 2006) but they have always been attributed to a mediation strategy (reformulating a division problem as a multiplication problem, e.g., Campbell et al., 2006). Campbell and Agnew (2009) therefore concluded that there exists a fundamental difference between addition and subtraction on the one hand and multiplication and division on the other hand. However, our results suggest that retrieval savings between inverse multiplication and division problems can be observed. Even for small problems (solved by direct retrieval) practicing a division problem facilitated the corresponding multiplication problem and vice versa. These findings indicate that shared memory representations underlie multiplication and division retrieval. Hence, memory and learning processes do not seem to differ fundamentally between addition-subtraction and multiplication-division.

  10. Friday's Agenda | Division of Cancer Prevention

    Cancer.gov

    8:00 am - 8:10 am: Welcome and Opening Remarks (Leslie Ford, MD, Associate Director for Clinical Research, Division of Cancer Prevention, NCI; Eva Szabo, MD, Chief, Lung and Upper Aerodigestive Cancer Research Group, Division of Cancer Prevention, NCI). 8:10 am - 8:40 am: Clinical Trials Statistical Concepts for Non-Statisticians (Kevin Dodd, PhD)

  11. Physics division. Progress report, January 1, 1995--December 31, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, M.; Bacon, D.S.; Aine, C.J.

    1997-10-01

    This issue of the Physics Division Progress Report describes progress and achievements in Physics Division research during the period January 1, 1995-December 31, 1996. The report covers the five main areas of experimental research and development in which Physics Division serves the needs of Los Alamos National Laboratory and the nation in applied and basic sciences: (1) biophysics, (2) hydrodynamic physics, (3) neutron science and technology, (4) plasma physics, and (5) subatomic physics. Included in this report are a message from the Division Director, the Physics Division mission statement, an organizational chart, descriptions of the research areas of the five groups in the Division, selected research highlights, project descriptions, the Division staffing and funding levels for FY95-FY97, and a list of publications and presentations.

  12. The AAPT Advanced Laboratory Task Force Report

    NASA Astrophysics Data System (ADS)

    Dunham, Jeffrey

    2008-04-01

    In late 2005, the American Association of Physics Teachers (AAPT) assembled a seven-member Advanced Laboratory Task Force^ to recommend ways that AAPT could increase the degree and effectiveness of its interactions with physics teachers of upper-division physics laboratories, with the ultimate goal of improving the teaching of advanced laboratories. The task force completed its work during the first half of 2006 and its recommendations were presented to the AAPT Executive Committee in July 2006. This talk will present the recommendations of the task force and actions taken by AAPT in response to them. The curricular goals of the advanced laboratory course at various institutions will also be discussed. The talk will conclude with an appeal to the APS membership to support ongoing efforts to revitalize advanced laboratory course instruction. ^Members of the Advanced Laboratory Task Force: Van Bistrow, University of Chicago; Bob DeSerio, University of Florida; Jeff Dunham, Middlebury College (Chair); Elizabeth George, Wittenberg University; Daryl Preston, California State University, East Bay; Patricia Sparks, Harvey Mudd College; Gerald Taylor, James Madison University; and David Van Baak, Calvin College.

  13. Madden–Julian Oscillation prediction skill of a new-generation global model demonstrated using a supercomputer

    PubMed Central

    Miyakawa, Tomoki; Satoh, Masaki; Miura, Hiroaki; Tomita, Hirofumi; Yashiro, Hisashi; Noda, Akira T.; Yamada, Yohei; Kodama, Chihiro; Kimoto, Masahide; Yoneyama, Kunio

    2014-01-01

    Global cloud/cloud system-resolving models are perceived to perform well in the prediction of the Madden–Julian Oscillation (MJO), a huge eastward-propagating atmospheric pulse that dominates intraseasonal variation of the tropics and affects the entire globe. However, owing to model complexity, detailed analysis is limited by computational power. Here we carry out a simulation series using a recently developed supercomputer, which enables the statistical evaluation of the MJO prediction skill of a costly new-generation model in a manner similar to operational forecast models. We estimate the current MJO predictability of the model as 27 days by conducting simulations including all winter MJO cases identified during 2003–2012. The simulated precipitation patterns associated with different MJO phases compare well with observations. An MJO case captured in a recent intensive observation is also well reproduced. Our results reveal that the global cloud-resolving approach is effective in understanding the MJO and in providing month-long tropical forecasts. PMID:24801254

  14. Architecture and design of a 500-MHz gallium-arsenide processing element for a parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Fouts, Douglas J.; Butner, Steven E.

    1991-01-01

    The design of the processing element of GASP, a GaAs supercomputer with a 500-MHz instruction issue rate and 1-GHz subsystem clocks, is presented. The novel, functionally modular, block data flow architecture of GASP is described. The architecture and design of a GASP processing element is then presented. The processing element (PE) is implemented in a hybrid semiconductor module with 152 custom GaAs ICs of eight different types. The effects of the implementation technology on both the system-level architecture and the PE design are discussed. SPICE simulations indicate that parts of the PE are capable of being clocked at 1 GHz, while the rest of the PE uses a 500-MHz clock. The architecture utilizes data flow techniques at a program block level, which allows efficient execution of parallel programs while maintaining reasonably good performance on sequential programs. A simulation study of the architecture indicates that an instruction execution rate of over 30,000 MIPS can be attained with 65 PEs.

  15. Madden-Julian Oscillation prediction skill of a new-generation global model demonstrated using a supercomputer.

    PubMed

    Miyakawa, Tomoki; Satoh, Masaki; Miura, Hiroaki; Tomita, Hirofumi; Yashiro, Hisashi; Noda, Akira T; Yamada, Yohei; Kodama, Chihiro; Kimoto, Masahide; Yoneyama, Kunio

    2014-05-06

    Global cloud/cloud system-resolving models are perceived to perform well in the prediction of the Madden-Julian Oscillation (MJO), a huge eastward-propagating atmospheric pulse that dominates intraseasonal variation of the tropics and affects the entire globe. However, owing to model complexity, detailed analysis is limited by computational power. Here we carry out a simulation series using a recently developed supercomputer, which enables the statistical evaluation of the MJO prediction skill of a costly new-generation model in a manner similar to operational forecast models. We estimate the current MJO predictability of the model as 27 days by conducting simulations including all winter MJO cases identified during 2003-2012. The simulated precipitation patterns associated with different MJO phases compare well with observations. An MJO case captured in a recent intensive observation is also well reproduced. Our results reveal that the global cloud-resolving approach is effective in understanding the MJO and in providing month-long tropical forecasts.

  16. Direct numerical simulation of the laminar-turbulent transition at hypersonic flow speeds on a supercomputer

    NASA Astrophysics Data System (ADS)

    Egorov, I. V.; Novikov, A. V.; Fedorov, A. V.

    2017-08-01

    A method for direct numerical simulation of three-dimensional unsteady disturbances leading to a laminar-turbulent transition at hypersonic flow speeds is proposed. The simulation relies on solving the full three-dimensional unsteady Navier-Stokes equations. The computational technique is intended for multiprocessor supercomputers and is based on a fully implicit monotone approximation scheme and the Newton-Raphson method for solving systems of nonlinear difference equations. This approach is used to study the development of three-dimensional unstable disturbances in a flat-plate and compression-corner boundary layers in early laminar-turbulent transition stages at the free-stream Mach number M = 5.37. The three-dimensional disturbance field is visualized in order to reveal and discuss features of the instability development at the linear and nonlinear stages. The distribution of the skin friction coefficient is used to detect laminar and transient flow regimes and determine the onset of the laminar-turbulent transition.
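The Newton-Raphson treatment of the nonlinear difference equations mentioned in this abstract can be illustrated on a toy system. The discretized Navier-Stokes residual is of course vastly larger and its Jacobian sparse; the 2x2 example below, with invented names, only shows the structure of the iteration (residual, Jacobian, linear solve, update).

```python
import numpy as np

def residual(u):
    """Toy nonlinear residual F(u) = 0 standing in for the implicit scheme."""
    x, y = u
    return np.array([x**2 + y**2 - 4.0,   # F1(u)
                     x * y - 1.0])        # F2(u)

def jacobian(u):
    """Analytic Jacobian dF/du of the toy residual."""
    x, y = u
    return np.array([[2.0 * x, 2.0 * y],
                     [y,       x      ]])

def newton_raphson(u0, tol=1e-12, max_iter=50):
    """Iterate u <- u - J(u)^-1 F(u) until the residual norm drops below tol."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        u = u - np.linalg.solve(jacobian(u), F)   # Newton update
    return u
```

Starting from `[2.0, 0.5]`, the iteration converges quadratically to a root of both equations; in a production implicit solver the dense `np.linalg.solve` would be replaced by a sparse or matrix-free linear solver.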

  17. Lead from the center. How to manage divisions dynamically.

    PubMed

    Raynor, M E; Bower, J L

    2001-05-01

    Conventional wisdom holds that a company's divisions should be given almost total autonomy--especially under conditions of uncertainty--because they are closer to emerging technologies, customers, and competitors than corporate headquarters could ever be. But research from Michael Raynor and Joseph Bower suggests that the corporate office should be more, not less, directive in turbulent markets. Rapid changes in an industry make it difficult to predict where and when synergies among divisions might emerge. With so many possibilities and such uncertainty, companies can't afford to sacrifice their ability to flexibly execute business strategy. Corporate headquarters must play an active role in defining the scope of division-level strategy, the authors say, so that divisions do not act in ways that undermine opportunities to collaborate in the future. But neither can companies afford to sacrifice the competitiveness of their divisions as stand-alone businesses. In creating corporate-level strategic flexibility, a corporate office must balance the need for divisional autonomy now with the potential need for cooperation in the future. Through an examination of four corporations--Sprint, WPP, Teradyne, and Viacom--the authors challenge traditional approaches to diversification in which a company's divisions are either related (they share resources and collaborate) or unrelated (they compete for resources and operate as stand-alone businesses). They argue that companies should adopt a dynamic approach to cooperation among divisions, enabling varying degrees of relatedness between divisions depending on strategic circumstances. The authors offer four tactics to help executives manage divisions dynamically.

  18. 24 CFR 4.36 - Action by the Ethics Law Division.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Action by the Ethics Law Division... the Ethics Law Division. (a) After review of the Inspector General's report, the Ethics Law Division... that a violation of Section 103 or this subpart B has occurred. (b) If the Ethics Law Division...

  19. 24 CFR 4.36 - Action by the Ethics Law Division.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Action by the Ethics Law Division... the Ethics Law Division. (a) After review of the Inspector General's report, the Ethics Law Division... that a violation of Section 103 or this subpart B has occurred. (b) If the Ethics Law Division...

  20. 24 CFR 4.36 - Action by the Ethics Law Division.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 1 2013-04-01 2013-04-01 false Action by the Ethics Law Division... the Ethics Law Division. (a) After review of the Inspector General's report, the Ethics Law Division... that a violation of Section 103 or this subpart B has occurred. (b) If the Ethics Law Division...

  1. 24 CFR 4.36 - Action by the Ethics Law Division.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 1 2014-04-01 2014-04-01 false Action by the Ethics Law Division... the Ethics Law Division. (a) After review of the Inspector General's report, the Ethics Law Division... that a violation of Section 103 or this subpart B has occurred. (b) If the Ethics Law Division...

  2. 24 CFR 4.36 - Action by the Ethics Law Division.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Action by the Ethics Law Division... the Ethics Law Division. (a) After review of the Inspector General's report, the Ethics Law Division... that a violation of Section 103 or this subpart B has occurred. (b) If the Ethics Law Division...

  3. Cell division is dispensable but not irrelevant in Streptomyces.

    PubMed

    McCormick, Joseph R

    2009-12-01

    In part, members of the genus Streptomyces have been studied because they produce many important secondary metabolites with antibiotic activity and for the interest in their relatively elaborate life cycle. These sporulating filamentous bacteria are remarkably synchronous for division and genome segregation in specialized aerial hyphae. Streptomycetes share some, but not all, of the division genes identified in the historic model rod-shaped organisms. Curiously, normally essential cell division genes are dispensable for growth and viability of Streptomyces coelicolor. Mainly, cell division plays a more important role in the developmental phase of life than during vegetative growth. Dispensability provides an advantageous genetic system to probe the mechanisms of division proteins, especially those with functions that are poorly understood.

  4. Technical Advancements in Simulator-Based Weapons Team Training.

    DTIC Science & Technology

    1991-04-01

    H.C. Okraski, Director, Advanced Simulation Concepts Research and Engineering Division. Special Report 91-003, 12350 Research Parkway, Orlando, FL 32826-3224. The research and development reported here represents one phase of a broader effort to improve the…

  5. 28 CFR 3.2 - Assistant Attorney General, Criminal Division.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Assistant Attorney General, Criminal Division. 3.2 Section 3.2 Judicial Administration DEPARTMENT OF JUSTICE GAMBLING DEVICES § 3.2 Assistant Attorney General, Criminal Division. The Assistant Attorney General, Criminal Division, is authorized to...

  6. 28 CFR 3.2 - Assistant Attorney General, Criminal Division.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 1 2012-07-01 2012-07-01 false Assistant Attorney General, Criminal Division. 3.2 Section 3.2 Judicial Administration DEPARTMENT OF JUSTICE GAMBLING DEVICES § 3.2 Assistant Attorney General, Criminal Division. The Assistant Attorney General, Criminal Division, is authorized to...

  7. 28 CFR 3.2 - Assistant Attorney General, Criminal Division.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Assistant Attorney General, Criminal Division. 3.2 Section 3.2 Judicial Administration DEPARTMENT OF JUSTICE GAMBLING DEVICES § 3.2 Assistant Attorney General, Criminal Division. The Assistant Attorney General, Criminal Division, is authorized to...

  8. 28 CFR 3.2 - Assistant Attorney General, Criminal Division.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 1 2014-07-01 2014-07-01 false Assistant Attorney General, Criminal Division. 3.2 Section 3.2 Judicial Administration DEPARTMENT OF JUSTICE GAMBLING DEVICES § 3.2 Assistant Attorney General, Criminal Division. The Assistant Attorney General, Criminal Division, is authorized to...

  9. Division of labour in the yeast: Saccharomyces cerevisiae.

    PubMed

    Wloch-Salamon, Dominika M; Fisher, Roberta M; Regenberg, Birgitte

    2017-10-01

    Division of labour between different specialized cell types is a central part of how we describe complexity in multicellular organisms. However, it is increasingly being recognized that division of labour also plays an important role in the lives of predominantly unicellular organisms. Saccharomyces cerevisiae displays several phenotypes that could be considered a division of labour, including quiescence, apoptosis and biofilm formation, but they have not been explicitly treated as such. We discuss each of these examples, using a definition of division of labour that involves phenotypic variation between cells within a population, cooperation between cells performing different tasks and maximization of the inclusive fitness of all cells involved. We then propose future research directions and possible experimental tests using S. cerevisiae as a model organism for understanding the genetic mechanisms and selective pressures that can lead to the evolution of the very first stages of a division of labour. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Compiler-based code generation and autotuning for geometric multigrid on GPU-accelerated supercomputers

    DOE PAGES

    Basu, Protonu; Williams, Samuel; Van Straalen, Brian; ...

    2017-04-05

    GPUs, with their high bandwidths and computational capabilities, are an increasingly popular target for scientific computing. Unfortunately, to date, harnessing the power of the GPU has required use of a GPU-specific programming model such as CUDA, OpenCL, or OpenACC. Thus, in order to deliver portability across CPU-based and GPU-accelerated supercomputers, programmers are forced to write and maintain two versions of their applications or frameworks. In this paper, we explore the use of a compiler-based autotuning framework based on CUDA-CHiLL to deliver not only portability, but also performance portability across CPU- and GPU-accelerated platforms for the geometric multigrid linear solvers found in many scientific applications. We also show that with autotuning we can attain near-Roofline (a performance bound for a computation and target architecture) performance across the key operations in the miniGMG benchmark for both CPU- and GPU-based architectures, as well as for multiple stencil discretizations and smoothers. We show that our technology is readily interoperable with MPI, resulting in performance at scale equal to that obtained via a hand-optimized MPI+CUDA implementation.
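
    The Roofline bound mentioned in the abstract can be computed directly: attainable performance is the lesser of the machine's peak compute rate and its memory bandwidth times the kernel's arithmetic intensity. A minimal sketch (the peak and bandwidth figures below are illustrative assumptions, not numbers from the paper):

```python
def roofline_bound(peak_gflops, bandwidth_gbs, arithmetic_intensity):
    """Attainable GFLOP/s under the Roofline model:
    min(compute roof, memory roof = bandwidth * flops-per-byte)."""
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# Hypothetical machine: 1000 GFLOP/s peak, 200 GB/s memory bandwidth.
# Low-intensity stencil kernels (like multigrid smoothers) sit on the
# memory-bound slope; high-intensity kernels hit the compute ceiling.
peak, bw = 1000.0, 200.0
for ai in (0.25, 1.0, 10.0):          # flops per byte moved
    print(ai, roofline_bound(peak, bw, ai))
```

    "Near-Roofline performance" then means the measured rate of each miniGMG operation approaches this bound for its arithmetic intensity.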

  12. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, the ability to perform multiple-realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.
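
    The quoted petascale run (2^17 processor cores, over 2 billion degrees of freedom) implies a fairly modest per-core workload under domain decomposition; a back-of-the-envelope check:

```python
# Scale of the petascale run quoted in the abstract.
cores = 2 ** 17            # 131,072 processor cores
dof = 2_000_000_000        # "over 2 billion degrees of freedom"

print(cores)               # 131072
print(dof // cores)        # roughly 15,258 unknowns per core
```

    At this granularity, communication cost per solver iteration, rather than local arithmetic, tends to dominate, which is why the abstract emphasizes PETSc's management of parallel solvers and communication.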

  13. The distinctive cell division interactome of Neisseria gonorrhoeae.

    PubMed

    Zou, Yinan; Li, Yan; Dillon, Jo-Anne R

    2017-12-12

    Bacterial cell division is an essential process driven by the formation of a Z-ring structure, a cytoskeletal scaffold at the mid-cell, followed by the recruitment of various proteins which form the divisome. The cell division interactome reflects the complement of different interactions between all divisome proteins. To date, only two cell division interactomes have been characterized, in Escherichia coli and in Streptococcus pneumoniae. The cell division proteins encoded by Neisseria gonorrhoeae include FtsZ, FtsA, ZipA, FtsK, FtsQ, FtsI, FtsW, and FtsN. The purpose of the present study was to characterize the cell division interactome of N. gonorrhoeae using several different methods to identify protein-protein interactions. We also characterized the specific subdomains of FtsA implicated in interactions with FtsZ, FtsQ, FtsN and FtsW. Using a combination of bacterial two-hybrid (B2H), glutathione S-transferase (GST) pull-down assays, and surface plasmon resonance (SPR), nine interactions were observed among the eight gonococcal cell division proteins tested. ZipA did not interact with any other cell division protein. Comparisons of the N. gonorrhoeae cell division interactome with the published interactomes from E. coli and S. pneumoniae indicated that FtsA-FtsZ and FtsZ-FtsK interactions were common to all three species. FtsA-FtsW and FtsK-FtsN interactions were only present in N. gonorrhoeae. The 2A and 2B subdomains of N. gonorrhoeae FtsA were involved in interactions with FtsQ, FtsZ, and FtsN, and the 2A subdomain was involved in the interaction with FtsW. Results from this research indicate that N. gonorrhoeae has a distinctive cell division interactome as compared with other microorganisms.
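
    The cross-species comparison in the abstract reduces to set operations over unordered protein pairs. A sketch, deliberately restricted to the interactions the abstract names explicitly (the E. coli and S. pneumoniae sets below are truncated to those pairs, not the full published interactomes):

```python
def pair(a, b):
    """An unordered interaction: FtsA-FtsZ and FtsZ-FtsA are the same pair."""
    return frozenset((a, b))

# Only the interactions the abstract names explicitly.
n_gonorrhoeae = {pair("FtsA", "FtsZ"), pair("FtsZ", "FtsK"),
                 pair("FtsA", "FtsW"), pair("FtsK", "FtsN")}
e_coli        = {pair("FtsA", "FtsZ"), pair("FtsZ", "FtsK")}
s_pneumoniae  = {pair("FtsA", "FtsZ"), pair("FtsZ", "FtsK")}

common = n_gonorrhoeae & e_coli & s_pneumoniae   # shared by all three species
unique = n_gonorrhoeae - e_coli - s_pneumoniae   # gonococcus-specific pairs

print(sorted(sorted(p) for p in common))
print(sorted(sorted(p) for p in unique))
```

    Representing interactions as frozensets makes the comparison order-insensitive, matching how interactome edges are usually treated.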

  14. Estimating division and death rates from CFSE data

    NASA Astrophysics Data System (ADS)

    de Boer, Rob J.; Perelson, Alan S.

    2005-12-01

    The division tracking dye carboxyfluorescein diacetate succinimidyl ester (CFSE) is currently the most informative labeling technique for characterizing the division history of cells in the immune system. Gett and Hodgkin (Nat. Immunol. 1 (2000) 239-244) have proposed to normalize CFSE data by the 2-fold expansion that is associated with each division, and have argued that the mean of the normalized data increases linearly with time, t, with a slope reflecting the division rate p. We develop a number of mathematical models for the clonal expansion of quiescent cells after stimulation and show, within the context of these models, under which conditions this approach is valid. We compare three means of the distribution of cells over the CFSE profile at time t: the mean, μ(t); the mean of the normalized distribution, μ2(t); and the mean of the normalized distribution excluding nondivided cells. In the simplest models, which deal with homogeneous populations of cells with constant division and death rates, the normalized frequency distribution of the cells over the respective division numbers is a Poisson distribution with mean μ2(t) = pt, where p is the division rate. The fact that in the data these distributions seem Gaussian is therefore insufficient to establish that the times at which cells are recruited into the first division have a Gaussian variation, because the Poisson distribution approaches the Gaussian distribution for large pt. Excluding nondivided cells complicates the data analysis, because that mean only approaches a slope p after an initial transient. In models where the first division of the quiescent cells takes longer than later divisions, all three means have an initial transient before they approach an asymptotic regime, which is the expected μ(t) = 2pt. Such a transient markedly complicates the data analysis. After the same initial transients, the normalized cell numbers tend to decrease at a rate e^(-dt), where d is the death rate.
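
    The simplest homogeneous model in the abstract says the division number of a cell at time t is Poisson with mean pt, so the division rate p is the slope of the normalized mean. A minimal simulation under that assumption (the rate, time, and population size are hypothetical), using exponential inter-division times, which is exactly a Poisson process:

```python
import random

def divisions_by_time(p, t, rng):
    """Divisions completed by one cell by time t when inter-division
    times are exponential with rate p (i.e., a Poisson process)."""
    n, clock = 0, rng.expovariate(p)
    while clock <= t:
        n += 1
        clock += rng.expovariate(p)
    return n

rng = random.Random(0)
p, t, cells = 0.5, 10.0, 20000     # hypothetical division rate and time
mean_divs = sum(divisions_by_time(p, t, rng) for _ in range(cells)) / cells
print(round(mean_divs, 1))          # close to the model prediction p*t = 5.0
```

    In this idealized setting the sample mean of division numbers recovers pt, which is why fitting a straight line to the normalized CFSE mean yields p; the abstract's point is that transients (slow first division, exclusion of nondivided cells) break this simple reading.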

  15. GSFC Heliophysics Science Division FY2010 Annual Report

    NASA Technical Reports Server (NTRS)

    Gilbert, Holly R.; Strong, Keith T.; Saba, Julia L. R.; Clark, Judith B.; Kilgore, Robert W.; Strong, Yvonne M.

    2010-01-01

    This report is intended to record and communicate to our colleagues, stakeholders, and the public at large about heliophysics scientific and flight program achievements and milestones for 2010, for which NASA Goddard Space Flight Center's Heliophysics Science Division (HSD) made important contributions. HSD comprises approximately 323 scientists, technologists, and administrative personnel dedicated to the goal of advancing our knowledge and understanding of the Sun and the wide variety of domains that its variability influences. Our activities include: Leading science investigations involving flight hardware, theory, and data analysis and modeling that will answer the strategic questions posed in the Heliophysics Roadmap; Leading the development of new solar and space physics mission concepts and support their implementation as Project Scientists; Providing access to measurements from the Heliophysics Great Observatory through our Science Information Systems; and Communicating science results to the public and inspiring the next generation of scientists and explorers.

  16. Quantitative regulation of B cell division destiny by signal strength.

    PubMed

    Turner, Marian L; Hawkins, Edwin D; Hodgkin, Philip D

    2008-07-01

    Differentiation to Ab-secreting and isotype-switched effector cells is tightly linked to cell division, and therefore the degree of proliferation strongly influences the nature of the immune response. The maximum number of divisions reached, termed the population division destiny, is stochastically distributed in the population and is an important parameter in the quantitative outcome of lymphocyte responses. In this study, we further assessed the variables that regulate B cell division destiny in vitro in response to T cell- and TLR-dependent stimuli. Both the concentration and duration of stimulation were able to regulate the average maximum number of divisions undergone for each stimulus. Notably, a maximum division destiny was reached during provision of repeated saturating stimulation, revealing that an intrinsic limit to proliferation exists even under these conditions. This limit was linked directly to division number rather than time of exposure to stimulation and operated independently of the survival regulation of the cells. These results demonstrate that a B cell population's division destiny is regulable by the stimulatory conditions up to an inherent maximum value. Division destiny is a crucial parameter in regulating the extent of B cell responses and thereby also the nature of the immune response mounted.
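
    The population-level consequence of a stochastically distributed division destiny can be illustrated with a toy simulation: each founder cell draws its own maximum division number, and stronger stimulation shifts the mean of that distribution upward. This is a hypothetical stand-in for the regulation described in the abstract, not the authors' model:

```python
import random

def progeny(founders, destiny_mean, destiny_sd, rng):
    """Total progeny when each founder divides d times, with its
    division destiny d drawn per cell (normal, clipped at zero) --
    a toy version of a stochastically distributed destiny."""
    total = 0
    for _ in range(founders):
        d = max(0, round(rng.gauss(destiny_mean, destiny_sd)))
        total += 2 ** d          # d divisions -> 2^d descendants
    return total

rng = random.Random(1)
weak   = progeny(1000, 3.0, 1.0, rng)   # hypothetical weak stimulus
strong = progeny(1000, 6.0, 1.0, rng)   # stronger stimulus, higher destiny
print(weak < strong)                     # True
```

    Because progeny scale as 2^d, even a small shift in mean destiny produces a large change in total response size, which is why destiny is such a sensitive control point for the extent of the response.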

  17. Structures Division

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The NASA Lewis Research Center Structures Division is an international leader and pioneer in developing new structural analysis, life prediction, and failure analysis related to rotating machinery, and more specifically to hot-section components in air-breathing aircraft engines and spacecraft propulsion systems. The research consists of both deterministic and probabilistic methodology. Studies include, but are not limited to, high-cycle and low-cycle fatigue as well as material creep. Studies of structural failure are at both the micro- and macrolevels. Nondestructive evaluation methods related to structural reliability are developed, applied, and evaluated. The structural component materials studied and tested include monolithics and metal-matrix, polymer-matrix, and ceramic-matrix composites. Aeroelastic models are developed and used to determine the cyclic loading and life of fan and turbine blades. Life models are developed and tested for bearings, seals, and other mechanical components, such as magnetic suspensions. Results of these studies are published in NASA technical papers and reference publications, as well as in technical society journal articles. The results of the work of the Structures Division and the bibliography of its publications for calendar year 1995 are presented.

  18. Public division about climate change rooted in conflicting socio-political identities

    NASA Astrophysics Data System (ADS)

    Bliuc, Ana-Maria; McGarty, Craig; Thomas, Emma F.; Lala, Girish; Berndsen, Mariette; Misajon, Roseanne

    2015-03-01

    Of the climate science papers that take a position on the issue, 97% agree that climate change is caused by humans, but less than half of the US population shares this belief. This misalignment between scientific and public views has been attributed to a range of factors, including political attitudes, socio-economic status, moral values, levels of scientific understanding, and failure of scientific communication. The public is divided between climate change 'believers' (whose views align with those of the scientific community) and 'sceptics' (whose views are in disagreement with those of the scientific community). We propose that this division is best explained as a socio-political conflict between these opposing groups. Here we demonstrate that US believers and sceptics have distinct social identities, beliefs and emotional reactions that systematically predict their support for action to advance their respective positions. The key implication is that the divisions between sceptics and believers are unlikely to be overcome solely through communication and education strategies, and that interventions that increase angry opposition to action on climate change are especially problematic. Thus, strategies for building support for mitigation policies should go beyond attempts to improve the public’s understanding of science, to include approaches that transform intergroup relations.

  19. Biology Division progress report, October 1, 1993--September 30, 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-10-01

    This Progress Report summarizes the research endeavors of the Biology Division of the Oak Ridge National Laboratory during the period October 1, 1993, through September 30, 1995. The report is structured to provide descriptions of current activities and accomplishments in each of the Division's major organizational units. Lists of information to convey the entire scope of the Division's activities are compiled at the end of the report. Attention is focused on the following research activities: molecular, cellular, and cancer biology; mammalian genetics and development; genome mapping program; and educational activities.

  20. 19 CFR 181.92 - Definitions and general NAFTA advance ruling practice.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 2 2014-04-01 2014-04-01 false Definitions and general NAFTA advance ruling practice. 181.92 Section 181.92 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND... National Commodity Specialist Division or by such other office as designated by the Commissioner of Customs...

  1. 19 CFR 181.92 - Definitions and general NAFTA advance ruling practice.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 2 2013-04-01 2013-04-01 false Definitions and general NAFTA advance ruling practice. 181.92 Section 181.92 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND... National Commodity Specialist Division or by such other office as designated by the Commissioner of Customs...

  2. Advances in the understanding of dairy and cheese flavors: Symposium Introduction

    USDA-ARS?s Scientific Manuscript database

    A symposium titled “Advances in the Understanding of Dairy and Cheese Flavors” was held in September 2013 at the American Chemical Society’s 246th National Meeting in Indianapolis, IN. The symposium, which was sponsored by the Division of Agricultural and Food Chemistry, was to discuss the state of...

  3. Parkin suppresses Drp1-independent mitochondrial division.

    PubMed

    Roy, Madhuparna; Itoh, Kie; Iijima, Miho; Sesaki, Hiromi

    2016-07-01

    The cycle of mitochondrial division and fusion disconnects and reconnects individual mitochondria in cells to remodel this energy-producing organelle. Although dynamin-related protein 1 (Drp1) plays a major role in mitochondrial division in cells, a reduced level of mitochondrial division still persists even in the absence of Drp1. It is unknown how much Drp1-mediated mitochondrial division accounts for the connectivity of mitochondria. The role of a Parkinson's disease-associated protein, parkin, which biochemically and genetically interacts with Drp1, in mitochondrial connectivity also remains poorly understood. Here, we quantified the number and connectivity of mitochondria using mitochondria-targeted photoactivatable GFP in cells. We show that the loss of Drp1 increases the connectivity of mitochondria by 15-fold in mouse embryonic fibroblasts (MEFs). While a single loss of parkin does not affect the connectivity of mitochondria, the connectivity of mitochondria significantly decreased compared with a single loss of Drp1 when parkin was lost in the absence of Drp1. Furthermore, the loss of parkin decreased the frequency of depolarization of the mitochondrial inner membrane that is caused by increased mitochondrial connectivity in Drp1-knockout MEFs. Therefore, our data suggest that parkin negatively regulates Drp1-independent mitochondrial division. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Duration of division-related events in cleaving sand dollar eggs.

    PubMed

    Rappaport, R; Rappaport, B N

    1993-07-01

    A minimal mechanism for cytokinesis comprises a stimulus-to-surface contraction, a receptive surface, and a localized surface contractile mechanism. Duration of each is brief and times when they function are predictable. The processes that begin and end the functional period of each component were investigated. Sand dollar blastomeres from the completion of first cleavage to the beginning of fourth cleavage were used. By changing a cell's shape, it was possible to determine whether its capacity to accomplish an activity is restricted to its usual time frame. The first appearance of the furrow was advanced about 5 min by confining the mitotic apparatus in a narrow cytoplasmic cylinder. The period when the mitotic apparatus induces furrowing was prolonged about 18 min by moving the mitotic apparatus in an elongate cell each time the furrow appeared. The period of active furrowing was prolonged to about 21.8 min by pushing the mitotic apparatus close to the cell margin and then stretching the region through which the unilateral furrow must pass. In relation to normal division cycle events, results showed that each event of cytokinesis can operate both before and after its normal active period. Components of the mechanism are capable of functioning for about half the period of the division cycle. Normal timing of events may be determined by geometrical factors and the normal consequences of each activity.

  5. Academic and Career Advancement for Black Male Athletes at NCAA Division I Institutions

    ERIC Educational Resources Information Center

    Baker, Ashley R.; Hawkins, Billy J.

    2016-01-01

    This chapter examines the structural arrangements and challenges many Black male athletes encounter as a result of their use of sport for upward social mobility. Recommendations to enhance their preparation and advancement are provided.

  6. A Survey of Past Work on Rates of Advance in Land Combat Operations

    DTIC Science & Technology

    1990-02-01

    Sample content from the survey: weather in the reviewed data is described as clear, mild, cold, snow, fog, rain, misty, or fair, but objective definitions are not provided; type of action includes pursuit and limited-objective attacks. One document reviewed: Simpkin, Richard E., Red Armour, Pergamon Press Ltd, Oxford, 1984, which notes that mechanised advances on roads average about 35 miles/day, with advances on the best days achieving 50 miles for typical armoured division wheel/track mixes.

  7. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    NASA Astrophysics Data System (ADS)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also non-typical users who may want to use the models such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, from which the models are implemented, can be restrictive with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  8. Division V: Commission 42: Close Binaries

    NASA Astrophysics Data System (ADS)

    Ribas, Ignasi; Richards, Mercedes T.; Rucinski, Slavek; Bradstreet, David H.; Harmanec, Petr; Kaluzny, Janusz; Mikolajewska, Joanna; Munari, Ulisse; Niarchos, Panagiotis; Olah, Katalin; Pribulla, Theodor; Scarfe, Colin D.; Torres, Guillermo

    2015-08-01

    Commission 42 (C42) co-organized, together with Commission 27 (C27) and Division V (Div V) as a whole, a full day of science and business sessions that were held on 24 August 2012. The program included time slots for discussion of business matters related to Div V, C27 and C42, and two sessions of 2 hours each devoted to science talks of interest to both C42 and C27. In addition, we had a joint session between Div IV and Div V motivated by the proposal to reformulate the division structure of the IAU and the possible merger of the two divisions into a new Div G. The current report gives an account of the matters discussed during the business session of C42.

  9. Cognitive and Neural Sciences Division, 1991 Programs.

    ERIC Educational Resources Information Center

    Vaughan, Willard S., Ed.

    This report documents research and development performed under the sponsorship of the Cognitive and Neural Sciences Division of the Office of Naval Research in fiscal year 1991. It provides abstracts (title, principal investigator, project code, objective, approach, progress, and related reports) of projects of three program divisions (cognitive…

  10. Space Science Division cumulative bibliography: 1989-1994

    NASA Technical Reports Server (NTRS)

    Morrison, D.

    1995-01-01

    The Space Science Division at NASA's Ames Research Center is dedicated to research in astrophysics, exobiology, and planetary science. These research programs are structured around the study of origins and evolution of stars, planets, planetary atmospheres, and life, and address some of the most fundamental questions pursued by science; questions that examine the origin of life and of our place in the universe. This bibliography is the accumulation of peer-reviewed publications authored by Division scientists for the years 1989 through 1994. The list includes 777 papers published in over 5 dozen scientific journals representing the high productivity and interdisciplinary nature of the Space Science Division.

  11. Division Planes Alternate in Spherical Cells of Escherichia coli

    PubMed Central

    Begg, K. J.; Donachie, W. D.

    1998-01-01

    In the spherical cells of Escherichia coli rodA mutants, division is initiated at a single point, from which a furrow extends progressively around the cell. Using “giant” rodA ftsA cells, we confirmed that each new division furrow is initiated at the midpoint of the previous division plane and runs perpendicular to it. PMID:9573213

  12. Research Networks Map | Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention supports major scientific collaborations and research networks at more than 100 sites across the United States. Seven Major Programs' sites are shown on this map.

  13. Cognitive and Neural Sciences Division 1990 Programs.

    ERIC Educational Resources Information Center

    Vaughan, Willard S., Jr., Ed.

    Research and development efforts carried out under sponsorship of the Cognitive and Neural Sciences Division of the Office of Naval Research during fiscal year 1990 are described in this compilation of project description summaries. The Division's research is organized in three types of programs: (1) Cognitive Science (the human learner--cognitive…

  14. The Changing Nature of Division III Athletics

    ERIC Educational Resources Information Center

    Beaver, William

    2014-01-01

    Non-selective Division III institutions often face challenges in meeting their enrollment goals. To ensure their continued viability, these schools recruit large numbers of student athletes. As a result, when compared to FBS (Football Bowl Division) institutions these schools have a much higher percentage of student athletes on campus and a…

  15. Understanding Division of Fractions: An Alternative View

    ERIC Educational Resources Information Center

    Fredua-Kwarteng, E.; Ahia, Francis

    2006-01-01

    The purpose of this paper is to offer three alternatives to the patterns or visualizations commonly used to justify the division-of-fractions algorithm "invert and multiply". The three main approaches are historical, similar denominators, and algebraic; teachers could use any of these to justify the standard algorithm for division of fractions. The historical approach uses…
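
    The "similar denominators" justification mentioned in the abstract can be written out in one line of algebra; this is a sketch of that style of argument, not the paper's exact presentation:

```latex
\frac{a}{b} \div \frac{c}{d}
  = \frac{ad}{bd} \div \frac{cb}{db}
  = \frac{ad}{cb}
  = \frac{a}{b} \times \frac{d}{c}
```

    Rewriting both fractions over the common denominator bd reduces the problem to dividing the numerators, and the result is exactly what "invert and multiply" produces.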

  16. 3. Oblique view of 215 Division Street, looking southeast, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Oblique view of 215 Division Street, looking southeast, showing rear (west) facade and north side, Fairbanks Company appears at left and 215 Division Street is visible at right - 215 Division Street (House), Rome, Floyd County, GA

  17. 2. Oblique view of 215 Division Street, looking northeast, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. Oblique view of 215 Division Street, looking northeast, showing rear (west) facade and south side, 217 Division Street is visible at left and Fairbanks Company appears at right - 215 Division Street (House), Rome, Floyd County, GA

  18. 3. Oblique view of 213 Division Street, looking northeast, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Oblique view of 213 Division Street, looking northeast, showing rear (west) facade and south side, 215 Division Street is visible at left and Fairbanks Company appears at right - 213 Division Street (House), Rome, Floyd County, GA

  19. The Impact of Reclassification from Division II to DI-AA and from Division I-AA to I-A on NCAA Member Institutions from 1993 to 2003

    ERIC Educational Resources Information Center

    Frieder, Laura L., Comp.; Fulks, Daniel L., Comp.

    2007-01-01

    Recent years have seen a number of National Collegiate Athletic Association (NCAA) Division II institutions seeking reclassification to Division I-AA and Division I-AA institutions moving to Division I-A. Yet, other schools that seem like natural candidates to reclassify have resisted. The purpose of this study is to investigate the impact of the…

  20. Division of Agriculture

    Science.gov Websites

    Alaska Department of Natural Resources, Division of Agriculture website. Programs listed include Asset Disposals, the Alaska Caps Program, the Board of Agriculture & Conservation, the Farm to School Program, and grants.

  1. 6. Contextual view of Fairbanks Company, looking south along Division ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. Contextual view of Fairbanks Company, looking south along Division Street, showing relationship of factory to surrounding area, 213, 215, & 217 Division Street appear on right side of street - Fairbanks Company, 202 Division Street, Rome, Floyd County, GA

  2. Publications - GMC 244 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Authors: DGSI, Inc. Publication Date: 1995. Publisher: Alaska Division of Geological & Geophysical Surveys. Concerns the Union Oil Company of California Trail Ridge Unit #1 well.

  3. JTF CapMed Warrior Transition Division

    DTIC Science & Technology

    2011-01-25

    The Quadruple Aim: Working Together, Achieving Success. 2011 Military Health System Conference. JTF CapMed Warrior Transition Division, 25 January 2011. Presented by Colonel Julia Adams, Joint Task Force National Capital Region Medical (JTF CapMed).

  4. Publications - STATEMAP Project | Alaska Division of Geological &

    Science.gov Websites

    STATEMAP publications include: Surficial-geologic map of the Salcha River-Pogo area, Big Delta Quadrangle, Alaska (2008); and Engineering-geologic map, Alaska Highway corridor, Delta Junction to Dot Lake, Alaska. Published by the Alaska Division of Geological & Geophysical Surveys.

  5. Timing the start of division in E. coli: a single-cell study

    NASA Astrophysics Data System (ADS)

    Reshes, G.; Vanounou, S.; Fishov, I.; Feingold, M.

    2008-12-01

    We monitor the shape dynamics of individual E. coli cells using time-lapse microscopy together with accurate image analysis. This allows measuring the dynamics of single-cell parameters throughout the cell cycle. In previous work, we have used this approach to characterize the main features of single-cell morphogenesis between successive divisions. Here, we focus on the behavior of the parameters that are related to cell division and study their variation over a population of 30 cells. In particular, we show that the single-cell data for the constriction width dynamics collapse onto a unique curve following appropriate rescaling of the corresponding variables. This suggests the presence of an underlying time scale that determines the rate at which the cell cycle advances in each individual cell. For the case of cell length dynamics a similar rescaling of variables emphasizes the presence of a breakpoint in the growth rate at the time when division starts, τc. We also find that the τc of individual cells is correlated with their generation time, τg, and inversely correlated with the corresponding length at birth, L0. Moreover, the extent of the T-period, τg - τc, is apparently independent of τg. The relations between τc, τg and L0 indicate possible compensation mechanisms that maintain cell length variability at about 10%. Similar behavior was observed for both fast-growing cells in a rich medium (LB) and for slower growth in a minimal medium (M9-glucose). To reveal the molecular mechanisms that lead to the observed organization of the cell cycle, we should further extend our approach to monitor the formation of the divisome.
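    The collapse-by-rescaling step described above can be sketched in a few lines. The linear width model and the parameter values below are illustrative assumptions, not the measured single-cell data:

```python
import numpy as np

def constriction_width(t, w0, tau):
    """Toy width model: the septum narrows linearly from w0 at t = 0
    to zero at division time tau (an illustrative assumption)."""
    return w0 * (1.0 - t / tau)

def rescaled_curve(w0, tau, n=50):
    """Rescale time by the cell's own cycle duration (t/tau) and width
    by its initial value (w/w0); curves from different cells should
    then collapse onto a single master curve."""
    t = np.linspace(0.0, tau, n)
    return constriction_width(t, w0, tau) / w0

# Two "cells" with different sizes and generation times collapse
# onto the same rescaled curve.
fast_cell = rescaled_curve(w0=0.9, tau=25.0)   # rich medium (LB)
slow_cell = rescaled_curve(w0=1.1, tau=55.0)   # minimal medium (M9)
```

    Checking whether real trajectories collapse after this kind of rescaling is what motivates the paper's inference of a single underlying time scale per cell.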

  6. Gender, division of unpaid family work and psychological distress in dual-earner families.

    PubMed

    Tao, Wenting; Janzen, Bonnie L; Abonyi, Sylvia

    2010-06-18

    Epidemiological studies have only recently begun to address the consequences of unpaid family work (i.e., housework and child rearing) for mental health. Although research is suggestive of an association between the division of unpaid family work and psychological health, especially for women, additional research is required to clarify the conditions under which such a relationship holds. The purpose of the present study was to examine more nuanced relationships between the division of family work and psychological distress by disaggregating the family work construct according to type (housework/child rearing), control over scheduling, and evaluations of fairness. Analysis of data obtained from a cross-sectional telephone survey conducted in a Canadian city. Analyses were based on 293 employed parents (182 mothers and 111 fathers), with at least one preschool child, living in dual-earner households. Several multiple linear regression models were estimated with psychological distress as the outcome, adjusting for confounders. For mothers, more perceived time spent in child rearing (particularly primary child care) and high-schedule-control housework tasks (e.g. yard work) relative to one's partner, were associated with greater distress. For fathers, perceived unfairness in the division of housework and child rearing were associated with greater distress. Although methodological limitations temper firm conclusions, these results suggest that the gendered nature of household work has implications for the psychological well-being of both mothers and fathers of preschool children in dual-earner households. However, more longitudinal research and the development of theoretically-informed measures of family work are needed to advance the field.
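    The kind of model described above, distress regressed on family-work measures while adjusting for other variables, can be sketched with ordinary least squares. The variable names, sample values, and coefficients below are synthetic stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 293  # matches the study's sample size; the data itself is synthetic
work_share = rng.uniform(0.0, 1.0, n)   # perceived share of child rearing
fairness = rng.uniform(0.0, 1.0, n)     # perceived fairness of the division
X = np.column_stack([np.ones(n), work_share, fairness])

true_beta = np.array([1.0, 2.0, -0.5])  # arbitrary illustrative effects
distress = X @ true_beta                # noiseless, so the fit is exact

# Ordinary least squares: solve min ||X b - y||^2 for b
beta_hat, *_ = np.linalg.lstsq(X, distress, rcond=None)
```

    With noiseless synthetic data the estimated coefficients recover the true ones exactly; real survey data adds an error term and confounder columns.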

  7. History of Division 29, 1993-2013: another 20 years of psychotherapy.

    PubMed

    Williams, Elizabeth Nutt; Barnett, Jeffrey E; Canter, Mathilda B

    2013-03-01

    The history of Division 29 (Psychotherapy) of the American Psychological Association (APA) from 1993 to 2013 is reviewed. The 20 years of history can be traced via the Division's primary publications (the journal Psychotherapy and its newsletter Psychotherapy Bulletin) as well as the history of those who have served leadership roles in the Division and have won Divisional awards. Several recurring themes emerge related to the Division's articulations of its own identity, the Division's advocacy efforts vis-à-vis the profession and the APA, and the work of the Division on behalf of major social issues (such as disaster relief and the nation's health care).

  8. PHOTOCOPY OF DRAWING NO. F860, DIVISION AVENUE STATION, EAST ELEVATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PHOTOCOPY OF DRAWING NO. F-860, DIVISION AVENUE STATION, EAST ELEVATION AND DETAILS, DRAWN BY W.H.C., MAR. 22, 1915. COURTESY OF THE DEPARTMENT OF PUBLIC UTILITIES, DIVISION OF WATER, CITY OF CLEVELAND. - Division Avenue Pumping Station & Filtration Plant, West 45th Street and Division Avenue, Cleveland, Cuyahoga County, OH

  9. Division of Forestry

    Science.gov Websites

    In 2011, the Alaska State Legislature added 23,181 acres of commercial forest lands to the existing Southeast State Forest. The Division conducts personal use, commercial timber, and fuel-wood sales. Resources include commercial timber sales, the general firewood permit, firewood information, the Forestry GIS website, and reforestation.

  10. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.
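    The scaling property reported above, a slowdown that stays roughly constant until the number of simulated processors exceeds the host's core count, can be captured in a toy model. The flat-then-linear shape and the base slowdown figure are illustrative assumptions, not measurements from the paper:

```python
def simulated_slowdown(n_simulated, host_cores=48, base_slowdown=500.0):
    """Toy model: each simulated ThreadStorm processor maps to one host
    core, so slowdown is flat up to `host_cores` simulated processors
    and then grows linearly as cores are time-shared (illustrative)."""
    oversubscription = max(1.0, n_simulated / host_cores)
    return base_slowdown * oversubscription
```

    On a 48-core host like the one in the paper, this model predicts the same slowdown for 8 or 48 simulated processors, and double that figure at 96.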

  11. Breaching The Ramparts: The 3rd Canadian Infantry Division’s Capture Of Boulogne In World War Two

    DTIC Science & Technology

    2016-05-26

    Allied planners did not expect the Germans to put up a fight, as air reconnaissance reported that the Boulogne, Calais, and Dunkirk areas were deserted. ... The First Army's tasks included completing the capture of Boulogne followed by Calais, masking Dunkirk, and advancing ... to Ghent. Meanwhile, the 2nd Division was responsible for clearing Dunkirk and the rest of the coast from Calais to the Dutch border.

  12. Genomic Advances to Improve Biomass for Biofuels (LBNL Science at the Theater)

    ScienceCinema

    Rokhsar, Daniel [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States)

    2018-05-24

    Lawrence Berkeley National Lab bioscientist Daniel Rokhsar discusses genomic advances to improve biomass for biofuels. He presented his talk Feb. 11, 2008 in Berkeley, California as part of Berkeley Lab's community lecture series. Rokhsar works with the U.S. Department of Energy's Joint Genome Institute and Berkeley Lab's Genomics Division.

  13. Genomic Advances to Improve Biomass for Biofuels (LBNL Science at the Theater)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rokhsar, Daniel

    2008-02-11

    Lawrence Berkeley National Lab bioscientist Daniel Rokhsar discusses genomic advances to improve biomass for biofuels. He presented his talk Feb. 11, 2008 in Berkeley, California as part of Berkeley Lab's community lecture series. Rokhsar works with the U.S. Department of Energy's Joint Genome Institute and Berkeley Lab's Genomics Division.

  14. Advanced ACTPol Cryogenic Detector Arrays and Readout

    NASA Astrophysics Data System (ADS)

    Henderson, S. W.; Allison, R.; Austermann, J.; Baildon, T.; Battaglia, N.; Beall, J. A.; Becker, D.; De Bernardis, F.; Bond, J. R.; Calabrese, E.; Choi, S. K.; Coughlin, K. P.; Crowley, K. T.; Datta, R.; Devlin, M. J.; Duff, S. M.; Dunkley, J.; Dünner, R.; van Engelen, A.; Gallardo, P. A.; Grace, E.; Hasselfield, M.; Hills, F.; Hilton, G. C.; Hincks, A. D.; Hloẑek, R.; Ho, S. P.; Hubmayr, J.; Huffenberger, K.; Hughes, J. P.; Irwin, K. D.; Koopman, B. J.; Kosowsky, A. B.; Li, D.; McMahon, J.; Munson, C.; Nati, F.; Newburgh, L.; Niemack, M. D.; Niraula, P.; Page, L. A.; Pappas, C. G.; Salatino, M.; Schillaci, A.; Schmitt, B. L.; Sehgal, N.; Sherwin, B. D.; Sievers, J. L.; Simon, S. M.; Spergel, D. N.; Staggs, S. T.; Stevens, J. R.; Thornton, R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol is a polarization-sensitive upgrade for the 6 m aperture Atacama Cosmology Telescope, adding new frequencies and increasing sensitivity over the previous ACTPol receiver. In 2016, Advanced ACTPol will begin to map approximately half the sky in five frequency bands (28-230 GHz). Its maps of primary and secondary cosmic microwave background anisotropies—imaged in intensity and polarization at few arcminute-scale resolution—will enable precision cosmological constraints and also a wide array of cross-correlation science that probes the expansion history of the universe and the growth of structure via gravitational collapse. To accomplish these scientific goals, the Advanced ACTPol receiver will be a significant upgrade to the ACTPol receiver, including four new multichroic arrays of cryogenic, feedhorn-coupled AlMn transition edge sensor polarimeters (fabricated on 150 mm diameter wafers); a system of continuously rotating meta-material silicon half-wave plates; and a new multiplexing readout architecture which uses superconducting quantum interference devices and time division to achieve a 64-row multiplexing factor. Here we present the status and scientific goals of the Advanced ACTPol instrument, emphasizing the design and implementation of the Advanced ACTPol cryogenic detector arrays.
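    The time-division multiplexing arithmetic above is easy to make concrete: with a 64-row multiplexing factor, rows share one readout line and each detector is visited once per frame, so its effective sample rate is the raw row-visit rate divided by 64. The row rate used below is a made-up placeholder, not an instrument specification:

```python
def per_detector_sample_rate(row_visit_rate_hz, mux_rows=64):
    """Time-division multiplexing: `mux_rows` rows are visited in turn
    on a shared line, so each detector is sampled once per frame of
    `mux_rows` row visits."""
    return row_visit_rate_hz / mux_rows

# Placeholder row-visit rate of 1 MHz, purely for illustration.
rate = per_detector_sample_rate(row_visit_rate_hz=1_000_000.0)
```

    The design trade-off is the usual one for time division: a higher multiplexing factor reduces wiring and cryogenic heat load but divides the per-detector bandwidth.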

  15. Advanced ACTPol Cryogenic Detector Arrays and Readout

    NASA Technical Reports Server (NTRS)

    Henderson, S.W.; Allison, R.; Austermann, J.; Baildon, T.; Battaglia, N.; Beall, J. A.; Becker, D.; De Bernardis, F.; Bond, J. R.; Wollack, E. J.

    2016-01-01

    Advanced ACTPol is a polarization-sensitive upgrade for the 6 m aperture Atacama Cosmology Telescope, adding new frequencies and increasing sensitivity over the previous ACTPol receiver. In 2016, Advanced ACTPol will begin to map approximately half the sky in five frequency bands (28-230 GHz). Its maps of primary and secondary cosmic microwave background anisotropies, imaged in intensity and polarization at few arcminute-scale resolution, will enable precision cosmological constraints and also a wide array of cross-correlation science that probes the expansion history of the universe and the growth of structure via gravitational collapse. To accomplish these scientific goals, the Advanced ACTPol receiver will be a significant upgrade to the ACTPol receiver, including four new multichroic arrays of cryogenic, feedhorn-coupled AlMn transition edge sensor polarimeters (fabricated on 150 mm diameter wafers); a system of continuously rotating meta-material silicon half-wave plates; and a new multiplexing readout architecture which uses superconducting quantum interference devices and time division to achieve a 64-row multiplexing factor. Here we present the status and scientific goals of the Advanced ACTPol instrument, emphasizing the design and implementation of the Advanced ACTPol cryogenic detector arrays.

  16. ADP Analysis project for the Human Resources Management Division

    NASA Technical Reports Server (NTRS)

    Tureman, Robert L., Jr.

    1993-01-01

    The ADP (Automated Data Processing) Analysis Project was conducted for the Human Resources Management Division (HRMD) of NASA's Langley Research Center. The three major areas of work in the project were computer support, automated inventory analysis, and an ADP study for the Division. The goal of the computer support work was to determine automation needs of Division personnel and help them solve computing problems. The goal of automated inventory analysis was to find a way to analyze installed software and usage on a Macintosh. Finally, the ADP functional systems study for the Division was designed to assess future HRMD needs concerning ADP organization and activities.

  17. Engineering Research Division publication report, calendar year 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, E.K.; Livingston, P.L.; Rae, D.C.

    Each year the Engineering Research Division of the Electronics Engineering Department at Lawrence Livermore Laboratory has issued an internal report listing all formal publications produced by the Division during the calendar year. Abstracts of 1980 reports are presented.

  18. Publications - GMC 138 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Analysis of cuttings from the Arco Alaska Inc. OCS Y-0211-1 (Yakutat #1) well. Authors: Unknown. Publication Date: 1989. Publisher: Alaska Division of Geological & Geophysical Surveys.

  19. Cell-Division Behavior in a Heterogeneous Swarm Environment.

    PubMed

    Erskine, Adam; Herrmann, J Michael

    2015-01-01

    We present a system of virtual particles that interact using simple kinetic rules. It is known that heterogeneous mixtures of particles can produce particularly interesting behaviors. Here we present a two-species three-dimensional swarm in which a behavior emerges that resembles cell division. We show that the dividing behavior exists across a narrow but finite band of parameters and for a wide range of population sizes. When executed in a two-dimensional environment the swarm's characteristics and dynamism manifest differently. In further experiments we show that repeated divisions can occur if the system is extended by a biased equilibrium process to control the split of populations. We propose that this repeated division behavior provides a simple model for cell-division mechanisms and is of interest for the formation of morphological structure and to swarm robotics.

  20. Chemical Engineering Division Activities

    ERIC Educational Resources Information Center

    Chemical Engineering Education, 1978

    1978-01-01

    The 1978 ASEE Chemical Engineering Division Lecturer was Theodore Vermeulen of the University of California at Berkeley. Other chemical engineers who received awards or special recognition at a recent ASEE annual conference are mentioned. (BB)

  1. Universal rule for the symmetric division of plant cells

    PubMed Central

    Besson, Sébastien; Dumais, Jacques

    2011-01-01

    The division of eukaryotic cells involves the assembly of complex cytoskeletal structures to exert the forces required for chromosome segregation and cytokinesis. In plants, empirical evidence suggests that tensional forces within the cytoskeleton cause cells to divide along the plane that minimizes the surface area of the cell plate (Errera’s rule) while creating daughter cells of equal size. However, exceptions to Errera’s rule cast doubt on whether a broadly applicable rule can be formulated for plant cell division. Here, we show that the selection of the plane of division involves a competition between alternative configurations whose geometries represent local area minima. We find that the probability of observing a particular division configuration increases inversely with its relative area according to an exponential probability distribution known as the Gibbs measure. Moreover, a comparison across land plants and their most recent algal ancestors confirms that the probability distribution is widely conserved and independent of cell shape and size. Using a maximum entropy formulation, we show that this empirical division rule is predicted by the dynamics of the tense cytoskeletal elements that lead to the positioning of the preprophase band. Based on the fact that the division plane is selected from the sole interaction of the cytoskeleton with cell shape, we posit that the new rule represents the default mechanism for plant cell division when internal or external cues are absent. PMID:21383128
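    The Gibbs-measure rule above, under which a division configuration becomes exponentially less likely as its relative plate area grows, can be sketched as follows. The inverse-temperature-like parameter `beta` and the normalization by the smallest candidate area are illustrative assumptions, not the paper's fitted values:

```python
import math

def division_probabilities(areas, beta=20.0):
    """Gibbs measure over candidate division planes: weight each
    local-minimum configuration by exp(-beta * relative area),
    then normalize so the probabilities sum to one."""
    a_min = min(areas)
    weights = [math.exp(-beta * (a / a_min - 1.0)) for a in areas]
    total = sum(weights)
    return [w / total for w in weights]

# Three candidate local area minima: the smallest-area plane dominates,
# but larger-area planes keep a finite probability, as observed.
probs = division_probabilities([1.00, 1.05, 1.30])
```

    This captures the key empirical point: Errera's rule (always pick the global minimum) is replaced by a competition in which non-minimal planes are exponentially suppressed rather than forbidden.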

  2. Towers of generalized divisible quantum codes

    NASA Astrophysics Data System (ADS)

    Haah, Jeongwan

    2018-04-01

    A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the ν-th level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν-1)), Ω(d), d]] admitting a transversal gate at the ν-th level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
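    A concrete divisible code helps fix ideas: the first-order Reed-Muller code RM(1,3) (equivalently, the extended [8,4,4] Hamming code) has every codeword weight divisible by 4 = 2^2, i.e. divisor 2^ν with ν = 2. The check below only illustrates classical divisibility; it is not the paper's CSS construction:

```python
from itertools import product

# Generator matrix of RM(1,3): the all-ones vector plus the three
# coordinate functions on GF(2)^3, listed over the 8 points 000..111.
G = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
]

def codewords(G):
    """Enumerate all 2^k codewords spanned by the rows of G over GF(2)."""
    for coeffs in product([0, 1], repeat=len(G)):
        word = [0] * len(G[0])
        for c, row in zip(coeffs, G):
            if c:
                word = [a ^ b for a, b in zip(word, row)]
        yield word

# Every codeword weight lies in {0, 4, 8}, all divisible by 2**2.
weights = sorted({sum(w) for w in codewords(G)})
```

    Doubly even codes of this kind are exactly the ν = 2 case; the paper's tower construction manufactures higher-divisor (e.g. triply even) codes from them.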

  3. Advances in Solid State Physics

    NASA Astrophysics Data System (ADS)

    Kramer, Bernhard

    The present volume 45 of Advances in Solid-State Physics contains the written versions of selected invited lectures from the spring meeting of the Arbeitskreis Festkörperphysik of the Deutsche Physikalische Gesellschaft in the World Year of Physics 2005, the Einstein Year, which was held from 4 - 11 March 2005 in Berlin, Germany. Many topical talks given at the numerous symposia are included. Most of these were organized collaboratively by several of the divisions of the Arbeitskreis. The book presents, to some extent, the status of the field of solid-state physics in 2005 not only in Germany but also internationally.

  4. Physics Division annual report 2004.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glover, J.

    2006-04-06

    The Division continues to lead in the development and exploitation of the new technical concepts that will truly make RIA, in the words of NSAC, "the world-leading facility for research in nuclear structure and nuclear astrophysics". The performance standards for new classes of superconducting cavities continue to increase. Driver linac transients and faults have been analyzed to understand reliability issues and failure modes. Liquid-lithium targets were shown to successfully survive the full-power deposition of a RIA beam. Our science and our technology continue to point the way to this major advance. It is a tremendously exciting time in science, for RIA holds the keys to unlocking important secrets of nature. The work described here shows how far we have come and makes it clear we know the path to meet these intellectual challenges. The great progress that has been made in meeting the exciting intellectual challenges of modern nuclear physics reflects the talents and dedication of the Physics Division staff and the visitors, guests, and students who bring so much to the research.

  5. Peptidoglycan architecture can specify division planes in Staphylococcus aureus.

    PubMed

    Turner, Robert D; Ratcliffe, Emma C; Wheeler, Richard; Golestanian, Ramin; Hobbs, Jamie K; Foster, Simon J

    2010-06-15

    Division in Staphylococci occurs equatorially and on specific sequentially orthogonal planes in three dimensions, resulting, after incomplete cell separation, in the 'bunch of grapes' cluster organization that defines the genus. The shape of Staphylococci is principally maintained by peptidoglycan. In this study, we use Atomic Force Microscopy (AFM) and fluorescence microscopy with vancomycin labelling to examine purified peptidoglycan architecture and its dynamics in Staphylococcus aureus and correlate these with the cell cycle. At the presumptive septum, cells were found to form a large belt of peptidoglycan in the division plane before the centripetal formation of the septal disc; this often had a 'piecrust' texture. After division, the structures remain as orthogonal ribs, encoding the location of past division planes in the cell wall. We propose that this epigenetic information is used to enable S. aureus to divide in sequentially orthogonal planes, explaining how a spherical organism can maintain division plane localization with fidelity over many generations.

  6. CELL DIVISION IN A SPECIES OF ERWINIA. III. REVERSAL OF INHIBITION OF CELL DIVISION CAUSED BY D-AMINO ACIDS, PENICILLIN, AND ULTRA-VIOLET LIGHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grula, E.A.; Grula, M.M.

    Inhibition of cell division in an Erwinia sp. occurs in the presence of any of six D-amino acids, penicillin, or ultraviolet light. Cell-division inhibition caused by D-amino acids is pH-dependent; however, elongation caused by penicillin occurs over a wide range of pH. Bulging and spheroplast formation in the presence of penicillin occurs only at pH values below 7.6; however, division continues to be inhibited at higher pH levels. Reversal of cell-division inhibition caused by two D-amino acids (phenylalanine and histidine) can be partially overcome by their respective L-isomers. Divalent cations (Zn, Ca, Mn) cause varying amounts of reversal of division inhibition in all systems studied; each system appears to have an individual requirement. All induced division inhibitions, including that caused by penicillin, can be reversed by pantoyl lactone or omega methylpantoyl lactone. Evidence is presented and discussed concerning the possible importance of pantoyl lactone and divalent cations in terminal steps of the cell-division process in this organism. (auth)

  7. Cumulative Damage Model for Advanced Composite Materials.

    DTIC Science & Technology

    1982-07-01

    AFWAL-TR-82-4094. Cumulative Damage Model for Advanced Composite Materials. General Dynamics, Fort Worth Division, P.O. Box 748, Fort Worth, Texas 76101. July 1982. Final report for the period 23 February 1981 to 23 May 1982. Approved for public release.

  8. A Methodology for Phased Array Radar Threshold Modeling Using the Advanced Propagation Model (APM)

    DTIC Science & Technology

    2017-10-01

    Technical Report 3079, October 2017. A Methodology for Phased Array Radar Threshold Modeling Using the Advanced Propagation Model (APM). Networks Division (Code 55190). This report summarizes the methodology developed to improve radar threshold modeling for a phased array radar configuration using the APM.

  9. Contacts in the Office of Pesticide Programs, Registration Division

    EPA Pesticide Factsheets

    The Registration Division (RD) is responsible for product registrations, amendments, tolerances, experimental use permits, and emergency exemptions for conventional chemical pesticides. Find contacts in this division.

  10. 49 CFR 176.605 - Care following leakage or sifting of Division 2.3 (poisonous gas) and Division 6.1 (poisonous...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Care following leakage or sifting of Division 2.3... Regulations Relating to Transportation PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF... (Poisonous Gas) and Division 6.1 (Poisonous) Materials § 176.605 Care following leakage or sifting of...

  11. 49 CFR 176.605 - Care following leakage or sifting of Division 2.3 (poisonous gas) and Division 6.1 (poisonous...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 2 2011-10-01 2011-10-01 false Care following leakage or sifting of Division 2.3... Regulations Relating to Transportation PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF... (Poisonous Gas) and Division 6.1 (Poisonous) Materials § 176.605 Care following leakage or sifting of...

  12. Oriented cell division: new roles in guiding skin wound repair and regeneration

    PubMed Central

    Yang, Shaowei; Ma, Kui; Geng, Zhijun; Sun, Xiaoyan; Fu, Xiaobing

    2015-01-01

    Tissue morphogenesis depends on precise regulation and timely coordination of cell division, and also on control of the direction of cell division. Establishment of the polarity and division axis, correct alignment of the mitotic spindle, and segregation of fate determinants equally or unequally between daughter cells are essential for oriented cell division. Furthermore, oriented cell division is regulated by intrinsic cues, extrinsic cues, and other factors such as cell geometry and polarity. However, dysregulation of cell division orientation can lead to abnormal tissue development and function. In the present study, we review recent studies on the molecular mechanism of cell division orientation and explain their new roles in skin repair and regeneration. PMID:26582817

  13. THE WESTERN ECOLOGY DIVISION STUDENT INTERN PROGRAM VIDEO

    EPA Science Inventory

    The Western Ecology Division of the National Health & Environmental Effects Research Laboratory has produced a 15 minute video documenting the internship program at the Division. The video highlights various CWEST student interns reporting on their experiences at an end-of-the-s...

  14. "American Gothic" and the Division of Labor.

    ERIC Educational Resources Information Center

    Saunders, Robert J.

    1987-01-01

    Provides historical review of gender-based division of labor. Argues that gender-based division of labor served a purpose in survival of tribal communities but has lost meaning today and may be a handicap to full use of human talent and ability in the arts. There is nothing in various art forms which make them more appropriate for males or…

  15. Pathogenic Chlamydia Lack a Classical Sacculus but Synthesize a Narrow, Mid-cell Peptidoglycan Ring, Regulated by MreB, for Cell Division

    PubMed Central

    Packiam, Mathanraj; Hsu, Yen-Pang; Tekkam, Srinivas; Hall, Edward; Rittichier, Jonathan T.; VanNieuwenhze, Michael; Brun, Yves V.; Maurelli, Anthony T.

    2016-01-01

    The peptidoglycan (PG) cell wall is a peptide cross-linked glycan polymer essential for bacterial division and maintenance of cell shape and hydrostatic pressure. Bacteria in the Chlamydiales were long thought to lack PG until recent advances in PG labeling technologies revealed the presence of this critical cell wall component in Chlamydia trachomatis. In this study, we utilize bio-orthogonal D-amino acid dipeptide probes combined with super-resolution microscopy to demonstrate that four pathogenic Chlamydiae species each possess a ≤ 140 nm wide PG ring limited to the division plane during the replicative phase of their developmental cycles. Assembly of this PG ring is rapid, processive, and linked to the bacterial actin-like protein, MreB. Both MreB polymerization and PG biosynthesis occur only in the intracellular form of pathogenic Chlamydia and are required for cell enlargement, division, and transition between the microbe’s developmental forms. Our kinetic, molecular, and biochemical analyses suggest that the development of this limited, transient, PG ring structure is the result of pathoadaptation by Chlamydia to an intracellular niche within its vertebrate host. PMID:27144308

  16. Pathogenic Chlamydia Lack a Classical Sacculus but Synthesize a Narrow, Mid-cell Peptidoglycan Ring, Regulated by MreB, for Cell Division.

    PubMed

    Liechti, George; Kuru, Erkin; Packiam, Mathanraj; Hsu, Yen-Pang; Tekkam, Srinivas; Hall, Edward; Rittichier, Jonathan T; VanNieuwenhze, Michael; Brun, Yves V; Maurelli, Anthony T

    2016-05-01

    The peptidoglycan (PG) cell wall is a peptide cross-linked glycan polymer essential for bacterial division and maintenance of cell shape and hydrostatic pressure. Bacteria in the Chlamydiales were long thought to lack PG until recent advances in PG labeling technologies revealed the presence of this critical cell wall component in Chlamydia trachomatis. In this study, we utilize bio-orthogonal D-amino acid dipeptide probes combined with super-resolution microscopy to demonstrate that four pathogenic Chlamydiae species each possess a ≤ 140 nm wide PG ring limited to the division plane during the replicative phase of their developmental cycles. Assembly of this PG ring is rapid, processive, and linked to the bacterial actin-like protein, MreB. Both MreB polymerization and PG biosynthesis occur only in the intracellular form of pathogenic Chlamydia and are required for cell enlargement, division, and transition between the microbe's developmental forms. Our kinetic, molecular, and biochemical analyses suggest that the development of this limited, transient, PG ring structure is the result of pathoadaptation by Chlamydia to an intracellular niche within its vertebrate host.

  17. Division and dynamic morphology of plastids.

    PubMed

    Osteryoung, Katherine W; Pyke, Kevin A

    2014-01-01

    Plastid division is fundamental to the biology of plant cells. Division by binary fission entails the coordinated assembly and constriction of four concentric rings, two internal and two external to the organelle. The internal FtsZ ring and external dynamin-like ARC5/DRP5B ring are connected across the two envelopes by the membrane proteins ARC6, PARC6, PDV1, and PDV2. Assembly-stimulated GTPase activity drives constriction of the FtsZ and ARC5/DRP5B rings, which together with the plastid-dividing rings pull and squeeze the envelope membranes until the two daughter plastids are formed, with the final separation requiring additional proteins. The positioning of the division machinery is controlled by the chloroplast Min system, which confines FtsZ-ring formation to the plastid midpoint. The dynamic morphology of plastids, especially nongreen plastids, is also considered here, particularly in relation to the production of stromules and plastid-derived vesicles and their possible roles in cellular communication and plastid functionality.

  18. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). With post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results are processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation, combined with specially developed display and animation software, provides a good tool for analyzing flow field solutions obtained from supercomputers.
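    The post-processing workflow described above begins with solution files written by the flow solver. The sketch below is illustrative only: it parses one common single-block, ASCII PLOT3D solution ("q") layout (a dimensions line, a flow-conditions line, then five flattened conserved-variable fields); the function name and the exact layout are assumptions for this example, not the division's actual tooling.

```python
# Illustrative sketch only: a reader for a single-block, ASCII PLOT3D
# solution ("q") file of the kind consumed by post-processing tools such
# as PLOT3D and FAST. The layout assumed here (dimensions line, flow-
# conditions line, then five flattened variable fields) is one common
# PLOT3D convention; the function name is hypothetical.

def read_plot3d_q(text):
    tokens = text.split()
    ni, nj, nk = (int(t) for t in tokens[:3])
    mach, alpha, reyn, time = (float(t) for t in tokens[3:7])
    npts = ni * nj * nk
    values = [float(t) for t in tokens[7:7 + 5 * npts]]
    # Slice the flat list into the five conserved-variable fields
    # (density, x/y/z momentum, total energy).
    fields = [values[v * npts:(v + 1) * npts] for v in range(5)]
    return {"dims": (ni, nj, nk),
            "conditions": (mach, alpha, reyn, time),
            "q": fields}

if __name__ == "__main__":
    # Tiny synthetic file: a 2x1x1 grid needs 5 * 2 = 10 flow values.
    sample = "2 1 1\n0.8 0.0 1e6 0.0\n" + " ".join(str(i) for i in range(10))
    sol = read_plot3d_q(sample)
    print(sol["dims"], len(sol["q"]), len(sol["q"][0]))  # (2, 1, 1) 5 2
```

    A post-processor would hand the recovered fields to a display package for iso-surface or particle-trace rendering; binary PLOT3D variants differ only in how the same numbers are packed.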

  19. Cell division plane orientation based on tensile stress in Arabidopsis thaliana

    PubMed Central

    Louveaux, Marion; Julien, Jean-Daniel; Mirabet, Vincent; Boudaoud, Arezki; Hamant, Olivier

    2016-01-01

    Cell geometry has long been proposed to play a key role in the orientation of symmetric cell division planes. In particular, the recently proposed Besson–Dumais rule generalizes Errera’s rule and predicts that cells divide along one of the local minima of plane area. However, this rule has been tested only on tissues with rather local spherical shape and homogeneous growth. Here, we tested the application of the Besson–Dumais rule to the divisions occurring in the Arabidopsis shoot apex, which contains domains with anisotropic curvature and differential growth. We found that the Besson–Dumais rule works well in the central part of the apex, but fails to account for cell division planes in the saddle-shaped boundary region. Because curvature anisotropy and differential growth prescribe directional tensile stress in that region, we tested the putative contribution of anisotropic stress fields to cell division plane orientation at the shoot apex. To do so, we compared two division rules: geometrical (new plane along the shortest path) and mechanical (new plane along maximal tension). The mechanical division rule reproduced the enrichment of long planes observed in the boundary region. Experimental perturbation of mechanical stress pattern further supported a contribution of anisotropic tensile stress in division plane orientation. Importantly, simulations of tissues growing in an isotropic stress field, and dividing along maximal tension, provided division plane distributions comparable to those obtained with the geometrical rule. We thus propose that division plane orientation by tensile stress offers a general rule for symmetric cell division in plants. PMID:27436908
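    The contrast between the two division rules compared in the study can be sketched with a toy 2-D model. Everything below (the rectangular "cell", the function names, the angle discretization) is an illustrative assumption, not the authors' simulation code: the geometrical rule picks the division plane through the centroid with the shortest wall, while the mechanical rule simply aligns the new wall with the direction of maximal tension.

```python
import math

def centroid(poly):
    # Area-weighted centroid of a simple polygon (shoelace formula).
    a = cx = cy = 0.0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6 * a), cy / (6 * a))

def chord_length(poly, angle):
    # Length of the new wall: the chord cut through the cell outline by a
    # line through the centroid at the given angle.
    cx, cy = centroid(poly)
    dx, dy = math.cos(angle), math.sin(angle)
    ts = []
    n = len(poly)
    for i in range(n):
        px, py = poly[i]
        qx, qy = poly[(i + 1) % n]
        ex, ey = qx - px, qy - py
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:
            continue  # edge parallel to the division line
        s = (dy * (px - cx) - dx * (py - cy)) / denom
        if 0.0 <= s <= 1.0:
            # Recover the parameter t along the division line.
            if abs(dx) > abs(dy):
                t = (px + s * ex - cx) / dx
            else:
                t = (py + s * ey - cy) / dy
            ts.append(t)
    return max(ts) - min(ts)

def geometrical_rule(poly, n_angles=180):
    # Shortest-path rule: the angle minimizing the new wall's length.
    angles = [math.pi * k / n_angles for k in range(n_angles)]
    return min(angles, key=lambda a: chord_length(poly, a))

def mechanical_rule(tension_angle):
    # Maximal-tension rule: the new wall follows the tension direction.
    return tension_angle

if __name__ == "__main__":
    # Elongated 4 x 1 cell centered at the origin.
    cell = [(-2.0, -0.5), (2.0, -0.5), (2.0, 0.5), (-2.0, 0.5)]
    geo = geometrical_rule(cell)
    mech = mechanical_rule(0.0)  # tension along the long axis
    # The tension-aligned wall is the long one (4.0 vs 1.0).
    print(round(chord_length(cell, geo), 3), round(chord_length(cell, mech), 3))
```

    For an elongated cell under tension along its long axis, the mechanical rule selects exactly the long wall that the geometrical rule avoids, mirroring the enrichment of long division planes the study reports in the saddle-shaped boundary region.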

  20. Publications - AR 2005 | Alaska Division of Geological & Geophysical Surveys

    Science.gov Websites

    Annual Report 2005. Report Authors: DGGS Staff. Publication Date: Feb 2006. Publisher: Alaska Division of Geological & Geophysical Surveys.