Science.gov

Sample records for advanced supercomputing division

  1. NASA Advanced Supercomputing (NAS) User Services Group

    NASA Technical Reports Server (NTRS)

    Pandori, John; Hamilton, Chris; Niggley, C. E.; Parks, John W. (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides an overview of NAS (NASA Advanced Supercomputing), its goals, and its mainframe computer assets. Also covered are its functions, including systems monitoring and technical support.

  2. Advanced architectures for astrophysical supercomputing

    NASA Astrophysics Data System (ADS)

    Barsdell, B. R.

    2012-01-01

    This thesis explores the substantial benefits offered to astronomy research by advanced 'many-core' computing architectures, which can provide up to ten times more computing power than traditional processors. It begins by analysing the computations that are best suited to massively parallel computing and advocates a powerful, general approach to the use of many-core devices. These concepts are then put into practice to develop a fast data processing pipeline, with which new science outcomes are achieved in the field of pulsar astronomy, including the discovery of a new star. The work demonstrates how technology originally developed for the consumer market can now be used to accelerate the rate of scientific discovery.

  3. Advanced Architectures for Astrophysical Supercomputing

    NASA Astrophysics Data System (ADS)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  4. Scaling of data communications for an advanced supercomputer network

    NASA Technical Reports Server (NTRS)

    Levin, E.; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of NASA's Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations and by remote communication to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. The implications of a projected 20-fold increase in processing power on the data communications requirements are described.

  5. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    SciTech Connect

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2014-02-06

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  6. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    ScienceCinema

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2014-06-06

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  7. NASA's supercomputing experience

    NASA Technical Reports Server (NTRS)

    Bailey, F. Ron

    1990-01-01

    A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamic Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.

  8. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.
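
    As a rough illustration (not drawn from the report itself), the sketch below shows the basic shape of one such sparse iterative scheme: conjugate gradient with a simple Jacobi (diagonal) preconditioner applied to a finite-difference system. All parameters are illustrative.

      # Minimal sketch: preconditioned conjugate gradient on a sparse
      # 1-D finite-difference Laplacian (symmetric positive definite,
      # so CG applies). Illustrative only; not the authors' software.
      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import LinearOperator, cg

      n = 1000
      A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)

      # Jacobi preconditioner: scale residuals by the inverse diagonal.
      M = LinearOperator((n, n), matvec=lambda r: r / A.diagonal())

      x, info = cg(A, b, M=M)
      print("converged:", info == 0, "residual:", np.linalg.norm(b - A @ x))

    Domain decomposition and element-by-element schemes change how the matrix and preconditioner are assembled and applied across processors, not the overall structure of the iteration.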

  9. Technology transfer in the NASA Ames Advanced Life Support Division

    NASA Technical Reports Server (NTRS)

    Connell, Kathleen; Schlater, Nelson; Bilardo, Vincent; Masson, Paul

    1992-01-01

    This paper summarizes a representative set of technology transfer activities which are currently underway in the Advanced Life Support Division of the Ames Research Center. Five specific NASA-funded research or technology development projects are synopsized that are resulting in transfer of technology in one or more of four main 'arenas:' (1) intra-NASA, (2) intra-Federal, (3) NASA - aerospace industry, and (4) aerospace industry - broader economy. Each project is summarized as a case history, specific issues are identified, and recommendations are formulated based on the lessons learned as a result of each project.

  10. An assessment of worldwide supercomputer usage

    SciTech Connect

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  11. Improved orthogonal frequency division multiplexing communications through advanced coding

    NASA Astrophysics Data System (ADS)

    Westra, Jeffrey; Patti, John

    2005-08-01

    Orthogonal Frequency Division Multiplexing (OFDM) is a communications technique that transmits a signal over multiple, evenly spaced, discrete frequency bands. OFDM offers some advantages over traditional, single-carrier modulation techniques, such as increased immunity to inter-symbol interference. For this reason OFDM is an attractive candidate for sensor network applications; it has already been included in several standards, including Digital Audio Broadcast (DAB); digital television standards in Europe, Japan and Australia; asymmetric digital subscriber line (ADSL); and wireless local area networks (WLAN), specifically IEEE 802.11a. Many of these applications currently make use of a standard convolutional code with Viterbi decoding to perform forward error correction (FEC). Replacing such convolutional codes with advanced coding techniques using iterative decoding, such as Turbo codes, can substantially improve the performance of the OFDM communications link. This paper demonstrates such improvements using the 802.11a wireless LAN standard.
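
    To make the transmission scheme concrete, here is a minimal sketch (not tied to the paper or to 802.11a parameters) of one OFDM symbol: data symbols are placed on evenly spaced subcarriers via an inverse FFT, and a cyclic prefix is prepended to absorb inter-symbol interference.

      # Hedged sketch of one OFDM symbol with QPSK subcarriers and an
      # ideal channel; subcarrier count and prefix length are illustrative.
      import numpy as np

      n_sub, cp = 64, 16
      rng = np.random.default_rng(0)
      bits = rng.integers(0, 2, size=(n_sub, 2))
      syms = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

      tx_time = np.fft.ifft(syms)                    # subcarriers -> time domain
      tx = np.concatenate([tx_time[-cp:], tx_time])  # prepend cyclic prefix

      rx = np.fft.fft(tx[cp:])                       # receiver: drop prefix, FFT
      assert np.allclose(rx, syms)                   # ideal channel recovers symbols

    Forward error correction (the convolutional or Turbo coding discussed above) would be applied to the bits before symbol mapping; that stage is omitted here.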

  12. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
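
    The throughput figures quoted above can be checked with simple arithmetic; the total of roughly 200,000 Kepler target stars is an outside assumption, not stated in the abstract.

      # ~16 injections per core-hour and ~2000 injections per star
      # (both from the abstract); the total target count is assumed.
      injections_per_core_hour = 16
      injections_per_star = 2000            # "shallow" FLTI experiment
      stars = int(0.16 * 200_000)           # 16% of ~200,000 targets (assumed)

      core_hours = stars * injections_per_star / injections_per_core_hour
      cores = core_hours / 200              # 200 wall-clock hours, per the abstract
      print(f"{core_hours:.1e} core-hours, ~{cores:,.0f} cores for 200 hours")
      # -> 4.0e+06 core-hours, ~20,000 cores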

  13. Payment of Advanced Placement Exam Fees by Virginia Public School Divisions and Its Impact on Advanced Placement Enrollment and Scores

    ERIC Educational Resources Information Center

    Cirillo, Mary Grupe

    2010-01-01

    The purpose of this study was to determine the impact of Virginia school divisions' policy of paying the fee for students to take Advanced Placement exams on Advanced Placement course enrollment, the number of Advanced Placement exams taken by students, the average scores earned and the percent of students earning qualifying scores of 3, 4, or 5…

  14. Proceedings of IEEE supercomputing '88

    SciTech Connect

    Not Available

    1988-01-01

    These proceedings contain 61 papers grouped under the headings of: Program development; Horizon: a new supercomputer development; Dataflow systems; Compiler evaluation; Visualization; Compiler technology; Operating systems for supercomputing; Mass storage systems I; Supercomputer performance; Mass storage systems II; Supercomputer benchmarking; Supercomputer architecture I; Training and education; Architecture II; Algorithms I; Algorithms II; and Supercomputing center management.

  15. Advances in nickel hydrogen technology at Yardney Battery Division

    NASA Technical Reports Server (NTRS)

    Bentley, J. G.; Hall, A. M.

    1987-01-01

    The current major activities in nickel hydrogen technology being addressed at Yardney Battery Division are outlined. Five basic topics are covered: an update on life cycle testing of ManTech 50 AH NiH2 cells in the LEO regime; an overview of the Air Force/industry briefing; nickel electrode process upgrading; 4.5 inch cell development; and bipolar NiH2 battery development.

  16. Sandia's research network for Supercomputing '93: A demonstration of advanced technologies for building high-performance networks

    SciTech Connect

    Gossage, S.A.; Vahle, M.O.

    1993-12-01

    Supercomputing '93, a high-performance computing and communications conference, was held November 15th through 19th, 1993 in Portland, Oregon. For the past two years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1993 conference, the results of Sandia's efforts in exploring and utilizing Asynchronous Transfer Mode (ATM) and Synchronous Optical Network (SONET) technologies were vividly demonstrated by building and operating three distinct networks. The networks encompassed a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second, an ATM network running on a SONET circuit at the Optical Carrier (OC) rate of 155.52 megabits per second, and a High Performance Parallel Interface (HIPPI) network running over a 622.08 megabits per second SONET circuit. The SMDS and ATM networks extended from Albuquerque, New Mexico to the showroom floor, while the HIPPI/SONET network extended from Beaverton, Oregon to the showroom floor. This paper documents and describes these networks.
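
    For a sense of scale of the three line rates quoted above, the following back-of-the-envelope script computes the time to move a one-gigabyte dataset over each link, ignoring protocol overhead.

      # Line rates from the abstract; the payload size and zero-overhead
      # assumption are illustrative.
      rates_mbps = {"SMDS": 44.736, "ATM/SONET OC-3": 155.52, "HIPPI/SONET OC-12": 622.08}
      bits = 1e9 * 8  # one gigabyte

      for name, mbps in rates_mbps.items():
          print(f"{name:>18}: {bits / (mbps * 1e6):6.1f} s")
      # SMDS: 178.8 s, ATM/SONET OC-3: 51.4 s, HIPPI/SONET OC-12: 12.9 s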

  17. Supercomputing in Aerospace

    NASA Technical Reports Server (NTRS)

    Kutler, Paul; Yee, Helen

    1987-01-01

    Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics.

  18. Supercomputing with VLSI

    SciTech Connect

    Manohar, S.

    1989-01-01

    Supercoprocessors (SCPs), highly parallel VLSI architectures tuned to solving a specific problem class, are shown to provide a means of cost-effective supercomputing. A methodology for building SCPs for different computation-intensive problems is described: two pragmatic constraints, namely problem-size independence and limited bandwidth, are imposed on special-purpose architectures; a simple but powerful model of computation is used to derive general upper bounds on the speedup obtainable using such architectures. It is shown that bounds established by other authors for matrix multiplication and sorting, using problem-specific approaches, can be derived very simply using this model. Poisson Engine-I (PE-I), a prototype SCP, is a system for solving the Laplace equation using the finite difference approximation. PE-I uses a novel approach: asynchronous iteration methods are implemented using a fixed-size, synchronous array of simple processing elements. Architectural and algorithmic extensions to PE-I are briefly considered: the solution of a wider class of PDEs and the use of more sophisticated algorithms like the multigrid method are some of the issues addressed. The SCP methodology is applied to the problems of matrix multiplication and sorting. For sorting, an SCP with superlinear speedup is outlined. For the matrix problem, the architecture and implementation details of SMP are described in detail. SMP, realized with about fifty chips using current technology, is capable of throughputs greater than 150 Mflops, and is also unique in being optimal with respect to the lower bound derived using the SCP model. The use of a collection of such SCPs is advanced as a cost-effective supercomputing alternative.
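
    As a purely illustrative stand-in for the computation PE-I performs, the sketch below relaxes Laplace's equation on a small grid with synchronous Jacobi sweeps; the actual machine implements asynchronous iteration on a synchronous hardware array, which this does not model.

      # Jacobi relaxation of Laplace's equation (finite differences).
      import numpy as np

      n = 64
      u = np.zeros((n, n))
      u[0, :] = 1.0  # fixed boundary value on one edge

      for _ in range(5000):
          # Each interior point moves toward the mean of its 4 neighbors;
          # the right-hand side is evaluated before assignment, so this is
          # a synchronous (Jacobi) update.
          u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
      print("value at grid center:", round(u[n // 2, n // 2], 4))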

  19. The use of supercomputers in stellar dynamics; Proceedings of the Workshop, Institute for Advanced Study, Princeton, NJ, June 2-4, 1986

    NASA Astrophysics Data System (ADS)

    Hut, Piet; McMillan, Stephen L. W.

    Various papers on the use of supercomputers in stellar dynamics are presented. Individual topics addressed include: dynamical evolution of globular clusters, disk galaxy dynamics on the computer, mathematical models of star cluster dynamics, models of hot stellar systems, supercomputers and large cosmological N-body simulations, the architecture of a homogeneous vector supercomputer, the BBN multiprocessors Butterfly and Monarch, the Connection Machine, a digital Orrery, and the outer solar system for 200 million years. Also considered are: application of smooth particle hydrodynamics theory to lunar origin, multiple mesh techniques for modeling interacting galaxies, numerical experiments on galactic halo formation, numerical integration using explicit Taylor series, multiple-mesh-particle scheme for N-body simulation, direct N-body simulation on supercomputers, vectorization of small-N integrators, N-body integrations using supercomputers, a gridless Fourier method, techniques and tricks for N-body computation.

  20. Emerging supercomputer architectures

    SciTech Connect

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near future trends for commercially available high-performance computers with architectures that differ from the mainstream “supercomputer” systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  1. Distributed supercomputing using ACTS

    SciTech Connect

    Konchady, M.

    1996-12-31

    Climate models to study phenomena such as El Niño have been extremely useful in developing predictions and understanding global climate change. The high cost of running extended simulations necessary to substantiate theories can be reduced by using a network of supercomputers. A coupled ocean-atmosphere model has been implemented on a network of three Cray supercomputers.

  2. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.
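
    The bookkeeping that parameter-study tools such as ILab automate can be suggested in a few lines; the parameter names below are invented for illustration, and ILab itself goes much further (visual process specification, grid submission, automation).

      # Expand a multi-parameter study into individual run specifications.
      from itertools import product

      params = {
          "mach":  [0.6, 0.8, 0.9],
          "alpha": [0.0, 2.0, 4.0],      # angle of attack, degrees
          "grid":  ["coarse", "fine"],
      }
      runs = [dict(zip(params, vals)) for vals in product(*params.values())]
      print(len(runs), "runs; first:", runs[0])
      # 18 runs; first: {'mach': 0.6, 'alpha': 0.0, 'grid': 'coarse'}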

  3. Advanced Reactor Safety Research Division. Quarterly progress report, January 1-March 31, 1980

    SciTech Connect

    Agrawal, A.K.; Cerbone, R.J.; Sastre, C.

    1980-06-01

    The Advanced Reactor Safety Research Programs' quarterly progress report describes current activities and technical progress in the programs at Brookhaven National Laboratory sponsored by the USNRC Division of Reactor Safety Research. The projects reported each quarter are the following: HTGR Safety Evaluation, SSC Code Development, LMFBR Safety Experiments, and Fast Reactor Safety Code Validation.

  4. What Is the Relationship between Emotional Intelligence and Administrative Advancement in an Urban School Division?

    ERIC Educational Resources Information Center

    Roberson, Elizabeth W.

    2010-01-01

    The purpose of this research was to study the relationship between emotional intelligence and administrative advancement in one urban school division; however, data acquired in the course of study may have revealed areas that could be further developed in future studies to increase the efficacy of principals and, perhaps, to inform the selection…

  5. Parallel supercomputing today and the cedar approach.

    PubMed

    Kuck, D J; Davidson, E S; Lawrie, D H; Sameh, A H

    1986-02-28

    More and more scientists and engineers are becoming interested in using supercomputers. Earlier barriers to using these machines are disappearing as software for their use improves. Meanwhile, new parallel supercomputer architectures are emerging that may provide rapid growth in performance. These systems may use a large number of processors with an intricate memory system that is both parallel and hierarchical; they will require even more advanced software. Compilers that restructure user programs to exploit the machine organization seem to be essential. A wide range of algorithms and applications is being developed in an effort to provide high parallel processing performance in many fields. The Cedar supercomputer, presently operating with eight processors in parallel, uses advanced system and applications software developed at the University of Illinois during the past 12 years. This software should allow the number of processors in Cedar to be doubled annually, providing rapid performance advances in the next decade. PMID:17740294

  6. Supercomputing the Climate

    NASA Video Gallery

    Goddard Space Flight Center is the home of a state-of-the-art supercomputing facility called the NASA Center for Climate Simulation (NCCS) that is capable of running highly complex models to help s...

  7. Green Supercomputing at Argonne

    ScienceCinema

    Pete Beckman

    2010-01-08

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), talks about Argonne National Laboratory's green supercomputing: everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently.

  8. Energy Efficient Supercomputing

    SciTech Connect

    Antypas, Katie

    2014-10-17

    Katie Antypas, head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy-efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  9. Parallel supercomputing: Advanced methods, algorithms and software for large-scale problems. Final report, August 1, 1987--July 31, 1994

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1994-12-31

    The focus of the subject DOE sponsored research concerns parallel methods, algorithms, and software for complex applications such as those in coupled fluid flow and heat transfer. The research has been directed principally toward the solution of large-scale PDE problems using iterative solvers for finite differences and finite elements on advanced computer architectures. This work embraces parallel domain decomposition, element-by-element, spectral, and multilevel schemes with adaptive parameter determination, rational iteration and related issues. In addition to the fundamental questions related to developing new methods and mapping these to parallel computers, there are important software issues. The group has played a significant role in the development of software both for iterative solvers and also for finite element codes. The research in computational fluid dynamics (CFD) led to sustained multi-Gigaflop performance rates for parallel-vector computations of realistic large scale applications (not computational kernels alone). The main application areas for these performance studies have been two-dimensional problems in CFD. Over the course of this DOE sponsored research significant progress has been made. A report of the progression of the research is given and at the end of the report is a list of related publications and presentations over the entire grant period.

  10. A training program for scientific supercomputing users

    SciTech Connect

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  11. Supercomputer makers of Tokyo

    SciTech Connect

    Davis, N.

    1987-03-01

    Computers manufactured by the Japanese companies NEC, Hitachi and Fujitsu are encroaching on the capabilities of supercomputers produced by Cray. The Japan National Aerospace Lab is to receive a supercomputer, while other machines are being applied to fast breeder reactor and fusion studies. A 140 Mflop device is being used by a jet-engine and turbo-pump manufacturer and several national universities are buying mainframes for cut-rate prices from Japanese manufacturers who wish to encourage engineering students to work for the manufacturers. A Cray X-MP is now owned by Nissan and used to accelerate car R&D. The same company will design and produce the strap-on boosters for the H-II launch vehicle. Also noted are research efforts to develop machines an order of magnitude more powerful than current supercomputers.

  12. Automated Help System For A Supercomputer

    NASA Technical Reports Server (NTRS)

    Callas, George P.; Schulbach, Catherine H.; Younkin, Michael

    1994-01-01

    Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.

  13. Energy sciences supercomputing 1990

    SciTech Connect

    Mirin, A.A.; Kaiper, G.V.

    1990-01-01

    This report contains papers on the following topics: meeting the computational challenge; lattice gauge theory: probing the standard model; supercomputing for the superconducting super collider; an overview of ongoing studies in climate model diagnosis and intercomparison; MHD simulation of the fueling of a tokamak fusion reactor through the injection of compact toroids; gyrokinetic particle simulation of tokamak plasmas; analyzing chaos: a visual essay in nonlinear dynamics; supercomputing and research in theoretical chemistry; monte carlo simulations of light nuclei; parallel processing; and scientists of the future: learning by doing.

  14. Effective Use of Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Kramer, William T. C.; Craw, James M.

    1989-01-01

    The effective use of a supercomputer depends on many aspects, including the ability of users to write efficient programs to use the resources of the system in an optimal manner. However, it is the responsibility of the system managers of these systems to ensure that the maximum effectiveness of the overall system is achieved. Many varying techniques have been developed at the Numerical Aerodynamic Simulation (NAS) Program to advance the management of these critical systems. Many of the issues and techniques used for managing supercomputers are common to multi-user UNIX systems, regardless of the version of UNIX or the power of the hardware. However, a UNICOS supercomputer presents some special challenges and requires additional features and tools to be developed to effectively manage the system. Only part of the challenge is related to performance monitoring and improvement. Much of the responsibility of the system manager is to provide fair and consistent access to the system resources, which is at times a difficult problem. After an introduction to the environment at the NAS Program is given as background, this paper first discusses the areas that are common to UNIX system management. It then discusses the specific areas of UNICOS that must be used to operate the system efficiently. The paper goes on to discuss the methods of supporting individual users in order to increase their effectiveness and the efficiency of their programs. This is accomplished through a professional support staff who interact on a daily basis to support the NAS scientific client community.

  15. Supercomputers: Super-polluters?

    SciTech Connect

    Mills, Evan; Mills, Evan; Tschudi, William; Shalf, John; Simon, Horst

    2008-04-08

    Thanks to imperatives for limiting waste heat, maximizing performance, and controlling operating cost, energy efficiency has been a driving force in the evolution of supercomputers. The challenge going forward will be to extend these gains to offset the steeply rising demands for computing services and performance.

  16. Supercomputers Of The Future

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.

  17. Supercomputing systems - A projection to 2000

    NASA Technical Reports Server (NTRS)

    Lundstrom, S. F.

    1990-01-01

    Advances in computer architecture, computer science, computational methods, and constituent technologies are expected to lead to significant advances in the performance of scientific supercomputing system capabilities over the next decade. By the year 2000, single one-square-inch dies are projected to incorporate four processors, each of which would be operating faster than 750 million instructions per second (MIPS) for a total on-chip processing performance in excess of 2000 MIPS. Scalable parallel processors can be expected to contain thousands of such multiple processor chips. In general, semiconductor performance advances appear to change about one order of magnitude every five years. Rotating magnetic memory and communications technology are not advancing as rapidly, with the result that the allocation of functions within the system configurations of future supercomputer systems will require important changes. Availability of massively parallel heterogeneous processing capabilities should be a catalyst leading to new approaches for applications.

  18. The use of supercomputers in the search for oil

    SciTech Connect

    Yang, R.L.

    1982-11-01

    Over the last decade, there have been significant advances in geophysical processing and reservoir simulation techniques. With these advances comes a rapidly increasing computing power requirement, especially in the areas of 3-dimensional seismic processing and reservoir simulation model studies. The development and application of supercomputers to energy problems is a response to this need for high computing power. The use of supercomputers is now required in both the exploration for and the exploitation of oil and gas worldwide.

  19. Ice Storm Supercomputer

    ScienceCinema

    None

    2013-05-28

    "A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed 'Ice Storm,' this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen." For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.

  20. Green Supercomputing at Argonne

    ScienceCinema

    Beckman, Pete

    2013-04-19

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), talks about Argonne National Laboratory's green supercomputing: everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/

  1. Predicting Hurricanes with Supercomputers

    SciTech Connect

    2010-01-01

    Hurricane Emily, formed in the Atlantic Ocean on July 10, 2005, was the strongest hurricane ever to form before August. By checking computer models against the actual path of the storm, researchers can improve hurricane prediction. In 2010, NOAA researchers were awarded 25 million processor-hours on Argonne's BlueGene/P supercomputer for the project. Read more at http://go.usa.gov/OLh

  2. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
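
    One of the named techniques, loop blocking, can be sketched briefly: the loops of a matrix product are tiled so that each sub-block is reused while it stays in fast memory. This is a generic illustration, not the paper's code, and in practice the payoff appears in compiled languages rather than pure Python.

      # Cache-blocked matrix multiply (illustrative).
      import numpy as np

      def blocked_matmul(A, B, block=32):
          n = A.shape[0]
          C = np.zeros((n, n))
          for i in range(0, n, block):
              for j in range(0, n, block):
                  for k in range(0, n, block):
                      # One tile-sized update; the A and B tiles are reused.
                      C[i:i+block, j:j+block] += (
                          A[i:i+block, k:k+block] @ B[k:k+block, j:j+block])
          return C

      A = np.random.rand(128, 128)
      B = np.random.rand(128, 128)
      assert np.allclose(blocked_matmul(A, B), A @ B)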

  3. File servers, networking, and supercomputers

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.

    1992-01-01

    One of the major tasks of a supercomputer center is managing the massive amount of data generated by application codes. A data flow analysis of the San Diego Supercomputer Center is presented that illustrates the hierarchical data buffering/caching capacity requirements and the associated I/O throughput requirements needed to sustain file service and archival storage. Usage paradigms are examined for both tightly-coupled and loosely-coupled file servers linked to the supercomputer by high-speed networks.

  4. File servers, networking, and supercomputers

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.

    1991-01-01

    One of the major tasks of a supercomputer center is managing the massive amount of data generated by application codes. A data flow analysis of the San Diego Supercomputer Center is presented that illustrates the hierarchical data buffering/caching capacity requirements and the associated I/O throughput requirements needed to sustain file service and archival storage. Usage paradigms are examined for both tightly-coupled and loosely-coupled file servers linked to the supercomputer by high-speed networks.

  5. Overview of the I-way : wide area visual supercomputing.

    SciTech Connect

    DeFanti, T. A.; Foster, I.; Papka, M. E.; Stevens, R.; Kuhfuss, T.; Univ. of Illinois at Chicago

    1996-01-01

    This paper discusses the I-WAY project and provides an overview of the papers in this issue of IJSA. The I-WAY is an experimental environment for building distributed virtual reality applications and for exploring issues of distributed wide area resource management and scheduling. The goal of the I-WAY project is to enable researchers to use multiple internetworked supercomputers and advanced visualization systems to conduct very large-scale computations. By connecting a dozen ATM testbeds, seventeen supercomputer centers, five virtual reality research sites, and over sixty applications groups, the I-WAY project has created an extremely diverse wide area environment for exploring advanced applications. This environment has provided a glimpse of the future for advanced scientific and engineering computing. The I-WAY, or Information Wide Area Year, was a year-long effort to link existing national testbeds based on ATM (asynchronous transfer mode) to interconnect supercomputer centers, virtual reality (VR) research locations, and applications development sites. The I-WAY was successfully demonstrated at Supercomputing '95 and included over sixty distributed supercomputing applications that used a variety of supercomputing resources and VR displays.

  6. AICD -- Advanced Industrial Concepts Division Biological and Chemical Technologies Research Program. 1993 Annual summary report

    SciTech Connect

    Petersen, G.; Bair, K.; Ross, J.

    1994-03-01

    The annual summary report presents the fiscal year (FY) 1993 research activities and accomplishments for the United States Department of Energy (DOE) Biological and Chemical Technologies Research (BCTR) Program of the Advanced Industrial Concepts Division (AICD). This AICD program resides within the Office of Industrial Technologies (OIT) of the Office of Energy Efficiency and Renewable Energy (EE). The annual summary report for 1993 (ASR 93) contains the following: A program description (including BCTR program mission statement, historical background, relevance, goals and objectives), program structure and organization, selected technical and programmatic highlights for 1993, detailed descriptions of individual projects, a listing of program output, including a bibliography of published work, patents, and awards arising from work supported by BCTR.

  7. Distributed user services for supercomputers

    NASA Technical Reports Server (NTRS)

    Sowizral, Henry A.

    1989-01-01

    User-service operations at supercomputer facilities are examined. The question is whether a single, possibly distributed, user-services organization could be shared by NASA's supercomputer sites in support of a diverse, geographically dispersed, user community. A possible structure for such an organization is identified as well as some of the technologies needed in operating such an organization.

  8. Supercomputer Environments for Science Applications

    NASA Astrophysics Data System (ADS)

    McNamara, Brendan

    1987-11-01

    Current implementations of Roberts' rules for programming for supercomputers are reviewed. The coming Class VII computers will require more powerful software tools to realize their full potential, with more use of knowledge systems, symbolic manipulation, automated programming, parallel processing, parallelized graphics, and an integration of high-performance work stations into the supercomputing environment. The need for these approaches is illustrated with simple examples.

  9. Enabling department-scale supercomputing

    SciTech Connect

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  10. Supercomputing Sheds Light on the Dark Universe

    SciTech Connect

    Salman Habib

    2012-11-15

    At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named a finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.

  11. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing Modernization Program (HPCMP) and the NASA Advanced Supercomputing (NAS) Division, a study was conducted to assess the role of supercomputers in computational aeroelasticity of aerospace vehicles. The study is mostly based on responses to a web-based questionnaire designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  12. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
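
    The abstract names a Torus among the machine's networks without giving its dimensionality; the sketch below illustrates the general idea with an assumed 3-D torus, where every node has two wraparound neighbors per dimension.

      # Neighbors of a node in a torus interconnect (dimensions assumed).
      def torus_neighbors(coord, dims):
          """Return the 2*len(dims) neighbor coordinates, with wraparound."""
          out = []
          for axis in range(len(dims)):
              for step in (-1, +1):
                  c = list(coord)
                  c[axis] = (c[axis] + step) % dims[axis]
                  out.append(tuple(c))
          return out

      print(torus_neighbors((0, 0, 0), (8, 8, 8)))
      # six neighbors: (7,0,0), (1,0,0), (0,7,0), (0,1,0), (0,0,7), (0,0,1)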

  13. Sandia's network for Supercomputer '96: Linking supercomputers in a wide area Asynchronous Transfer Mode (ATM) network

    SciTech Connect

    Pratt, T.J.; Martinez, L.G.; Vahle, M.O.

    1997-04-01

    The advanced networking department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past several years as a forum to demonstrate and focus communication and networking developments. At Supercomputing '96, for the first time, Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory combined their Supercomputing '96 activities within a single research booth under the ASO banner. Sandia provided the network design and coordinated the networking activities within the booth. At Supercomputing '96, Sandia elected: to demonstrate wide area network connected Massively Parallel Processors, to demonstrate the functionality and capability of Sandia's new edge architecture, to demonstrate inter-continental collaboration tools, and to demonstrate ATM video capabilities. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  14. Quantum Hamiltonian Physics with Supercomputers

    NASA Astrophysics Data System (ADS)

    Vary, James P.

    2014-06-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark-gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast-developing field of supercomputer simulations are also discussed.

  15. Supercomputer debugging workshop 1991 proceedings

    SciTech Connect

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  16. Supercomputer debugging workshop 1991 proceedings

    SciTech Connect

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  17. Design applications for supercomputers

    NASA Technical Reports Server (NTRS)

    Studerus, C. J.

    1987-01-01

    The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes the solution of these complex flows more practical, permits the introduction of the codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming so. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.

  18. Supercomputing research networks

    NASA Astrophysics Data System (ADS)

    Mulhall, Marguerite

    The need for a federal high-performance computing network has been presented before several congressional committees over the last several weeks. Championing the cause is Senator Albert Gore, Jr. (D-TN), Chairman of the Senate Subcommittee on Science, Space and Technology. Gore has been trumpeting the creation of computer “superhighways,” which would link supercomputers at universities, national laboratories and in industry and have the capability of transmitting 1000 times more data per second than current networks. The National High-Performance Computer Technology Act (S.1067) was introduced by Gore in May 1989 and authorizes $1.75 billion over 5 years for the program. The proposed program would involve the National Science Foundation, National Aeronautics and Space Administration, Department of Energy and Department of Defense. Other agencies such as the National Oceanic and Atmospheric Administration, the U.S. Geological Survey, National Institutes of Health and Library of Congress would also play important roles. Gore's bill authorizes specific funds for the NSF in this area; however, to gain authorization for the program for some of the other agencies, “companion” bills must be introduced.

  19. Storage needs in future supercomputer environments

    NASA Technical Reports Server (NTRS)

    Coleman, Sam

    1992-01-01

    The Lawrence Livermore National Laboratory (LLNL) is a Department of Energy contractor, managed by the University of California since 1952. Major projects at the Laboratory include the Strategic Defense Initiative, nuclear weapon design, magnetic and laser fusion, laser isotope separation, and weather modeling. The Laboratory employs about 8,000 people. There are two major computer centers: The Livermore Computer Center and the National Energy Research Supercomputer Center. As we increase the computing capacity of LLNL systems and develop new applications, the need for archival capacity will increase. Rather than quantify that increase, I will discuss the hardware and software architectures that we will need to support advanced applications.

  20. Role of supercomputers in magnetic fusion and energy research programs

    SciTech Connect

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained.

  1. Supercomputers in mechanical systems research

    NASA Astrophysics Data System (ADS)

    Soni, A. H.

    The state of the art in supercomputers is examined vis a vis mechanical systems research. A list of 40 Class VI supercomputers is given, with sites, purposes and computer type specified; purposes include weapons research, reactor research, military, atmospheric science, aerodynamics, oceanography, engineering research, geophysics, petroleum engineering, and jet engine simulation. The availability of such machines has motivated scientists and engineers to explore and formulate new research problems previously considered completely intractable. Six problems in the area of robot mechanisms, suitable for research with supercomputers, are examined: generalized Burmester theory in space, analysis and synthesis of a general 6-R, dimensional synthesis of mechanisms, generation of new design concepts, kineto-elasto dynamic synthesis of mechanical systems, and dynamic response analysis of mechanical systems with emphasis on design.

  2. Mira: Argonne's 10-petaflops supercomputer

    SciTech Connect

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2013-07-03

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  3. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2014-06-05

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  4. Radioactive waste shipments to Hanford retrievable storage from Westinghouse Advanced Reactors and Nuclear Fuels Divisions, Cheswick, Pennsylvania

    SciTech Connect

    Duncan, D.; Pottmeyer, J.A.; Weyns, M.I.; Dicenso, K.D.; DeLorenzo, D.S.

    1994-04-01

    During the next two decades the transuranic (TRU) waste now stored in the burial trenches and storage facilities at the Hanford Site in southeastern Washington State is to be retrieved, processed at the Waste Receiving and Processing Facility, and shipped to the Waste Isolation Pilot Plant (WIPP), near Carlsbad, New Mexico for final disposal. Approximately 5.7 percent of the TRU waste to be retrieved for shipment to WIPP was generated by the decontamination and decommissioning (D&D) of the Westinghouse Advanced Reactors Division (WARD) and the Westinghouse Nuclear Fuels Division (WNFD) in Cheswick, Pennsylvania and shipped to the Hanford Site for storage. This report characterizes these radioactive solid wastes using process knowledge, existing records, and oral history interviews.

  5. Computational plasma physics and supercomputers

    SciTech Connect

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics.

  6. Multi-petascale highly efficient parallel supercomputer

    SciTech Connect

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A multi-petascale, highly efficient parallel supercomputer of 100-petaOPS-scale computing, at decreased cost, power, and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The supercomputer exploits technological advances in VLSI that enable a computing model in which many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources. This enables adaptive partitioning of the processors among functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that maximizes the throughput of packet communications between nodes and minimizes latency.
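
    To make the five-dimensional torus concrete: each node has ten links, one in each direction along each dimension, with wrap-around at the edges. A minimal Python sketch (with made-up dimension extents, not Blue Gene/Q's actual ones) of how a node's neighbors are enumerated:

        def torus_neighbors(coord, dims):
            """Yield the +/-1 neighbors of `coord` in a torus of extents `dims`."""
            for axis in range(len(dims)):
                for step in (-1, 1):
                    n = list(coord)
                    n[axis] = (n[axis] + step) % dims[axis]  # wrap-around link
                    yield tuple(n)

        dims = (4, 4, 4, 4, 4)  # illustrative 5-D extents only
        assert len(set(torus_neighbors((0, 0, 0, 0, 0), dims))) == 10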

  7. Seismic signal processing on heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in recent years provide numerous computing nodes interconnected via high-throughput networks, each node containing a mix of processing elements of different architectures, such as several sequential processor cores and one or a few graphics processing units (GPUs) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers offer the opportunity for a manifold increase in application performance and are more energy-efficient; however, they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is the design of a prototype of such a library, suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that
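
    As a sketch of the computational core of such an ambient noise correlation application: pairwise station correlations are typically computed in the frequency domain. The following Python/NumPy fragment is illustrative only; the whitening step and all names are assumptions, not the authors' library.

        import numpy as np

        def noise_cross_correlation(trace_a, trace_b, max_lag):
            """FFT-based cross-correlation of two equal-length noise traces
            (requires max_lag < len(trace_a))."""
            n = len(trace_a)
            nfft = 2 * n                      # zero-pad to avoid circular wrap
            fa = np.fft.rfft(trace_a, nfft)
            fb = np.fft.rfft(trace_b, nfft)
            fa /= np.abs(fa) + 1e-12          # spectral whitening: keep phase,
            fb /= np.abs(fb) + 1e-12          # discard amplitude (optional)
            cc = np.fft.irfft(fa * np.conj(fb), nfft)
            cc = np.roll(cc, n)               # move zero lag to the center
            return cc[n - max_lag : n + max_lag + 1]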

  8. Academic and Career Advancement for Black Male Athletes at NCAA Division I Institutions

    ERIC Educational Resources Information Center

    Baker, Ashley R.; Hawkins, Billy J.

    2016-01-01

    This chapter examines the structural arrangements and challenges many Black male athletes encounter as a result of their use of sport for upward social mobility. Recommendations to enhance their preparation and advancement are provided.

  9. TOP500 Supercomputers for June 2004

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  10. TOP500 Supercomputers for June 2005

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  11. 11th Annual NIH Pain Consortium Symposium on Advances in Pain Research | Division of Cancer Prevention

    Cancer.gov

    The NIH Pain Consortium will convene the 11th Annual NIH Pain Consortium Symposium on Advances in Pain Research, featuring keynote speakers and expert panel sessions on Innovative Models and Methods. The first keynote address will be delivered by David J. Clark, MD, PhD, Stanford University, entitled “Challenges of Translational Pain Research: What Makes a Good Model?”

  12. Building Black Holes: Supercomputer Cinema

    NASA Astrophysics Data System (ADS)

    Shapiro, Stuart L.; Teukolsky, Saul A.

    1988-07-01

    A new computer code can solve Einstein's equations of general relativity for the dynamical evolution of a relativistic star cluster. The cluster may contain a large number of stars that move in a strong gravitational field at speeds approaching the speed of light. Unstable star clusters undergo catastrophic collapse to black holes. The collapse of an unstable cluster to a supermassive black hole at the center of a galaxy may explain the origin of quasars and active galactic nuclei. By means of a supercomputer simulation and color graphics, the whole process can be viewed in real time on a movie screen.

  13. Film shape calculations on supercomputers

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.

    1982-01-01

    Both scalar and vector operations are described to demonstrate the usefulness of supercomputers (computers with peak computing speeds exceeding 100 million operations per second) in solving tribological problems. A simple kernel of the film shape calculations in an elastohydrodynamically lubricated rectangular contact is presented and the relevant equations are described. Both scalar and vector versions of the film shape code are presented. The run times of the two types of code indicate that the vector code runs more than 50 times faster than the scalar code for vector lengths typically used in elastohydrodynamic lubrication analysis.
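
    The scalar-versus-vector contrast the paper measures can be illustrated with a deliberately simplified film-shape formula, h(x) = h0 + x^2/(2R); the real EHL kernel also includes an elastic deformation term. A hedged Python/NumPy sketch:

        import numpy as np

        def film_shape_scalar(x, h0, R):
            h = np.empty_like(x)
            for i in range(len(x)):          # one element per trip: scalar style
                h[i] = h0 + x[i] * x[i] / (2.0 * R)
            return h

        def film_shape_vector(x, h0, R):
            return h0 + x * x / (2.0 * R)    # whole-array operation: vector style

        x = np.linspace(-1e-3, 1e-3, 10_000)
        assert np.allclose(film_shape_scalar(x, 1e-6, 0.02),
                           film_shape_vector(x, 1e-6, 0.02))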

  14. A UNIX interface to supercomputers

    SciTech Connect

    McBryan, O.A.

    1985-01-01

    We describe a convenient interface between UNIX-based workstations or minicomputers and supercomputers such as the CRAY series machines. Using this interface, the user can issue commands entirely on the UNIX system, with remote compilation, loading, and execution performed on the supercomputer. The interface is not a remote login interface. Rather, the domains of various UNIX utilities such as compilers, archivers, and loaders are extended to include the CRAY. The user need know essentially nothing about the CRAY operating system, commands, or filename restrictions. Standard UNIX utilities will perform CRAY operations transparently: UNIX command names and arguments are mapped to corresponding CRAY equivalents, suitable options are selected as needed, UNIX directory-tree filenames are coerced to allowable CRAY names, and all source and output files are automatically transferred between the machines. The primary purpose of the software is to allow the programmer to benefit from the interactive features of UNIX systems, including screen editors and software maintenance utilities such as make and SCCS, and in general to avail of the large set of UNIX text manipulation features. The interface was designed particularly to support development of very large multi-file programs, possibly consisting of hundreds of files and hundreds of thousands of lines of code. All CRAY source is kept on the workstation. We have found that, using the software, the complete program development phase for a large CRAY application may be performed entirely on a workstation.

  15. Advanced Spatial-Division Multiplexed Measurement Systems Propositions-From Telecommunication to Sensing Applications: A Review.

    PubMed

    Weng, Yi; Ip, Ezra; Pan, Zhongqi; Wang, Ting

    2016-01-01

    The concepts of spatial-division multiplexing (SDM) technology were first proposed in the telecommunications industry as an indispensable solution to reduce the cost-per-bit of optical fiber transmission. Recently, such spatial channels and modes have been applied in optical sensing applications where the returned echo is analyzed for the collection of essential environmental information. The key advantages of implementing SDM techniques in optical measurement systems include the multi-parameter discriminative capability and accuracy improvement. In this paper, to help readers without a telecommunication background better understand how the SDM-based sensing systems can be incorporated, the crucial components of SDM techniques, such as laser beam shaping, mode generation and conversion, multimode or multicore elements using special fibers and multiplexers are introduced, along with the recent developments in SDM amplifiers, opto-electronic sources and detection units of sensing systems. The examples of SDM-based sensing systems not only include Brillouin optical time-domain reflectometry or Brillouin optical time-domain analysis (BOTDR/BOTDA) using few-mode fibers (FMF) and the multicore fiber (MCF) based integrated fiber Bragg grating (FBG) sensors, but also involve the widely used components with their whole information used in the full multimode constructions, such as the whispering gallery modes for fiber profiling and chemical species measurements, the screw/twisted modes for examining water quality, as well as the optical beam shaping to improve cantilever deflection measurements. Besides, the various applications of SDM sensors, the cost efficiency issue, as well as how these complex mode multiplexing techniques might improve the standard fiber-optic sensor approaches using single-mode fibers (SMF) and photonic crystal fibers (PCF) have also been summarized. Finally, we conclude with a prospective outlook for the opportunities and challenges of SDM

  16. Misleading performance in the supercomputing field

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1992-01-01

    The problems of misleading performance reporting and the evident lack of careful refereeing in the supercomputing field are discussed in detail. Included are some examples that have appeared in recently published scientific papers. Some guidelines for reporting performance are presented, the adoption of which would raise the level of professionalism and reduce the level of confusion in the field of supercomputing.

  17. Supercomputer networking for space science applications

    NASA Technical Reports Server (NTRS)

    Edelson, B. I.

    1992-01-01

    The initial design of a supercomputer network topology including the design of the communications nodes along with the communications interface hardware and software is covered. Several space science applications that are proposed experiments by GSFC and JPL for a supercomputer network using the NASA ACTS satellite are also reported.

  18. Applications of parallel supercomputers: Scientific results and computer science lessons

    SciTech Connect

    Fox, G.C.

    1989-07-12

    Parallel computing has come of age with several commercial and in-house systems that deliver supercomputer performance. We illustrate this with several major computations completed or underway at Caltech on hypercubes, transputer arrays, and the SIMD Connection Machine CM-2 and AMT DAP. Applications covered are lattice gauge theory, computational fluid dynamics, subatomic string dynamics, statistical and condensed matter physics, theoretical and experimental astronomy, quantum chemistry, plasma physics, grain dynamics, computer chess, graphics ray tracing, and Kalman filters. We use these applications to compare the performance of several advanced-architecture computers, including the conventional CRAY and ETA-10 supercomputers. We describe which problems are suitable for which computers in terms of a matching between problem and computer architecture. This is part of a set of lessons we draw for hardware, software, and performance. We speculate on the emergence of new academic disciplines motivated by the growing importance of computers. 138 refs., 23 figs., 10 tabs.

  19. Supercomputing 2002: NAS Demo Abstracts

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor)

    2002-01-01

    The hyperwall is a new concept in visual supercomputing, conceived and developed by the NAS Exploratory Computing Group. The hyperwall will allow simultaneous and coordinated visualization of and interaction with an array of processes, such as the computations of a parameter study or the parallel evolutions of a genetic algorithm population. Making over 65 million pixels available to the user, the hyperwall will enable and elicit qualitatively new ways of leveraging computers to accomplish science. It is currently still unclear whether we will be able to transport the hyperwall to SC02. The crucial display frame still has not been completed by the metal fabrication shop, although they promised an August delivery. Also, we are still working the fragile-node issue, which may require transplantation of the compute nodes from the present 2U cases into 3U cases. This modification will increase the present 3-rack configuration to 5 racks.

  20. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
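
    To illustrate why polynomial preconditioning suits vector and parallel machines: it replaces the sequential triangular solves of incomplete factorizations with matrix-vector products, which vectorize well. A minimal Python sketch of the idea (a truncated Neumann series around the diagonal; an illustration, not the survey's code):

        import numpy as np

        def poly_precond(A, r, degree=3):
            """Approximate A^-1 r by sum_{k<=degree} (I - D^-1 A)^k D^-1 r."""
            d = np.diag(A)
            z = r / d                          # k = 0 term
            term = z.copy()
            for _ in range(degree):
                term = term - (A @ term) / d   # term <- (I - D^-1 A) term
                z = z + term
            return z

        def pcg(A, b, tol=1e-8, max_iter=500):
            """Conjugate gradient with the polynomial preconditioner above."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = poly_precond(A, r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = poly_precond(A, r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x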

  1. Spack: the Supercomputing Package Manager

    Energy Science and Technology Software Center (ESTSC)

    2013-11-09

    The HPC software ecosystem is growing larger and more complex, but software distribution mechanisms have not kept up with this trend. Tools, libraries, and applications need to run on multiple platforms and build with multiple compilers. Increasingly, packages leverage common software components, and building any one component requires building all of its dependencies. In HPC environments, ABI-incompatible interfaces (like MPI), binary-incompatible compilers, and cross-compiled environments converge to make the build process a combinatoric nightmare. This obstacle deters many users from adopting useful tools, and others waste countless hours building and rebuilding tools. Many package managers exist to solve these problems for typical desktop environments, but none suits the unique needs of supercomputing facilities or users. To address these problems, we have developed Spack, a package manager that eases the task of managing software for end-users, across multiple platforms, package versions, compilers, and ABI incompatibilities.

  2. Spack: the Supercomputing Package Manager

    SciTech Connect

    Gamblin, T.

    2013-11-09

    The HPC software ecosystem is growing larger and more complex, but software distribution mechanisms have not kept up with this trend. Tools, libraries, and applications need to run on multiple platforms and build with multiple compilers. Increasingly, packages leverage common software components, and building any one component requires building all of its dependencies. In HPC environments, ABI-incompatible interfaces (like MPI), binary-incompatible compilers, and cross-compiled environments converge to make the build process a combinatoric nightmare. This obstacle deters many users from adopting useful tools, and others waste countless hours building and rebuilding tools. Many package managers exist to solve these problems for typical desktop environments, but none suits the unique needs of supercomputing facilities or users. To address these problems, we have developed Spack, a package manager that eases the task of managing software for end-users, across multiple platforms, package versions, compilers, and ABI incompatibilities.

  3. TOP500 Supercomputers for November 2003

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  4. A guidebook to Fortran on supercomputers

    SciTech Connect

    Levesque, J.M.; Williamson, J.W.

    1988-01-01

    This book explains in detail both the underlying architecture of today's supercomputers and the manner in which a compiler maps Fortran code onto that architecture. Most important, the constructs preventing full optimization are outlined, and specific strategies for restructuring a program are provided. Based on the authors' actual experience restructuring existing programs for particular supercomputers, the book generally follows the format of a series of supercomputer seminars that the authors regularly present on a worldwide basis. All examples are explained with actual Fortran code; no mathematical abstractions such as dataflow graphs are used.

  5. Proceedings of the first energy research power supercomputer users symposium

    SciTech Connect

    Not Available

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, go beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  6. The Pawsey Supercomputer geothermal cooling project

    NASA Astrophysics Data System (ADS)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  7. Advances in astronomy (Scientific session of the Physical Sciences Division of the Russian Academy of Sciences, 27 February 2013)

    NASA Astrophysics Data System (ADS)

    2013-07-01

    A scientific session of the Division of Physical Sciences of the Russian Academy of Sciences (RAS), entitled "Advances in Astronomy" was held on 27 February 2013 at the conference hall of the Lebedev Physical Institute, RAS. The following reports were put on the session agenda posted on the website http://www.gpad.ac.ru of the RAS Physical Sciences Division: (1) Chernin A D (Sternberg Astronomical Institute, Moscow State University, Moscow) "Dark energy in the local Universe: HST data, nonlinear theory, and computer simulations"; (2) Gnedin Yu N (Main (Pulkovo) Astronomical Observatory, RAS, St. Petersburg) "A new method of supermassive black hole studies based on polarimetric observations of active galactic nuclei"; (3) Efremov Yu N (Sternberg Astronomical Institute, Moscow State University, Moscow) "Our Galaxy: grand design and moderately active nucleus"; (4) Gilfanov M R (Space Research Institute, RAS, Moscow) "X-ray binaries, star formation, and type-Ia supernova progenitors"; (5) Balega Yu Yu (Special Astrophysical Observatory, RAS, Nizhnii Arkhyz, Karachaevo-Cherkessia Republic) "The nearest 'star factory' in the Orion Nebula"; (6) Bisikalo D V (Institute of Astronomy, RAS, Moscow) "Atmospheres of giant exoplanets"; (7) Korablev O I (Space Research Institute, RAS, Moscow) "Spectroscopy of the atmospheres of Venus and Mars: new methods and new results"; (8) Ipatov A V (Institute of Applied Astronomy, RAS, St. Petersburg) "A new-generation radio interferometer for fundamental and applied research". Summaries of the papers based on reports 1, 2, 4, 7, 8 are given below. • Dark energy in the nearby Universe: HST data, nonlinear theory, and computer simulations, A D Chernin Physics-Uspekhi, 2013, Volume 56, Number 7, Pages 704-709 • Investigating supermassive black holes: a new method based on the polarimetric observations of active galactic nuclei, Yu N Gnedin Physics-Uspekhi, 2013, Volume 56, Number 7, Pages 709-714 • X-ray binaries and star formation, M R

  8. Simulating performance sensitivity of supercomputer job parameters.

    SciTech Connect

    Clearwater, Scott Harvey; Kleban, Stephen David

    2003-03-01

    We report on the use of a supercomputer simulation to study the performance sensitivity to systematic changes in the job parameters of run time, number of CPUs, and interarrival time. We also examine the effect of changes in share allocation and service ratio for job prioritization under a fair-share queuing algorithm to see the effect on facility figures of merit. We used log data from the ASCI supercomputer Blue Mountain and the ASCI simulator BIRMinator to perform this study. The key finding is that the performance of the supercomputer is quite sensitive to all the job parameters: utilization and rapid turnaround are most sensitive to the interarrival rate of jobs at the highest rates, and least sensitive to increases in run time. We also find that this facility is running near its maximum practical utilization. Finally, we show the importance of the use of simulation in understanding the performance sensitivity of a supercomputer.
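
    A toy illustration of the kind of sensitivity experiment described (the real study used Blue Mountain log data and the BIRMinator simulator; this Python sketch drops jobs it cannot start immediately, a deliberate simplification):

        import heapq, random

        def utilization(n_jobs, total_cpus, mean_arrival, mean_run, job_cpus):
            """Fraction of CPU-time used as the mean interarrival time varies."""
            random.seed(1)
            t, free, busy = 0.0, total_cpus, 0.0
            running = []                              # heap of (finish_time, cpus)
            for _ in range(n_jobs):
                t += random.expovariate(1.0 / mean_arrival)
                while running and running[0][0] <= t: # reclaim finished jobs
                    _, c = heapq.heappop(running)
                    free += c
                if free >= job_cpus:                  # start now, or drop (no queue)
                    run = random.expovariate(1.0 / mean_run)
                    heapq.heappush(running, (t + run, job_cpus))
                    free -= job_cpus
                    busy += run * job_cpus
            end = max((f for f, _ in running), default=t)
            return busy / (end * total_cpus)

        for ia in (10.0, 5.0, 2.0, 1.0):              # shrinking interarrival time
            print(ia, round(utilization(5000, 1024, ia, 100.0, 64), 3))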

  9. Graphics Flip Cube for Supercomputing 1998

    NASA Technical Reports Server (NTRS)

    Gong, Chris; Reid, Lisa (Technical Monitor)

    1998-01-01

    Flip cube (constructed of heavy plastic) displays 11 graphics representing current projects or demos from 5 NASA centers participating in Supercomputing '98 (SC98). Included with the images are the URLs and names of the NASA centers.

  10. Introducing Mira, Argonne's Next-Generation Supercomputer

    SciTech Connect

    2013-03-19

    Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.

  11. TOP500 Supercomputers for November 2004

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  12. Misleading Performance Reporting in the Supercomputing Field

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Kutler, Paul (Technical Monitor)

    1992-01-01

    In a previous humorous note, I outlined twelve ways in which performance figures for scientific supercomputers can be distorted. In this paper, the problem of potentially misleading performance reporting is discussed in detail. Included are some examples that have appeared in recently published scientific papers. This paper also includes some proposed guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion in the field of supercomputing.

  13. Taking ASCI supercomputing to the end game.

    SciTech Connect

    DeBenedictis, Erik P.

    2004-03-01

    The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zetaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancements to microprocessor functionality and to the power efficiency of both the processor and the memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate physical space; reversible logic, analog computers, and other ways to address stockpile stewardship are outside the scope of this report.
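
    The flavor of such a parameterized model can be conveyed in a few lines of Python. The constants below are illustrative assumptions, not the report's calibrated values: raw capability doubles on a Moore's-Law cadence, while the time for a machine-wide synchronization is floored by one light-crossing of the machine.

        def projected_flops(year, base_year=2003, base_flops=3.6e13,
                            doubling_years=1.5):
            """Transistor-scaling extrapolation of raw compute capability."""
            return base_flops * 2.0 ** ((year - base_year) / doubling_years)

        def sync_floor_seconds(machine_diameter_m, c=2.998e8):
            """Speed-of-light lower bound on a global synchronization step."""
            return machine_diameter_m / c

        for year in (2010, 2015, 2020):
            print(year, f"{projected_flops(year):.1e} flop/s,",
                  f"sync floor ~{sync_floor_seconds(30.0) * 1e9:.0f} ns")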

  14. SUSTAINABLE TECHNOLOGY DIVISION - HOME PAGE

    EPA Science Inventory

    The mission of the Sustainable Technology Division is to advance the scientific understanding, development and application of technologies and methods for prevention, removal and control of environmental risks to human health and ecology. The Division is organized into four bra...

  15. Supercomputers and the mathematical modeling of high complexity problems

    NASA Astrophysics Data System (ADS)

    Belotserkovskii, Oleg M.

    2010-12-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  16. Use of Convex supercomputers for flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1992-01-01

    The use of the Convex Computer Corporation supercomputers for flight simulation is discussed, focusing on a real-time input/output system for supporting the flight simulation. The flight simulation computing system is based on two single-processor Control Data Corporation CYBER 175 computers, coupled through extended memory. The Advanced Real-Time Simulation System for digital data distribution and signal conversion is a state-of-the-art, high-speed, fiber-optic-based ring network system based on computer-automated measurement and control technology.

  17. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    SciTech Connect

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to both cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and the cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. This paper presents the hardware and software architecture of such a solution, as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than the reference SMP platform.

  18. Pipelining and dataflow techniques for designing supercomputers

    SciTech Connect

    Su, S.P.

    1982-01-01

    Extensive research has been conducted over the last two decades in developing supercomputers to meet the demand for high computational performance. This thesis investigates some pipelining and dataflow techniques for designing supercomputers. In the pipelining area, new techniques are developed for scheduling vector instructions in a multi-pipeline supercomputer and for constructing VLSI matrix arithmetic pipelines for large-scale matrix computations. In the dataflow area, a new approach is proposed to dispatch high-level functions for dependence-driven computations. A parallel task scheduling model is proposed for multi-pipeline vector supercomputers. This model can be applied to explore maximal concurrencies in vector supercomputers with a structure generalized from the CRAY-1, CYBER-205, and TI-ASC. The optimization problem of simultaneously scheduling multiple pipelines is proved to be NP-complete. Thus, heuristic scheduling algorithms for some restricted classes of vector task systems are developed. Nearly optimal performance can be achieved with the proposed parallel pipeline scheduling method. Simulation results on randomly generated task systems are presented to verify the analytical performance bounds. For dependence-driven computations, a dataflow controller is used to perform run-time scheduling of compound functions. The scheduling problem is shown to be NP-complete. Several heuristic scheduling strategies are proposed based on the time and resource demands of compound functions.

  19. Characterizing output bottlenecks in a supercomputer

    SciTech Connect

    Xie, Bing; Chase, Jeffrey; Dillow, David A; Drokin, Oleg; Klasky, Scott A; Oral, H Sarp; Podhorszki, Norbert

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
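
    The statistical flavor of the methodology, reporting a distribution of delivered bandwidth across many samples rather than a single mean, can be sketched as follows (synthetic data; the paper's samples came from production measurements on Jaguar):

        import numpy as np

        rng = np.random.default_rng(0)
        # synthetic per-interval delivered-bandwidth samples (MB/s)
        samples = rng.gamma(shape=2.0, scale=150.0, size=100_000)

        for q in (5, 25, 50, 75, 95):
            print(f"p{q:02d}: {np.percentile(samples, q):8.1f} MB/s")
        print(f"mean: {samples.mean():8.1f} MB/s  (peak would be far higher)")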

  20. TOP500 Supercomputers for November 2002

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-11-15

    20th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer, installed earlier this year at the Earth Simulator Center in Yokohama, Japan, retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second). The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems were built by Hewlett-Packard and are based on the AlphaServer SC computer system.

  1. TOP500 Supercomputers for June 2003

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  2. TOP500 Supercomputers for June 2002

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is, as expected, the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now-No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  3. Input/output behavior of supercomputing applications

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
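
    The write-behind idea the simulations evaluate can be sketched in a few lines: a write "completes" once it fits in the fast buffer, and the CPU stalls only when a burst overflows the space the buffer has not yet drained to disk. A hedged Python model with made-up parameters:

        def cpu_stall_time(writes, buffer_mb, drain_mb_s):
            """writes: (arrival_time_s, size_mb) pairs for bursty write requests."""
            level, last_t, stall = 0.0, 0.0, 0.0
            for t, size in writes:
                level = max(0.0, level - (t - last_t) * drain_mb_s)  # async drain
                last_t = t
                overflow = level + size - buffer_mb
                if overflow > 0:               # CPU waits for space to drain
                    stall += overflow / drain_mb_s
                    last_t += overflow / drain_mb_s
                    level = buffer_mb          # buffer full after the write lands
                else:
                    level += size
            return stall

        bursts = [(i * 10.0, 4000.0) for i in range(100)]  # 4 GB burst every 10 s
        print(f"total CPU stall: {cpu_stall_time(bursts, 2000.0, 300.0):.1f} s")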

  4. Intelligent supercomputers: the Japanese computer sputnik

    SciTech Connect

    Walter, G.

    1983-11-01

    Japan's government-supported fifth-generation computer project has had a pronounced effect on the American computer and information systems industry. The US firms are intensifying their research on and production of intelligent supercomputers, a combination of computer architecture and artificial intelligence software programs. While the present generation of computers is built for the processing of numbers, the new supercomputers will be designed specifically for the solution of symbolic problems and the use of artificial intelligence software. This article discusses new and exciting developments that will increase computer capabilities in the 1990s. 4 references.

  5. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7

  6. Computational plasma physics and supercomputers. Revision 1

    SciTech Connect

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models.

  7. Roadrunner Supercomputer Breaks the Petaflop Barrier

    ScienceCinema

    Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin

    2010-01-08

    At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. The sustained performance makes Roadrunner more than twice as fast as the current number 1

  8. Fitting the Universe on a Supercomputer

    NASA Astrophysics Data System (ADS)

    White, Simon D. M.; Springel, Volker

    1999-03-01

    Simulations run on the largest available parallel supercomputers are answering the question of how today's rich cosmic structure developed from a smooth, near featureless early universe. Two such simulations illustrate how algorithm and implementation strategies can be optimized for specific cosmological problems.

  9. Roadrunner Supercomputer Breaks the Petaflop Barrier

    SciTech Connect

    Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin

    2008-06-09

    At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. The sustained performance makes Roadrunner more than twice as fast as the current number 1

  10. Solidification in a Supercomputer: From Crystal Nuclei to Dendrite Assemblages

    NASA Astrophysics Data System (ADS)

    Shibuta, Yasushi; Ohno, Munekazu; Takaki, Tomohiro

    2015-08-01

    Thanks to the recent progress in high-performance computational environments, the range of applications of computational metallurgy is expanding rapidly. In this paper, cutting-edge simulations of solidification from the atomic to the microstructural level performed on a graphics processing unit (GPU) architecture are presented, with a brief introduction to advances in computational studies on solidification. In particular, million-atom molecular dynamics simulations captured the spontaneous evolution of anisotropy in a solid nucleus in an undercooled melt and homogeneous nucleation without any inducing factor, which is followed by grain growth. At the microstructural level, the quantitative phase-field model has been gaining importance as a powerful tool for predicting solidification microstructures. In this paper, the convergence behavior of simulation results obtained with this model is discussed in detail. Such convergence ensures the reliability of results of phase-field simulations. Using the quantitative phase-field model, the competitive growth of dendrite assemblages during the directional solidification of a binary alloy bicrystal at the millimeter scale is examined by performing two- and three-dimensional large-scale simulations by multi-GPU computation on the supercomputer TSUBAME2.5. This cutting-edge approach using a GPU supercomputer is opening a new phase in computational metallurgy.

  11. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A&M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI) was purchased and installed. Other equipment, including a desktop personal computer (a PC-486 DX2 with a built-in 10BaseT Ethernet card), a 10BaseT hub, an Apple LaserWriter Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted into a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  12. Travel from a supercomputer to killer micros

    SciTech Connect

    Werner, N.E.

    1991-03-01

    I describe my effort to convert a Fortran application that runs on a parallel supercomputer (Cray Y-MP) to run on a set of BBN TC2000 killer micros. I used both shared-memory parallel processing options available at MPCI for the BBN TC2000: the Parallel Fortran Preprocessor (PFP) and the Uniform System extended Fortran compiler (US). I describe how I used the BBN Xtra programming tools for analysis and debugging during this conversion process. My ultimate goal for this hands-on experiment was to gain insight into the type of tools that might be helpful for porting existing programs from a supercomputer environment to a killer micro environment. 5 refs., 9 figs.

  13. Adventures in Supercomputing: An innovative program

    SciTech Connect

    Summers, B.G.; Hicks, H.R.; Oliver, C.E.

    1995-06-01

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology and serve as a spur to systemic reform. The Adventures in Supercomputing (AiS) program, sponsored by the Department of Energy, is such a program. Adventures in Supercomputing is a program for high school and middle school teachers. It has helped to change the teaching paradigm of many of the teachers involved in the program from a teacher-centered classroom to a student-centered classroom. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but evidences of systemic reform are beginning to surface. After describing the program, the authors discuss the teaching strategies being used and the evidences of systemic change in many of the AiS schools in Tennessee.

  14. Advanced noise reduction techniques for ultra-low phase noise optical-to-microwave division with femtosecond fiber combs.

    PubMed

    Zhang, Wei; Xu, Zhenyu; Lours, Michel; Boudot, Rodolphe; Kersalé, Yann; Luiten, Andre N; Le Coq, Yann; Santarelli, Giorgio

    2011-05-01

    We report what we believe to be the lowest phase noise optical-to-microwave frequency division using fiber-based femtosecond optical frequency combs: a residual phase noise of -120 dBc/Hz at 1 Hz offset from an 11.55 GHz carrier frequency. Furthermore, we report a detailed investigation into the fundamental noise sources which affect the division process itself. Two frequency combs with quasi-identical configurations are referenced to a common ultrastable cavity laser source. To identify each of the limiting effects, we implement an ultra-low noise carrier-suppression measurement system, which avoids the detection and amplification noise of more conventional techniques. This technique suppresses these unwanted sources of noise to very low levels. In the Fourier frequency range of ∼200 Hz to 100 kHz, a feed-forward technique based on a voltage-controlled phase shifter delivers a further noise reduction of 10 dB. For lower Fourier frequencies, optical power stabilization is implemented to reduce the relative intensity noise which causes unwanted phase noise through power-to-phase conversion in the detector. We implement and compare two possible control schemes based on an acousto-optical modulator and comb pump current. We also present wideband measurements of the relative intensity noise of the fiber comb. PMID:21622045
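
    For context on these numbers: ideal frequency division by a factor N scales phase excursions by 1/N, lowering the phase noise spectrum by 20 log10(N) dB. In LaTeX (the 193 THz optical carrier is an assumed value for a 1.5-micron fiber comb, not stated in the abstract):

        \varphi_{\mu\mathrm{w}}(t) = \frac{\varphi_{\mathrm{opt}}(t)}{N},
        \qquad
        S_{\varphi}^{\mu\mathrm{w}}(f) = S_{\varphi}^{\mathrm{opt}}(f) - 20 \log_{10} N \ \mathrm{dB},
        \qquad
        N = \frac{\nu_{\mathrm{opt}}}{f_{\mu\mathrm{w}}}
          \approx \frac{193\,\mathrm{THz}}{11.55\,\mathrm{GHz}}
          \approx 1.7 \times 10^{4} \ (\approx 84.5\,\mathrm{dB}).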

  15. Mantle convection on modern supercomputers

    NASA Astrophysics Data System (ADS)

    Weismüller, Jens; Gmeiner, Björn; Mohr, Marcus; Waluga, Christian; Wohlmuth, Barbara; Rüde, Ulrich; Bunge, Hans-Peter

    2015-04-01

    Mantle convection is the cause for plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic to mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next generation large-scale architectures demand an interdisciplinary co-design. Here we report about recent advances of the TERRA-NEO project, which is part of the high visibility SPPEXA program, and a joint effort of four research groups in computer sciences, mathematics and geophysical application under the leadership of FAU Erlangen. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection assessing the impact of small scale processes on global mantle flow.
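
    For reference, the conservation system referred to is commonly written in LaTeX as below, under the Boussinesq approximation with a very viscous (inertia-free) flow; this is an assumed textbook form, since the abstract does not spell out TERRA-NEO's exact formulation:

        \nabla \cdot \mathbf{u} = 0 \quad \text{(mass)}
        -\nabla p + \nabla \cdot \left[ \eta \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \right) \right] = \rho_0 \alpha T \, \mathbf{g} \quad \text{(momentum)}
        \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \kappa \nabla^{2} T + H \quad \text{(energy)}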

  16. Molecular simulation of rheological properties using massively parallel supercomputers

    SciTech Connect

    Bhupathiraju, R.K.; Cui, S.T.; Gupta, S.A.; Cummings, P.T.; Cochran, H.D.

    1996-11-01

    Advances in parallel supercomputing now make possible molecular-based engineering and science calculations that will soon revolutionize many technologies, such as those involving polymers and those involving aqueous electrolytes. We have developed a suite of message-passing codes for classical molecular simulation of such complex fluids and amorphous materials and have completed a number of demonstration calculations of problems of scientific and technological importance with each. In this paper, we will focus on the molecular simulation of rheological properties, particularly viscosity, of simple and complex fluids using parallel implementations of non-equilibrium molecular dynamics. Such calculations represent significant challenges computationally because, in order to reduce the thermal noise in the calculated properties within acceptable limits, large systems and/or long simulated times are required.
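
    In non-equilibrium molecular dynamics, the shear viscosity follows directly from the measured stress response to an imposed strain rate (a standard relation, stated here for context rather than quoted from the paper):

        \eta(\dot{\gamma}) = -\frac{\langle P_{xy} \rangle}{\dot{\gamma}},

    where P_{xy} is the off-diagonal pressure-tensor component and \dot{\gamma} the imposed strain rate; the Newtonian viscosity is the zero-shear limit, and the large systems and long runs mentioned above serve to beat down the thermal noise in the ensemble average.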

  17. Enabling technologies for web-based ubiquitous supercomputing

    SciTech Connect

    Foster, I.; Tuecke, S.

    1996-12-31

    We use the term ubiquitous supercomputing to refer to systems that integrate low- and mid-range computing systems, advanced networks, and remote high-end computers with the goal of enhancing the computational power accessible from local environments. Such systems promise to enable new applications in areas as diverse as smart instruments and collaborative environments. However, they also demand tools for transporting code between computers and for establishing flexible, dynamic communication structures. In this paper, we propose that these requirements be satisfied by enhancing the Java programming language with global pointer and remote service request mechanisms from a communication library called Nexus. Java supports transportable code; Nexus provides communication support. We explain how this Nexus Java library is implemented and illustrate its use with examples.

  18. Post-remedial-action radiological survey of the Westinghouse Advanced Reactors Division Plutonium Fuel Laboratories, Cheswick, Pennsylvania, October 1-8, 1981

    SciTech Connect

    Flynn, K.F.; Justus, A.L.; Sholeen, C.M.; Smith, W.H.; Wynveen, R.A.

    1984-01-01

    The post-remedial-action radiological assessment conducted by the ANL Radiological Survey Group in October 1981, following decommissioning and decontamination efforts by Westinghouse personnel, indicated that except for the Advanced Fuels Laboratory exhaust ductwork and north wall, the interior surfaces of the Plutonium Laboratory and associated areas within Building 7 and the Advanced Fuels Laboratory within Building 8 were below both the ANSI Draft Standard N13.12 and NRC Guideline criteria for acceptable surface contamination levels. Hence, with the exceptions noted above, the interior surfaces of those areas within Buildings 7 and 8 that were included in the assessment are suitable for unrestricted use. Air samples collected at the involved areas within Buildings 7 and 8 indicated that the radon, thoron, and progeny concentrations within the air were well below the limits prescribed by the US Surgeon General, the Environmental Protection Agency, and the Department of Energy. The Building 7 drain lines are contaminated with uranium, plutonium, and americium. Radiochemical analysis of water and dirt/sludge samples collected from accessible Low-Bay, High-Bay, Shower Room, and Sodium Laboratory drains revealed uranium, plutonium, and americium contaminants. The Building 7 drain lines hence are unsuitable for release for unrestricted use in their present condition. Low levels of enriched uranium, plutonium, and americium were detected in an environmental soil coring near Building 8, indicating release or spillage due to Advanced Reactors Division activities or Nuclear Fuel Division activities under NRC licensure. ⁶⁰Co contamination was detected within the Building 7 Shower Room and in soil corings from the environs of Building 7. All other radionuclide concentrations measured in soil corings and the storm sewer outfall sample collected from the environs about Buildings 7 and 8 were within the range of normally expected background concentrations.

  19. Radiation transport algorithms on trans-petaflops supercomputers of different architectures.

    SciTech Connect

    Christopher, Thomas Woods

    2003-08-01

    We seek to understand which supercomputer architecture will be best for supercomputers at the Petaflops scale and beyond. The process we use is to predict the cost and performance of several leading architectures at various years in the future. The basis for predicting the future is an expanded version of Moore's Law called the International Technology Roadmap for Semiconductors (ITRS). We abstract leading supercomputer architectures into chips connected by wires, where the chips and wires have electrical parameters predicted by the ITRS. We then compute the cost of a supercomputer system and the run time on a key problem of interest to the DOE (radiation transport). These calculations are parameterized by the time into the future and the technology expected to be available at that point. We find the new advanced architectures have substantial performance advantages but conventional designs are likely to be less expensive (due to economies of scale). We do not find a universal "winner", but instead the right architectural choice is likely to involve non-technical factors such as the availability of capital and how long people are willing to wait for results.

  20. An orthogonal wavelet division multiple-access processor architecture for LTE-advanced wireless/radio-over-fiber systems over heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Mahapatra, Chinmaya; Leung, Victor CM; Stouraitis, Thanos

    2014-12-01

    The increase in internet traffic, number of users, and availability of mobile devices poses a challenge to wireless technologies. In long-term evolution (LTE) advanced systems, heterogeneous networks (HetNet) using centralized coordinated multipoint (CoMP) transmitting radio over optical fibers (LTE A-ROF) have provided a feasible way of satisfying user demands. In this paper, an orthogonal wavelet division multiple-access (OWDMA) processor architecture is proposed, which is shown to be better suited to LTE advanced systems as compared to orthogonal frequency division multiple access (OFDMA) as in LTE systems 3GPP rel.8 (3GPP, http://www.3gpp.org/DynaReport/36300.htm). ROF systems are a viable alternative to satisfy large data demands; hence, the performance in ROF systems is also evaluated. To validate the architecture, the circuit is designed and synthesized on a Xilinx Virtex-6 field-programmable gate array (FPGA). The synthesis results show that the circuit performs with a clock period as short as 7.036 ns (i.e., a maximum clock frequency of 142.13 MHz) for a transform size of 512. A pipelined version of the architecture reduces the power consumption by approximately 89%. We compare our architecture with similar available architectures for resource utilization and timing and provide performance comparison with OFDMA systems for various quality metrics of communication systems. The OWDMA architecture is found to perform better than OFDMA for bit error rate (BER) performance versus signal-to-noise ratio (SNR) in wireless channels as well as ROF media. It also gives higher throughput and mitigates the adverse effect of the peak-to-average power ratio (PAPR).
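
    A minimal sketch of the orthogonal-multiplexing principle underlying OWDMA (illustrative only, using a Haar basis; the paper's contribution is the pipelined FPGA processor, not this toy transform): each user modulates one row of an orthonormal wavelet matrix, the rows superpose into one signal, and correlating against each row separates the users exactly:

      import numpy as np

      def haar_matrix(n):
          """Orthonormal Haar matrix of size n (n must be a power of two)."""
          if n == 1:
              return np.array([[1.0]])
          h = haar_matrix(n // 2)
          top = np.kron(h, [1.0, 1.0])                # averaging rows
          bot = np.kron(np.eye(n // 2), [1.0, -1.0])  # detail rows
          return np.vstack([top, bot]) / np.sqrt(2.0)

      H = haar_matrix(4)                         # rows are mutually orthonormal
      symbols = np.array([1.0, -1.0, 1.0, 1.0])  # one symbol per user
      tx = H.T @ symbols                         # superpose users on one signal
      rx = H @ tx                                # correlate with each waveform
      print(np.allclose(rx, symbols))            # True: exact separation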

  1. Data-intensive computing on numerically-insensitive supercomputers

    SciTech Connect

    Ahrens, James P; Fasel, Patricia K; Habib, Salman; Heitmann, Katrin; Lo, Li - Ta; Patchett, John M; Williams, Sean J; Woodring, Jonathan L; Wu, Joshua; Hsu, Chung - Hsing

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  2. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    SciTech Connect

    Not Available

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  3. A network architecture for Petaflops supercomputers.

    SciTech Connect

    DeBenedictis, Erik P.

    2003-09-01

    If we are to build a supercomputer with a speed of 10^15 floating-point operations per second (1 PetaFLOPS), interconnect technology will need to be improved considerably over what it is today. In this report, we explore one possible interconnect design for such a network. The guiding principle in this design is the optimization of all components for the finiteness of the speed of light. To achieve a linear speedup in time over well-tested supercomputers of today's designs will require scaling up of processor power and bandwidth and scaling down of latency. Latency scaling is the most challenging: it requires a 100 ns user-to-user latency for messages traveling the full diameter of the machine. To meet this constraint requires simultaneously minimizing wire length through 3D packaging, new low-latency electrical signaling mechanisms, extremely fast routers, and new network interfaces. In this report, we outline approaches and implementations that will meet the requirements when implemented as a system. No technology breakthroughs are required.
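
    A back-of-the-envelope check (plain arithmetic, consistent with the report's argument) shows how hard the 100 ns budget is: even at the speed of light, the signal path across the full machine diameter must fit in a few tens of meters, before counting router, NIC and software delays:

      # Light-speed budget for a 100 ns user-to-user latency target.
      C = 299_792_458.0              # speed of light in vacuum, m/s
      budget = 100e-9                # seconds
      for medium, frac in [("vacuum", 1.00), ("copper/fiber (~0.7c)", 0.70)]:
          print(f"{medium}: at most {frac * C * budget:.1f} m of signal path")
      # Every router hop, NIC and software layer spends part of the same
      # 100 ns, so the usable physical diameter is smaller still.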

  4. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and in-lining capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  5. Supercomputer Provides Molecular Insight into Cellulose (Fact Sheet)

    SciTech Connect

    Not Available

    2011-02-01

    Groundbreaking research at the National Renewable Energy Laboratory (NREL) has used supercomputing simulations to calculate the work that enzymes must do to deconstruct cellulose, which is a fundamental step in biomass conversion technologies for biofuels production. NREL used the new high-performance supercomputer Red Mesa to conduct several million central processing unit (CPU) hours of simulation.

  6. Adaptation of gasdynamical codes to the modern supercomputers

    NASA Astrophysics Data System (ADS)

    Kaygorodov, P. V.

    2016-02-01

    During the last decades, supercomputer architecture has changed significantly, and now it is impossible to achieve peak performance without adapting the numerical codes to modern supercomputer architectures. In this paper, I want to share my experience in adapting astrophysical gasdynamical numerical codes to multi-node computing clusters with multi-CPU and multi-GPU nodes.

  7. Plane Wave First-principles Materials Science Codes on Multicore Supercomputer Architectures

    NASA Astrophysics Data System (ADS)

    Canning, Andrew; Deslippe, Jack; Louie, Steven. G.; Scidac Team

    2014-03-01

    Plane wave first-principles codes based on 3D FFTs are one of the largest users of supercomputer cycles in the world. Modern supercomputer architectures are constructed from chips having many CPU cores, with nodes containing multiple chips. Designs for future supercomputers are projected to have even more cores per chip. I will present new developments for hybrid MPI/OpenMP PW codes, focusing on specialized 3D FFTs that give greatly improved scaling over a pure MPI version on multicore machines. Scaling results will be presented for the full electronic structure codes PARATEC and BerkeleyGW, using the new hybrid 3D FFTs, threaded libraries and OpenMP to gain greatly improved scaling to very large core counts on Cray and IBM machines. Support for this work was provided through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
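
    The separability that such parallel FFT designs exploit can be sketched in a few lines (a generic illustration, not the PARATEC/BerkeleyGW implementation): a 3D FFT factors into 1D FFTs along each axis, so a distributed code can transform the locally stored axes first, perform an all-to-all transpose, and finish the remaining axis:

      import numpy as np

      # A 3D FFT factors into 1D FFTs along each axis; distributed codes
      # do the locally stored axes first, transpose (MPI all-to-all),
      # then transform the remaining axis.
      rng = np.random.default_rng(1)
      a = rng.standard_normal((8, 8, 8))
      full = np.fft.fftn(a)                                # one-shot 3D FFT
      by_axis = np.fft.fft(np.fft.fft(np.fft.fft(a, axis=0), axis=1), axis=2)
      print(np.allclose(full, by_axis))                    # True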

  8. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    SciTech Connect

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  9. Sandia's network for Supercomputing '95: Validating the progress of Asynchronous Transfer Mode (ATM) switching

    SciTech Connect

    Pratt, T.J.; Vahle, O.; Gossage, S.A.

    1996-04-01

    The Advanced Networking Integration Department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past three years as a forum to demonstrate and focus communication and networking developments. For Supercomputing '95, Sandia elected to demonstrate the functionality and capability of an AT&T Globeview 20 Gbps Asynchronous Transfer Mode (ATM) switch, which represents the core of Sandia's corporate network; to build and utilize a three-node 622 megabit per second Paragon network; and to extend the DOD's ACTS ATM Internet from Sandia, New Mexico to the conference's show floor in San Diego, California, for video demonstrations. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  10. Stochastic simulation of electron avalanches on supercomputers

    SciTech Connect

    Rogasinsky, S. V.; Marchenko, M. A.

    2014-12-09

    In the paper, we present a three-dimensional parallel Monte Carlo algorithm named ELSHOW, developed for the simulation of electron avalanches in gases. Parallel implementations of ELSHOW were made on supercomputers with different architectures (massively parallel and hybrid ones). Using ELSHOW, calculations were made of such integral characteristics as the number of particles in an avalanche, the coefficient of impact ionization, the drift velocity, and others. Also, special precise computations were made to select an appropriate size of the time step using the technique of dependent statistical tests. The algorithm consists of special methods of distribution modeling, a lexicographic implementation scheme for "branching" of trajectories, and justified estimation of functionals. A comparison of the obtained results for nitrogen with previously published theoretical and experimental data was made.
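
    A toy branching model (illustrative only; ELSHOW's methods for distribution modeling and trajectory branching are far more detailed) captures the first integral characteristic mentioned above, the number of particles in an avalanche:

      import random

      def avalanche_size(p_ionize, steps, seed=0):
          """Toy branching model: every electron triggers an impact
          ionization with probability p_ionize per time step, adding
          one new (secondary) electron to the avalanche."""
          rng = random.Random(seed)
          n = 1                                 # a single seed electron
          for _ in range(steps):
              n += sum(1 for _ in range(n) if rng.random() < p_ionize)
          return n

      # Expected growth is (1 + p)^steps, the discrete analogue of the
      # exponential governed by the impact-ionization coefficient.
      print(avalanche_size(0.05, 100), (1 + 0.05) ** 100)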

  11. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.

  12. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques on appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix-by-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and the disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
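
    For a modern point of comparison, Krylov projection eigensolvers of the Lanczos family are exposed in SciPy and likewise touch the matrix only through matrix-vector products (a generic sketch, unrelated to the paper's CRAY implementations):

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import eigsh

      # Lanczos-type Krylov eigensolver: the matrix is accessed only
      # through matrix-vector products, as for the projection methods
      # discussed above.
      rng = np.random.default_rng(0)
      n = 10_000
      A = sp.random(n, n, density=1e-3, random_state=rng, format="csr")
      A = 0.5 * (A + A.T)                      # make it symmetric
      vals, vecs = eigsh(A, k=5, which="LA")   # 5 largest eigenvalues
      residual = A @ vecs[:, -1] - vals[-1] * vecs[:, -1]
      print(vals, np.linalg.norm(residual))    # small residual => converged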

  13. Will Your Next Supercomputer Come from Costco?

    SciTech Connect

    Farber, Rob

    2007-04-15

    A fun topic for April, one that is not an April fool’s joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronic vendor. Yes, it’s true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.

  14. High performance cosmological simulations on a grid of supercomputers

    NASA Astrophysics Data System (ADS)

    Groen, D.; Rieder, S.; Portegies Zwart, S. F.

    2012-06-01

    We present results from our cosmological N-body simulation which consisted of 2048x2048x2048 particles and ran distributed across three supercomputers throughout Europe. The run, which was performed as the concluding phase of the Gravitational Billion Body Problem DEISA project, integrated a 30 Mpc box of dark matter using an optimized Tree/Particle Mesh N-body integrator. We ran the simulation up to the present day (z=0), and obtained an efficiency of about 0.93 over 2048 cores compared to a single supercomputer run. In addition, we share our experiences on using multiple supercomputers for high performance computing and provide several recommendations for future projects.

  15. DIVISIBILITY TESTS.

    ERIC Educational Resources Information Center

    FOLEY, JACK L.

    This booklet, one of a series, has been developed for the project, A Program for Mathematically Underdeveloped Pupils. A project team, including inservice teachers, is being used to write and develop the materials for this program. The materials developed in this booklet include such concepts as (1) divisibility tests, (2) checking the fundamental…

  16. Computational physics at the National Energy Research Supercomputer Center

    SciTech Connect

    Mirin, A.A.

    1990-04-01

    The principal roles of the Computational Physics Group are (1) to develop efficient numerical algorithms, programming techniques and applications software for current and future generations of supercomputers, (2) to develop advanced numerical models for the investigation of plasma phenomena and the simulation of contemporary magnetic fusion devices, and (3) to serve as a liaison between the Center and the user community, in particular, to provide NERSC with an application-oriented viewpoint and to provide the user community with expertise on the effective usage of the computers. In addition, many of our computer codes employ state-of-the-art algorithms that test the prototypical hardware and software features of the various computers. This document describes the activities of the Computational Physics Group and was prepared with the assistance of the various Group members. The first part contains overviews of a number of our important projects. The second section lists our important computational models. The third part provides a comprehensive list of our publications.

  17. Developing Fortran Code for Kriging on the Stampede Supercomputer

    NASA Astrophysics Data System (ADS)

    Hodgess, Erin

    2016-04-01

    Kriging is easily accessible in the open source statistical language R (R Core Team, 2015) in the gstat (Pebesma, 2004) package. It works very well, but can be slow on large data sets, particularly if the prediction space is large as well. We are working on the Stampede supercomputer at the Texas Advanced Computing Center to develop code using a combination of R and Message Passing Interface (MPI) bindings to Fortran. We have a function similar to the autofitVariogram found in the automap (Hiemstra et al., 2008) package and it is very effective. We are comparing R with MPI/Fortran, MPI/Fortran alone, and R with the Rmpi package, which uses bindings to C. We will present results from simulation studies and real-world examples. References: Hiemstra, P.H., Pebesma, E.J., Twenhofel, C.J.W. and G.B.M. Heuvelink, 2008. Real-time automatic interpolation of ambient gamma dose rates from the Dutch Radioactivity Monitoring Network. Computers and Geosciences, accepted for publication. Pebesma, E.J., 2004. Multivariable geostatistics in S: the gstat package. Computers and Geosciences, 30: 683-691. R Core Team, 2015. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
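
    The computational core that makes kriging slow on large data sets is a dense solve against the covariance matrix, which grows as O(n^3) in the number of observations; a minimal simple-kriging sketch (a hypothetical Python stand-in for gstat-style prediction, with an assumed exponential covariance) makes the bottleneck visible:

      import numpy as np

      def krige(x_obs, y_obs, x_new, sill=1.0, corr_len=1.0, nugget=1e-10):
          """Minimal 1D simple-kriging sketch with exponential covariance.
          The O(n^3) solve against the covariance matrix K is the cost
          that grows painfully with the number of observations."""
          cov = lambda d: sill * np.exp(-d / corr_len)
          K = cov(np.abs(x_obs[:, None] - x_obs[None, :]))
          K += nugget * np.eye(len(x_obs))                 # numerical safety
          k_new = cov(np.abs(x_obs[:, None] - x_new[None, :]))
          w = np.linalg.solve(K, k_new)                    # kriging weights
          mu = y_obs.mean()                                # assumed known mean
          return w.T @ (y_obs - mu) + mu

      x = np.array([0.0, 1.0, 2.5, 4.0])
      y = np.array([1.0, 2.0, 0.5, 1.5])
      print(krige(x, y, np.array([1.5, 3.0])))             # two predictions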

  18. Two wavelength division multiplexing WAN trials

    SciTech Connect

    Lennon, W.J.; Thombley, R.L.

    1995-01-20

    Lawrence Livermore National Laboratory, as a super-user, supercomputer, and super-application site, is anticipating the future bandwidth and protocol requirements necessary to connect to other such sites as well as to connect to remote-sited control centers and experiments. In this paper the authors discuss their vision of the future of Wide Area Networking, describe the plans for a wavelength division multiplexed link connecting Livermore with the University of California at Berkeley, and describe plans for a transparent, approximately 10 Gb/s ring around San Francisco Bay.

  19. Towards Efficient Supercomputing: Searching for the Right Efficiency Metric

    SciTech Connect

    Hsu, Chung-Hsing; Kuehn, Jeffery A; Poole, Stephen W

    2012-01-01

    The efficiency of supercomputing has traditionally been measured by execution time. In the early 2000s, the concept of total cost of ownership was re-introduced, broadening the measure of efficiency to include aspects such as energy and space. Yet the supercomputing community has never agreed upon a metric that can cover these aspects altogether and also provide a fair basis for comparison. This paper examines the metrics that have been proposed in the past decade, and proposes a vector-valued metric for efficient supercomputing. Using this metric, the paper presents a study of where the supercomputing industry has been and how it stands today with respect to efficient supercomputing.
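
    One way to see why a vector-valued metric changes comparisons (an illustrative sketch; the component names below are assumptions, not the paper's exact definition) is that it induces only a partial order, so one system can fail to dominate another:

      from dataclasses import dataclass

      @dataclass
      class EfficiencyVector:
          """Illustrative components only; not the paper's definition."""
          time_s: float       # time to solution
          energy_kwh: float   # energy to solution
          floor_m2: float     # floor space occupied

          def dominates(self, other):
              """Pareto dominance: no worse everywhere, better somewhere."""
              a = (self.time_s, self.energy_kwh, self.floor_m2)
              b = (other.time_s, other.energy_kwh, other.floor_m2)
              return all(x <= y for x, y in zip(a, b)) and a != b

      old = EfficiencyVector(3600.0, 250.0, 400.0)
      new = EfficiencyVector(2400.0, 260.0, 300.0)
      print(new.dominates(old))  # False: faster and smaller, but more energy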

  20. UNIX security in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1989-01-01

    The author critiques some security mechanisms in most versions of the Unix operating system and suggests more effective tools that either have working prototypes or have been implemented, for example in secure Unix systems. Although no computer (not even a secure one) is impenetrable, breaking into systems with these alternate mechanisms will cost more, require more skill, and be more easily detected than penetrations of systems without these mechanisms. The mechanisms described fall into four classes (with considerable overlap). User authentication at the local host affirms the identity of the person using the computer. The principle of least privilege dictates that properly authenticated users should have rights precisely sufficient to perform their tasks, and system administration functions should be compartmentalized; to this end, access control lists or capabilities should either replace or augment the default Unix protection system, and mandatory access controls implementing multilevel security models and integrity mechanisms should be available. Since most users access supercomputing environments using networks, the third class of mechanisms augments authentication (where feasible). As no security is perfect, the fourth class of mechanism logs events that may indicate possible security violations; this will allow the reconstruction of a successful penetration (if discovered), or possibly the detection of an attempted penetration.

  1. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPs or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.

  2. Simulating functional magnetic materials on supercomputers.

    PubMed

    Gruner, Markus Ernst; Entel, Peter

    2009-07-22

    The recent passing of the petaflop per second landmark by the Roadrunner project at the Los Alamos National Laboratory marks a preliminary peak of an impressive world-wide development in the high-performance scientific computing sector. Also, purely academic state-of-the-art supercomputers such as the IBM Blue Gene/P at Forschungszentrum Jülich allow us nowadays to investigate large systems of the order of 10^3 spin polarized transition metal atoms by means of density functional theory. Three applications will be presented where large-scale ab initio calculations contribute to the understanding of key properties emerging from a close interrelation between structure and magnetism. The first two examples discuss the size dependent evolution of equilibrium structural motifs in elementary iron and binary Fe-Pt and Co-Pt transition metal nanoparticles, which are currently discussed as promising candidates for ultra-high-density magnetic data storage media. However, the preference for multiply twinned morphologies at smaller cluster sizes counteracts the formation of a single-crystalline L1₀ phase, which alone provides the required hard magnetic properties. The third application is concerned with the magnetic shape memory effect in the Ni-Mn-Ga Heusler alloy, which is a technologically relevant candidate for magnetomechanical actuators and sensors. In this material strains of up to 10% can be induced by external magnetic fields due to the field induced shifting of martensitic twin boundaries, requiring an extremely high mobility of the martensitic twin boundaries, but also the selection of the appropriate martensitic structure from the rich phase diagram. PMID:21828528

  3. Dense LU Factorization on Multicore Supercomputer Nodes

    SciTech Connect

    Lifflander, Jonathan; Miller, Phil; Venkataraman, Ramprasad; Arya, Anshu; Jones, Terry R; Kale, Laxmikant V

    2012-01-01

    Dense LU factorization is a prominent benchmark used to rank the performance of supercomputers. Many implementations, including the reference code HPL, use block-cyclic distributions of matrix blocks onto a two-dimensional process grid. The process grid dimensions drive a trade-off between communication and computation and are architecture- and implementation-sensitive. We show how the critical panel factorization steps can be made less communication-bound by overlapping asynchronous collectives for pivot identification and exchange with the computation of rank-k updates. By shifting this trade-off, a modified block-cyclic distribution can beneficially exploit more available parallelism on the critical path, and reduce panel factorization's memory hierarchy contention on now-ubiquitous multi-core architectures. The missed parallelism in traditional block-cyclic distributions arises because active panel factorization, triangular solves, and subsequent broadcasts are spread over single process columns or rows (respectively) of the process grid. Increasing one dimension of the process grid decreases the number of distinct processes in the other dimension. To increase parallelism in both dimensions, periodic 'rotation' is applied to the process grid to recover the row-parallelism lost by a tall process grid. During active panel factorization, rank-1 updates stream through memory with minimal reuse. In a column-major process grid, the performance of this access pattern degrades as too many streaming processors contend for access to memory. A block-cyclic mapping in the more popular row-major order does not encounter this problem, but consequently sacrifices node and network locality in the critical pivoting steps. We introduce 'striding' to vary between the two extremes of row- and column-major process grids. As a test-bed for further mapping experiments, we describe a dense LU implementation that allows a block distribution to be defined as a general function of block
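
    The mapping being varied here can be stated compactly (a simplified sketch; the paper's general block-distribution function and rotation scheme are richer than this): a 2D block-cyclic distribution assigns block (i, j) to process-grid coordinate (i mod P, j mod Q), and a stride parameter gestures at how the map can be skewed between the row- and column-major extremes:

      def block_cyclic_owner(i, j, p_rows, p_cols, stride=1):
          """Owner of matrix block (i, j) on a p_rows x p_cols process
          grid. stride=1 is the plain block-cyclic map; larger strides
          loosely illustrate blending row- and column-major layouts
          (not the paper's exact 'striding' formula)."""
          return (i * stride) % p_rows, j % p_cols

      # On a 2 x 4 grid, successive block rows alternate process rows,
      # so a panel (one block column) is spread over both process rows.
      for i in range(4):
          print([block_cyclic_owner(i, j, 2, 4) for j in range(4)])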

  4. Advance Care Planning

    MedlinePlus

    What Is Advance Care Planning? Advance care planning involves learning about the types of decisions that might need ...

  5. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    SciTech Connect

    Geveci, Berk; Fabian, Nathan; Marion, Patrick; Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  6. Structures Division

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The NASA Lewis Research Center Structures Division is an international leader and pioneer in developing new methods of structural analysis, life prediction, and failure analysis related to rotating machinery, and more specifically to hot section components in air-breathing aircraft engines and spacecraft propulsion systems. The research consists of both deterministic and probabilistic methodology. Studies include, but are not limited to, high-cycle and low-cycle fatigue as well as material creep. Studies of structural failure are at both the micro- and macrolevels. Nondestructive evaluation methods related to structural reliability are developed, applied, and evaluated. The materials from which structural components are made, studied, and tested include monolithics and metal-matrix, polymer-matrix, and ceramic-matrix composites. Aeroelastic models are developed and used to determine the cyclic loading and life of fan and turbine blades. Life models are developed and tested for bearings, seals, and other mechanical components, such as magnetic suspensions. Results of these studies are published in NASA technical papers and reference publications as well as in technical society journal articles. The results of the work of the Structures Division and the bibliography of its publications for calendar year 1995 are presented.

  7. Signaling to stomatal initiation and cell division

    PubMed Central

    Le, Jie; Zou, Junjie; Yang, Kezhen; Wang, Ming

    2014-01-01

    Stomata are two-celled valves that control epidermal pores whose opening and spacing optimizes shoot-atmosphere gas exchange. Arabidopsis stomatal formation involves at least one asymmetric division and one symmetric division. Stomatal formation and patterning are regulated by the frequency and placement of asymmetric divisions. This model system has already led to significant advances in developmental biology, such as the regulation of cell fate, division, differentiation, and patterning. Over the last 30 years, stomatal development has been found to be controlled by numerous intrinsic genetic and environmental factors. This mini review focuses on the signaling involved in stomatal initiation and in divisions in the cell lineage. PMID:25002867

  9. An integrated distributed processing interface for supercomputers and workstations

    SciTech Connect

    Campbell, J.; McGavran, L.

    1989-01-01

    Access to documentation, communication between multiple processes running on heterogeneous computers, and animation of simulations of engineering problems are typically weak in most supercomputer environments. This presentation will describe how we are improving this situation in the Computer Research and Applications group at Los Alamos National Laboratory. We have developed a tool using UNIX filters and a SunView interface that allows users simple access to documentation via mouse-driven menus. We have also developed a distributed application that integrates a two-point boundary value problem on one of our Cray supercomputers. It is controlled and displayed graphically by a window interface running on a workstation screen. Our motivation for this research has been to improve the usual typewriter/static interface using language-independent controls to show the capabilities of the workstation/supercomputer combination. 8 refs.

  10. Graph visualization for the analysis of the structure and dynamics of extreme-scale supercomputers

    SciTech Connect

    Berkbigler, K. P.; Bush, B. W.; Davis, Kei,; Hoisie, A.; Smith, S. A.

    2002-01-01

    We are exploring the development and application of information visualization techniques for the analysis of new extreme-scale supercomputer architectures. Modern supercomputers typically comprise very large clusters of commodity SMPs interconnected by possibly dense and often nonstandard networks. The scale, complexity, and inherent nonlocality of the structure and dynamics of this hardware, and the systems and applications distributed over it, challenge traditional analysis methods. As part of the a la carte team at Los Alamos National Laboratory, which is simulating these advanced architectures, we are exploring advanced visualization techniques and creating tools to provide intuitive exploration, discovery, and analysis of these simulations. This work complements existing and emerging algorithmic analysis tools. Here we give background on the problem domain, a description of a prototypical computer architecture of interest (on the order of 10,000 processors connected by a quaternary fat-tree network), and presentations of several visualizations of the simulation data that make clear the flow of data in the interconnection network.

  11. Chemical Technology Division annual technical report 1997

    SciTech Connect

    1998-06-01

    The Chemical Technology (CMT) Division is a diverse technical organization with principal emphases in environmental management and development of advanced energy sources. The Division conducts research and development in three general areas: (1) development of advanced power sources for stationary and transportation applications and for consumer electronics, (2) management of high-level and low-level nuclear wastes and hazardous wastes, and (3) electrometallurgical treatment of spent nuclear fuel. The Division also performs basic research in catalytic chemistry involving molecular energy resources, mechanisms of ion transport in lithium battery electrolytes, and the chemistry of technology-relevant materials and electrified interfaces. In addition, the Division operates the Analytical Chemistry Laboratory, which conducts research in analytical chemistry and provides analytical services for programs at Argonne National Laboratory (ANL) and other organizations. Technical highlights of the Division's activities during 1997 are presented.

  12. The Sky's the Limit When Super Students Meet Supercomputers.

    ERIC Educational Resources Information Center

    Trotter, Andrew

    1991-01-01

    In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…

  13. Configurating a supercomputer for an interactive scientific workload

    SciTech Connect

    Anderson, W.; Brice, R.; Alexander, W.

    1982-01-01

    A detailed, validated simulation model of an existing Cray-1 running under an interactive operating system was used to investigate configurations of a new supercomputer recently announced by the same vendor. The goal was to determine the optimum configuration for a known interactive scientific workload. Questions considered included how much main memory would be needed and whether to acquire an optional fast swapping device.

  14. Internal fluid mechanics research on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.

    1988-01-01

    The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemically reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.

  15. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures become more common, numerical developments in earthquake system research are particularly challenged by the dependence on accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are the two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real-world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is based on Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in the optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP

  16. Cyberdyn supercomputer - a tool for imaging geodynamic processes

    NASA Astrophysics Data System (ADS)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes that develop within the deep interior of our planet, yet have a significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high performance computing facilities. Nowadays, an increasing number of research centers worldwide decide to make use of such powerful and fast computers for simulating complex phenomena involving fluid dynamics and to get deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible has been jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted for about three years, ending October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a Qlogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, employing only a fraction (20%) of the computing power. After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the East Carpathians tectonic and geodynamic evolution, including the Neogene magmatic activity, and the intriguing

  17. Interfaces for Advanced Computing.

    ERIC Educational Resources Information Center

    Foley, James D.

    1987-01-01

    Discusses the coming generation of supercomputers that will have the power to make elaborate "artificial realities" that facilitate user-computer communication. Illustrates these technological advancements with examples of the use of head-mounted monitors which are connected to position and orientation sensors, and gloves that track finger and…

  18. The role of graphics super-workstations in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Levin, E.

    1989-01-01

    A new class of very powerful workstations has recently become available which integrate near supercomputer computational performance with very powerful and high quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing of the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding of results, communication of results), and by real time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).

  19. 1998 Chemical Technology Division Annual Technical Report.

    SciTech Connect

    Ackerman, J.P.; Einziger, R.E.; Gay, E.C.; Green, D.W.; Miller, J.F.

    1999-08-06

    The Chemical Technology (CMT) Division is a diverse technical organization with principal emphases in environmental management and development of advanced energy sources. The Division conducts research and development in three general areas: (1) development of advanced power sources for stationary and transportation applications and for consumer electronics, (2) management of high-level and low-level nuclear wastes and hazardous wastes, and (3) electrometallurgical treatment of spent nuclear fuel. The Division also performs basic research in catalytic chemistry involving molecular energy resources, mechanisms of ion transport in lithium battery electrolytes, and the chemistry of technology-relevant materials. In addition, the Division operates the Analytical Chemistry Laboratory, which conducts research in analytical chemistry and provides analytical services for programs at Argonne National Laboratory (ANL) and other organizations. Technical highlights of the Division's activities during 1998 are presented.

  20. New Mexico High School supercomputer challenge

    SciTech Connect

    Cohen, M.; Foster, M.; Kratzer, D.; Malone, P.; Solem, A.

    1991-01-01

    The national need for well-trained scientists and engineers is more urgent today than ever before. Scientists who are trained in advanced computational techniques and have experience with multidisciplinary scientific collaboration are needed for both research and commercial applications if the United States is to maintain its productivity and technical edge in the world market. Many capable high school students, however, lose interest in pursuing scientific academic subjects or in considering science or engineering as a possible career. An academic contest that progresses from a state-sponsored program to a national competition is a way of developing science and computing knowledge among high school students and teachers as well as instilling enthusiasm for science. This paper describes an academic-year-long program for high school students in New Mexico. The unique features, method, and evaluation of the program are discussed.

  1. Development of seismic tomography software for hybrid supercomputers

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
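
    The linearized inversion step outlined above is conventionally a damped least-squares solve (a generic sketch of that step under simple Tikhonov regularization; the project's actual regularization and solvers may differ):

      import numpy as np

      def tomo_update(G, dt, lam=0.1):
          """One linearized step: solve the damped normal equations
          (G^T G + lam I) dm = G^T dt for the model update dm."""
          n = G.shape[1]
          return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ dt)

      # Two model cells, three rays (path lengths in km), residuals in s.
      G = np.array([[1.0, 0.0],
                    [0.5, 0.5],
                    [0.0, 1.0]])
      dt = np.array([0.10, 0.07, 0.05])
      print(tomo_update(G, dt))      # slowness adjustment per cell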

  2. Accelerated modeling and simulation with a desktop supercomputer

    NASA Astrophysics Data System (ADS)

    Kelmelis, Eric J.; Humphrey, John R.; Durbano, James P.; Ortiz, Fernando E.

    2006-05-01

    The performance of modeling and simulation tools is inherently tied to the platform on which they are implemented. In most cases, this platform is a microprocessor, either in a desktop PC, PC cluster, or supercomputer. Microprocessors are used because of their familiarity to developers, not necessarily their applicability to the problems of interest. We have developed the underlying techniques and technologies to produce supercomputer performance from a standard desktop workstation for modeling and simulation applications. This is accomplished through the combined use of graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and standard microprocessors. Each of these platforms has unique strengths and weaknesses but, when used in concert, can rival the computational power of a high-performance computer (HPC). By adding a powerful GPU and our custom designed FPGA card to a commodity desktop PC, we have created simulation tools capable of replacing massive computer clusters with a single workstation. We present this work in its initial embodiment: simulators for electromagnetic wave propagation and interaction. We discuss the trade-offs of each independent technology, GPUs, FPGAs, and microprocessors, and how we efficiently partition algorithms to take advantage of the strengths of each while masking their weaknesses. We conclude by discussing enhancing the computational performance of the underlying desktop supercomputer and extending it to other application areas.

  3. Extracting the Textual and Temporal Structure of Supercomputing Logs

    SciTech Connect

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
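
    The syntactic-template idea can be illustrated in a few lines (a simplified sketch of the general approach, not the paper's clustering algorithm): masking the variable fields of a message leaves a skeleton that groups messages of the same type:

      import re
      from collections import defaultdict

      def template(msg):
          """Mask variable fields (hex ids, numbers) to recover the
          syntactic skeleton of a log message."""
          msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)
          msg = re.sub(r"\d+(\.\d+)*", "<NUM>", msg)
          return msg

      logs = [
          "node 1742 temp 81.5 over threshold",
          "node 212 temp 79.0 over threshold",
          "ECC error at 0x7f3a21 on node 212",
      ]
      groups = defaultdict(list)
      for line in logs:
          groups[template(line)].append(line)
      for tpl, members in groups.items():
          print(len(members), tpl)   # two messages share one template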

  4. A color graphics environment in support of supercomputer systems

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, R.

    1985-01-01

    An initial step in the integration of an upgrade of a VPS-32 supercomputer to 16 million 64-bit words, to be closely followed by a further upgrade to 32 million words, was to develop a graphics language commonality with other computers at the Langley Center. The power of the upgraded supercomputer is delivered to users at individual workstations, who will aid in defining the direction for future expansions in both graphics software and workstation requirements for the supercomputers. The LAN used is an ETHERNET configuration featuring both CYBER mainframe and PDP 11/34 image generator computers. The system includes a film recorder for image production in slide, CRT, 16 mm film, 35 mm film or Polaroid film images. The workstations have screen resolutions of 1024 x 1024, with each pixel being one of 256 colors selected from a palette of 16 million colors. Each screen can have up to 8 windows open at a time, and is driven by a MC68000 microprocessor drawing on 4.5 Mb RAM, a 40 Mb hard disk and two floppy drives. Input is from a keyboard, digitizer pad, joystick or light pen. The system now allows researchers to view computed results in video time before printing out selected data.

  5. Impact of the Columbia Supercomputer on NASA Space and Exploration Mission

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kwak, Dochan; Kiris, Cetin; Lawrence, Scott

    2006-01-01

    NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold, and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions, including space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprised of massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing HEC challenges that must be overcome to solve space-related science problems in the future.

  6. QMachine: commodity supercomputing in web browsers

    PubMed Central

    2014-01-01

    Background Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics’ “Big Data” from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. Results QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running “download and install” software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. Conclusions QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments. PMID:24913605
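
    The messaging pattern described, posting a task over HTTP, letting volunteer browsers execute it, and polling for the result, can be sketched generically. The endpoint paths and JSON fields below are hypothetical stand-ins, not QM's documented API, and the base URL is a placeholder.

        import time
        import requests

        BASE = "https://example.org/qm"   # placeholder for a QM-like service

        # Submit a task description; volunteer browsers would pick it up.
        task = {"code": "return input.length;", "input": "ACGTACGT"}
        task_id = requests.post(f"{BASE}/tasks", json=task).json()["id"]

        # Poll until a volunteer has executed the task and posted a result.
        while True:
            r = requests.get(f"{BASE}/tasks/{task_id}").json()
            if r.get("status") == "done":
                print(r["result"])
                break
            time.sleep(1)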

  7. Scalability Test of multiscale fluid-platelet model for three top supercomputers

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-07-01

    We have tested the scalability of three supercomputers, the Tianhe-2, Stampede and CS-Storm, with multiscale fluid-platelet simulations, in which a highly resolved and efficient numerical model for the nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S, a 680,718-particle single platelet; Exp-M, a 2,722,872-particle 4-platelet system; and Exp-L, a 10,891,488-particle 16-platelet system. Our implementation of a multiple time-stepping (MTS) algorithm improved on the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved simulation rates of 12.5, 25.0 and 35.5 μs/day for Exp-S and 9.09, 6.25 and 14.29 μs/day for Exp-M on the Tianhe-2, CS-Storm 16-K80 and Stampede K20, respectively. The best rate for Exp-L was 6.25 μs/day on Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow, which combines millisecond-scale hematology with microscale blood flow at resolutions of the micro-to-nanoscale cellular components of platelets. This study of the performance characteristics of supercomputers running advanced computational algorithms that offer an optimal trade-off between resolution and speed demonstrates that such simulations are feasible with currently available HPC resources.
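
    The multiple time-stepping idea that drives the reported speedups can be sketched independently of the platelet model: an expensive, slowly varying force is evaluated once and reused across several cheap inner steps. The toy one-particle example below is an illustration under that assumption, not the authors' solver.

        from dataclasses import dataclass

        @dataclass
        class Particle:
            x: float
            v: float
            m: float

        def mts_step(p, dt, k, fast_force, slow_force):
            # One outer MTS cycle: k small steps of the cheap fast force
            # per single evaluation of the expensive slow force.
            f_slow = slow_force(p.x)
            for _ in range(k):
                f = fast_force(p.x) + f_slow
                p.v += f * dt / p.m     # kick
                p.x += p.v * dt         # drift
            return p

        p = Particle(x=1.0, v=0.0, m=1.0)
        p = mts_step(p, dt=1e-3, k=10,
                     fast_force=lambda x: -10.0 * x,   # stiff, cheap term
                     slow_force=lambda x: -0.1 * x)    # soft, "expensive" term
        print(p)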

  8. Supercomputing Aspects for Simulating Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kris, Cetin C.

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi-Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on explicit message passing across processors and is primarily suited for distributed-memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared memory systems. For the entire turbo-pump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other and provides great flexibility when the boundary movement creates large displacements. Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations.

  9. Divisibility--Another Route.

    ERIC Educational Resources Information Center

    Gardella, Francis J.

    1984-01-01

    Given is an alternative to individual divisibility rules by generating a general process that can be applied to establish divisibility by any number. The process relies on modular arithmetic and the concept of congruence. (MNS)
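
    The general process can be made concrete: reduce the powers of 10 modulo n to obtain digit weights, then check whether the weighted digit sum is congruent to 0 modulo n. The sketch below illustrates that congruence idea; it is not the article's exact presentation.

        def divisible(number, n):
            digits = [int(d) for d in str(number)][::-1]   # least-significant first
            weight, total = 1, 0
            for d in digits:
                total += d * weight          # weight is 10^i mod n
                weight = (weight * 10) % n
            return total % n == 0

        # The familiar rule for 3 falls out because 10^i = 1 (mod 3),
        # so the weighted sum reduces to the plain digit sum.
        assert divisible(123456, 3) and not divisible(123457, 3)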

  10. Advanced and flexible multi-carrier receiver architecture for high-count multi-core fiber based space division multiplexed applications

    NASA Astrophysics Data System (ADS)

    Asif, Rameez

    2016-06-01

    Space division multiplexing (SDM), incorporating multi-core fibers (MCFs), has been demonstrated for effectively maximizing the data capacity in an impending capacity crunch. To achieve high spectral density through multi-carrier encoding while simultaneously maintaining transmission reach, the benefits of inter-core crosstalk (XT) and non-linear compensation must be utilized. In this report, we propose a proof-of-concept unified receiver architecture that jointly compensates optical Kerr effects and intra- and inter-core XT in MCFs. The architecture is analysed in a multi-channel 512 Gbit/s dual-carrier DP-16QAM system over 800 km of 19-core MCF to validate the digital compensation of inter-core XT. Through this architecture: (a) we efficiently compensate the inter-core XT, improving the Q-factor by 4.82 dB, and (b) we achieve a significant gain in transmission reach, increasing the maximum achievable distance from 480 km to 1208 km, via analytical analysis. Simulation results confirm that inter-core XT distortions are more severe for cores fabricated around the central axis of the cladding. Notably, the XT-induced Q-penalty can be suppressed to less than 1 dB for up to −11.56 dB of inter-core XT over 800 km of MCF, offering the flexibility to fabricate dense core structures within the same cladding diameter. Moreover, this report outlines the relationship between core pitch and forward-error correction (FEC).
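
    Joint crosstalk compensation of this kind is commonly cast as a multiple-input multiple-output (MIMO) unmixing problem. The toy sketch below, a generic illustration and not the paper's DSP chain, models linear inter-core coupling with a mixing matrix and undoes it by inversion; the coupling strength is an arbitrary assumption.

        import numpy as np

        rng = np.random.default_rng(0)
        n_cores, n_sym = 3, 1000
        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
        tx = rng.choice(qpsk, size=(n_cores, n_sym))   # one stream per core

        # Weak linear coupling between cores (toy crosstalk matrix).
        H = np.eye(n_cores) + 0.05 * rng.standard_normal((n_cores, n_cores))
        rx = H @ tx

        # With H estimated from training symbols, XT is undone by unmixing.
        tx_hat = np.linalg.inv(H) @ rx
        print(np.allclose(tx_hat, tx))   # True in this noise-free toy model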

  11. Advanced and flexible multi-carrier receiver architecture for high-count multi-core fiber based space division multiplexed applications

    PubMed Central

    Asif, Rameez

    2016-01-01

    Space division multiplexing (SDM), incorporating multi-core fibers (MCFs), has been demonstrated for effectively maximizing the data capacity in an impending capacity crunch. To achieve high spectral density through multi-carrier encoding while simultaneously maintaining transmission reach, the benefits of inter-core crosstalk (XT) and non-linear compensation must be utilized. In this report, we propose a proof-of-concept unified receiver architecture that jointly compensates optical Kerr effects and intra- and inter-core XT in MCFs. The architecture is analysed in a multi-channel 512 Gbit/s dual-carrier DP-16QAM system over 800 km of 19-core MCF to validate the digital compensation of inter-core XT. Through this architecture: (a) we efficiently compensate the inter-core XT, improving the Q-factor by 4.82 dB, and (b) we achieve a significant gain in transmission reach, increasing the maximum achievable distance from 480 km to 1208 km, via analytical analysis. Simulation results confirm that inter-core XT distortions are more severe for cores fabricated around the central axis of the cladding. Notably, the XT-induced Q-penalty can be suppressed to less than 1 dB for up to −11.56 dB of inter-core XT over 800 km of MCF, offering the flexibility to fabricate dense core structures within the same cladding diameter. Moreover, this report outlines the relationship between core pitch and forward-error correction (FEC). PMID:27270381

  12. Advanced and flexible multi-carrier receiver architecture for high-count multi-core fiber based space division multiplexed applications.

    PubMed

    Asif, Rameez

    2016-01-01

    Space division multiplexing (SDM), incorporating multi-core fibers (MCFs), has been demonstrated for effectively maximizing the data capacity in an impending capacity crunch. To achieve high spectral density through multi-carrier encoding while simultaneously maintaining transmission reach, the benefits of inter-core crosstalk (XT) and non-linear compensation must be utilized. In this report, we propose a proof-of-concept unified receiver architecture that jointly compensates optical Kerr effects and intra- and inter-core XT in MCFs. The architecture is analysed in a multi-channel 512 Gbit/s dual-carrier DP-16QAM system over 800 km of 19-core MCF to validate the digital compensation of inter-core XT. Through this architecture: (a) we efficiently compensate the inter-core XT, improving the Q-factor by 4.82 dB, and (b) we achieve a significant gain in transmission reach, increasing the maximum achievable distance from 480 km to 1208 km, via analytical analysis. Simulation results confirm that inter-core XT distortions are more severe for cores fabricated around the central axis of the cladding. Notably, the XT-induced Q-penalty can be suppressed to less than 1 dB for up to −11.56 dB of inter-core XT over 800 km of MCF, offering the flexibility to fabricate dense core structures within the same cladding diameter. Moreover, this report outlines the relationship between core pitch and forward-error correction (FEC). PMID:27270381

  13. Supercomputer predictive modeling for ensuring space flight safety

    NASA Astrophysics Data System (ADS)

    Betelin, V. B.; Smirnov, N. N.; Nikitin, V. F.

    2015-04-01

    Development of new types of rocket engines, as well as upgrading of existing engines, requires computer-aided design and mathematical tools for supercomputer modeling of all the basic processes of mixing, ignition, combustion and outflow through the nozzle. Even small upgrades and changes introduced in existing rocket engines without proper simulation have caused the severe launch-site accidents witnessed recently. The paper presents the results of developing, verifying and validating computer code that makes it possible to simulate unsteady processes of ignition and combustion in rocket engines.

  14. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.
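
    The point-to-point part of such interconnect benchmarks is a ping-pong exchange between two ranks. A minimal sketch of that pattern with mpi4py (an assumption, not the HPCC or IMB source) is below; run with two ranks, e.g. mpiexec -n 2 python pingpong.py.

        import time
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        buf = np.zeros(1 << 20, dtype=np.uint8)   # 1 MiB message
        reps = 100

        comm.Barrier()
        t0 = time.perf_counter()
        for _ in range(reps):
            if rank == 0:
                comm.Send(buf, dest=1)
                comm.Recv(buf, source=1)
            else:
                comm.Recv(buf, source=0)
                comm.Send(buf, dest=0)
        t1 = time.perf_counter()
        if rank == 0:
            # Each repetition carries the message across the link twice.
            print("bandwidth: %.1f MB/s" % (2 * reps * buf.nbytes / (t1 - t0) / 1e6))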

  15. Opportunities for leveraging OS virtualization in high-end supercomputing.

    SciTech Connect

    Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke

    2010-11-01

    This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.

  16. New Mexico Supercomputing Challenge 1993 evaluation report. Progress report

    SciTech Connect

    Trainor, M.; Eker, P.; Kratzer, D.; Foster, M.; Anderson, M.

    1993-11-01

    This report provides the evaluation of the third year (1993) of the New Mexico High School Supercomputing Challenge. It includes data to determine whether we met the program objectives, measures participation, and compares progress from the first to the third years. This year's report is a more complete assessment than last year's, providing both formative and summative evaluation data. The data indicate that the 1993 Challenge significantly changed many students' career plans and attitudes toward science, provided professional development for teachers, and caused some changes in computer offerings in several participating schools.

  17. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    NASA Astrophysics Data System (ADS)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. The spatial extent of the investigated domain can also vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers, and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver, including complex rheologies, for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite-difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data-transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
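
    The iteration described, a pseudo-transient finite-difference update with damped residuals, looks roughly like the serial sketch below (an illustration, not the "octopus" code; the damping and step factors are assumptions). In the cluster version each GPU holds a subdomain and exchanges boundary layers with point-to-point MPI messages.

        import numpy as np

        n, damp, step, tol = 128, 0.9, 0.2, 1e-8
        p = np.zeros((n, n))
        p[0, :] = 1.0                       # Dirichlet boundary on one edge
        dpdt = np.zeros((n - 2, n - 2))

        for it in range(200000):
            res = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
                   - 4.0 * p[1:-1, 1:-1])   # discrete Laplacian residual
            dpdt = damp * dpdt + res        # damped (second-order) update
            p[1:-1, 1:-1] += step * dpdt
            if np.abs(res).max() < tol:
                break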

  18. Implementation of orthogonal frequency division multiplexing (OFDM) and advanced signal processing for elastic optical networking in accordance with networking and transmission constraints

    NASA Astrophysics Data System (ADS)

    Johnson, Stanley

    An increasing adoption of digital signal processing (DSP) in optical fiber telecommunication has brought to the fore several interesting DSP-enabled modulation formats. One such format is orthogonal frequency division multiplexing (OFDM), which has seen great success in wireless and wired RF applications and is being actively investigated by several research groups for use in optical fiber telecom. In this dissertation, I present three implementations of OFDM for elastic optical networking and distributed network control. The first is a field programmable gate array (FPGA) based real-time implementation of a version of OFDM conventionally known as intensity modulation and direct detection (IMDD) OFDM. I experimentally demonstrate the ability of this transmission system to dynamically adjust bandwidth and modulation format to meet networking constraints in an automated manner. To the best of my knowledge, this is the first real-time software defined networking (SDN) based control of an OFDM system. In the second OFDM implementation, I experimentally demonstrate a novel OFDM transmission scheme that supports both direct detection and coherent detection receivers simultaneously using the same OFDM transmitter. This interchangeable receiver solution enables a trade-off between bit rate and equipment cost in network deployment and upgrades. I show that the proposed transmission scheme can provide a receiver sensitivity improvement of up to 1.73 dB as compared to IMDD OFDM. I also present two novel polarization analyzer based detection schemes, and study their performance using experiment and simulation. In the third implementation, I present an OFDM pilot-tone based scheme for distributed network control. The first instance of an SDN-based OFDM elastic optical network with pilot-tone assisted distributed control is demonstrated. An improvement in spectral efficiency and a fast reconfiguration time of 30 ms have been achieved in this experiment. Finally, I
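
    At its core, an OFDM transmitter and receiver are an IFFT/FFT pair with a cyclic prefix. The NumPy sketch below shows that baseband operation in its simplest form; it is a generic illustration, not the FPGA design, and the subcarrier count and prefix length are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n_sc, cp = 64, 16                        # subcarriers, cyclic prefix length
        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
        syms = rng.choice(qpsk, n_sc)            # one QPSK symbol per subcarrier

        tx = np.fft.ifft(syms)                   # time-domain OFDM symbol
        tx_cp = np.concatenate([tx[-cp:], tx])   # prepend cyclic prefix

        rx = tx_cp                               # ideal, distortion-free channel
        rx_syms = np.fft.fft(rx[cp:])            # strip prefix, back to subcarriers
        assert np.allclose(rx_syms, syms)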

  19. Chemical Sciences Division annual report 1994

    SciTech Connect

    1995-06-01

    The division is one of ten LBL research divisions. It is composed of individual research groups organized into 5 scientific areas: chemical physics, inorganic/organometallic chemistry, actinide chemistry, atomic physics, and chemical engineering. Studies include structure and reactivity of critical reaction intermediates, transients and dynamics of elementary chemical reactions, and heterogeneous and homogeneous catalysis. Work for others included studies of superconducting properties of high-Tc oxides. In FY 1994, the division neared completion of two end-stations and a beamline for the Advanced Light Source, which will be used for combustion and other studies. This document presents summaries of the studies.

  20. A special purpose silicon compiler for designing supercomputing VLSI systems

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex, hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms are taken into account to simplify the silicon compiler design; the approach is macrocell based; and the software tools at different levels (from the algorithm down to the VLSI circuit layout) are integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate are reduced compared with silicon compilers based on PLAs, SLAs, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLAs, SLAs, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  1. Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Nitzberg, Bill

    1999-01-01

    The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet, utilization was affected little. In particular these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
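
    The gap between naive FIFO first-fit and backfilling is easy to state in code: a job may jump the queue only if it fits within the currently idle nodes and will finish before the first queued job is due to start. The sketch below is a toy statement of that condition, not the scheduler used at NAS.

        def backfill_candidates(queue, free_nodes, first_start_time, now):
            # queue: FIFO list of (nodes, est_runtime) jobs; queue[0] is blocked.
            runnable = []
            for nodes, runtime in queue[1:]:
                fits = nodes <= free_nodes
                harmless = now + runtime <= first_start_time
                if fits and harmless:        # the backfill condition
                    runnable.append((nodes, runtime))
                    free_nodes -= nodes
            return runnable

        # Job 0 needs 64 nodes; only 16 are free until its start time of 100.
        print(backfill_candidates([(64, 500), (8, 40), (16, 200)],
                                  free_nodes=16, first_start_time=100, now=10))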

  2. Optical clock distribution in supercomputers using polyimide-based waveguides

    NASA Astrophysics Data System (ADS)

    Bihari, Bipin; Gan, Jianhua; Wu, Linghui; Liu, Yujie; Tang, Suning; Chen, Ray T.

    1999-04-01

    Guided-wave optics is a promising way to deliver high-speed clock signals in supercomputers with minimized clock skew. Si-CMOS-compatible polymer-based waveguides for optoelectronic interconnects and packaging have been fabricated and characterized. A 1-to-48 fanout optoelectronic interconnection layer (OIL) structure based on Ultradel 9120/9020, for high-speed massive clock-signal distribution on a Cray T-90 supercomputer board, has been constructed. The OIL employs multimode polymeric channel waveguides in conjunction with surface-normal waveguide output couplers and 1-to-2 splitters. The surface-normal couplers can couple the optical clock signals into and out of the H-tree polyimide waveguides surface-normally, which facilitates the integration of photodetectors to convert optical signals to electrical signals. A 45-degree surface-normal coupler has been integrated at each output end. The measured output coupling efficiency is nearly 100 percent. The output profile from the 45-degree surface-normal coupler was calculated using the Fresnel approximation; the theoretical result is in good agreement with the experimental result. A total insertion loss of 7.98 dB at 850 nm was measured experimentally.

  3. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard; Parker, Lynne Edwards

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
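
    In this setting an "agent" amounts to a fast surrogate model trained on batches of simulation input/output pairs. The sketch below shows that pattern with scikit-learn on synthetic data standing in for EnergyPlus runs; the parameter count and model choice are assumptions, not the Autotune implementation.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(2)
        X = rng.uniform(size=(10000, 20))   # sampled building parameters
        y = X @ rng.uniform(size=20) + 0.1 * rng.standard_normal(10000)
        # y stands in for a simulated output such as annual energy use.

        agent = RandomForestRegressor(n_estimators=100).fit(X, y)
        # The trained agent predicts in microseconds what the simulator
        # computes in minutes, enabling cheap calibration loops.
        print(agent.predict(X[:3]))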

  4. A fault tolerant spacecraft supercomputer to enable a new class of scientific discovery

    NASA Technical Reports Server (NTRS)

    Katz, D. S.; McVittie, T. I.; Silliman, A. G., Jr.

    2000-01-01

    The goal of the Remote Exploration and Experimentation (REE) Project is to move supercomputing into space in a cost-effective manner and to allow the use of inexpensive, state-of-the-art, commercial-off-the-shelf components and subsystems in these space-based supercomputers.

  5. Physics division annual report 2000.

    SciTech Connect

    Thayer, K., ed.

    2001-10-04

    impacts the structure of nuclei and extended the exquisite sensitivity of the Atom-Trap-Trace-Analysis technique to new species and applications. All of this progress was built on advances in nuclear theory, which the Division pursues at the quark, hadron, and nuclear collective degrees of freedom levels. These are just a few of the highlights in the Division's research program. The results reflect the talents and dedication of the Physics Division staff and the visitors, guests and students who bring so much to the research.

  6. Supercomputer modeling of hydrogen combustion in rocket engines

    NASA Astrophysics Data System (ADS)

    Betelin, V. B.; Nikitin, V. F.; Altukhov, D. I.; Dushin, V. R.; Koo, Jaye

    2013-08-01

    Hydrogen, being an ecologically clean fuel, is now very attractive to rocket engine designers. However, peculiarities of hydrogen combustion kinetics, such as the presence of zones of inverse dependence of reaction rate on pressure, prevent hydrogen engines from being used in all stages without support from other engine types, which often negates the ecological gains of using hydrogen. Computer-aided design of new, effective and clean hydrogen engines needs mathematical tools for supercomputer modeling of hydrogen-oxygen component mixing and combustion in rocket engines. The paper presents the results of developing, verifying and validating a mathematical model that makes it possible to simulate unsteady processes of ignition and combustion in rocket engines.

  7. Molecular dynamics simulations of detonation on the roadrunner supercomputer

    NASA Astrophysics Data System (ADS)

    Mniszewski, Susan; Cawkwell, Marc; Germann, Timothy C.

    2012-03-01

    The temporal and spatial scales intrinsic to a real detonating explosive are extremely difficult to capture using molecular dynamics (MD) simulations. Nevertheless, MD remains very attractive since it allows for the resolution of dynamic phenomena at the atomic scale. Large-scale reactive MD simulations in three dimensions require immense computational resources even when simple reactive force fields are employed. We focus on the REBO force field for 'AB' since it has been shown to support a detonation while being simple, analytic, and short-ranged. The transition from two- to three-dimensional simulations is being facilitated by the port of the REBO force field in the parallel MD code SPaSM to LANL's petaflop supercomputer 'Roadrunner'. We provide a detailed discussion of the challenges associated with computing interatomic forces on a hybrid Opteron/Cell BE computational architecture.

  8. An analysis of file migration in a UNIX supercomputing environment

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1992-01-01

    The supercomputer center at the National Center for Atmospheric Research (NCAR) migrates large numbers of files to and from its mass storage system (MSS) because there is insufficient space to store them on the Cray supercomputer's local disks. This paper presents an analysis of file migration data collected over two years. The analysis shows that requests to the MSS are periodic, with one-day and one-week periods. Read requests to the MSS account for the majority of the periodicity, as write requests are relatively constant over the course of a week. Additionally, reads show far greater fluctuation than writes over a day and a week, since reads are driven by human users while writes are machine-driven.
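
    Daily and weekly periodicity of this kind can be read directly off a Fourier transform of the hourly request counts. The sketch below uses synthetic data, not NCAR's traces, to show the check.

        import numpy as np

        hours = np.arange(24 * 7 * 8)       # eight weeks of hourly counts
        counts = (100
                  + 40 * np.sin(2 * np.pi * hours / 24)         # daily cycle
                  + 15 * np.sin(2 * np.pi * hours / (24 * 7)))  # weekly cycle

        spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
        freqs = np.fft.rfftfreq(len(hours), d=1.0)   # cycles per hour
        top = freqs[np.argsort(spectrum)[-2:]]
        print(1.0 / top)                    # ~[168. 24.], the two dominant periods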

  9. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.

  10. Supercomputer requirements for selected disciplines important to aerospace

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1989-01-01

    Speed and memory requirements placed on supercomputers by five different disciplines important to aerospace are discussed and compared with the capabilities of various existing computers and those projected to be available before the end of this century. The disciplines chosen for consideration are turbulence physics, aerodynamics, aerothermodynamics, chemistry, and human vision modeling. Example results for problems illustrative of those currently being solved in each of the disciplines are presented and discussed. Limitations imposed on physical modeling and geometrical complexity by the need to obtain solutions in practical amounts of time are identified. Computational challenges for the future, for which either some or all of the current limitations are removed, are described. Meeting some of the challenges will require computer speeds in excess of exaflops (10^18 flop/s) and memories in excess of petawords (10^15 words).

  11. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware must satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes the benchmarking for hardware selection, the software architecture and the communications aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real-time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and consumed watts.

  12. Division: The Sleeping Dragon

    ERIC Educational Resources Information Center

    Watson, Anne

    2012-01-01

    Of the four mathematical operators, division seems to not sit easily for many learners. Division is often described as "the odd one out". Pupils develop coping strategies that enable them to "get away with it". So, problems, misunderstandings, and misconceptions go unresolved perhaps for a lifetime. Why is this? Is it a case of "out of sight out…

  13. Chemical Technology Division annual technical report, 1996

    SciTech Connect

    1997-06-01

    CMT is a diverse technical organization with principal emphases in environmental management and development of advanced energy sources. It conducts R&D in 3 general areas: development of advanced power sources for stationary and transportation applications and for consumer electronics, management of high-level and low-level nuclear wastes and hazardous wastes, and electrometallurgical treatment of spent nuclear fuel. The Division also performs basic research in catalytic chemistry involving molecular energy resources, mechanisms of ion transport in lithium battery electrolytes, materials chemistry of electrified interfaces and molecular sieves, and the theory of materials properties. It also operates the Analytical Chemistry Laboratory, which conducts research in analytical chemistry and provides analytical services for programs at ANL and other organizations. Technical highlights of the Division`s activities during 1996 are presented.

  14. An inter-realm, cyber-security infrastructure for virtual supercomputing

    SciTech Connect

    Al-Muhtadi, J.; Feng, W. C.; Fisk, M. E.

    2001-01-01

    Virtual supercomputing (i.e., high-performance grid computing) is poised to revolutionize the way we think about and use computing. However, the security of the links interconnecting the nodes within such an environment will be its Achilles heel, particularly when secure communication is required to tunnel through heterogeneous domains. In this paper we examine existing security mechanisms, show their inadequacy, and design a comprehensive cybersecurity infrastructure that meets the security requirements of virtual supercomputing. Keywords: security, virtual supercomputing, grid computing, high-performance computing, GSS-API, SSL, IPsec, component-based software, dynamic reconfiguration.

  15. Non-preconditioned conjugate gradient on Cell- and FPGA-based hybrid supercomputer nodes

    SciTech Connect

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M; Connor, Carolyn M

    2009-03-10

    This work presents a detailed implementation of a double-precision, non-preconditioned Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm on a variety of systems to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, the SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
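
    For reference, the non-preconditioned Conjugate Gradient iteration being ported is the textbook loop below, rendered generically in NumPy (not the Cell or FPGA code):

        import numpy as np

        def cg(A, b, tol=1e-10, max_iter=1000):
            x = np.zeros_like(b)
            r = b - A @ x                   # residual
            p = r.copy()                    # search direction
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p   # conjugate direction update
                rs = rs_new
            return x

        M = np.random.default_rng(3).standard_normal((50, 50))
        A = M @ M.T + 50 * np.eye(50)       # symmetric positive definite
        b = np.ones(50)
        assert np.allclose(A @ cg(A, b), b)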

  16. Non-preconditioned conjugate gradient on Cell- and FPGA-based hybrid supercomputer nodes

    SciTech Connect

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M; Connor, Carolyn M

    2009-01-01

    This work presents a detailed implementation of a double-precision, non-preconditioned Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.

  17. Physics Division activities report, 1986--1987

    SciTech Connect

    Not Available

    1987-01-01

    This report summarizes the research activities of the Physics Division for the years 1986 and 1987. Areas of research discussed in this paper are: research on e+e− interactions; research on pp̄ interactions; an experiment at TRIUMF; double beta decay; high energy astrophysics; interdisciplinary research; and advanced technology development and the SSC.

  18. High energy physics division semiannual report of research activities

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R. )

    1991-08-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1991--June 30, 1991. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  19. Chemical Engineering Division Activities

    ERIC Educational Resources Information Center

    Chemical Engineering Education, 1978

    1978-01-01

    The 1978 ASEE Chemical Engineering Division Lecturer was Theodore Vermeulen of the University of California at Berkeley. Other chemical engineers who received awards or special recognition at a recent ASEE annual conference are mentioned. (BB)

  20. Bring Back Short Division.

    ERIC Educational Resources Information Center

    Thornton, Chich

    1985-01-01

    Some benefits of helping learners think in prime numbers are detailed. Reasons for the decay of this ability are described, with short division presented as one activity which should be reintroduced in schools. (MNS)

  1. Requirements for supercomputing in energy research: The transition to massively parallel computing

    SciTech Connect

    Not Available

    1993-02-01

    This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

  2. Structures and Acoustics Division

    NASA Technical Reports Server (NTRS)

    Acquaviva, Cynthia S.

    1999-01-01

    The Structures and Acoustics Division of NASA Glenn Research Center is an international leader in rotating structures, mechanical components, fatigue and fracture, and structural aeroacoustics. Included are disciplines related to life prediction and reliability, nondestructive evaluation, and mechanical drive systems. Reported are a synopsis of the work and accomplishments reported by the Division during the 1996 calendar year. A bibliography containing 42 citations is provided.

  3. Structures and Acoustics Division

    NASA Technical Reports Server (NTRS)

    Acquaviva, Cynthia S.

    2001-01-01

    The Structures and Acoustics Division of the NASA Glenn Research Center is an international leader in rotating structures, mechanical components, fatigue and fracture, and structural aeroacoustics. Included in this report are disciplines related to life prediction and reliability, nondestructive evaluation, and mechanical drive systems. Reported is a synopsis of the work and accomplishments completed by the Division during the 1997, 1998, and 1999 calendar years. A bibliography containing 93 citations is provided.

  4. Supercomputers ready for use as discovery machines for neuroscience.

    PubMed

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience. PMID:23129998
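
    The quoted scale can be sanity-checked with back-of-the-envelope arithmetic in the spirit of the memory model that guided the work; the bytes-per-object figures below are illustrative assumptions, not NEST's actual numbers.

        neurons = 10**8
        synapses = 10**12
        b_neuron = 1000      # assumed bytes of state per neuron
        b_synapse = 24       # assumed bytes per synapse (weight, delay, target)

        total_tb = (neurons * b_neuron + synapses * b_synapse) / 1e12
        print(f"{total_tb:.0f} TB total")   # ~24 TB, dominated by the synapses

        nodes = 88128        # K computer node count (one CPU per node)
        print(f"{total_tb * 1e6 / nodes:.0f} MB per node")   # ~273 MB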

  5. Supercomputers Ready for Use as Discovery Machines for Neuroscience

    PubMed Central

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience. PMID:23129998

  6. Adventures in supercomputing: An innovative program for high school teachers

    SciTech Connect

    Oliver, C.E.; Hicks, H.R.; Summers, B.G.; Staten, D.G.

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach to teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricular integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper describes the AiS program and its effects on teachers and students, primarily at Wartburg Central High School in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  7. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Ryabinkin, E.; Wenaus, T.

    2016-02-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI's HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  8. From C to Parton Sea: How Supercomputing Reveals Nucleon Structure

    NASA Astrophysics Data System (ADS)

    Lin, Huey-Wen

    2016-03-01

    Studying the structure of nucleons is not only important to understanding the strong interactions of quarks and gluons, but also to improving the precision of new-physics searches. Since a broad class of experiments, including the LHC and dark-matter detection, require interactions with nucleons, the mission to probe femtoscale physics is also essential for disentangling Standard-Model contributions from potential new physics. These SM backgrounds require parton distribution functions (PDFs) as inputs. However, after decades of experiments and theoretical efforts, there still remain many unknowns, especially in the sea flavor structure and transversely polarized structure. In a discrete spacetime, we can make a direct numerical calculation of the implications of QCD using sufficiently large supercomputing resources. A nonperturbative approach from first principles, lattice QCD, provides hope to expand our understanding of nucleon structure, especially in regions that are difficult to observe in experiments. In this work, we present a first direct calculation of the Bjorken-x dependence of the PDFs using Large-Momentum Effective Theory (LaMET) and comment on the surprising result revealed for the nucleon sea-flavor asymmetry. The work of HWL is supported in part by the M. Hildred Blewett Fellowship of the American Physical Society, www.aps.org.

  9. Numerical infinities and infinitesimals in a new supercomputing framework

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.

    2016-06-01

    Traditional computers are able to work numerically with finite numbers only. The Infinity Computer patented recently in USA and EU gets over this limitation. In fact, it is a computational device of a new kind able to work numerically not only with finite quantities but with infinities and infinitesimals, as well. The new supercomputing methodology is not related to non-standard analysis and does not use either Cantor's infinite cardinals or ordinals. It is founded on Euclid's Common Notion 5 saying `The whole is greater than the part'. This postulate is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers by a finite number of symbols as numerals belonging to a positional numeral system with an infinite radix described by a specific ad hoc introduced axiom. Numerous examples of the usage of the introduced computational tools are given during the lecture. In particular, algorithms for solving optimization problems and ODEs are considered among the computational applications of the Infinity Computer. Numerical experiments executed on a software prototype of the Infinity Computer are discussed.
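
    Numerals in such a positional system are finite sums of powers of the infinite radix (often written as the grossone symbol). The toy Python rendering below of addition and multiplication on these numerals is an illustration of the bookkeeping only, not the patented implementation.

        from collections import defaultdict

        # A numeral is a dict {power_of_radix: coefficient}; for example
        # {1: 2, 0: 3, -1: 5} encodes 2*R + 3 + 5/R for infinite radix R.

        def add(a, b):
            out = defaultdict(float)
            for numeral in (a, b):
                for p, c in numeral.items():
                    out[p] += c
            return dict(out)

        def mul(a, b):
            out = defaultdict(float)
            for pa, ca in a.items():
                for pb, cb in b.items():
                    out[pa + pb] += ca * cb   # radix powers add under products
            return dict(out)

        x = {1: 1, 0: 2}        # R + 2
        print(mul(x, x))        # {2: 1.0, 1: 4.0, 0: 4.0}, i.e. R^2 + 4R + 4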

  10. Symbolic Simulation Of Engineering Systems On A Supercomputer

    NASA Astrophysics Data System (ADS)

    Ragheb, Magdi; Gvillo, Dennis; Makowitz, Henry

    1986-03-01

    Model-Based Production-Rule systems for analysis are developed for the symbolic simulation of Complex Engineering systems on a CRAY X-MP Supercomputer. The Fault-Tree and Event-Tree Analysis methodologies from Systems-Analysis are used for problem representation and are coupled to the Rule-Based System Paradigm from Knowledge Engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a Production-Rule Analysis System that uses both backward-chaining and forward-chaining: HAL-1986. The inference engine uses an Induction-Deduction-Oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies used will be demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations in Nuclear Reactor Safety Analysis. The use of the exposed methodologies for the prognostication of future device responses under operational and accident conditions using coupled symbolic and procedural programming is discussed.
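
    Forward chaining of the kind the inference engine performs can be shown compactly: a rule fires whenever all of its antecedents are present in working memory, adding its consequent, until a fixed point is reached. The fault-propagation rules below are invented for illustration; this is a toy engine, not HAL-1986.

        def forward_chain(facts, rules):
            # rules: list of (antecedents, consequent) pairs.
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for antecedents, consequent in rules:
                    if set(antecedents) <= facts and consequent not in facts:
                        facts.add(consequent)   # fire the rule
                        changed = True
            return facts

        rules = [({"pump_failed", "no_backup"}, "coolant_loss"),
                 ({"coolant_loss"}, "scram")]
        print(forward_chain({"pump_failed", "no_backup"}, rules))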

  11. High Resolution Aerospace Applications using the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha

    2005-01-01

    This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.

  12. Maintenance and Upgrading of the Richmond Physics Supercomputing Cluster

    NASA Astrophysics Data System (ADS)

    Davda, Vikash

    2003-10-01

    The supercomputing cluster in Physics has been upgraded. It supports nuclear physics research at Jefferson Lab, which focuses on probing the quark-gluon structure of atomic nuclei. We added new slave nodes, increased storage, raised a firewall, and documented the e-mail archive relating to the cluster. The three new slave nodes were physically mounted and configured to join the cluster. A RAID for extra storage was moved from a prototype cluster and configured for this cluster. A firewall was implemented to enhance security using a separate node from the prototype cluster. The software Firewall Builder was used to set communication rules. Documentation consists primarily of e-mails exchanged with the vendor. We wanted web-based, searchable documentation. We used SWISH-E, non-proprietary indexing software designed to search through file collections such as e-mails. SWISH-E works by first creating an index. A built-in module then sets up a Perl interface for the user to define the search; the files in the index are then sorted.

  13. ASC Supercomputers Predict Effects of Aging on Materials

    SciTech Connect

    Kubota, A; Reisman, D B; Wolfer, W G

    2005-08-25

    In an extensive molecular dynamics (MD) study of shock compression of aluminum containing such microscopic defects as found in aged plutonium, LLNL scientists have demonstrated that ASC supercomputers live up to their promise as powerful tools to predict aging phenomena in the nuclear stockpile. Although these MD investigations are carried out on material samples containing only about 10 to 40 million atoms, and being not much bigger than a virus particle, they have shown that reliable materials properties and relationships between them can be extracted for density, temperature, pressure, and dynamic strength. This was proven by comparing their predictions with experimental data of the Hugoniot, with dynamic strength inferred from gas-gun experiments, and with the temperatures behind the shock front as calculated with hydro-codes. The effects of microscopic helium bubbles and of radiation-induced dislocation loops and voids on the equation of state were also determined and found to be small and in agreement with earlier theoretical predictions and recent diamond-anvil-cell experiments. However, these microscopic defects play an essential role in correctly predicting the dynamic strength for these nano-crystalline samples. These simulations also prove that the physics involved in shock compression experiments remains the same for macroscopic specimens used in gas-gun experiments down to micrometer samples to be employed in future NIF experiments. Furthermore, a practical way was discovered to reduce plastic instabilities in NIF target materials by introducing finely dispersed defects.

  14. Website for the Space Science Division

    NASA Astrophysics Data System (ADS)

    Schilling, James

    2002-01-01

    The Space Science Division at NASA Ames Research Center is dedicated to research in astrophysics, exobiology, advanced life support technologies, and planetary science. These research programs are structured around Astrobiology (the study of life in the universe and the chemical and physical forces and adaptations that influence life's origin, evolution, and destiny), and address some of the most fundamental questions pursued by science. These questions examine the origin of life and our place in the universe. Ames is recognized as a world leader in Astrobiology. In pursuing our mission in Astrobiology, Space Science Division scientists perform pioneering basic research and technology development.

  15. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D view manipulation and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates.
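
    As an illustration of the kind of data PLOT3D consumes, here is a hedged Python sketch that reads a single-block formatted (ASCII) PLOT3D grid file; real PLOT3D files are often multi-block or Fortran unformatted, so this assumes the simplest layout.

        # Read a single-block ASCII PLOT3D grid (XYZ) file: a line with the
        # dimensions ni nj nk, followed by all x, then all y, then all z values.
        import numpy as np

        def read_plot3d_ascii(path):
            with open(path) as f:
                tokens = f.read().split()
            ni, nj, nk = (int(t) for t in tokens[:3])
            npts = ni * nj * nk
            coords = np.array(tokens[3:3 + 3 * npts], dtype=float)
            # x, y, z are consecutive blocks in Fortran (i-fastest) order.
            x, y, z = (coords[i * npts:(i + 1) * npts].reshape((ni, nj, nk), order="F")
                       for i in range(3))
            return x, y, z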

  16. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D view manipulation and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates.

  17. 2002 Chemical Engineering Division annual report.

    SciTech Connect

    Lewis, D.; Graziano, D.; Miller, J. F.

    2003-05-22

    The Chemical Engineering Division is one of eight engineering research divisions within Argonne National Laboratory, one of the U.S. government's oldest and largest research laboratories. The University of Chicago oversees the laboratory on behalf of the U.S. Department of Energy (DOE). Argonne's mission is to conduct basic scientific research, to operate national scientific facilities, to enhance the nation's energy resources, and to develop better ways to manage environmental problems. Argonne has the further responsibility of strengthening the nation's technology base by developing innovative technology and transferring it to industry. The Division is a diverse early-stage engineering organization, specializing in the treatment of spent nuclear fuel, development of advanced electrochemical power sources, and management of both high- and low-level nuclear wastes. Although this work is often indistinguishable from basic research, our efforts are directed toward the practical devices and processes that are covered by Argonne's mission. Additionally, the Division operates the Analytical Chemistry Laboratory; Environment, Safety, and Health Analytical Chemistry services; and Dosimetry and Radioprotection services, which provide a broad range of analytical services to Argonne and other organizations. The Division is multidisciplinary. Its people have formal training as ceramists; physicists; material scientists; electrical, mechanical, chemical, and nuclear engineers; and chemists. They have experience working in academia; urban planning; and the petroleum, aluminum, and automotive industries. Their skills include catalysis, ceramics, electrochemistry, metallurgy, nuclear magnetic resonance spectroscopy, and petroleum refining, as well as the development of nuclear waste forms, batteries, and high-temperature superconductors. Our wide-ranging expertise finds ready application in solving energy and environmental problems. Division personnel are frequently called on by

  18. Nuclear Chemistry Division annual report FY83

    SciTech Connect

    Struble, G.

    1983-01-01

    The purpose of the annual reports of the Nuclear Chemistry Division is to provide a timely summary of research activities pursued by members of the Division during the preceding year. Throughout, details are kept to a minimum; readers desiring additional information are encouraged to read the referenced documents or contact the authors. The Introduction presents an overview of the Division's scientific and technical programs. Next is a section of short articles describing recent upgrades of the Division's major facilities, followed by sections highlighting scientific and technical advances. These are grouped under the following sections: nuclear explosives diagnostics; geochemistry and environmental sciences; safeguards technology and radiation effects; and supporting fundamental science. A brief overview introduces each section. Reports on research supported by a particular program are generally grouped together in the same section. The last section lists the scientific, administrative, and technical staff in the Division, along with visitors, consultants, and postdoctoral fellows. It also contains a list of recent publications and presentations. Some contributions to the annual report are classified and only their abstracts are included in this unclassified portion of the report (UCAR-10062-83/1); the full articles appear in the classified portion (UCAR-10062-83/2).

  19. Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention (DCP) conducts and supports research to determine a person's risk of cancer and to find ways to reduce the risk. This knowledge is critical to making progress against cancer because risk varies over the lifespan as genetic and epigenetic changes can transform healthy tissue into invasive cancer.

  20. Solid State Division

    SciTech Connect

    Green, P.H.; Watson, D.M.

    1989-08-01

    This report contains brief discussions on work done in the Solid State Division of Oak Ridge National Laboratory. The topics covered are: Theoretical Solid State Physics; Neutron scattering; Physical properties of materials; The synthesis and characterization of materials; Ion beam and laser processing; and Structure of solids and surfaces. (LSP)

  1. The Problem with Division

    ERIC Educational Resources Information Center

    Pope, Sue

    2012-01-01

    Of the "big four", division is likely to regarded by many learners as "the odd one out", "the difficult one", "the one that is complicated", or "the scary one". It seems to have been that way "for ever", in the perception of many who have trodden the learning pathways through the world of number. But, does it have to be like this? Clearly the…

  2. Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention (DCP) conducts and supports research to determine a person's risk of cancer and to find ways to reduce the risk. This knowledge is critical to making progress against cancer because risk varies over the lifespan as genetic and epigenetic changes can transform healthy tissue into invasive cancer.

  3. Anticrossproducts and cross divisions.

    PubMed

    de Leva, Paolo

    2008-01-01

    This paper defines, in the context of conventional vector algebra, the concept of anticrossproduct and a family of simple operations called cross or vector divisions. It is impossible to solve the equation a × b = c for a or b, where a and b are three-dimensional space vectors and a × b is their cross product. However, the problem becomes solvable if some "knowledge about the unknown" (a or b) is available, consisting of one of its components, or the angle it forms with the other operand of the cross product. Independently of the selected reference frame orientation, the known component of a may be parallel to b, or vice versa. The cross divisions provide a compact and insightful symbolic representation of a family of algorithms specifically designed to solve problems of this kind. A generalized algorithm was also defined, incorporating the rules for selecting the appropriate kind of cross division, based on the type of input data. Four examples of practical application were provided, including the computation of the point of application of a force and the angular velocity of a rigid body. The definition and geometrical interpretation of the cross divisions stemmed from the concept of anticrossproduct. The "anticrossproducts of a × b" were defined as the infinitely many vectors x(i) such that x(i) × b = a × b. PMID:18423647
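
    The paper's own algorithms are not reproduced here, but the underlying vector algebra is standard. The sketch below solves one representative case, x × b = c when one Cartesian component of x is known, using the fact that the general solution is (b × c)/|b|^2 + t*b; it assumes c is perpendicular to b and that b is nonzero at the known component's index.

        # One "cross division" case: recover x from x cross b = c plus one
        # known component of x (a standard reconstruction, not the paper's code).
        import numpy as np

        def solve_cross(b, c, k, xk):
            b, c = np.asarray(b, float), np.asarray(c, float)
            if not np.isclose(np.dot(b, c), 0.0):
                raise ValueError("no solution: c must be perpendicular to b")
            x0 = np.cross(b, c) / np.dot(b, b)   # particular solution, x0 x b = c
            t = (xk - x0[k]) / b[k]              # fix the free multiple of b
            return x0 + t * b

        a = np.array([1.0, 2.0, 3.0])
        b = np.array([4.0, -1.0, 0.5])
        print(solve_cross(b, np.cross(a, b), k=0, xk=a[0]))   # recovers a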

  4. Division XII Business Meetings

    NASA Astrophysics Data System (ADS)

    Smith, Malcolm G.; Genova, Francoise; Anderson, Johannes; Federman, Steven R.; Gilmore, Alan C.; Nha, Il-Seong; Norris, Raymond P.; Robson, Ian E.; Stavinschi, Magda G.; Trimble, Virginia L.; Wainscoat, Richard J.

    2010-05-01

    Brief meetings were held to confirm the elections of the incoming Division President, Francoise Genova, and Vice President, Ray Norris, along with the Organizing Committee, which will consist of the incoming Presidents of the 7 Commissions (5, 6, 14, 41, 46, 50 and 55) plus additional nominated members. The incoming Organizing Committee will thus consist of:

  5. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, William T. C.; Simon, Horst D.

    1994-01-01

    This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  6. Chemical Technology Division. Annual technical report, 1995

    SciTech Connect

    Laidler, J.J.; Myles, K.M.; Green, D.W.; McPheeters, C.C.

    1996-06-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1995 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) methods for treatment of hazardous waste and mixed hazardous/radioactive waste; (3) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (4) processes for separating and recovering selected elements from waste streams, concentrating low-level radioactive waste streams with advanced evaporator technology, and producing {sup 99}Mo from low-enriched uranium; (5) electrometallurgical treatment of different types of spent nuclear fuel in storage at Department of Energy sites; and (6) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems.

  7. Argonne Leadership Computing Facility 2011 annual report: Shaping future supercomputing.

    SciTech Connect

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C.

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms.

  8. Division X: Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Nan, Ren-Dong; Taylor, Russ; Rodriguez, Luis F.; Chapman, Jessica; Dubner, Gloria; Garrett, Michael; Goss, W. Miller; Torrelles, Jose M.; Hirabayashi, Hisashi; Carilli, Chris; Hills, Richard; Shastri, Prajval

    2010-05-01

    The business meeting of Division X at the IAU 2009 GA took place in three sessions during the day of August 6, 2009. The meeting, which was well attended, started with the approval of the meeting agenda. The triennium reports were then presented in the first session by the president of Division X, Ren-Dong Nan, and by the chairs of three working groups: “Historic Radio Astronomy WG” by Wayne Orchiston, “Astrophysically Important Lines WG” by Masatoshi Ohishi, and “Global VLBI WG” by Tasso Tzioumis (proxy chair appointed by Steven Tingay). Afterwards, a dozen reports from observatories and worldwide significant projects were presented in the second session. The business meeting of the “Interference Mitigation WG” was held in the third session.

  9. Energy Systems Divisions

    NASA Technical Reports Server (NTRS)

    Applewhite, John

    2011-01-01

    This slide presentation reviews the JSC Energy Systems Division's work in propulsion. Specific work in LO2/CH4 propulsion, cryogenic propulsion, low-thrust propulsion for free flyer, robotic, and extravehicular activities, and work on the Morpheus terrestrial free flyer test bed is reviewed. The back-up slides contain a chart comparing LO2/LCH4 with other propellants and reviewing its advantages, especially for spacecraft propulsion.

  10. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    SciTech Connect

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  11. Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1988-01-01

    A considerable volume of large computational computer codes was developed for NASA over the past twenty-five years. This code represents algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software. This opportunity arises primarily from architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for conversion to the Cray X-MP vector supercomputer is also described.
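
    The NASA package itself is Fortran and is not shown here; as a loose analogue of the optimization strategy, the sketch below contrasts a scalar loop with the equivalent whole-array form of a SAXPY-style update, the same restructuring a vector machine like the X-MP rewards.

        # Scalar loop vs. vector (whole-array) form of c = 2.5*a + b.
        import time
        import numpy as np

        n = 1_000_000
        a, b = np.random.rand(n), np.random.rand(n)

        t0 = time.perf_counter()
        c_loop = np.empty(n)
        for i in range(n):             # scalar form: one element per iteration
            c_loop[i] = 2.5 * a[i] + b[i]
        t1 = time.perf_counter()
        c_vec = 2.5 * a + b            # vector form: one whole-array operation
        t2 = time.perf_counter()

        assert np.allclose(c_loop, c_vec)
        print(f"loop: {t1 - t0:.3f}s   vectorized: {t2 - t1:.3f}s")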

  12. A report documenting the completion of the Los Alamos National Laboratory portion of the ASC level II milestone "Visualization on the supercomputing platform"

    SciTech Connect

    Ahrens, James P; Patchett, John M; Lo, Li-Ta; Mitchell, Christopher; DeMarle, David; Brownlee, Carson

    2011-01-24

    This report provides documentation for the completion of the Los Alamos portion of the ASC Level II 'Visualization on the Supercomputing Platform' milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratory and Los Alamos National Laboratory. The milestone text is shown in Figure 1 with the Los Alamos portions highlighted in boldfaced text. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next-generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. In conclusion, we improved CPU-based rendering performance by a factor of 2-10 on our tests. In addition, we evaluated CPU- and GPU-based rendering performance. We encourage production visualization experts to consider using CPU
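
    The sketch below shows only the generic in-situ coupling pattern described in the milestone, with the simulation handing each timestep's field to an analysis callback instead of writing every step to disk; it is not the actual analysis-library API used in the milestone work.

        # In-situ analysis pattern: analysis runs inside the simulation loop.
        import numpy as np

        def analyze(step, field):
            # Cheap in-situ reduction: record extrema instead of the full field.
            print(f"step {step:4d}: min={field.min():.4f} max={field.max():.4f}")

        def simulate(steps, n=256, analysis_stride=10):
            field = np.random.rand(n, n)
            for step in range(steps):
                field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                                np.roll(field, 1, 1) + np.roll(field, -1, 1))
                if step % analysis_stride == 0:
                    analyze(step, field)   # coupled at runtime, no disk round-trip

        simulate(50)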

  13. Earth Sciences Division

    NASA Astrophysics Data System (ADS)

    1991-06-01

    This Annual Report presents summaries of selected representative research activities grouped according to the principal disciplines of the Earth Sciences Division: Reservoir Engineering and Hydrogeology, Geology and Geochemistry, and Geophysics and Geomechanics. Much of the Division's research deals with the physical and chemical properties and processes in the earth's crust, from the partially saturated, low-temperature near-surface environment to the high-temperature environments characteristic of regions where magmatic-hydrothermal processes are active. Strengths in laboratory and field instrumentation, numerical modeling, and in situ measurement allow study of the transport of mass and heat through geologic media -- studies that now include the appropriate chemical reactions and the hydraulic-mechanical complexities of fractured rock systems. Of particular note are three major Division efforts addressing problems in the discovery and recovery of petroleum, the application of isotope geochemistry to the study of geodynamic processes and earth history, and the development of borehole methods for high-resolution imaging of the subsurface using seismic and electromagnetic waves. In 1989, a major DOE-wide effort was launched in the areas of Environmental Restoration and Waste Management. Many of the methods previously developed for and applied to deeper regions of the earth will, in the coming years, be turned toward process definition and characterization of the very shallow subsurface, where man-induced contaminants now intrude and where remedial action is required.

  14. Biorepositories | Division of Cancer Prevention

    Cancer.gov

    Carefully collected and controlled high-quality human biospecimens, annotated with clinical data and properly consented for investigational use, are available through the Division of Cancer Prevention Biorepositories listed in the charts below: Biorepositories Managed by the Division of Cancer Prevention, Biorepositories Supported by the Division of Cancer Prevention, and Related Biorepositories (information about accessing biospecimens collected from DCP-supported clinical trials and projects).

  15. Division Quilts: A Measurement Model

    ERIC Educational Resources Information Center

    Pratt, Sarah S.; Lupton, Tina M.; Richardson, Kerri

    2015-01-01

    As teachers seek activities to assist students in understanding division as more than just the algorithm, they find many examples of division as fair sharing. However, teachers have few activities to engage students in a quotative (measurement) model of division. Efraim Fischbein and his colleagues (1985) defined two types of whole-number…

  16. USACE DIVISION AND DISTRICT BOUNDARIES

    EPA Science Inventory

    The USACE Division and District Boundary data contains the delineation of Corps Division and District boundaries. District and Division Boundaries are based on the US political and watershed boundaries. In the mid 1990's, WES created the file by digitizing the 1984 Civil Wor...

  17. Physics division annual report 2005.

    SciTech Connect

    Glover, J.

    2007-03-12

    trapped in an atom trap for the first time, a major milestone in an innovative search for the violation of time-reversal symmetry. New results from HERMES establish that strange quarks carry little of the spin of the proton, and precise results have been obtained at JLAB on the changes in quark distributions in light nuclei. New theoretical results reveal the nature of the surfaces of strange quark stars. Green's function Monte Carlo techniques have been extended to scattering problems and show great promise for the accurate calculation, from first principles, of important astrophysical reactions. Flame propagation in type Ia supernovae has been simulated, a numerical process that requires considering length scales that vary by eight to twelve orders of magnitude. Argonne continues to lead in the development and exploitation of the new technical concepts that will truly make an advanced exotic beam facility, in the words of NSAC, 'the world-leading facility for research in nuclear structure and nuclear astrophysics'. Our science and our technology continue to point the way to this major advance. It is a tremendously exciting time in science, for these new capabilities hold the keys to unlocking important secrets of nature. The great progress that has been made in meeting the exciting intellectual challenges of modern nuclear physics reflects the talents and dedication of the Physics Division staff and the visitors, guests and students who bring so much to the research.

  18. An account of Sandia's research booth at Supercomputing '92: A collaborative effort in high-performance computing and networking

    SciTech Connect

    Breckenridge, A.; Vahle, M.O.

    1993-03-01

    Supercomputing '92, a high-performance computing and communications conference, was held November 16--20, 1992 in Minneapolis, Minnesota. This paper documents the applications and technologies that were showcased in Sandia's research booth at that conference. In particular, the demonstrations in high-performance networking, audio-visual applications in engineering, virtual reality, and supercomputing applications are all described.

  19. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  20. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    SciTech Connect

    Poole, G.; Heroux, M.

    1994-12-31

    This paper will focus on recent work in two widely used industrial applications codes with iterative methods. The ANSYS program, a general purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative solvers as a general purpose solver will include the topics of robustness, as well as memory requirements and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.
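
    Neither ANSYS nor FIDAP source is available here; as a reference point for the iterative methods discussed, the following is a minimal unpreconditioned conjugate gradient for a symmetric positive definite system. Production solvers add preconditioning and more careful stopping tests.

        # Minimal conjugate gradient for SPD systems (illustrative baseline).
        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            x = np.zeros_like(b)
            r = b - A @ x                  # initial residual
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)      # optimal step along p
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p  # next A-conjugate search direction
                rs = rs_new
            return x

        n = 100
        M = np.random.rand(n, n)
        A = M @ M.T + n * np.eye(n)        # SPD test matrix
        b = np.random.rand(n)
        assert np.allclose(A @ conjugate_gradient(A, b), b)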

  1. Chemical Technology Division annual technical report, 2001.

    SciTech Connect

    Lewis, D.; Gay, E. C.; Miller, J. C.; Boparai, A. S.

    2002-07-02

    The Chemical Technology Division (CMT) is one of eight engineering research divisions within Argonne National Laboratory, one of the U.S. government's oldest and largest research laboratories. The University of Chicago oversees the laboratory on behalf of the U.S. Department of Energy (DOE). Argonne's mission is to conduct basic scientific research, to operate national scientific facilities, to enhance the nation's energy resources, and to develop better ways to manage environmental problems. Argonne has the further responsibility of strengthening the nation's technology base by developing innovative technology and transferring it to industry. CMT is a diverse early-stage engineering organization, specializing in the treatment of spent nuclear fuel, development of advanced electrochemical power sources, and management of both high- and low-level nuclear wastes. Although this work is often indistinguishable from basic research, our efforts are directed toward the practical devices and processes that are covered by Argonne's mission. Additionally, the Division operates the Analytical Chemistry Laboratory and Environment, Safety, and Health Analytical Chemistry services, which provide a broad range of analytical services to Argonne and other organizations. The Division is multidisciplinary. Its people have formal training as ceramists; physicists; material scientists; electrical, mechanical, chemical, and nuclear engineers; and chemists. They have experience working in academia; urban planning; and the petroleum, aluminum, and automotive industries. Their skills include catalysis, ceramics, electrochemistry, metallurgy, nuclear magnetic resonance spectroscopy, and petroleum refining, as well as the development of nuclear waste forms, batteries, and high-temperature superconductors.

  2. NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)

    ScienceCinema

    None

    2014-06-16

    NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.

  3. Chemical Technology Division annual technical report, 1994

    SciTech Connect

    1995-06-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1994 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion; (3) methods for treatment of hazardous waste and mixed hazardous/radioactive waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from waste streams, concentrating radioactive waste streams with advanced evaporator technology, and producing {sup 99}Mo from low-enriched uranium for medical applications; (6) electrometallurgical treatment of the many different types of spent nuclear fuel in storage at Department of Energy sites; and (7) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources and novel ceramic precursors; materials chemistry of superconducting oxides, electrified metal/solution interfaces, molecular sieve structures, and impurities in scrap copper and steel; and the geochemical processes involved in mineral/fluid interfaces and water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  4. Developments in the simulation of compressible inviscid and viscous flow on supercomputers

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Buning, P. G.

    1985-01-01

    In anticipation of future supercomputers, finite difference codes are rapidly being extended to simulate three-dimensional compressible flow about complex configurations. Some of these developments are reviewed. The importance of computational flow visualization and diagnostic methods to three-dimensional flow simulation is also briefly discussed.

  5. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  6. The ChemViz Project: Using a Supercomputer To Illustrate Abstract Concepts in Chemistry.

    ERIC Educational Resources Information Center

    Beckwith, E. Kenneth; Nelson, Christopher

    1998-01-01

    Describes the Chemistry Visualization (ChemViz) Project, a Web venture maintained by the University of Illinois National Center for Supercomputing Applications (NCSA) that enables high school students to use computational chemistry as a technique for understanding abstract concepts. Discusses the evolution of computational chemistry and provides a…

  7. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    NASA Astrophysics Data System (ADS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-09-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.

  8. Simulation Technology Research Division assessment of the IBM RISC SYSTEM/6000 Model 530 workstation

    SciTech Connect

    Valdez, G.D.; Halbleib, J.A.; Kensek, R.P.; Lorence, L.J.

    1990-11-01

    A workstation manufactured by International Business Machines Corporation (IBM) was loaned to the Simulation Technology Research Division for evaluation. We have found that these new UNIX workstations from IBM have superior cost to performance ratios compared to the CRAY supercomputers and Digital's VAX machines. Our appraisal of this workstation included floating-point performance, system and environment functionality, and cost effectiveness. Our assessment was based on a suite of radiation transport codes developed at Sandia that constitute the bulk of our division's computing workload. In this report, we also discuss our experience with features that are unique to this machine such as the AIX operating system and the XLF Fortran Compiler. The interoperability of the RS/6000 workstation with Sandia's network of CRAYs and VAXs was also assessed.

  9. Coracoacromial ligament division.

    PubMed

    Johansson, J E; Barrington, T W

    1984-01-01

    The object of this paper is to report on the findings of a retrospective study of 40 patients with 41 shoulders with persistent painful arc syndrome secondary to a chronic coracoacromial ligament inflammation who underwent simple coracoacromial ligament division at the Toronto East General and Orthopaedic Hospital between January 1973 and June 1979. Initial therapy was always nonoperative. Surgical intervention was reserved for patients who did not respond to conservative management and who had a painful arc with tenderness of the coracoacromial ligament. The aim of the coracoacromial ligament division was to relieve impingement by releasing the coracoacromial arch. Patients were carefully examined to rule out associated neck pathology, rotator cuff problems, and lesions of the acromioclavicular joint. Any patients with significantly large osteophytes under the anterior acromion were excluded. Forty patients (41 shoulders) were questioned and examined in followup. There were 29 males and 11 females. The ages ranged from 21 to 72 years (average 43.5 years). In 21 shoulders (51%), there was a history of trauma as the initiating factor. The follow-up ranged from 8 to 76 months (average 36.3 months). According to a described rating system, the results were satisfactory to excellent in 39 of 41 shoulders (95%) and unsatisfactory in two of 41 shoulders (5%). The back to work time ranged from 1 to 16 weeks (average 5.7 weeks).(ABSTRACT TRUNCATED AT 250 WORDS) PMID:6742288

  10. Benchmarking and tuning the MILC code on clusters and supercomputers

    SciTech Connect

    Steven A. Gottlieb

    2001-12-28

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha.

  11. The Advanced Software Development and Commercialization Project

    SciTech Connect

    Gallopoulos, E.; Canfield, T.R.; Minkoff, M.; Mueller, C.; Plaskacz, E.; Weber, D.P.; Anderson, D.M.; Therios, I.U.; Aslam, S.; Bramley, R.; Chen, H.-C.; Cybenko, G.; Gao, H.; Malony, A.; Sameh, A.

    1990-09-01

    This is the first of a series of reports pertaining to progress in the Advanced Software Development and Commercialization Project, a joint collaborative effort between the Center for Supercomputing Research and Development of the University of Illinois and the Computing and Telecommunications Division of Argonne National Laboratory. The purpose of this work is to apply techniques of parallel computing that were pioneered by University of Illinois researchers to mature computational fluid dynamics (CFD) and structural dynamics (SD) computer codes developed at Argonne. The collaboration in this project will bring this unique combination of expertise to bear, for the first time, on industrially important problems. By so doing, it will expose the strengths and weaknesses of existing techniques for parallelizing programs and will identify those problems that need to be solved in order to enable widespread production use of parallel computers. Secondly, the increased efficiency of the CFD and SD codes themselves will enable the simulation of larger, more accurate engineering models that involve fluid and structural dynamics. In order to realize the above two goals, we are considering two production codes that have been developed at ANL and are widely used by both industry and universities. These are COMMIX and WHAMS-3D. The first is a computational fluid dynamics code that is used for both nuclear reactor design and safety and as a design tool for the casting industry. The second is a three-dimensional structural dynamics code used in nuclear reactor safety as well as crashworthiness studies. These codes are currently available only for sequential and vector computers. Our main goal is to port and optimize these two codes on shared memory multiprocessors. In so doing, we shall establish a process that can be followed in optimizing other sequential or vector engineering codes for parallel processors.
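
    COMMIX and WHAMS-3D are Fortran codes and their port is not reproduced here; the sketch below shows only the general pattern being applied, dividing independent iterations of an expensive loop among workers on a shared-memory machine.

        # Loop-level parallelism on a shared-memory machine (pattern only).
        from multiprocessing import Pool

        def cell_update(i):
            # Stand-in for an expensive, independent per-cell computation.
            return sum((i + k) ** 0.5 for k in range(10_000))

        if __name__ == "__main__":
            with Pool() as pool:          # one worker per core by default
                results = pool.map(cell_update, range(1000), chunksize=50)
            print(len(results), "cells updated")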

  12. Physics division annual report 1999

    SciTech Connect

    Thayer, K., ed.

    2000-12-06

    This report summarizes the research performed in the past year in the Argonne Physics Division. The Division's programs include operation of ATLAS as a national heavy-ion user facility, nuclear structure and reaction research with beams of heavy ions, accelerator research and development especially in superconducting radio frequency technology, nuclear theory, and medium energy nuclear physics. The Division took significant strides forward in its science and its initiatives for the future in the past year. Major progress was made in developing the concept and the technology for the future advanced facility for beams of short-lived nuclei, the Rare Isotope Accelerator. The scientific program capitalized on important instrumentation initiatives with key advances in nuclear science. In 1999, the nuclear science community adopted the Argonne concept for a multi-beam superconducting linear accelerator driver as the design of choice for the next major facility in the field, a Rare Isotope Accelerator (RIA), as recommended by the Nuclear Science Advisory Committee's 1996 Long Range Plan. Argonne has made significant R&D progress on almost all aspects of the design concept, including the fast gas catcher (to allow fast fragmentation beams to be stopped and reaccelerated) that in large part defined the RIA concept, the superconducting rf technology for the driver accelerator, the multiple-charge-state concept (to permit the facility to meet the design intensity goals with existing ion-source technology), and designs and tests of high-power target concepts to effectively deal with the full beam power of the driver linac. An NSAC subcommittee recommended the Argonne concept and set as the design goal uranium beams of 100-kW power at 400 MeV/u. Argonne demonstrated that this goal can be met with an innovative, but technically in-hand, design. The heavy-ion research program focused on GammaSphere, the premier facility for nuclear structure gamma-ray studies. One example of the

  13. Multiscale Hy3S: Hybrid stochastic simulation for supercomputers

    PubMed Central

    Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N

    2006-01-01

    Background Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Results Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems
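
    Hy3S itself is not reproduced here; for orientation, the sketch below implements the exact jump-Markov baseline (the Gillespie direct method) for a single birth-death process, the kind of simulation that hybrid methods approximate and accelerate. The rates are illustrative, not from the paper.

        # Gillespie direct method for a birth-death process (exact SSA baseline).
        import random

        def gillespie(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0):
            t, x, trace = 0.0, x0, []
            while t < t_end:
                a1, a2 = k_birth, k_death * x      # reaction propensities
                a0 = a1 + a2
                t += random.expovariate(a0)        # exponential waiting time
                x += 1 if random.random() < a1 / a0 else -1
                trace.append((t, x))
            return trace

        trace = gillespie()
        print("final copy number:", trace[-1][1])   # fluctuates near k_birth/k_death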

  14. Failsafe mechanisms couple division and DNA replication in bacteria

    PubMed Central

    Arjes, Heidi A.; Kriel, Allison; Sorto, Nohemy A.; Shaw, Jared T.; Wang, Jue D.; Levin, Petra Anne

    2014-01-01

    Summary The past twenty years have seen tremendous advances in our understanding of the mechanisms underlying bacterial cytokinesis, particularly the composition of the division machinery and the factors controlling its assembly [1]. At the same time, we understand very little about the relationship between cell division and other cell cycle events in bacteria. Here we report that inhibiting division in Bacillus subtilis and Staphylococcus aureus quickly leads to an arrest in the initiation of new rounds of DNA replication followed by a complete arrest in cell growth. Arrested cells are metabolically active but unable to initiate new rounds of either DNA replication or division when shifted to permissive conditions. Inhibiting DNA replication results in entry into a similar quiescent state, in which cells are unable to resume growth or division when returned to permissive conditions. Our data suggest the presence of two failsafe mechanisms: one linking division to the initiation of DNA replication and another linking the initiation of DNA replication to division. These findings contradict the prevailing view of the bacterial cell cycle as a series of coordinated but uncoupled events. Importantly, the terminal nature of the cell cycle arrest validates the bacterial cell cycle machinery as an effective target for antimicrobial development. PMID:25176632

  15. Hurricane Modeling and Supercomputing: Can a global mesoscale model be useful in improving forecasts of tropical cyclogenesis?

    NASA Astrophysics Data System (ADS)

    Shen, B.; Tao, W.; Atlas, R.

    2007-12-01

    Hurricane modeling, along with guidance from observations, has been used to help construct hurricane theories since the 1960s. CISK (conditional instability of the second kind, Charney and Eliassen 1964; Ooyama 1964, 1969) and WISHE (wind-induced surface heat exchange, Emanuel 1986) are among the well-known theories being used to understand hurricane intensification. For hurricane genesis, observations have indicated the importance of large-scale flows (e.g., the Madden-Julian Oscillation or MJO, Maloney and Hartmann, 2000) on the modulation of hurricane activity. Recent modeling studies have focused on the role of the MJO and Rossby waves (e.g., Ferreira and Schubert, 1996; Aiyyer and Molinari, 2003) and/or the interaction of small-scale vortices (e.g., Holland 1995; Simpson et al. 1997; Hendricks et al. 2004), whose determinism could also be set by large-scale flows. The aforementioned studies suggest a unified view of hurricane formation, consisting of multiscale processes such as scale transition (e.g., from the MJO to Equatorial Rossby Waves and from waves to vortices) and scale interactions among vortices, convection, and surface heat and moisture fluxes. To depict the processes in this unified view, a high-resolution global model is needed. During the past several years, supercomputers have enabled the deployment of ultra-high resolution global models, obtaining remarkable forecasts of hurricane track and intensity (Atlas et al. 2005; Shen et al. 2006). In this work, hurricane genesis is investigated with the aid of a global mesoscale model on the NASA Columbia supercomputer by conducting numerical experiments on the genesis of six consecutive tropical cyclones (TCs) in May 2002. These TCs include two pairs of twin TCs in the Indian Ocean, Supertyphoon Hagibis in the West Pacific Ocean and Hurricane Alma in the East Pacific Ocean. It is found that the model is capable of predicting the genesis of five of these TCs about two to three days in advance. Our

  16. Physics Division annual report 2004.

    SciTech Connect

    Glover, J.

    2006-04-06

    lead in the development and exploitation of the new technical concepts that will truly make RIA, in the words of NSAC, "the world-leading facility for research in nuclear structure and nuclear astrophysics". The performance standards for new classes of superconducting cavities continue to increase. Driver linac transients and faults have been analyzed to understand reliability issues and failure modes. Liquid-lithium targets were shown to successfully survive the full-power deposition of a RIA beam. Our science and our technology continue to point the way to this major advance. It is a tremendously exciting time in science, for RIA holds the keys to unlocking important secrets of nature. The work described here shows how far we have come and makes it clear we know the path to meet these intellectual challenges. The great progress that has been made in meeting the exciting intellectual challenges of modern nuclear physics reflects the talents and dedication of the Physics Division staff and the visitors, guests and students who bring so much to the research.

  17. Activities: Understanding Division of Fractions.

    ERIC Educational Resources Information Center

    Bezuk, Nadine S.; Armstrong, Barbara E.

    1993-01-01

    Presents a series of five activities that introduce division of fractions through real-world situations. Discusses problems related to resurfacing a highway, painting dividing stripes on a highway, covering one area A with another area B, looking for patterns, and maximizing the result of a division problem. Includes reproducible worksheets. (MDH)

  18. Lightning Talks 2015: Theoretical Division

    SciTech Connect

    Shlachter, Jack S.

    2015-11-25

    This document is a compilation of slides from a number of student presentations given to LANL Theoretical Division members. The subjects cover the range of activities of the Division, including plasma physics, environmental issues, materials research, bacterial resistance to antibiotics, and computational methods.

  19. The Division of Family Roles.

    ERIC Educational Resources Information Center

    Ericksen, Julia A.; And Others

    1979-01-01

    Analyzes the marital role division between couples in the Philadelphia area, concentrating on the division of household tasks, child care, and paid employment. Data support a marital power model with husband's income negatively related and wife's education positively related to shared roles. Blacks are more likely to share roles. (Author)

  20. Chemical Technology Division, Annual technical report, 1991

    SciTech Connect

    Not Available

    1992-03-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1991 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion and coal-fired magnetohydrodynamics; (3) methods for treatment of hazardous and mixed hazardous/radioactive waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources; chemistry of superconducting oxides and other materials of interest with technological application; interfacial processes of importance to corrosion science, catalysis, and high-temperature superconductivity; and the geochemical processes involved in water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  1. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer, which connects workstations through a network and utilizes them when they are idle. The resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  2. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    NASA Astrophysics Data System (ADS)

    Cabrillo, I.; Cabellos, L.; Marco, J.; Fernandez, J.; Gonzalez, I.

    2014-06-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its latest-generation FDR Infiniband network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  3. 17th Edition of TOP500 List of World's Fastest Supercomputers Released

    SciTech Connect

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack J.; Simon, Horst D.

    2001-06-21

    17th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, GERMANY; KNOXVILLE, TENN.; BERKELEY, CALIF. In what has become a much-anticipated event in the world of high-performance computing, the 17th edition of the TOP500 list of the world's fastest supercomputers was released today (June 21). The latest edition of the twice-yearly ranking finds IBM as the leader in the field, with 40 percent in terms of installed systems and 43 percent in terms of total performance of all the installed systems. In second place in terms of installed systems is Sun Microsystems with 16 percent, while Cray Inc. retained second place in terms of performance (13 percent). SGI Inc. was third in both categories, with 63 installed systems (12.6 percent) and 10.2 percent of total performance.

  4. Improving the Availability of Supercomputer Job Input Data Using Temporal Replication

    SciTech Connect

    Wang, Chao; Zhang, Zhe; Ma, Xiaosong; Vazhkudai, Sudharshan S; Mueller, Frank

    2009-06-01

    Storage systems in supercomputers are a major reason for service interruptions. RAID solutions alone cannot provide sufficient protection as (1) growing average disk recovery times make RAID groups increasingly vulnerable to disk failures during reconstruction, and (2) RAID does not help with higher-level faults such as failed I/O nodes. This paper presents a complementary approach based on the observation that files in the supercomputer scratch space are typically accessed by batch jobs whose execution can be anticipated. Therefore, we propose to transparently, selectively, and temporarily replicate 'active' job input data by coordinating the parallel file system with the batch job scheduler. We have implemented the temporal replication scheme in the popular Lustre parallel file system and evaluated it with real-cluster experiments. Our results show that the scheme allows for fast online data reconstruction, with a reasonably low overall space and I/O bandwidth overhead.
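
    A minimal sketch of the replication idea in Python, assuming hypothetical job metadata and plain file copies (the paper's actual mechanism lives inside Lustre and the batch scheduler; names like replicate_inputs are illustrative, not its API):

        import pathlib
        import shutil

        def replicate_inputs(job, replica_root):
            """Copy a queued job's input files to a second storage target.

            Sketch only: 'replication' here is a plain copy made once the
            scheduler anticipates that the job will run soon.
            """
            replicas = []
            for src in map(pathlib.Path, job["inputs"]):
                dst = pathlib.Path(replica_root) / src.name
                shutil.copy2(src, dst)      # replica exists only while the job is active
                replicas.append(dst)
            return replicas

        def drop_replicas(replicas):
            """Reclaim scratch space once the job has consumed its input."""
            for r in replicas:
                r.unlink(missing_ok=True)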

  5. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    SciTech Connect

    Meneses, Esteban; Ni, Xiang; Jones, Terry R; Maxwell, Don E

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
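
    The cross-correlation step lends itself to a compact illustration: given timestamped failure events and job execution windows, find the jobs a failure could have hit. A toy Python sketch with invented field names, not the authors' tooling:

        def jobs_hit_by_failures(jobs, failures):
            """jobs: dicts {"id", "start", "end", "node"}; failures: dicts {"time", "node"}."""
            hit = set()
            for f in failures:
                for j in jobs:
                    if j["node"] == f["node"] and j["start"] <= f["time"] <= j["end"]:
                        hit.add(j["id"])    # failure fell inside this job's window
            return hit

        jobs = [{"id": "a1", "start": 0, "end": 10, "node": "n42"},
                {"id": "b2", "start": 5, "end": 20, "node": "n7"}]
        failures = [{"time": 8.5, "node": "n7"}]
        print(jobs_hit_by_failures(jobs, failures))    # {'b2'}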

  6. Achieving supercomputer performance for neural net simulation with an array of digital signal processors

    SciTech Connect

    Muller, U.A.; Baumle, B.; Kohler, P.; Gunzinger, A.; Guggenbuhl, W.

    1992-10-01

    Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.

  7. Some parallel algorithms on the four processor Cray X-MP4 supercomputer

    SciTech Connect

    Kincaid, D.R.; Oppe, T.C.

    1988-05-01

    Three numerical studies of parallel algorithms on a four processor Cray X-MP4 supercomputer are presented. These numerical experiments involve the following: a parallel version of ITPACKV 2C, a package for solving large sparse linear systems; a parallel version of the conjugate gradient method with line Jacobi preconditioning; and several parallel algorithms for computing the LU-factorization of dense matrices. 27 refs., 4 tabs.
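
    For reference, the preconditioned conjugate gradient iteration named above is short enough to state in full. A serial NumPy sketch with a diagonal (Jacobi) preconditioner; the paper's contribution is the parallelization of such kernels on the X-MP, which this sketch does not attempt:

        import numpy as np

        def jacobi_pcg(A, b, tol=1e-8, max_iter=200):
            """Conjugate gradient with diagonal (Jacobi) preconditioning."""
            x = np.zeros_like(b)
            r = b - A @ x
            Minv = 1.0 / np.diag(A)              # preconditioner M = diag(A)
            z = Minv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = Minv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(jacobi_pcg(A, b))                  # ~[0.0909, 0.6364]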

  8. Physics Division computer facilities

    SciTech Connect

    Cyborski, D.R.; Teh, K.M.

    1995-08-01

    The Physics Division maintains several computer systems for data analysis, general-purpose computing, and word processing. While the VMS VAX clusters are still used, this past year saw a greater shift to the Unix cluster with the addition of more RISC-based Unix workstations. The main Divisional VAX cluster, which consists of two VAX 3300s configured as a dual-host system, serves as boot node and disk server to seven other satellite nodes consisting of two VAXstation 3200s, three VAXstation 3100 machines, a VAX-11/750, and a MicroVAX II. There are three 6250/1600 bpi 9-track tape drives, six 8-mm tape drives, and about 9.1 GB of disk storage served to the cluster by the various satellites. Also, two of the satellites (the MicroVAX and VAX-11/750) have DAPHNE front-end interfaces for data acquisition. Since the tape drives are accessible cluster-wide via a software package, they are, in addition to replay, used for tape-to-tape copies. There is, however, a satellite node outfitted with two 8-mm drives available for this purpose. Although not part of the main cluster, a DEC 3000 Alpha machine obtained for data acquisition is also available for data replay. In one case, users reported a performance increase by a factor of 10 when using this machine.

  9. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    SciTech Connect

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  10. Division x: Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Taylor, Russ; Chapman, Jessica; Rendong, Nan; Carilli, Christopher; Giovannini, Gabriele; Hills, Richard; Hirabayashi, Hisashi; Jonas, Justin; Lazio, Joseph; Morganti, Raffaella; Rubio, Monica; Shastri, Prajval

    2012-04-01

    This triennium has seen a phenomenal investment in the development of observational radio astronomy facilities in all parts of the globe at a scale that significantly impacts the international community. This includes both major enhancements such as the transition from the VLA to the EVLA in North America, and the development of new facilities such as LOFAR, ALMA, FAST, and Square Kilometre Array precursor telescopes in Australia and South Africa. These developments are driven by advances in radio-frequency, digital, and information technologies that tremendously enhance the capabilities in radio astronomy. These new developments foreshadow major scientific advances driven by radio observations in the next triennium. We highlight these facility developments in section 3 of this report. A selection of science highlights from this triennium is summarized in section 2.

  11. Using the LANSCE irradiation facility to predict the number of fatal soft errors in one of the world’s fastest supercomputers

    NASA Astrophysics Data System (ADS)

    Michalak, S. E.; Harris, K. W.; Hengartner, N. W.; Takala, B. E.; Wender, S. A.

    2005-12-01

    Los Alamos National Laboratory (LANL) is home to the Los Alamos Neutron Science Center (LANSCE). LANSCE is a unique facility because its neutron spectrum closely mimics the neutron spectrum at terrestrial and aircraft altitudes, but is many times more intense. Thus, LANSCE provides an ideal setting for accelerated testing of semiconductor and other devices that are susceptible to cosmic ray induced neutrons. Many industrial companies use LANSCE to estimate device susceptibility to cosmic ray induced neutrons, and it has also been used to test parts from one of LANL's supercomputers, the ASC (Advanced Simulation and Computing Program) Q. This paper discusses our use of the LANSCE facility to study components in Q including a comparison with failure data from Q.

  12. Accelerator and Fusion Research Division 1989 summary of activities

    SciTech Connect

    Not Available

    1990-06-01

    This report discusses the research being conducted at Lawrence Berkeley Laboratory's Accelerator and Fusion Research Division. The main topics covered are: heavy-ion fusion accelerator research; magnetic fusion energy; advanced light source; center for x-ray optics; exploratory studies; high-energy physics technology; and bevalac operations.

  13. Evaluation of the education program for Supercomputing `95

    SciTech Connect

    Caldwell, G.; Abbott, G.

    1995-12-31

    Evaluation of the SC '95 Education Program indicated a very high level of satisfaction with all aspects of the program. Teachers viewed the hands-on sessions and the opportunity to network with other education professionals as the most valuable aspects of the program. Longer and more numerous grade-appropriate hands-on lessons were requested for next year's education program. Several suggestions related to programmatic issues for inclusion in future education programs were made by teachers attending SC '95. These include: a greater variety of topics for K-5 teachers, a C++ session, repeat sessions for hot topics such as JAVA, and additional sessions on assessment and evaluation. In addition, survey respondents requested structured, small-group sessions in which experts present information related to topics such as grant writing, formulating lesson plans, and dealing with technology issues as related to educational reform. If the purpose of the SC Education Program is to educate the nation's youth in the power of computational science and its applications, then submissions for papers, panels, and hands-on sessions should be critically evaluated with that in mind. One suggestion for future planning is to offer sessions consistent with the grade and experience levels of the teachers who will be attending the conference, such as more sessions for K-5 teachers. Before accepting sessions for presentation, consideration might also be given to what format (i.e., lecture, hands-on, small-group discussion, etc.) would be appropriate to facilitate implementation of these programs in the classroom. As computational science and the use of technology in the classroom mature, the SC Education Program needs to be reexamined so that it provides information not available locally to the education community.

  14. Physics division annual report - October 2000.

    SciTech Connect

    Thayer, K.

    2000-10-16

    This report summarizes the research performed in the past year in the Argonne Physics Division. The Division's programs include the operation of ATLAS as a national heavy-ion user facility, nuclear structure and reaction research with beams of heavy ions, accelerator research and development (especially in superconducting radio frequency technology), nuclear theory, and medium-energy nuclear physics. The Division took significant strides forward in its science and its initiatives for the future in the past year. Major progress was made in developing the concept and the technology for the future advanced facility for beams of short-lived nuclei, the Rare Isotope Accelerator. The scientific program capitalized on important instrumentation initiatives with key advances in nuclear science. In 1999, the nuclear science community adopted the Argonne concept for a multi-beam superconducting linear accelerator driver as the design of choice for the next major facility in the field, a Rare Isotope Accelerator (RIA), as recommended by the Nuclear Science Advisory Committee's 1996 Long Range Plan. Argonne has made significant R&D progress on almost all aspects of the design concept, including the fast gas catcher (to allow fast fragmentation beams to be stopped and reaccelerated), which in large part defined the RIA concept; the superconducting rf technology for the driver accelerator; the multiple-charge-state concept (to permit the facility to meet the design intensity goals with existing ion-source technology); and designs and tests of high-power target concepts to deal effectively with the full beam power of the driver linac. An NSAC subcommittee recommended the Argonne concept and set as the design goal uranium beams of 100-kW power at 400 MeV/u. Argonne demonstrated that this goal can be met with an innovative, but technically in-hand, design.

  15. Chemical Technology Division annual technical report, 1993

    SciTech Connect

    Battles, J.E.; Myles, K.M.; Laidler, J.J.; Green, D.W.

    1994-04-01

    During this period, the Chemical Technology (CMT) Division conducted research and development in the following areas: advanced batteries and fuel cells; fluidized-bed combustion and coal-fired magnetohydrodynamics; treatment of hazardous waste and mixed hazardous/radioactive waste; reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; separating and recovering transuranic elements, concentrating radioactive waste streams with advanced evaporators, and producing 99Mo from low-enriched uranium; recovering actinides from IFR core and blanket fuel, removing fission products from recycled fuel, and removing actinides from the spent fuel of commercial water-cooled nuclear reactors; and the physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources and novel ceramic precursors; materials chemistry of superconducting oxides, electrified metal/solution interfaces, molecular sieve structures, thin-film diamond surfaces, effluents from wood combustion, and molten silicates; and the geochemical processes involved in water-rock interactions. The Analytical Chemistry Laboratory in CMT also provides a broad range of analytical chemistry support.

  16. Time-division SQUID multiplexers

    NASA Astrophysics Data System (ADS)

    Irwin, K. D.; Vale, L. R.; Bergren, N. E.; Deiker, S.; Grossman, E. N.; Hilton, G. C.; Nam, S. W.; Reintsema, C. D.; Rudman, D. A.; Huber, M. E.

    2002-02-01

    SQUID multiplexers make it possible to build arrays of thousands of low-temperature bolometers and microcalorimeters based on superconducting transition-edge sensors with a manageable number of readout channels. We discuss the technical tradeoffs between proposed time-division multiplexer and frequency-division multiplexer schemes and motivate our choice of time division. Our first-generation SQUID multiplexer is now in use in an astronomical instrument. We describe our second-generation SQUID multiplexer, which is based on a new architecture that significantly reduces the dissipation of power at the first stage, allowing thousands of SQUIDs to be operated at the base temperature of a cryostat.
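
    The time-division principle is simple to state in code: each sensor gets its own slot in a repeating readout frame, so one output line serves many channels. A toy Python sketch with an invented sensor interface, not the instrument's electronics:

        import random

        def tdm_readout(sensors, n_frames):
            """Round-robin (time-division) sampling of many channels onto one stream.

            sensors: list of zero-argument callables, one per channel.
            """
            stream = []
            for frame in range(n_frames):
                for ch, read in enumerate(sensors):   # one time slot per channel
                    stream.append((frame, ch, read()))
            return stream

        sensors = [lambda ch=ch: ch + 0.01 * random.random() for ch in range(4)]
        print(tdm_readout(sensors, 2)[:4])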

  17. Physics division annual report 2006.

    SciTech Connect

    Glover, J.; Physics

    2008-02-28

    This report highlights the activities of the Physics Division of Argonne National Laboratory in 2006. The Division's programs include the operation as a national user facility of ATLAS, the Argonne Tandem Linear Accelerator System, research in nuclear structure and reactions, nuclear astrophysics, nuclear theory, investigations in medium-energy nuclear physics as well as research and development in accelerator technology. The mission of nuclear physics is to understand the origin, evolution and structure of baryonic matter in the universe--the core of matter, the fuel of stars, and the basic constituent of life itself. The Division's research focuses on innovative new ways to address this mission.

  18. Environmental Sciences Division annual progress report for period ending September 30, 1982. Environmental Sciences Division Publication No. 2090. [Lead abstract]

    SciTech Connect

    Not Available

    1983-04-01

    Separate abstracts were prepared for 12 of the 14 sections of the Environmental Sciences Division annual progress report. The other 2 sections deal with educational activities. The programs discussed deal with advanced fuel energy, toxic substances, environmental impacts of various energy technologies, biomass, low-level radioactive waste management, the global carbon cycle, and aquatic and terrestrial ecology. (KRM)

  19. High Energy Physics division semiannual report of research activities, January 1, 1998--June 30, 1998.

    SciTech Connect

    Ayres, D. S.; Berger, E. L.; Blair, R.; Bodwin, G. T.; Drake, G.; Goodman, M. C.; Guarino, V.; Klasen, M.; Lagae, J.-F.; Magill, S.; May, E. N.; Nodulman, L.; Norem, J.; Petrelli, A.; Proudfoot, J.; Repond, J.; Schoessow, P. V.; Sinclair, D. K.; Spinka, H. M.; Stanek, R.; Underwood, D.; Wagner, R.; White, A. R.; Yokosawa, A.; Zachos, C.

    1999-03-09

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1998 through June 30, 1998. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of Division publications and colloquia are included.

  20. High Energy Physics Division. Semiannual report of research activities, January 1, 1995--June 30, 1995

    SciTech Connect

    Wagner, R.; Schoessow, P.; Talaga, R.

    1995-12-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1995-July 31, 1995. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  1. High Energy Physics Division semiannual report of research activities, July 1, 1992--December 30, 1992

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R.

    1993-07-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1992--December 30, 1992. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  2. High Energy Physics Division semiannual report of research activities, January 1, 1996--June 30, 1996

    SciTech Connect

    Norem, J.; Rezmer, R.; Wagner, R.

    1997-07-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1 - June 30, 1996. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of Division publications and colloquia are included.

  3. High Energy Physics Division semiannual report of research activities, January 1, 1994--June 30, 1994

    SciTech Connect

    Not Available

    1994-09-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1994-June 30, 1994. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  4. High Energy Physics Division semiannual report of research activities, January 1, 1992--June 30, 1992

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R.

    1992-11-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1992--June 30, 1992. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  5. High Energy Physics Division semiannual report of research activities, July 1, 1994--December 31, 1994

    SciTech Connect

    Wagner, R.; Schoessow, P.; Talaga, R.

    1995-04-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1994--December 31, 1994. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  6. High Energy Physics Division semiannual report of research activities, January 1, 1993--June 30, 1993

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R.

    1993-12-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1993--June 30, 1993. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  7. High Energy Physics Division semiannual report of research activities, July 1, 1993--December 31, 1993

    SciTech Connect

    Wagner, R.; Moonier, P.; Schoessow, P.; Talaga, R.

    1994-05-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1993--December 31, 1993. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  8. High Energy Physics Division semiannual report of research activities July 1, 1997 - December 31, 1997.

    SciTech Connect

    Norem, J.; Rezmer, R.; Schuur, C.; Wagner, R.

    1998-08-11

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period July 1, 1997--December 31, 1997. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of Division publications and colloquia are included.

  9. High Energy Physics Division semiannual report of research activities, July 1, 1991--December 31, 1991

    SciTech Connect

    Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R.

    1992-04-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1991--December 31, 1991. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  10. Division II: Sun and Heliosphere

    NASA Astrophysics Data System (ADS)

    Melrose, Donald B.; Martinez Pillet, Valentin; Webb, David F.; Bougeret, Jean-Louis; Klimchuk, James A.; Kosovichev, Alexander; van Driel-Gesztelyi, Lidia; von Steiger, Rudolf

    2010-05-01

    This report is on activities of the Division at the General Assembly in Rio de Janeiro. Summaries of scientific activities over the past triennium have been published in Transactions A, see Melrose et al. (2008), Klimchuk et al. (2008), Martinez Pillet et al. (2008) and Bougeret et al. (2008). The business meeting of the three Commissions were incorporated into the business meeting of the Division. This report is based in part on minutes of the business meeting, provided by the Secretary of the Division, Lidia van Driel-Gesztelyi, and it also includes reports provided by the Presidents of the Commissions (C10, C12, C49) and of the Working Groups (WGs) in the Division.

  11. Division 1137 property control system

    SciTech Connect

    Pastor, D.J.

    1982-01-01

    An automated data processing property control system was developed by Mobile and Remote Range Division 1137. This report describes the operation of the system and examines ways of using it in operational planning and control.

  12. E-Division activities report

    SciTech Connect

    Barschall, H.H.

    1984-07-01

    E (Experimental Physics) Division carries out basic and applied research in atomic and nuclear physics, in materials science, and in other areas related to the missions of the Laboratory. Some of the activities are cooperative efforts with other divisions of the Laboratory, and, in a few cases, with other laboratories. Many of the experiments are directly applicable to problems in weapons and energy, some have only potential applied uses, and others are in pure physics. This report presents abstracts of papers published by E (Experimental Physics) Division staff members between July 1983 and June 1984. In addition, it lists the members of the scientific staff of the division, including visitors and students, and some of the assignments of staff members on scientific committees. A brief summary of the budget is included.

  13. Mitochondrial division in Caenorhabditis elegans.

    PubMed

    Gandre, Shilpa; van der Bliek, Alexander M

    2007-01-01

    The study of mitochondrial division proteins has largely focused on yeast and mammalian cells. We describe methods to use Caenorhabditis elegans as an alternative model for studying mitochondrial division, taking advantage of the many wonderful resources provided by the C. elegans community. Our methods are largely based on manipulation of gene expression using classic and molecular genetic techniques combined with fluorescence microscopy. Some biochemical methods are also included. As antibodies become available, these biochemical methods are likely to become more sophisticated. PMID:18314747

  14. E-Division activities report

    SciTech Connect

    Barschall, H.H.

    1981-07-01

    This report describes some of the activities in E (Experimental Physics) Division during the past year. E-Division carries out research and development in areas related to the missions of the Laboratory. Many of the activities are in pure and applied atomic and nuclear physics and in material science. In addition this report describes work on accelerators, microwaves, plasma diagnostics, determination of atmospheric oxygen and of nitrogen in tissue.

  15. E-Division activities report

    SciTech Connect

    Barschall, H.H.

    1983-07-01

    This report describes some of the activities in E (Experimental Physics) Division during the past year. E-division carries out research and development in areas related to the missions of the Laboratory. Many of the activities are in pure and applied atomic and nuclear physics and in materials science. In addition, this report describes development work on accelerators and on instrumentation for plasma diagnostics, nitrogen exchange rates in tissue, and breakdown in gases by microwave pulses.

  16. ISOTOPE HYDROLOGY LABORATORY (WATER QUALITY MANAGEMENT BRANCH, WATER SUPPLY AND WATER RESOURCES DIVISION, NRMRL)

    EPA Science Inventory

    The mission of NRMRL's Water Supply and Water Resources Division's Isotope Hydrology Laboratory (IHL) is to resolve environmental hydrology problems through research and application of naturally occurring isotopes. The emergent field of isotope hydrology follows advances in anal...

  17. Accelerator & Fusion Research Division: 1993 Summary of activities

    SciTech Connect

    Chew, J.

    1994-04-01

    The Accelerator and Fusion Research Division (AFRD) is not only one of the largest scientific divisions at LBL, but also one of the most diverse. Major efforts include: (1) investigations in both inertial and magnetic fusion energy; (2) operation of the Advanced Light Source, a state-of-the-art synchrotron radiation facility; (3) exploratory investigations of novel radiation sources and colliders; (4) research and development in superconducting magnets for accelerators and other scientific and industrial applications; and (5) ion beam technology development for nuclear physics and for industrial and biomedical applications. Each of these topics is discussed in detail in this book.

  18. Accelerator and Fusion Research Division: 1993 Summary of activities

    NASA Astrophysics Data System (ADS)

    Chew, J.

    1994-04-01

    The Accelerator and Fusion Research Division (AFRD) is not only one of the largest scientific divisions at LBL, but also one of the most diverse. Major efforts include: (1) investigations in both inertial and magnetic fusion energy; (2) operation of the Advanced Light Source, a state-of-the-art synchrotron radiation facility; (3) exploratory investigations of novel radiation sources and colliders; (4) research and development in superconducting magnets for accelerators and other scientific and industrial applications; and (5) ion beam technology development for nuclear physics and for industrial and biomedical applications. Each of these topics is discussed in detail in this book.

  19. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP v1.0) in a massively parallel supercomputing environment - a case study on JUQUEEN (IBM Blue Gene/Q)

    NASA Astrophysics Data System (ADS)

    Gasper, F.; Goergen, K.; Shrestha, P.; Sulis, M.; Rihani, J.; Geimer, M.; Kollet, S.

    2014-10-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing nonlinear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models that are discussed in this study, utilizing the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP v1.0) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm using the OASIS suite of external couplers, and require memory and load balancing considerations in the exchange of the coupling fields between different component models and the allocation of computational resources, respectively. Using the advanced profiling and tracing tool Scalasca to determine an optimum load balancing leads to a 19% speedup. In massively parallel supercomputer environments, the coupler OASIS-MCT is recommended, which resolves memory limitations that may be significant in case of very large computational domains and exchange fields as they occur in these specific test cases and in many applications in terrestrial research. However, model I/O and initialization in the petascale range still require major attention, as they constitute true big data challenges in light of future exascale computing resources. Based on a factor-two speedup due to compiler optimizations, a refactored coupling interface using OASIS-MCT and an optimum load balancing, the problem size in a weak scaling study can be increased by a factor of 64 from 512 to 32 768 processes while maintaining parallel efficiencies above 80% for the component models.
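
    The weak-scaling efficiency quoted above has a one-line definition: with the work per process held fixed, the ideal runtime is constant, so E(N) = T(base)/T(N). A Python sketch with invented timings (not TerrSysMP data):

        def weak_scaling_efficiency(t_base, timings):
            """Parallel efficiency for a weak-scaling study (fixed work per process)."""
            return {n: t_base / t for n, t in timings.items()}

        timings = {512: 100.0, 4096: 110.0, 32768: 123.0}   # hypothetical seconds
        for n, e in weak_scaling_efficiency(timings[512], timings).items():
            print(f"{n:6d} processes: efficiency {e:.0%}")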

  20. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
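
    The computational heart of any N-body code is the gravitational interaction sum. A minimal direct-sum NumPy kernel (O(N^2), Plummer-softened, G = 1) conveys what codes like HACC accelerate, though their actual solvers are far more sophisticated:

        import numpy as np

        def accelerations(pos, mass, soft=1e-2):
            """Direct-sum gravitational accelerations with Plummer softening."""
            diff = pos[None, :, :] - pos[:, None, :]     # pairwise separations r_j - r_i
            dist2 = (diff ** 2).sum(-1) + soft ** 2
            inv_d3 = dist2 ** -1.5
            np.fill_diagonal(inv_d3, 0.0)                # no self-interaction
            return (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

        rng = np.random.default_rng(0)
        pos = rng.standard_normal((100, 3))
        print(accelerations(pos, np.ones(100)).shape)    # (100, 3)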

  1. Integration of PanDA workload management system with Titan supercomputer at OLCF

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA a new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
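
    The backfill logic reads naturally as pseudocode: query the free node blocks, then shape the job to fit the largest usable one. A hedged Python sketch (the function and numbers are illustrative, not PanDA's actual API):

        def shape_job_for_backfill(free_blocks, max_nodes, max_minutes):
            """free_blocks: (nodes, minutes_until_reclaimed) tuples from a scheduler query.

            Returns the (nodes, walltime) to request, or None if nothing usable is free.
            """
            usable = [(n, t) for n, t in free_blocks if t >= 10]     # skip very short gaps
            if not usable:
                return None
            nodes, minutes = max(usable, key=lambda b: b[0] * b[1])  # largest node-minutes
            return min(nodes, max_nodes), min(minutes, max_minutes)

        print(shape_job_for_backfill([(300, 25), (1200, 12), (50, 240)], 1000, 120))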

  2. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on mainframe computers, then minicomputers, and more recently, on microcomputers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer, since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We describe the Cray's shared-memory vector architecture and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
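
    The flavor of the "exploit the parallelism" step: rewrite an element-by-element update so the whole grid is processed in one array expression, which is what a vectorizing compiler (or NumPy, standing in here) wants to see. Purely illustrative; the actual model is FORTRAN:

        import numpy as np

        def grow_scalar(biomass, rate, days):
            """One grid cell at a time: the original scalar formulation."""
            out = biomass.copy()
            for i in range(len(out)):
                out[i] = out[i] * (1.0 + rate[i]) ** days
            return out

        def grow_vector(biomass, rate, days):
            """Whole grid in one vectorizable array expression."""
            return biomass * (1.0 + rate) ** days

        biomass, rate = np.ones(10_000), np.full(10_000, 0.001)
        assert np.allclose(grow_scalar(biomass, rate, 30), grow_vector(biomass, rate, 30))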

  3. Development of the general interpolants method for the CYBER 200 series of supercomputers

    NASA Technical Reports Server (NTRS)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential-equation types, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  4. Learning about supercomputers on a microcomputer with no keyboard: a science museum exhibit

    SciTech Connect

    Stoddard, M.; Buzbee, B.L.

    1984-01-01

    A microcomputer exhibit was developed to acquaint visitors of the Los Alamos National Laboratory's Bradbury Science Museum with supercomputers and computer-graphics applications. The exhibit is highly interactive, yet the visitor uses only the touch panel of the CD 110 microcomputer. The museum environment presented many constraints to the development team, yet the five-minute exhibit has been extremely popular with visitors. Design details of how each constraint was dealt with to produce a motivating and instructional exhibit are provided. Although the program itself deals with a subject area primarily applicable to Los Alamos, the design features are transferable to other courseware where motivational and learning aspects are of equal importance.

  5. Structural analysis of shallow shells on the CRAY Y-MP supercomputer

    NASA Astrophysics Data System (ADS)

    Qatu, M. S.; Bataineh, A. M.

    1992-10-01

    Structural analysis of shallow shells is performed and relatively accurate displacements and stresses are obtained. An energy method, which is an extension of the Ritz method, is used in the analysis. Algebraic polynomials are used as displacement functions. The numerical problems which resulted in inaccurate stresses in previous publications are improved by making use of symmetry and performing the computations on a supercomputer which has 29-digit double-precision arithmetic. Curvature effects upon deflections and stress resultants of shallow shells with cantilever and 'semi-cantilever' boundaries are studied.

  6. The transition of a real-time single-rotor helicopter simulation program to a supercomputer

    NASA Technical Reports Server (NTRS)

    Martinez, Debbie

    1995-01-01

    This report presents the conversion effort and results of a real-time flight simulation application transition to a CONVEX supercomputer. Enclosed is a detailed description of the conversion process and a brief description of the Langley Research Center's (LaRC) flight simulation application program structure. Currently, this simulation program may be configured to represent Sikorsky S-61 helicopter (a five-blade, single-rotor, commercial passenger-type helicopter) or an Army Cobra helicopter (either the AH-1 G or AH-1 S model). This report refers to the Sikorsky S-61 simulation program since it is the most frequently used configuration.

  7. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer

    PubMed Central

    Ellingson, Sally R.; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C.

    2013-01-01

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm. PMID:24729746
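
    The task-parallel pattern underneath such a screen is a master/worker farm: one rank hands out ligands on demand, the rest dock and report back. A skeletal mpi4py sketch (the dock function is a stand-in for an actual docking run, not the authors' code; launch with mpiexec and at least two ranks):

        from mpi4py import MPI

        def dock(compound):
            return f"score({compound})"        # stand-in for one docking computation

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:                          # master: hand out compounds on demand
            tasks = [f"ligand_{i}" for i in range(100)]
            status = MPI.Status()
            for _ in range(len(tasks) + size - 1):
                comm.recv(source=MPI.ANY_SOURCE, status=status)   # a worker is idle
                task = tasks.pop() if tasks else None             # None = stop signal
                comm.send(task, dest=status.Get_source())
        else:                                  # worker: request, dock, repeat
            while True:
                comm.send(None, dest=0)        # announce readiness
                task = comm.recv(source=0)
                if task is None:
                    break
                dock(task)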

  8. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for synthesis of optimal controllers for full-information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems. This algorithm represents a process that is impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
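
    For flavor, a much-simplified relative of the problem: state-feedback stabilization posed as a single LMI and handed to a generic convex solver. This cvxpy sketch is not the paper's full-information synthesis, only the same "one LMI, one convex solve" structure:

        import numpy as np
        import cvxpy as cp

        # Find K with A + B K stable via:  A Y + Y A' + B Z + Z' B' < 0,  Y > 0,  K = Z Y^-1.
        A = np.array([[0.0, 1.0], [2.0, -1.0]])
        B = np.array([[0.0], [1.0]])
        n, m = B.shape

        Y = cp.Variable((n, n), symmetric=True)
        Z = cp.Variable((m, n))
        eps = 1e-3
        constraints = [Y >> eps * np.eye(n),
                       A @ Y + Y @ A.T + B @ Z + Z.T @ B.T << -eps * np.eye(n)]
        cp.Problem(cp.Minimize(0), constraints).solve()

        K = Z.value @ np.linalg.inv(Y.value)
        print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))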

  9. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.

  10. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs, as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling works. Two-phase flow simulations in heterogeneous media usually require much longer computational time than that in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large and non-linear models in higher resolutions within a reasonable time. However, for making it a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively for general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water to larger ones in a reservoir scale. The performance measurement confirmed that both simulators exhibit excellent

  11. A New Hydrodynamic Model for Numerical Simulation of Interacting Galaxies on Intel Xeon Phi Supercomputers

    NASA Astrophysics Data System (ADS)

    Kulikov, Igor; Chernykh, Igor; Tutukov, Alexander

    2016-05-01

    This paper presents a new hydrodynamic model of interacting galaxies based on the joint solution of multicomponent hydrodynamic equations, first moments of the collisionless Boltzmann equation and the Poisson equation for gravity. Using this model, it is possible to formulate a unified numerical method for solving hyperbolic equations. This numerical method has been implemented for hybrid supercomputers with Intel Xeon Phi accelerators. The collision of spiral and disk galaxies considering the star formation process, supernova feedback and molecular hydrogen formation is shown as a simulation result.
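
    The simplest member of the hyperbolic family such methods target is 1D linear advection, which makes a compact illustration of this style of update (a sketch only, not the authors' scheme):

        import numpy as np

        def advect_upwind(u, a, dx, dt, steps):
            """First-order upwind for u_t + a u_x = 0, a > 0, periodic domain."""
            c = a * dt / dx
            assert c <= 1.0, "CFL condition violated"
            for _ in range(steps):
                u = u - c * (u - np.roll(u, 1))   # backward difference for a > 0
            return u

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u0 = np.exp(-200 * (x - 0.3) ** 2)        # Gaussian pulse
        u = advect_upwind(u0, a=1.0, dx=x[1] - x[0], dt=0.004, steps=50)
        print(float(u.max()))                      # pulse advected (and slightly diffused)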

  12. Beyond Cookies: Understanding Various Division Models

    ERIC Educational Resources Information Center

    Jong, Cindy; Magruder, Robin

    2014-01-01

    Having a deeper understanding of division derived from multiple models is of great importance for teachers and students. For example, students will benefit from a greater understanding of division contexts as they study long division, fractions, and division of fractions. The purpose of this article is to build on teachers' and students'…

  13. Laboratory Astrophysics Division of The AAS (LAD)

    NASA Astrophysics Data System (ADS)

    Salama, Farid; Drake, R. P.; Federman, S. R.; Haxton, W. C.; Savin, D. W.

    2012-10-01

    The purpose of the Laboratory Astrophysics Division (LAD) is to advance our understanding of the Universe through the promotion of fundamental theoretical and experimental research into the underlying processes that drive the Cosmos. LAD represents all areas of astrophysics and planetary sciences. The first new AAS Division in more than 30 years, the LAD traces its history back to the recommendation from the scientific community via the White Paper from the 2006 NASA-sponsored Laboratory Astrophysics Workshop. This recommendation was endorsed by the Astronomy and Astrophysics Advisory Committee (AAAC), which advises the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the U.S. Department of Energy (DOE) on selected issues within the fields of astronomy and astrophysics that are of mutual interest and concern to the agencies. In January 2007, at the 209th AAS meeting, the AAS Council set up a Steering Committee to formulate Bylaws for a Working Group on Laboratory Astrophysics (WGLA). The AAS Council formally established the WGLA with a five-year mandate in May 2007, at the 210th AAS meeting. From 2008 through 2012, the WGLA annually sponsored Meetings in-a-Meeting at the AAS Summer Meetings. In May 2011, at the 218th AAS meeting, the AAS Council voted to convert the WGLA, at the end of its mandate, into a Division of the AAS and requested draft Bylaws from the Steering Committee. In January 2012, at the 219th AAS Meeting, the AAS Council formally approved the Bylaws and the creation of the LAD. The inaugural gathering and the first business meeting of the LAD were held at the 220th AAS meeting in Anchorage in June 2012. You can learn more about LAD by visiting its website at http://lad.aas.org/ and by subscribing to its mailing list.

  14. Laboratory Astrophysics Division of the AAS (LAD)

    NASA Technical Reports Server (NTRS)

    Salama, Farid; Drake, R. P.; Federman, S. R.; Haxton, W. C.; Savin, D. W.

    2012-01-01

    The purpose of the Laboratory Astrophysics Division (LAD) is to advance our understanding of the Universe through the promotion of fundamental theoretical and experimental research into the underlying processes that drive the Cosmos. LAD represents all areas of astrophysics and planetary sciences. The first new AAS Division in more than 30 years, the LAD traces its history back to the recommendation from the scientific community via the White Paper from the 2006 NASA-sponsored Laboratory Astrophysics Workshop. This recommendation was endorsed by the Astronomy and Astrophysics Advisory Committee (AAAC), which advises the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the U.S. Department of Energy (DOE) on selected issues within the fields of astronomy and astrophysics that are of mutual interest and concern to the agencies. In January 2007, at the 209th AAS meeting, the AAS Council set up a Steering Committee to formulate Bylaws for a Working Group on Laboratory Astrophysics (WGLA). The AAS Council formally established the WGLA with a five-year mandate in May 2007, at the 210th AAS meeting. From 2008 through 2012, the WGLA annually sponsored Meetings in-a-Meeting at the AAS Summer Meetings. In May 2011, at the 218th AAS meeting, the AAS Council voted to convert the WGLA, at the end of its mandate, into a Division of the AAS and requested draft Bylaws from the Steering Committee. In January 2012, at the 219th AAS Meeting, the AAS Council formally approved the Bylaws and the creation of the LAD. The inaugural gathering and the first business meeting of the LAD were held at the 220th AAS meeting in Anchorage in June 2012. You can learn more about LAD by visiting its website at http://lad.aas.org/ and by subscribing to its mailing list.

  15. Chemical technology division: Annual technical report 1987

    SciTech Connect

    Not Available

    1988-05-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1987 are presented. In this period, CMT conducted research and development in the following areas: (1) high-performance batteries--mainly lithium-alloy/metal sulfide and sodium/sulfur; (2) aqueous batteries (lead-acid, nickel/iron, etc.); (3) advanced fuel cells with molten carbonate or solid oxide electrolytes; (4) coal utilization, including the heat and seed recovery technology for coal-fired magnetohydrodynamics plants and the technology for fluidized-bed combustion; (5) methods for the electromagnetic continuous casting of steel sheet and for the purification of ferrous scrap; (6) methods for recovery of energy from municipal waste and techniques for treatment of hazardous organic waste; (7) nuclear technology related to a process for separating and recovering transuranic elements from nuclear waste, the recovery processes for discharged fuel and the uranium blanket in a sodium-cooled fast reactor, and waste management; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of fluid catalysis for converting small molecules to desired products; materials chemistry for liquids and vapors at high temperatures; interfacial processes of importance to corrosion science, high-temperature superconductivity, and catalysis; the thermochemistry of various minerals; and the geochemical processes responsible for trace-element migration within the earth's crust. The Division continued to be the major user of the technical support provided by the Analytical Chemistry Laboratory at ANL. 54 figs., 9 tabs.

  16. Chemical Technology Division annual technical report, 1986

    SciTech Connect

    Not Available

    1987-06-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1986 are presented. In this period, CMT conducted research and development in areas that include the following: (1) high-performance batteries - mainly lithium-alloy/metal sulfide and sodium/sulfur; (2) aqueous batteries (lead-acid, nickel/iron, etc.); (3) advanced fuel cells with molten carbonate or solid oxide electrolytes; (4) coal utilization, including the heat and seed recovery technology for coal-fired magnetohydrodynamics plants, the technology for fluidized-bed combustion, and a novel concept for CO/sub 2/ recovery from fossil fuel combustion; (5) methods for recovery of energy from municipal waste; (6) methods for the electromagnetic continuous casting of steel sheet; (7) techniques for treatment of hazardous waste such as reactive metals and trichloroethylenes; (8) nuclear technology related to waste management, a process for separating and recovering transuranic elements from nuclear waste, and the recovery processes for discharged fuel and the uranium blanket in a sodium-cooled fast reactor; and (9) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of catalytic hydrogenation and catalytic oxidation; materials chemistry for associated and ordered solutions at high temperatures; interfacial processes of importance to corrosion science, surface science, and catalysis; the thermochemistry of zeolites and related silicates; and the geochemical processes responsible for trace-element migration within the earth's crust. The Division continued to be the major user of the technical support provided by the Analytical Chemistry Laboratory at ANL. 127 refs., 71 figs., 8 tabs.

  17. Chemical Technology Division annual technical report 1989

    SciTech Connect

    Not Available

    1990-03-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1989 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including high-performance batteries (mainly lithium/iron sulfide and sodium/metal chloride), aqueous batteries (lead-acid and nickel/iron), and advanced fuel cells with molten carbonate and solid oxide electrolytes; (2) coal utilization, including the heat and seed recovery technology for coal-fired magnetohydrodynamics plants and the technology for fluidized-bed combustion; (3) methods for recovery of energy from municipal waste and techniques for treatment of hazardous organic waste; (4) nuclear technology related to a process for separating and recovering transuranic elements from nuclear waste and for producing {sup 99}Mo from low-enriched uranium targets, the recovery processes for discharged fuel and the uranium blanket in a sodium-cooled fast reactor (the Integral Fast Reactor), and waste management; and (5) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of fluid catalysis for converting small molecules to desired products; materials chemistry for superconducting oxides and associated and ordered solutions at high temperatures; interfacial processes of importance to corrosion science, high-temperature superconductivity, and catalysis; and the geochemical processes responsible for trace-element migration within the earth's crust. The Division continued to be administratively responsible for and the major user of the Analytical Chemistry Laboratory at Argonne National Laboratory (ANL).

  18. The ASCI Network for SC '98: Dense Wave Division Multiplexing for Distributed and Distance Computing

    SciTech Connect

    Adams, R.L.; Butman, W.; Martinez, L.G.; Pratt, T.J.; Vahle, M.O.

    1999-06-01

    This document highlights the DISCOM Distance Computing and Communication team's activities at the 1998 Supercomputing conference in Orlando, Florida. This conference is sponsored by the IEEE and ACM. Sandia National Laboratories, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory have participated in this conference for ten years. For the last three years, the three laboratories have had a joint booth at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative). The DISCOM communication team uses the forum to demonstrate and focus communications and networking developments. At SC '98, DISCOM demonstrated the capabilities of Dense Wave Division Multiplexing, exhibited an OC48 ATM encryptor, and coordinated the other networking activities within the booth. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support overall strategies in ATM networking.

  19. Development and uses of upper-division conceptual assessments

    NASA Astrophysics Data System (ADS)

    Wilcox, Bethany R.; Caballero, Marcos D.; Baily, Charles; Sadaghiani, Homeyra; Chasteen, Stephanie V.; Ryan, Qing X.; Pollock, Steven J.

    2015-12-01

    [This paper is part of the Focused Collection on Upper Division Physics Courses.] The use of validated conceptual assessments alongside conventional course exams to measure student learning in introductory courses has become standard practice in many physics departments. These assessments provide a more standard measure of certain learning goals, allowing for comparisons of student learning across instructors, semesters, institutions, and pedagogies. Researchers at the University of Colorado Boulder have developed several similar assessments designed to target the more advanced physics of upper-division classical mechanics, electrostatics, quantum mechanics, and electrodynamics courses. Here, we synthesize the existing research on our upper-division assessments and discuss some of the barriers and challenges associated with their development, validation, and implementation as well as some of the strategies we have used to overcome these barriers.

  20. Divisions of geologic time-major chronostratigraphic and geochronologic units

    USGS Publications Warehouse

    U.S. Geological Survey Geologic Names Committee

    2010-01-01

    Effective communication in the geosciences requires consistent uses of stratigraphic nomenclature, especially divisions of geologic time. A geologic time scale is composed of standard stratigraphic divisions based on rock sequences and is calibrated in years. Over the years, the development of new dating methods and the refinement of previous methods have stimulated revisions to geologic time scales. Advances in stratigraphy and geochronology require that any time scale be periodically updated. Therefore, Divisions of Geologic Time, which shows the major chronostratigraphic (position) and geochronologic (time) units, is intended to be a dynamic resource that will be modified to include accepted changes of unit names and boundary age estimates. This fact sheet is a modification of USGS Fact Sheet 2007-3015 by the U.S. Geological Survey Geologic Names Committee.

  1. Building an academic colorectal division.

    PubMed

    Koltun, Walter A

    2014-06-01

    Colon and rectal surgery is fully justified as a valid subspecialty within academic university health centers, but such formal recognition at the organizational level is not the norm. Creating a colon and rectal division within a greater department of surgery requires an unfailing commitment to academic concepts while promulgating the improvements that come in patient care, research, and teaching from a specialty service perspective. The creation of divisional identity then opens the door for a strategic process that will grow the division even more as well as provide benefits to the institution within which it resides. The fundamentals of core values, academic commitment, and shared success reinforced by receptive leadership are critical. Attention to culture, commitment, collaboration, control, cost, and compensation leads to a successful academic division of colon and rectal surgery. PMID:25067922

  2. Building an Academic Colorectal Division

    PubMed Central

    Koltun, Walter A.

    2014-01-01

    Colon and rectal surgery is fully justified as a valid subspecialty within academic university health centers, but such formal recognition at the organizational level is not the norm. Creating a colon and rectal division within a greater department of surgery requires an unfailing commitment to academic concepts while promulgating the improvements that come in patient care, research, and teaching from a specialty service perspective. The creation of divisional identity then opens the door for a strategic process that will grow the division even more as well as provide benefits to the institution within which it resides. The fundamentals of core values, academic commitment, and shared success reinforced by receptive leadership are critical. Attention to culture, commitment, collaboration, control, cost, and compensation leads to a successful academic division of colon and rectal surgery. PMID:25067922

  3. Chemical Technology Division, Annual technical report, 1991

    SciTech Connect

    Not Available

    1992-03-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1991 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion and coal-fired magnetohydrodynamics; (3) methods for treatment of hazardous and mixed hazardous/radioactive waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources; chemistry of superconducting oxides and other materials of interest with technological application; interfacial processes of importance to corrosion science, catalysis, and high-temperature superconductivity; and the geochemical processes involved in water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  4. Effects of Polyhydroxybutyrate Production on Cell Division

    NASA Technical Reports Server (NTRS)

    Miller, Kathleen; Rahman, Asif; Hadi, Masood Z.

    2015-01-01

    Synthetic biological engineering can be utilized to aid the advancement of improved long-term space flight. The potential to use synthetic biology as a platform to biomanufacture desired equipment on demand using the three dimensional (3D) printer on the International Space Station (ISS) gives long-term NASA missions the flexibility to produce materials as needed on site. Polyhydroxybutyrates (PHBs) are biodegradable, have properties similar to plastics, and can be produced in Escherichia coli using genetic engineering. Using PHBs during space flight could assist mission success by providing a valuable source of biomaterials that can have many potential applications, particularly through 3D printing. It is well documented that during PHB production E. coli cells can become significantly elongated. The elongation of cells reduces the ability of the cells to divide and thus to produce PHB. I aim to better understand cell division during PHB production through the design, building, and testing of synthetic biological circuits, and to identify how to potentially increase yields of PHB by overexpressing FtsZ, the gene responsible for cell division. Ultimately, an increase in the yield will allow more products to be created using the 3D printer on the ISS and beyond, thus aiding astronauts in their missions.

  5. 49 CFR 177.841 - Division 6.1 and Division 2.3 materials.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 2 2011-10-01 2011-10-01 false Division 6.1 and Division 2.3 materials. 177.841... PUBLIC HIGHWAY Loading and Unloading § 177.841 Division 6.1 and Division 2.3 materials. (See also § 177... by other appropriate method, and the marking removed. (b) (c) Division 2.3 (poisonous gas)...

  6. GSFC Heliophysics Science Division 2009 Science Highlights

    NASA Technical Reports Server (NTRS)

    Strong, Keith T.; Saba, Julia L. R.; Strong, Yvonne M.

    2009-01-01

    This report is intended to record and communicate to our colleagues, stakeholders, and the public at large about heliophysics scientific and flight program achievements and milestones for 2009, for which NASA Goddard Space Flight Center's Heliophysics Science Division (HSD) made important contributions. HSD comprises approximately 299 scientists, technologists, and administrative personnel dedicated to the goal of advancing our knowledge and understanding of the Sun and the wide variety of domains that its variability influences. Our activities include: Leading science investigations involving flight hardware, theory, and data analysis and modeling that will answer the strategic questions posed in the Heliophysics Roadmap; Leading the development of new solar and space physics mission concepts and supporting their implementation as Project Scientists; Providing access to measurements from the Heliophysics Great Observatory through our Science Information Systems; and Communicating science results to the public and inspiring the next generation of scientists and explorers.

  7. GSFC Heliophysics Science Division 2008 Science Highlights

    NASA Technical Reports Server (NTRS)

    Gilbert, Holly R.; Strong, Keith T.; Saba, Julia L. R.; Firestone, Elaine R.

    2009-01-01

    This report is intended to record and communicate to our colleagues, stakeholders, and the public at large about heliophysics scientific and flight program achievements and milestones for 2008, for which NASA Goddard Space Flight Center's Heliophysics Science Division (HSD) made important contributions. HSD comprises approximately 261 scientists, technologists, and administrative personnel dedicated to the goal of advancing our knowledge and understanding of the Sun and the wide variety of domains that its variability influences. Our activities include: Leading science investigations involving flight hardware, theory, and data analysis and modeling that will answer the strategic questions posed in the Heliophysics Roadmap; Leading the development of new solar and space physics mission concepts and supporting their implementation as Project Scientists; Providing access to measurements from the Heliophysics Great Observatory through our Science Information Systems; and Communicating science results to the public and inspiring the next generation of scientists and explorers.

  8. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    NASA Technical Reports Server (NTRS)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  9. PREFACE: HITES 2012: 'Horizons of Innovative Theories, Experiments, and Supercomputing in Nuclear Physics'

    NASA Astrophysics Data System (ADS)

    Hecht, K. T.

    2012-12-01

    This volume contains the contributions of the speakers of an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at departmental, university, national and international level. Signed: Ted Hecht, University of Michigan. Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame), Baha Balantekin (University of Wisconsin), Bruce Barrett (University of Arizona), Umit Catalyurek (Ohio State University), David Dean (Oak Ridge National Laboratory), Jutta Escher, Chair (Lawrence Livermore National Laboratory), Jorge Hirsch (UNAM, Mexico), David Rowe (University of Toronto), Brad Sherill (Michigan State University), Joel Tohline (Louisiana State University), Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University), Mark Caprio (University of Notre Dame), Tomas Dytrych (Louisiana State University), Ana Georgieva (INRNE, Bulgaria), Kristina Launey, Co-chair (Louisiana State University), Gabriella Popa (Ohio University Zanesville), James Vary, Co-chair (Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University), Charlie Rasco (Louisiana State University), Karen Richard, Coordinator (Louisiana State University).

  10. Use of high performance networks and supercomputers for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  11. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend in the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes it challenging and expensive, thereby making building energy modeling unfeasible for smaller projects. In this paper, we describe the "Autotune" research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers, which are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
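
    A rough illustration of the agent idea described above: train a fast surrogate regressor on a parametric sweep of simulation results, then use it for cheap calibration against a measured value. This is a hypothetical sketch, not the Autotune code; the two input parameters, the toy response function, and the choice of scikit-learn's RandomForestRegressor are all assumptions made for the example.

```python
# Hypothetical sketch of the "agent" idea: train a fast surrogate on a
# parametric sweep of simulations, then calibrate against a measured value.
# Parameter names, the toy response, and the model choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in for a sweep of EnergyPlus runs: X holds sampled input parameters
# (say, insulation R-value and infiltration rate), y the simulated energy use.
X = rng.uniform(low=[1.0, 0.1], high=[10.0, 1.0], size=(5000, 2))
y = 1000.0 / X[:, 0] + 400.0 * X[:, 1] + rng.normal(0.0, 5.0, 5000)

agent = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Calibration: among many candidate parameter sets, keep the one whose
# predicted usage best matches the measured value -- no simulator re-runs.
measured = 450.0
candidates = rng.uniform(low=[1.0, 0.1], high=[10.0, 1.0], size=(100_000, 2))
best = candidates[np.argmin(np.abs(agent.predict(candidates) - measured))]
print("calibrated parameters:", best)
```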

  12. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    SciTech Connect

    Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby; Worley, Patrick H

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
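
    To make the task-mapping idea concrete, the following is a hedged sketch, not one of the paper's reordering methods (which include spectral bisection and neighbor-join trees): a simple greedy placement that takes a rank-to-rank traffic matrix and a slot-to-slot hop-distance matrix and tries to put heavily communicating ranks on nearby node slots.

```python
# Illustrative greedy task mapping (an assumption-laden sketch, not the
# paper's algorithms): place heavy communicators on close node slots.
import numpy as np

def greedy_mapping(comm, dist):
    """comm[i, j]: traffic between MPI ranks i and j (symmetric).
    dist[a, b]: network hops between allocated node slots a and b.
    Returns mapping[rank] = slot, heuristically reducing sum(traffic * hops)."""
    n = comm.shape[0]
    mapping = -np.ones(n, dtype=int)
    free = set(range(n))
    # Seed: heaviest-communicating rank pair onto the closest slot pair.
    i, j = np.unravel_index(np.argmax(comm), comm.shape)
    a, b = np.unravel_index(np.argmin(dist + 1e9 * np.eye(n)), dist.shape)
    mapping[i], mapping[j] = a, b
    free -= {a, b}
    placed = [i, j]
    while len(placed) < n:
        remaining = [r for r in range(n) if mapping[r] < 0]
        # Next rank: the one with the most traffic to already-placed ranks.
        r = max(remaining, key=lambda r: comm[r, placed].sum())
        # Slot minimizing hop-weighted traffic to the placed ranks' slots.
        s = min(free, key=lambda s: sum(comm[r, p] * dist[s, mapping[p]] for p in placed))
        mapping[r] = s
        free.remove(s)
        placed.append(r)
    return mapping

# Toy usage: 8 ranks with ring-shaped traffic on a linear node layout.
n = 8
comm = np.zeros((n, n))
for i in range(n):
    comm[i, (i + 1) % n] = comm[(i + 1) % n, i] = 1.0
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
print(greedy_mapping(comm, dist))
```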

  13. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    SciTech Connect

    Bethel, E. Wes; van Rosendale, John; Southard, Dale; Gaither, Kelly; Childs, Hank; Brugger, Eric; Ahern, Sean

    2010-12-01

    Supercomputing Centers (SCs) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys does not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons), and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"

  14. ASCI Red -- Experiences and lessons learned with a massively parallel teraFLOP supercomputer

    SciTech Connect

    Christon, M.A.; Crawford, D.A.; Hertel, E.S.; Peery, J.S.; Robinson, A.C.

    1997-06-01

    The Accelerated Strategic Computing Initiative (ASCI) program involves Sandia, Los Alamos and Lawrence Livermore National Laboratories. At Sandia National Laboratories, ASCI applications include large deformation transient dynamics, shock propagation, electromechanics, and abnormal thermal environments. In order to resolve important physical phenomena in these problems, it is estimated that meshes ranging from 10{sup 6} to 10{sup 9} grid points will be required. The ASCI program is relying on the use of massively parallel supercomputers initially capable of delivering over 1 TFLOPs to perform such demanding computations. The ASCI Red machine at Sandia National Laboratories consists of over 4,500 computational nodes with a peak computational rate of 1.8 TFLOPs, 567 GBytes of memory, and 2 TBytes of disk storage. Regardless of the peak FLOP rate, there are many issues surrounding the use of massively parallel supercomputers in a production environment. These issues include parallel I/O, mesh generation, visualization, archival storage, high-bandwidth networking and the development of parallel algorithms. In order to illustrate these issues and their solution with respect to ASCI Red, demonstration calculations of time-dependent buoyancy-dominated plumes, electromechanics, and shock propagation will be presented.

  15. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    SciTech Connect

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
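
    The core of the VPIN indicator mentioned above is simple: trades are grouped into equal-volume buckets, and the absolute order-flow imbalance is averaged over a rolling window of buckets. Below is a minimal sketch assuming pre-classified buy/sell volumes per trade; the bucket size, window length, and classification scheme are illustrative, not the paper's exact procedure.

```python
# Illustrative VPIN sketch: equal-volume buckets, rolling mean of the
# absolute order-flow imbalance. Constants and the pre-classified buy/sell
# split are assumptions for the example, not the paper's exact procedure.
import numpy as np

def vpin(buy_volume, sell_volume, bucket_size, window=50):
    total = buy_volume + sell_volume
    cum = np.cumsum(total)
    # Indices where cumulative volume crosses each bucket boundary.
    edges = np.searchsorted(cum, np.arange(bucket_size, cum[-1], bucket_size))
    imbalances = []
    start = 0
    for end in edges:
        vb = buy_volume[start:end + 1].sum()
        vs = sell_volume[start:end + 1].sum()
        imbalances.append(abs(vb - vs) / max(vb + vs, 1e-12))
        start = end + 1
    # Rolling mean over the last `window` buckets is the VPIN series.
    kernel = np.ones(window) / window
    return np.convolve(np.array(imbalances), kernel, mode="valid")

# Toy usage with synthetic, already-classified trade volumes.
rng = np.random.default_rng(0)
buys = rng.integers(0, 100, 20_000).astype(float)
sells = rng.integers(0, 100, 20_000).astype(float)
print(vpin(buys, sells, bucket_size=5_000.0, window=50)[:5])
```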

  16. Vector performance analysis of three supercomputers - Cray-2, Cray Y-MP, and ETA10-Q

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod A.

    1989-01-01

    Results are presented of a series of experiments to study the single-processor performance of three supercomputers: Cray-2, Cray Y-MP, and ETA10-Q. The main object of this study is to determine the impact of certain architectural features on the performance of modern supercomputers. Features such as clock period, memory links, memory organization, multiple functional units, and chaining are considered. A simple performance model is used to examine the impact of these features on the performance of a set of basic operations. The results of implementing this set on these machines for three vector lengths and three memory strides are presented and compared. For unit stride operations, the Cray Y-MP outperformed the Cray-2 by as much as three times and the ETA10-Q by as much as four times for these operations. Moreover, unlike the Cray-2 and ETA10-Q, even-numbered strides do not cause a major performance degradation on the Cray Y-MP. Two numerical algorithms are also used for comparison. For three problem sizes of both algorithms, the Cray Y-MP outperformed the Cray-2 by 43 percent to 68 percent and the ETA10-Q by four to eight times.
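
    The stride sensitivity measured on these vector machines has an easy-to-demonstrate modern analogue: summing the same number of elements at increasing memory strides lowers effective bandwidth. The toy timing script below is an assumption (any current machine with NumPy), not a reconstruction of the paper's benchmarks.

```python
# Toy demonstration of stride effects on effective memory bandwidth:
# each pass sums exactly n elements, but spaced farther apart in memory.
import time
import numpy as np

n = 2_000_000                              # elements summed, for every stride
for stride in (1, 2, 8, 64):
    a = np.arange(n * stride, dtype=np.float64)
    view = a[::stride]                     # n elements, `stride` doubles apart
    view.sum()                             # warm-up pass
    t0 = time.perf_counter()
    view.sum()
    dt = time.perf_counter() - t0
    print(f"stride {stride:3d}: {dt * 1e3:7.2f} ms for {n} elements")
```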

  17. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP) in a massively parallel supercomputing environment - a case study on JUQUEEN (IBM Blue Gene/Q)

    NASA Astrophysics Data System (ADS)

    Gasper, F.; Goergen, K.; Kollet, S.; Shrestha, P.; Sulis, M.; Rihani, J.; Geimer, M.

    2014-06-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing non-linear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models, which are discussed in this study using the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm and require memory and load balancing considerations in the exchange of the coupling fields between different component models and in the allocation of computational resources, respectively. These considerations can be addressed with advanced profiling and tracing tools, leading to the efficient use of massively parallel computing environments, which is then mainly determined by the parallel performance of individual component models. However, the problem of model I/O and initialization in the peta-scale range requires major attention, because it constitutes a true big data challenge that remains unsolved in the perspective of future exa-scale capabilities.
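
    The MPMD coupling pattern described above can be illustrated with a minimal sketch, assuming mpi4py; the component names and the exchange pattern are simplified placeholders, not TerrSysMP's actual coupler interface. One MPI job is split into per-component communicators, each component advances independently, and the component roots exchange coupling fields each step.

```python
# Minimal MPMD-style coupling sketch (assumes mpi4py; run with >= 2 ranks,
# e.g. `mpirun -n 4 python coupling_sketch.py`). Component names are
# illustrative placeholders, not TerrSysMP's coupler.
from mpi4py import MPI

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

# Split the world communicator into two component models.
color = 0 if rank < size // 2 else 1        # 0 = atmosphere, 1 = land/subsurface
comp = world.Split(color, key=rank)

for step in range(10):
    # Each component advances its own model; a reduction stands in for a step.
    local_flux = comp.allreduce(1.0)        # e.g., domain-summed surface flux

    # Component roots exchange coupling fields through the world communicator,
    # then broadcast the received field inside their own component.
    if comp.Get_rank() == 0:
        peer_root = size // 2 if color == 0 else 0
        remote_flux = world.sendrecv(local_flux, dest=peer_root, source=peer_root)
    else:
        remote_flux = None
    remote_flux = comp.bcast(remote_flux, root=0)
```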

  18. Advanced planetary studies

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Results of planetary advanced studies and planning support provided by Science Applications, Inc. staff members to Earth and Planetary Exploration Division, OSSA/NASA, for the period 1 February 1981 to 30 April 1982 are summarized. The scope of analyses includes cost estimation, planetary missions performance, solar system exploration committee support, Mars program planning, Galilean satellite mission concepts, and advanced propulsion data base. The work covers 80 man-months of research. Study reports and related publications are included in a bibliography section.

  19. Advanced fossil energy utilization

    SciTech Connect

    Shekhawat, D.; Berry, D.; Spivey, J.; Pennline, H.; Granite, E.

    2010-01-01

    This special issue of Fuel is a selection of papers presented at the symposium ‘Advanced Fossil Energy Utilization’, co-sponsored by the Fuels and Petrochemicals Division and the Research and New Technology Committee at the 2009 American Institute of Chemical Engineers (AIChE) Spring National Meeting, Tampa, FL, April 26–30, 2009.

  20. Home | Division of Cancer Prevention

    Cancer.gov

    Our Research: The Division of Cancer Prevention (DCP) conducts and supports research to determine a person's risk of cancer and to find ways to reduce the risk. This knowledge is critical to making progress against cancer because risk varies over the lifespan as genetic and epigenetic changes can transform healthy tissue into cancer.

  1. Environmental Transport Division: 1979 report

    SciTech Connect

    Murphy, C.E. Jr.; Schubert, J.F.; Bowman, W.W.; Adams, S.E.

    1980-03-01

    During 1979, the Environmental Transport Division (ETD) of the Savannah River Laboratory conducted atmospheric, terrestrial, aquatic, and marine studies, which are described in a series of articles. Separate abstracts were prepared for each. Publications written about the 1979 research are listed at the end of the report.

  2. Synthetic Division and Matrix Factorization

    ERIC Educational Resources Information Center

    Barabe, Samuel; Dubeau, Franc

    2007-01-01

    Synthetic division is viewed as a change of basis for polynomials written under the Newton form. Then, the transition matrices obtained from a sequence of changes of basis are used to factorize the inverse of a bidiagonal matrix or a block bidiagonal matrix.
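
    For readers who want the mechanics behind the abstraction above: synthetic division of p(x) by (x - c) is Horner's scheme, carrying a running value down the coefficient list. A short illustrative sketch, not taken from the paper:

```python
# Synthetic division of p(x) by (x - c) via Horner's scheme.
def synthetic_division(coeffs, c):
    """coeffs: p(x) coefficients, highest degree first.
    Returns (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for a in coeffs[1:]:
        out.append(a + c * out[-1])   # bring down, multiply by c, add
    return out[:-1], out[-1]

# Example: (x^3 - 6x^2 + 11x - 6) / (x - 1) = x^2 - 5x + 6, remainder 0.
q, r = synthetic_division([1, -6, 11, -6], 1)
print(q, r)   # [1, -5, 6] 0
```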

  3. Psychological Sciences Division: 1985 Programs.

    ERIC Educational Resources Information Center

    Office of Naval Research, Washington, DC. Psychological Sciences Div.

    This booklet describes research carried out under sponsorship of the Psychological Sciences Division of the U.S. Office of Naval Research during Fiscal Year 1985. The booklet is divided into three programmatic research areas: (1) Engineering Psychology; (2) Personnel and Training; and (3) Group Psychology. Each program is described by an overview…

  4. Manpower Division Looks at CETA

    ERIC Educational Resources Information Center

    American Vocational Journal, 1977

    1977-01-01

    The Manpower Division at the American Vocational Association (AVA) convention in Houston was concerned about youth unemployment and about the Comprehensive Employment and Training Act (CETA)--its problems and possibilities. The panel discussion reported here reveals some differing perspectives and a general consensus--that to improve their role in…

  5. 78 FR 17431 - Antitrust Division

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-21

    ...) of the Act on July 30, 2001 (66 FR 39336). The last notification was filed with the Department on... January 2, 2013 (78 FR 117). Patricia A. Brink, Director of Civil Enforcement, Antitrust Division. BILLING...--Interchangeable Virtual Instruments Foundation, Inc. Notice is hereby given that, on February 22, 2013,...

  6. Preschool Children's Informal Division Concepts.

    ERIC Educational Resources Information Center

    Blevins-Knabe, Belinda

    The purpose of this study was to examine the division procedures of preschool children to determine whether such procedures involved one-to-one correspondence. Large and small numerosity trials were included so that the amount of effort and ease of using other procedures would vary. Odd and even number trials were included to determine whether…

  7. 77 FR 54611 - Antitrust Division

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-05

    ... Section 6(b) of the Act on June 30, 2000 (65 FR 40693). The last notification was filed with the... on June 8, 2012 (77 FR 34067). Patricia A. Brink, Director of Civil Enforcement, Antitrust Division...; Tiburon Associates, Inc., Alexandria, VA; Streamline Automation, LLC (dba C3 Propulsion), Huntsville,...

  8. International Division Regional Advisers' Reports

    ERIC Educational Resources Information Center

    Johnson, Jenny

    2006-01-01

    An Adviser's primary job is to nominate candidates for the five annual ID awards; this involves working with the five International Division award coordinators. Advisers also submit an annual report on activities in their country/region to their Area Coordinators who, in turn, report on educational technology activities in their Areas. In the…

  9. 75 FR 70031 - Antitrust Division

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-16

    ... Antitrust Division Notice Pursuant to the National Cooperative Research and Production Act of 1993--Open... National Cooperative Research and Production Act of 1993, 15 U.S.C. 4301 et seq. (``the Act''), Open Axis... branding program based upon distinctive trademarks to create high customer awareness of, demand for,...

  10. Computational fluid dynamics: Complex flows requiring supercomputers. (Latest citations from the INSPEC: Information services for the Physics and Engineering Communities database). Published Search

    SciTech Connect

    Not Available

    1993-08-01

    The bibliography contains citations concerning computational fluid dynamics (CFD), a new technology in computational science for complex flow simulations using supercomputers. Citations discuss the design, analysis, and performance evaluation of aircraft, rockets and missiles, and automobiles. References to supercomputers, array processors, parallel processors, and computational software packages are included. (Contains 250 citations and includes a subject term index and title list.)

  11. Young Kim, PhD | Division of Cancer Prevention

    Cancer.gov

    Young S Kim, PhD, joined the Division of Cancer Prevention at the National Cancer Institute in 1998 as a Program Director who oversees and monitors NCI grants in the area of Nutrition and Cancer. She serves as an expert in nutrition, molecular biology, and genomics as they relate to cancer prevention. Dr. Kim assists with research initiatives that will advance nutritional science and lead to human health benefits.

  12. Circadian clocks and cell division

    PubMed Central

    2010-01-01

    Evolution has selected a system of two intertwined cell cycles: the cell division cycle (CDC) and the daily (circadian) biological clock. The circadian clock keeps track of solar time and programs biological processes to occur at environmentally appropriate times. One of these processes is the CDC, which is often gated by the circadian clock. The intermeshing of these two cell cycles is probably responsible for the observation that disruption of the circadian system enhances susceptibility to some kinds of cancer. The core mechanism underlying the circadian clockwork has been thought to be a transcription and translation feedback loop (TTFL), but recent evidence from studies with cyanobacteria, synthetic oscillators and immortalized cell lines suggests that the core circadian pacemaking mechanism that gates cell division in mammalian cells could be a post-translational oscillator (PTO). PMID:20890114

  13. Health, Safety, and Environment Division

    SciTech Connect

    Wade, C

    1992-01-01

    The primary responsibility of the Health, Safety, and Environmental (HSE) Division at the Los Alamos National Laboratory is to provide comprehensive occupational health and safety programs, waste processing, and environmental protection. These activities are designed to protect the worker, the public, and the environment. Meeting these responsibilities requires expertise in many disciplines, including radiation protection, industrial hygiene, safety, occupational medicine, environmental science and engineering, analytical chemistry, epidemiology, and waste management. New and challenging health, safety, and environmental problems occasionally arise from the diverse research and development work of the Laboratory, and research programs in HSE Division often stem from these applied needs. These programs continue but are also extended, as needed, to study specific problems for the Department of Energy. The results of these programs help develop better practices in occupational health and safety, radiation protection, and environmental science.

  14. Operational numerical weather prediction on a GPU-accelerated cluster supercomputer

    NASA Astrophysics Data System (ADS)

    Lapillonne, Xavier; Fuhrer, Oliver; Spörri, Pascal; Osuna, Carlos; Walser, André; Arteaga, Andrea; Gysi, Tobias; Rüdisühli, Stefan; Osterried, Katherine; Schulthess, Thomas

    2016-04-01

    The local area weather prediction model COSMO is used at MeteoSwiss to provide high resolution numerical weather predictions over the Alpine region. In order to benefit from the latest developments in computer technology, the model was optimized and adapted to run on Graphics Processing Units (GPUs). Thanks to these model adaptations and the acquisition of a dedicated hybrid supercomputer, a new set of operational applications has been introduced at MeteoSwiss: COSMO-1 (1 km deterministic), COSMO-E (2 km ensemble), and KENDA (data assimilation). These new applications correspond to an increase of a factor of 40x in computational load compared to the previous operational setup. We present an overview of the porting approach of the COSMO model to GPUs together with a detailed description of, and performance results on, the new hybrid Cray CS-Storm computer, Piz Kesch.

  15. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    SciTech Connect

    Gallarno, George; Rogers, James H; Maxwell, Don E

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  16. The application of "Helios" supercomputer in radiation safety studies for the IFMIF

    NASA Astrophysics Data System (ADS)

    Kondo, Keitaro; Fischer, Ulrich; Gröschel, Friedrich; Heinzel, Volker; Leichtle, Dieter; Serikov, Arkady

    2014-06-01

    The HELIOS supercomputer system at the International Fusion Energy Research Centre, Aomori, Japan, has been extensively utilized in radiation safety studies for the International Fusion Materials Irradiation Facility (IFMIF). This paper focuses on the neutronic analysis supporting the layout of the high energy beam transport (HEBT) section of IFMIF. The McDeLicious-11 Monte Carlo code, which is an enhancement to MCNP5, was utilized to simulate the neutron generation in the IFMIF lithium target through d-Li(d,xn) reactions, and the R2Smesh approach was utilized to evaluate the ambient dose distribution after shutdown. The necessary thickness of the biological shielding and additional local shielding for the HEBT section has been evaluated. The accessibility of the HEBT rooms and the necessary cooling time are discussed based on the results of the shutdown dose analysis.

  17. LARGE-SCALE SIMULATION OF BEAM DYNAMICS IN HIGH INTENSITY ION LINACS USING PARALLEL SUPERCOMPUTERS

    SciTech Connect

    R. RYNE; J. QIANG

    2000-08-01

    In this paper we present results of using parallel supercomputers to simulate beam dynamics in next-generation high intensity ion linacs. Our approach uses a three-dimensional space charge calculation with six types of boundary conditions. The simulations use a hybrid approach involving transfer maps to treat externally applied fields (including rf cavities) and parallel particle-in-cell techniques to treat the space-charge fields. The large-scale simulation results presented here represent a three order of magnitude improvement in simulation capability, in terms of problem size and speed of execution, compared with typical two-dimensional serial simulations. Specific examples will be presented, including simulation of the spallation neutron source (SNS) linac and the Low Energy Demonstrator Accelerator (LEDA) beam halo experiment.
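
    For readers unfamiliar with the particle-in-cell technique scaled up in this work, a minimal one-dimensional electrostatic sketch follows. It is illustrative only; the simulations described above are three-dimensional, parallel, and combine transfer maps for external fields with the PIC space-charge solve. The sketch shows the core loop: deposit charge, solve for the field, gather it at the particles, and push.

```python
# Minimal 1D electrostatic particle-in-cell loop in normalized units
# (illustrative sketch only, not the paper's 3D parallel codes).
import numpy as np

ng, n_p, L, dt = 64, 10_000, 2 * np.pi, 0.1
dx = L / ng
rng = np.random.default_rng(1)
x = rng.uniform(0, L, n_p)                # particle positions
v = rng.normal(0.0, 1.0, n_p)             # particle velocities
q_over_m, weight = -1.0, L / n_p          # electrons, unit mean density

for step in range(100):
    # 1) Deposit charge on the grid (nearest-grid-point for brevity).
    cells = (x / dx).astype(int) % ng
    density = np.bincount(cells, minlength=ng) * weight / dx
    rho = 1.0 - density                   # electrons + uniform ion background
    # 2) Field solve: d2(phi)/dx2 = -rho via FFT, then E = -d(phi)/dx.
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                            # dodge divide-by-zero; zero mode unused
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0
    E = np.real(np.fft.ifft(-1j * k * phi_hat))
    # 3) Gather the field at particle positions and push (leapfrog-style).
    v += q_over_m * E[cells] * dt
    x = (x + v * dt) % L
```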

  18. Palacios and Kitten: high performance operating systems for scalable virtualized and native supercomputing.

    SciTech Connect

    Widener, Patrick; Jaconette, Steven; Bridges, Patrick G.; Xia, Lei; Dinda, Peter; Cui, Zheng.; Lange, John; Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  19. A study of inelastic behavior of reinforced concrete shells using supercomputers

    SciTech Connect

    Min Chang Shik.

    1992-01-01

    This study relates the point-wise limit state design method to the ultimate behavior of reinforced-concrete shells as a unified approach. A vector algorithm developed on a Cray Y-MP supercomputer is used to implement an inelastic finite element program. A bending inelastic finite element model, which incorporates the rotating cracking model by layering the subdivided elements, is developed. Effects of large deformation, tension stiffening, and dowel action are ignored, and the bond between the concrete and steel and among the subdivided layers is assumed to be perfect. The biaxial behavior of uncracked concrete and the uniaxial behavior of a cracked element are assumed to be linear elastic in compression and in tension. Based on this analysis, the current design method provides adequate strength against ultimate failure.

  20. Advanced Pacemaker

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Synchrony, developed by St. Jude Medical's Cardiac Rhythm Management Division (formerly known as Pacesetter Systems, Inc.) is an advanced state-of-the-art implantable pacemaker that closely matches the natural rhythm of the heart. The companion element of the Synchrony Pacemaker System is the Programmer Analyzer APS-II which allows a doctor to reprogram and fine tune the pacemaker to each user's special requirements without surgery. The two-way communications capability that allows the physician to instruct and query the pacemaker is accomplished by bidirectional telemetry. APS-II features 28 pacing functions and thousands of programming combinations to accommodate diverse lifestyles. Microprocessor unit also records and stores pertinent patient data up to a year.

  1. Water Resources Division training catalog

    USGS Publications Warehouse

    Hotchkiss, W.R.; Foxhoven, L.A.

    1984-01-01

    The National Training Center provides the technical and management sessions necessary for the conduct of the U.S. Geological Survey's training programs. This catalog describes the facilities and staff at the Lakewood Training Center and describes Water Resources Division training courses available through the center. In addition, the catalog describes the procedures for gaining admission and the formulas for calculating fees, and includes a discussion of course evaluations. (USGS)

  2. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    NASA Astrophysics Data System (ADS)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered commute to other multi-core and accelerated (e.g., via GPU) platforms and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  3. The Arabidopsis Cell Division Cycle

    PubMed Central

    Gutierrez, Crisanto

    2009-01-01

    Plant cells have evolved a complex circuitry to regulate cell division. In many aspects, the plant cell cycle follows a basic strategy similar to other eukaryotes. However, several key issues are unique to plant cells. In this chapter, both the conserved and unique cellular and molecular properties of the plant cell cycle are reviewed. Beyond the division of individual cells, the specific characteristics of plant organogenesis and development make cell proliferation control of primary importance during development. Therefore, special attention should be given to plant cell division control in a developmental context. Proper organogenesis depends on the formation of different cell types. In plants, many of the processes leading to cell differentiation rely on the occurrence of a different cycle, termed the endoreplication cycle, whereby cells undergo repeated full genome duplication events in the absence of mitosis and increase their ploidy. Recent findings are focusing on the relevance of changes in chromatin organization for correct cell cycle progression and, conversely, on the relevance of correctly functioning chromatin remodelling complexes to prevent alterations in both the cell cycle and the endocycle. PMID:22303246

  4. Analytical Chemistry Division's sample transaction system

    SciTech Connect

    Stanton, J.S.; Tilson, P.A.

    1980-10-01

    The Analytical Chemistry Division uses the DECsystem-10 computer for a wide range of tasks: sample management, timekeeping, quality assurance, and data calculation. This document describes the features and operating characteristics of many of the computer programs used by the Division. The descriptions are divided into chapters which cover all of the information about one aspect of the Analytical Chemistry Division's computer processing.

  5. Advanced Aerospace Materials by Design

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Djomehri, Jahed; Wei, Chen-Yu

    2004-01-01

    The advances in the emerging field of nanophase thermal and structural composite materials; materials with embedded sensors and actuators for morphing structures; light-weight composite materials for energy and power storage; and large surface area materials for in-situ resource generation and waste recycling are expected to revolutionize the capabilities of virtually every system comprising future robotic and human Moon and Mars exploration missions. A high-performance multiscale simulation platform, including the computational capabilities and resources of Columbia, the new supercomputer, is being developed to discover, validate, and prototype the next generation of such advanced materials. This exhibit will describe the porting and scaling of multiscale physics-based core computer simulation codes for discovering and designing carbon nanotube-polymer composite materials for light-weight load-bearing structural and thermal protection applications.

  6. High Energy Physics Division semiannual report of research activities. Semi-annual progress report, July 1, 1995--December 31, 1995

    SciTech Connect

    Norem, J.; Bajt, D.; Rezmer, R.; Wagner, R.

    1996-10-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period July 1, 1995 - December 31, 1995. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included.

  7. Mitotic spindle rotation and mode of cell division in the developing telencephalon.

    PubMed

    Haydar, Tarik F; Ang, Eugenius; Rakic, Pasko

    2003-03-01

    The mode of neural stem cell division in the forebrain proliferative zones profoundly influences neocortical growth by regulating the number and diversity of neurons and glia. Long-term time-lapse multiphoton microscopy of embryonic mouse cortex reveals new details of the complex three-dimensional rotation and oscillation of the mitotic spindle before stem cell division. Importantly, the duration and amplitude of spindle movement predicts and specifies the eventual mode of mitotic division. These technological advances have provided dramatic data and insights into the kinetics of neural stem cell division by elucidating the involvement of spindle rotation in selection of the cleavage plane and the mode of neural stem cell division that together determine the size of the mammalian neocortex. PMID:12589023

  8. Chemical Technology Division annual technical report, 1990

    SciTech Connect

    Not Available

    1991-05-01

    Highlights of the Chemical Technology (CMT) Division's activities during 1990 are presented. In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for coal- fired magnetohydrodynamics and fluidized-bed combustion; (3) methods for recovery of energy from municipal waste and techniques for treatment of hazardous organic waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for a high-level waste repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams, concentrating plutonium solids in pyrochemical residues by aqueous biphase extraction, and treating natural and process waters contaminated by volatile organic compounds; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) physical chemistry of selected materials in environments simulating those of fission and fusion energy systems. The Division also has a program in basic chemistry research in the areas of fluid catalysis for converting small molecules to desired products; materials chemistry for superconducting oxides and associated and ordered solutions at high temperatures; interfacial processes of importance to corrosion science, high-temperature superconductivity, and catalysis; and the geochemical processes responsible for trace-element migration within the earth's crust. The Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the scientific and engineering programs at Argonne National Laboratory (ANL). 66 refs., 69 figs., 6 tabs.

  9. Activities of the Structures Division, Lewis Research Center

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The purpose of the NASA Lewis Research Center Structures Division's 1990 Annual Report is to give a brief, but comprehensive, review of the technical accomplishments of the Division during the past calendar year. The report is organized topically to match the Center's Strategic Plan. Over the years, the Structures Division has developed the technology base necessary for improving the future of aeronautical and space propulsion systems. In the future, propulsion systems will need to be lighter, to operate at higher temperatures, and to be more reliable in order to achieve higher performance. Achieving these goals is complex and challenging. Our approach has been to work cooperatively with both industry and universities to develop the technology necessary for state-of-the-art advancement in aeronautical and space propulsion systems. The Structures Division consists of four branches: Structural Mechanics, Fatigue and Fracture, Structural Dynamics, and Structural Integrity. This publication describes the work of the four branches by three topic areas of research: (1) Basic Discipline; (2) Aeropropulsion; and (3) Space Propulsion. Each topic area is further divided into the following: (1) Materials; (2) Structural Mechanics; (3) Life Prediction; (4) Instruments, Controls, and Testing Techniques; and (5) Mechanisms. The publication covers 78 separate topics with a bibliography containing 159 citations. We hope you will find the publication interesting as well as useful.

  10. Annual Advances in Cancer Prevention Lecture | Division of Cancer Prevention

    Cancer.gov

    2016 Keynote Lecture Polyvalent Vaccines Targeting Oncogenic Driver Pathways A special keynote lecture became part of the NCI Summer Curriculum in Cancer Prevention in 2000. This lecture will be held on Thursday, July 21, 2016 at 1:30pm at Masur Auditorium, Building 10, NIH Main Campus, Bethesda, MD. This year’s keynote speaker is Dr. Mary L. (Nora) Disis, MD. |

  11. Annual Advances in Cancer Prevention Lecture | Division of Cancer Prevention

    Cancer.gov

    2015 Keynote Lecture HPV Vaccination: Preventing More with Less A special keynote lecture became part of the NCI Summer Curriculum in Cancer Prevention in 2000. This lecture will be held on Thursday, July 23, 2015 at 3:00pm at Masur Auditorium, Building 10, NIH Main Campus, Bethesda, MD. This year’s keynote speaker is Dr. Douglas Lowy, NCI Acting Director. |

  12. Emerging facets of plastid division regulation.

    PubMed

    Basak, Indranil; Møller, Simon Geir

    2013-02-01

    Plastids are complex organelles that are integrated into the plant host cell where they differentiate and divide in tune with plant differentiation and development. In line with their prokaryotic origin, plastid division involves both evolutionary conserved proteins and proteins of eukaryotic origin where the host has acquired control over the process. The plastid division apparatus is spatially separated between the stromal and the cytosolic space but where clear coordination mechanisms exist between the two machineries. Our knowledge of the plastid division process has increased dramatically during the past decade and recent findings have not only shed light on plastid division enzymology and the formation of plastid division complexes but also on the integration of the division process into a multicellular context. This review summarises our current knowledge of plastid division with an emphasis on biochemical features, the functional assembly of protein complexes and regulatory features of the overall process. PMID:22965912

  13. 1. Oblique view of 215 Division Street, looking southwest, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Oblique view of 215 Division Street, looking southwest, showing front (east) facade and north side, 213 Division Street is visible at left and 217 Division Street appears at right - 215 Division Street (House), Rome, Floyd County, GA

  14. Description of the programs and facilities of the Physics Division

    SciTech Connect

    Not Available

    1992-10-01

    The major emphasis of our experimental nuclear physics research is in Heavy-Ion Physics, centered at the recently completed ATLAS facility. ATLAS is a designated National User Facility and is based on superconducting radio-frequency technology developed in the Physics Division. In addition, the Division has strong programs in Medium-Energy Physics and in Weak-Interaction Physics as well as in accelerator development. Our nuclear theory research spans a wide range of interests including nuclear dynamics with subnucleonic degrees of freedom, dynamics of many-nucleon systems, nuclear structure, and heavy-ion interactions. This research makes contact with experimental research programs in intermediate-energy and heavy-ion physics, both within the Division and on the national scale. The Atomic Physics program, the largest part of which is accelerator-based, primarily uses ATLAS, a 5-MV Dynamitron accelerator, and a highly stable 150-kV accelerator. A synchrotron-based atomic physics program has recently been initiated, with current research at the National Synchrotron Light Source in preparation for a program at the Advanced Photon Source at Argonne. The principal interests of the Atomic Physics program are in the interactions of fast atomic and molecular ions with solids and gases and in the laser spectroscopy of exotic species. The program is currently being expanded to take advantage of the unique research opportunities in synchrotron-based research that will present themselves when the Advanced Photon Source comes on line at Argonne. These topics are discussed briefly in this report.

  15. ARC3 is a stromal Z-ring accessory protein essential for plastid division

    PubMed Central

    Maple, Jodi; Vojta, Lea; Soll, Jurgen; Møller, Simon G

    2007-01-01

    In plants, chloroplast division is an integral part of development, and these vital organelles arise by binary fission from pre-existing cytosolic plastids. Chloroplasts arose by endosymbiosis and although they have retained elements of the bacterial cell division machinery to execute plastid division, they have evolved to require two functionally distinct forms of the FtsZ protein and have lost elements of the Min machinery required for Z-ring placement. Here, we analyse the plastid division component accumulation and replication of chloroplasts 3 (ARC3) and show that ARC3 forms part of the stromal plastid division machinery. ARC3 interacts specifically with AtFtsZ1, acting as a Z-ring accessory protein and defining a unique function for this family of FtsZ proteins. ARC3 is involved in division site placement, suggesting that it might functionally replace MinC, representing an important advance in our understanding of the mechanism of chloroplast division and the evolution of the chloroplast division machinery. PMID:17304239

  16. Advanced Computing for Manufacturing.

    ERIC Educational Resources Information Center

    Erisman, Albert M.; Neves, Kenneth W.

    1987-01-01

    Discusses ways that supercomputers are being used in the manufacturing industry, including the design and production of airplanes and automobiles. Describes problems that need to be solved in the next few years for supercomputers to assume a major role in industry. (TW)

  17. Instrumentation and Controls Division Overview: Sensors Development for Harsh Environments at Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Zeller, Mary V.; Lei, Jih-Fen

    2002-01-01

    The Instrumentation and Controls Division is responsible for planning, conducting and directing basic and applied research on advanced instrumentation and controls technologies for aerospace propulsion and power applications. The Division's advanced research in harsh environment sensors, high temperature high power electronics, MEMS (microelectromechanical systems), nanotechnology, high data rate optical instrumentation, active and intelligent controls, and health monitoring and management will enable self-feeling, self-thinking, self-reconfiguring and self-healing Aerospace Propulsion Systems. These research areas address Agency challenges to deliver aerospace systems with reduced size and weight, and increased functionality and intelligence for future NASA missions in advanced aeronautics, economical space transportation, and pioneering space exploration. The Division also actively supports educational and technology transfer activities aimed at benefiting all humankind.

  18. NEN Division Funding Gap Analysis

    SciTech Connect

    Esch, Ernst I.; Goettee, Jeffrey D.; Desimone, David J.; Lakis, Rollin E.; Miko, David K.

    2012-09-05

    The work in NEN Division revolves around proliferation detection. The sponsor funding model seems to have shifted over the last decades. For the past three lustra, sponsors have mainly been interested in funding ideas and detection systems that are already at technical readiness level 6 (TRL 6 -- one step below an industrial prototype) or higher. Once this level is reached, the sponsoring agency is willing to fund the commercialization, implementation, and training for the systems (TRL 8, 9). These sponsors are looking for fast-turnaround (1-2 year) technology development efforts to implement technology. To support the critical national and international needs for nonproliferation solutions, we have to maintain a steady stream of subject matter expertise, from the fundamental principles of radiation detection through prototype development, all the way to implementation and training of others. NEN Division has large funding gaps in the Valley of Death region. In the current competitive climate for nuclear nonproliferation projects, it is imperative that we increase our lead in this field.

  19. Stochastic models for cell division

    NASA Astrophysics Data System (ADS)

    Stukalin, Evgeny; Sun, Sean

    2013-03-01

    The probability of cell division per unit time depends strongly on the age of a cell, i.e., the time elapsed since its birth. The theory of cell populations in the age-time representation is systematically applied to model cell division for different spreads in generation times. We use stochastic simulations to address the same issue at the level of individual cells. Our approach, unlike the deterministic theory, enables us to analyze the size fluctuations of cell colonies under different growth conditions (in the absence and in the presence of cell death, for initially synchronized and asynchronous cell populations, and for conditions of restricted growth). We find a simple quantitative relation between the asymptotic values of the relative size fluctuations around the mean for initially synchronized cell populations under growth and the coefficients of variation of the generation times. The effect of the initial age distribution on asynchronous growth of cell cultures is also studied by simulations. The influence of constant cell death on the size fluctuations of cell populations is found to be essential even for small cell death rates, i.e., for realistic growth conditions. The stochastic model is generalized to the biologically relevant case that involves both cell reproduction and cell differentiation.
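
    The age-dependent division process described above lends itself to a compact stochastic simulation. The sketch below is not the authors' code: it assumes, purely for illustration, gamma-distributed generation times with a chosen coefficient of variation and a small constant death probability per division event, and it estimates the relative size fluctuations of initially synchronized colonies.

        import numpy as np

        rng = np.random.default_rng(0)

        def grow_colony(t_end, mean_gen=1.0, cv=0.2, death_prob=0.05):
            """Simulate one colony from a single newborn cell.

            Generation times are gamma-distributed with mean mean_gen and
            coefficient of variation cv; at each division event the cell
            instead dies with probability death_prob (a crude stand-in for
            a small constant death rate). Returns the number of cells alive
            at time t_end.
            """
            shape = 1.0 / cv**2
            scale = mean_gen / shape
            pending = [rng.gamma(shape, scale)]   # absolute division times
            n_alive = 1
            while pending:
                t = pending.pop()
                if t > t_end:
                    continue                      # cell still alive at t_end
                if rng.random() < death_prob:
                    n_alive -= 1                  # cell dies instead of dividing
                    continue
                n_alive += 1                      # one mother -> two daughters
                pending.append(t + rng.gamma(shape, scale))
                pending.append(t + rng.gamma(shape, scale))
            return n_alive

        sizes = np.array([grow_colony(t_end=6.0) for _ in range(2000)])
        print("mean colony size:", sizes.mean())
        print("relative size fluctuation:", sizes.std() / sizes.mean())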

  20. Structures Division 1994 Annual Report

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The NASA Lewis Research Center Structures Division is an international leader and pioneer in developing new structural analysis, life prediction, and failure analysis related to rotating machinery and, more specifically, to hot section components in air-breathing aircraft engines and spacecraft propulsion systems. The research comprises both deterministic and probabilistic methodologies. Studies include, but are not limited to, high-cycle and low-cycle fatigue as well as material creep. Structural failure is studied at both the micro- and macrolevels. Nondestructive evaluation methods related to structural reliability are developed, applied, and evaluated. The materials from which structural components are made, studied, and tested include monolithics and metal-matrix, polymer-matrix, and ceramic-matrix composites. Aeroelastic models are developed and used to determine the cyclic loading and life of fan and turbine blades. Life models are developed and tested for bearings, seals, and other mechanical components, such as magnetic suspensions. Results of these studies are published in NASA technical papers and reference publications as well as in technical society journal articles. The results of the work of the Structures Division and the bibliography of its publications for calendar year 1994 are presented.

  1. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    SciTech Connect

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as the IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide excellent test grounds for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM
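
    The cellular decomposition mentioned above rests on the standard linked-cell idea: space is partitioned into cells no smaller than the interaction cutoff, so each particle is tested only against particles in its own and the 26 neighboring cells, giving O(N) work at fixed density. The sketch below is a generic single-node illustration of that idea, not the authors' framework; all names are invented.

        import numpy as np
        from collections import defaultdict

        def build_cells(positions, box, cutoff):
            """Assign particles to cubic cells with side >= cutoff."""
            n_cells = np.maximum((box // cutoff).astype(int), 1)
            cell_size = box / n_cells
            cells = defaultdict(list)
            for i, r in enumerate(positions):
                idx = tuple((r // cell_size).astype(int) % n_cells)
                cells[idx].append(i)
            return cells, n_cells

        def neighbor_pairs(positions, box, cutoff):
            """Yield each pair (i, j) with separation < cutoff exactly once.

            Only the 27 cells around each particle are searched, so the cost
            is O(N) at fixed density instead of O(N^2). Assumes at least
            three cells per dimension so the neighbor offsets are distinct.
            """
            cells, n_cells = build_cells(positions, box, cutoff)
            offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                                    for dy in (-1, 0, 1)
                                    for dz in (-1, 0, 1)]
            for idx, members in cells.items():
                for off in offsets:
                    nidx = tuple((np.array(idx) + off) % n_cells)
                    for i in members:
                        for j in cells.get(nidx, []):
                            if j <= i:
                                continue
                            d = positions[i] - positions[j]
                            d -= box * np.round(d / box)   # minimum image
                            if d @ d < cutoff**2:
                                yield i, j

        box = np.array([10.0, 10.0, 10.0])
        pos = np.random.default_rng(1).uniform(0.0, 10.0, size=(500, 3))
        print(sum(1 for _ in neighbor_pairs(pos, box, 2.5)), "pairs within cutoff")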

  2. News | Division of Cancer Prevention

    Cancer.gov

    News about scientific advances in cancer prevention, program activities, and new projects is included here in NCI press releases and fact sheets, articles from the NCI Cancer Bulletin, and Clinical Trial News from the NCI website.

  3. The John von Neumann Institute for Computing (NIC): A survey of its supercomputer facilities and its Europe-wide computational science activities

    NASA Astrophysics Data System (ADS)

    Attig, N.

    2006-03-01

    The John von Neumann Institute for Computing (NIC) at the Research Centre Jülich, Germany, is one of the leading supercomputing centres in Europe. Founded as a national centre in the mid-eighties, it now provides more and more resources to European scientists. This happens within EU-funded projects (I3HP, DEISA) or Europe-wide scientific collaborations. Beyond these activities, NIC started an initiative towards the new EU member states in summer 2004. Outstanding research groups are offered the opportunity to exploit the supercomputers at NIC to accelerate their investigations on leading-edge technology. The article gives an overview of the organisational structure of NIC, its current supercomputer systems, and its user support. Transnational Access (TA) within I3HP is described, as well as access via the initiative for new EU member states. The scope of these offers and the procedure for applying for supercomputer resources are described in detail.

  4. Chemical Technology Division annual technical report, 1992

    SciTech Connect

    Battles, J.E.; Myles, K.M.; Laidler, J.J.; Green, D.W.

    1993-06-01

    In this period, CMT conducted research and development in the following areas: (1) electrochemical technology, including advanced batteries and fuel cells; (2) technology for fluidized-bed combustion and coal-fired magnetohydrodynamics; (3) methods for treatment of hazardous waste, mixed hazardous/radioactive waste, and municipal solid waste; (4) the reaction of nuclear waste glass and spent fuel under conditions expected for an unsaturated repository; (5) processes for separating and recovering transuranic elements from nuclear waste streams, treating water contaminated with volatile organics, and concentrating radioactive waste streams; (6) recovery processes for discharged fuel and the uranium blanket in the Integral Fast Reactor (IFR); (7) processes for removal of actinides in spent fuel from commercial water-cooled nuclear reactors and burnup in IFRs; and (8) physical chemistry of selected materials (corium; Fe-U-Zr; tritium in LiAlO{sub 2}) in environments simulating those of fission and fusion energy systems. The Division also conducts basic research in catalytic chemistry associated with molecular energy resources and novel ceramic precursors; materials chemistry of superconducting oxides, electrified metal/solution interfaces, and molecular sieve structures; and the geochemical processes involved in water-rock interactions occurring in active hydrothermal systems. In addition, the Analytical Chemistry Laboratory in CMT provides a broad range of analytical chemistry support services to the technical programs at Argonne National Laboratory (ANL).

  5. Operational Characterization of Divisibility of Dynamical Maps.

    PubMed

    Bae, Joonwoo; Chruściński, Dariusz

    2016-07-29

    In this work, we give an operational characterization of the divisibility of dynamical maps in terms of the distinguishability of quantum channels. It is proven that the distinguishability of any pair of quantum channels does not increase under divisible maps, and that the full hierarchy of divisibility is isomorphic to the structure of entanglement between system and environment. This shows that (i) channel distinguishability is the operational quantity signifying (detecting) divisibility (indivisibility) of dynamical maps and (ii) the decision problem for the divisibility of maps is as hard as the separability problem in entanglement theory. We also provide an information-theoretic characterization of the divisibility of maps in terms of the conditional min-entropy. PMID:27517760
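
    For orientation, the notion of divisibility used here can be stated compactly. The definition below is the standard one from the open-quantum-systems literature, added for context rather than quoted from the paper: a dynamical map $\Lambda_t$ is divisible when it factors through every intermediate time,

    \[ \Lambda_t = V_{t,s}\,\Lambda_s, \qquad t \ge s \ge 0, \]

    and it is called P-divisible (respectively CP-divisible) when each intermediate map $V_{t,s}$ is positive (respectively completely positive). The hierarchy of divisibility referred to above orders maps by how strong a positivity condition the intermediate maps $V_{t,s}$ satisfy.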

  6. Testing, analysis, and correction of the update operation of a parallel, multi-backend data-base supercomputer. Master's thesis

    SciTech Connect

    Williams, M.A.

    1992-03-01

    The Multi-Backend Database Supercomputer (MBDS) is designed to provide high-performance, parallel database management for applications with very large and growing databases. This thesis presents the testing, analysis, and correction of the primary MBDS database operation, UPDATE. We provide an overview of the entire MBDS system and then focus on the parallel UPDATE operation in order to discover and correct the deficiencies of the original UPDATE algorithm.

  7. AstroPhi: A code for complex simulation of the dynamics of astrophysical objects using hybrid supercomputers

    NASA Astrophysics Data System (ADS)

    Kulikov, I. M.; Chernykh, I. G.; Snytnikov, A. V.; Glinskiy, B. M.; Tutukov, A. V.

    2015-01-01

    We propose a new code named AstroPhi for simulation of the dynamics of astrophysical objects on hybrid supercomputers equipped with Intel Xeon Phi computation accelerators. The details of the parallel implementation are described, as well as changes to the computational algorithm that facilitate efficient parallel implementation. A single Xeon Phi accelerator yielded a 27-fold acceleration. The use of 32 Xeon Phi accelerators resulted in 94% parallel efficiency. Several collapse problems are simulated using the AstroPhi code.
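
    The quoted figures can be related through the standard definitions of speedup and parallel efficiency; the following is a consistency check on the abstract's numbers, not additional data from the paper:

    \[ S(N) = \frac{T_1}{T_N}, \qquad E(N) = \frac{S(N)}{N}, \]

    so a parallel efficiency of $E = 0.94$ on $N = 32$ accelerators corresponds to a speedup of $S = 0.94 \times 32 \approx 30$ relative to a single accelerator.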

  8. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  9. Asymmetrical division of Saccharomyces cerevisiae.

    PubMed Central

    Lord, P G; Wheals, A E

    1980-01-01

    The unequal division model proposed for budding yeast (L. H. Hartwell and M. W. Unger, J. Cell Biol. 75:422-435, 1977) was tested by bud scar analyses of steady-state exponential batch cultures of Saccharomyces cerevisiae growing at 30 degrees C at 19 different rates, which were obtained by altering the carbon source. The analyses involved counting the number of bud scars, determining the presence or absence of buds on at least 1,000 cells, and independently measuring the doubling times (gamma) by cell number increase. A number of assumptions in the model were tested and found to be in good agreement with the model. Maximum likelihood estimates of daughter cycle time (D), parent cycle time (P), and the budded phase (B) were obtained, and we concluded that asymmetrical division occurred at all growth rates tested (gamma, 75 to 250 min). D, P, and B are all linearly related to gamma, and D, P, and gamma converge to equality (symmetrical division) at gamma = 65 min. Expressions for the genealogical age distribution for asymmetrically dividing yeast cells were derived. The fraction of daughter cells in steady-state populations is e^(-alpha P), and the fraction of parent cells of age n (where n is the number of buds that a cell has produced) is (e^(-alpha P))^(n-1) (1 - e^(-alpha P))^2, where alpha = ln2/gamma; thus, the distribution changes with growth rate. The frequency of cells with different numbers of bud scars (i.e., different genealogical ages) was determined for all growth rates, and the observed distribution changed with the growth rate in the manner predicted. In this haploid strain new buds formed adjacent to the previous buds in a regular pattern, but at slower growth rates the pattern was more irregular. The median volume of the cells and the volume at start in the cell cycle both increased at faster growth rates. The implications of these findings for the control of the cell cycle are discussed. PMID:6991494
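
    As a quick check, the genealogical age distribution quoted above is properly normalized: the daughter fraction e^(-alpha P) and the parent fractions sum to one over all ages, since the geometric series of parent terms collapses to 1 - e^(-alpha P). The values of gamma and P below are illustrative, not taken from the paper:

        import numpy as np

        gamma = 150.0                        # doubling time (min), illustrative
        P = 120.0                            # parent cycle time (min), illustrative
        alpha = np.log(2) / gamma

        q = np.exp(-alpha * P)               # daughter fraction, e^(-alpha P)
        parent = [(q ** (n - 1)) * (1 - q) ** 2 for n in range(1, 200)]
        print("daughter fraction:", q)
        print("sum of parent fractions:", sum(parent))   # -> 1 - q
        print("total:", q + sum(parent))                 # -> 1.0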

  10. The Materials Division: A case study

    NASA Technical Reports Server (NTRS)

    Grisaffe, Salvatore J.; Lowell, Carl E.

    1989-01-01

    The Materials Division at NASA's Lewis Research Center has been engaged in a program to improve the quality of its output. The division, its work, and its customers are described as well as the methodologies developed to assess and improve the quality of the Division's staff and output. Examples of these methodologies are presented and evaluated. An assessment of current progress is also presented along with a summary of future plans.

  11. Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

    NASA Astrophysics Data System (ADS)

    Kennedy, J. A.; Kluth, S.; Mazzaferro, L.; Walker, Rodney

    2015-12-01

    The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems, and it is also very attractive given the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems to a more generic Linux-type platform. This change means that the deployment of non-HPC-specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high luminosity data taking in 2015. This high luminosity phase will be accompanied by a need for increasing amounts of simulated data, which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP/RZG to provide access to the Hydra supercomputer facility. Hydra, the supercomputer of the Max Planck Society, is a Linux-based machine with over 80,000 cores and 4,000 physical nodes located at the RZG near Munich. This paper describes the work undertaken to integrate Hydra into the ATLAS production system by using the Nordugrid ARC-CE and other standard Grid components. The customization of these components and the strategies for HPC usage are discussed, as well as possibilities for future directions.

  12. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    SciTech Connect

    Helland, B.; Summers, B.G.

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  13. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    SciTech Connect

    Bethel, E Wes; Brugger, Eric

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources - the 'Big Iron.' Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be - that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  14. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    SciTech Connect

    Swaminarayan, Sriram; Germann, Timothy C; Kadau, Kai; Fossum, Gordon C

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 TFlop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlops/Watt and a price-performance ratio of approximately 3.69 MFlops/dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
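
    The benchmark interaction referred to above is the standard Lennard-Jones pair potential; for completeness, its usual form is

    \[ V(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right], \]

    where $\varepsilon$ sets the well depth and $\sigma$ the distance at which the potential crosses zero. The interatomic forces, the negative gradients of the summed pair terms, are the quantity evaluated on the Cell processors in the implementation described.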

  15. Simulating the Dynamics of Earth's Core: Using NCCS Supercomputers Speeds Calculations

    NASA Technical Reports Server (NTRS)

    2002-01-01

    If one wanted to study Earth's core directly, one would have to drill through about 1,800 miles of solid rock to reach the liquid core, keeping the tunnel from collapsing under pressures that are more than 1 million atmospheres, and then sink an instrument package to the bottom that could operate at 8,000 F with 10,000 tons of force crushing every square inch of its surface. Even then, several of these tunnels would probably be needed to obtain enough data. Faced with difficult or impossible tasks such as these, scientists use other available sources of information - such as seismology, mineralogy, geomagnetism, geodesy, and, above all, physical principles - to derive a model of the core and study it by running computer simulations. One NASA researcher is doing just that on NCCS computers. Physicist and applied mathematician Weijia Kuang, of the Space Geodesy Branch, and his collaborators at Goddard have what he calls the "second-ever" working, usable, self-consistent, fully dynamic, three-dimensional geodynamic model (see "The Geodynamic Theory"). Kuang runs his model simulations on the supercomputers at the NCCS. He and Jeremy Bloxham, of Harvard University, developed the original version, written in Fortran 77, in 1996.

  16. Distributed computing as a virtual supercomputer: Tools to run and manage large-scale BOINC simulations

    NASA Astrophysics Data System (ADS)

    Giorgino, Toni; Harvey, M. J.; de Fabritiis, Gianni

    2010-08-01

    Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers, connected through the Internet. To efficiently employ the computational resources of one of the world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily work-flow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in work-flows similar to conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free energy profiles in all-atom models of biomolecules.

  17. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    SciTech Connect

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2{sup 17} processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO{sub 2} sequestration in deep geologic formations.

  18. A user-friendly web portal for T-Coffee on supercomputers

    PubMed Central

    2011-01-01

    Background Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications or to deploy PTC on different HPC environments. Conclusions The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution. PMID:21569428

  19. Molecular Dynamics Simulations of Hot Spots and Detonation on the Roadrunner Supercomputer

    NASA Astrophysics Data System (ADS)

    Mniszewski, Susan; Cawkwell, Marc; Germann, Timothy

    2011-06-01

    The temporal and spatial scales intrinsic to a real detonating explosive are extremely difficult to capture using molecular dynamics (MD) simulations. Nevertheless, MD remains very attractive since it allows for the resolution of dynamic phenomena at the atomic scale. We have studied the effects of spherical voids on the build up to detonation in three dimensions (3D) in a model explosive using the reactive empirical bond order (REBO) potential for the A-B system. This force field is attractive because it has been shown to support a detonation while being simple, analytic, and short-ranged. The transition from 2D to 3D simulations was facilitated by our port of the REBO force field in the parallel MD code SPaSM to LANL's petaflop Roadrunner supercomputer based on previous work by Swaminarayan and Germann [T. C. Germann et al. Concurrency Computat.: Pract. Exper. 21, 2143 (2009)]. We will provide a detailed discussion of the challenges associated with computing interatomic forces on a hybrid Opteron/Cell BE computational architecture. We will compare and contrast our results in 3D from Roadrunner with earlier 2D simulations of hot-spot assisted detonations by Heim, Herring, and co-workers [S. D. Herring et al. Phys. Rev. B, 82, 214108 (2010)].

  20. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    NASA Astrophysics Data System (ADS)

    de Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu, Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-02-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed, and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and on NVIDIA graphical processing units, respectively. A global spiking list that represents at each instant the state of the neural network is described. This list indexes each neuron that fires during the current simulation time, so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel cluster with 64 nodes (128 cores).
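
    The global spiking list can be pictured with a minimal sketch. The code below illustrates the data structure only; it is not the authors' MPI or CUDA implementation, and the leaky integrate-and-fire update and all parameter values are invented for the example.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1000                                  # number of neurons
        v = rng.uniform(0.0, 1.0, n)              # membrane potentials
        w = rng.normal(0.0, 0.02, (n, n))         # synaptic weights (dense, for clarity)
        threshold, decay = 1.0, 0.95

        for step in range(100):
            v *= decay                            # leak
            v += rng.uniform(0.0, 0.06, n)        # external drive
            spiking_list = np.flatnonzero(v >= threshold)   # neurons firing now
            if spiking_list.size:
                # All spikes in the list are applied in a single step, which
                # is what makes the update straightforward to distribute.
                v += w[:, spiking_list].sum(axis=1)
                v[spiking_list] = 0.0             # reset the fired neurons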

  1. Monte Carlo photon transport on vector and parallel supercomputers: Final report

    SciTech Connect

    Martin, W.R.; Nowak, P.F.

    1986-12-01

    The University of Michigan has been investigating the implementation of vectorized and parallelized Monte Carlo algorithms for the analysis of photon transport in an inertially-confined fusion (ICF) plasma. The goal of this work is to develop and test Monte Carlo algorithms for vector/parallel supercomputers such as the Cray X-MP and Cray-2. Previous effort has resulted in the development of a vectorized photon transport code, named VPHOT, and a companion scalar code, named SPHOT, that performs the same analysis and is used for comparative purposes to assess the performance of the vectorized algorithm. A test problem, denoted the ICF test problem, has been created and tested with the VPHOT and SPHOT codes. By comparison with a reference LLNL calculation of the ICF test problem, the VPHOT/SPHOT codes have been verified to predict the correct results. Performance results with VPHOT versus SPHOT and the reference LLNL code have been reported previously and indicate that speedups in the range of 6 to 12 can be achieved with the vectorized algorithm versus the conventional scalar algorithm on the Cray X-MP. This report summarizes the progress made during the last year to continue the investigation of vectorized Monte Carlo (parameter studies, alternative vectorized algorithm, alternative target machines) and to extend the work into the area of parallel processing. 5 refs.

  2. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer

    SciTech Connect

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-01-01

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ~30k cores, producing ~30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.
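
    For orientation, a commonly used form of the reaction-field pair interaction (the GROMACS-style expression with a dielectric continuum of permittivity epsilon_rf beyond the cutoff r_c; the exact constants used in the paper may differ) is

    \[ V_{ij}(r) = \frac{q_i q_j}{4\pi\varepsilon_0}\left(\frac{1}{r} + k_{\mathrm{rf}}\,r^{2} - c_{\mathrm{rf}}\right), \qquad k_{\mathrm{rf}} = \frac{\varepsilon_{\mathrm{rf}} - 1}{(2\varepsilon_{\mathrm{rf}} + 1)\,r_c^{3}}, \qquad c_{\mathrm{rf}} = \frac{1}{r_c} + k_{\mathrm{rf}}\,r_c^{2}. \]

    Each pair term vanishes at the cutoff and involves no lattice sums or global FFTs, which is why the method scales efficiently to many thousands of cores, as described above.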

  3. Adapting NBODY4 with a GRAPE-6a Supercomputer for Web Access, Using NBodyLab

    NASA Astrophysics Data System (ADS)

    Johnson, V.; Aarseth, S.

    2006-07-01

    A demonstration site has been developed by the authors that enables researchers and students to experiment with the capabilities and performance of NBODY4 running on a GRAPE-6a over the web. NBODY4 is a sophisticated open-source N-body code for high accuracy simulations of dense stellar systems (Aarseth 2003). In 2004, NBODY4 was successfully tested with a GRAPE-6a, yielding an unprecedented low-cost tool for astrophysical research. The GRAPE-6a is a supercomputer card developed by astrophysicists to accelerate high accuracy N-body simulations with a cluster or a desktop PC (Fukushige et al. 2005, Makino & Taiji 1998). The GRAPE-6a card became commercially available in 2004, runs at 125 Gflops peak, has a standard PCI interface, and costs less than $10,000. Researchers running the widely used NBODY6 (which does not require GRAPE hardware) can compare their own PC or laptop performance with simulations run on http://www.NbodyLab.org. Such comparisons may help justify acquisition of a GRAPE-6a. For workgroups such as university physics or astronomy departments, the demonstration site may be replicated or serve as a model for a shared computing resource. The site was constructed using an NBodyLab server-side framework.
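
    The computation that the GRAPE-6a accelerates is the direct O(N^2) evaluation of softened pairwise gravitational forces, the inner loop of NBODY4-class codes. The sketch below is a plain direct sum for illustration (G = 1, arbitrary units); it is not NBODY4 code, and the softening value is arbitrary:

        import numpy as np

        def accelerations(pos, mass, eps=1e-2):
            """Direct-sum gravitational accelerations (G = 1), O(N^2) pairs.

            This is the kernel that GRAPE hardware evaluates in pipelined
            fashion; eps is a Plummer softening that removes the r -> 0
            singularity.
            """
            acc = np.zeros_like(pos)
            for i in range(len(pos)):
                d = pos - pos[i]                  # vectors to all other bodies
                r2 = (d * d).sum(axis=1) + eps**2
                r2[i] = np.inf                    # exclude self-interaction
                acc[i] = (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
            return acc

        rng = np.random.default_rng(3)
        pos = rng.normal(size=(256, 3))
        mass = np.full(256, 1.0 / 256)
        a = accelerations(pos, mass)
        print("mean acceleration magnitude:", np.sqrt((a * a).sum(axis=1)).mean())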

  4. Hurricane Intensity Forecasts with a Global Mesoscale Model on the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Atlas, Robert

    2006-01-01

    It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. The increasing capabilities of high-end computers (e.g., the NASA Columbia Supercomputer) have changed this. In 2004, the finite-volume General Circulation Model at a 1/4 degree resolution, doubling the resolution used by most operational NWP centers at that time, was implemented and run to obtain promising landfall predictions for major hurricanes (e.g., Charley, Frances, Ivan, and Jeanne). In 2005, we successfully implemented the 1/8 degree version and demonstrated its performance on intensity forecasts with hurricane Katrina (2005). It is found that the 1/8 degree model is capable of simulating the radius of maximum wind and near-eye wind structure, and thereby promising intensity forecasts. In this study, we will further evaluate the model's performance on intensity forecasts of hurricanes Ivan, Jeanne, and Karl in 2004. Suggestions for further model development will be made at the end.

  5. Scalability study of parallel spatial direct numerical simulation code on IBM SP1 parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad

    1994-01-01

    The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances associated with laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.

  6. Architecture and design of a 500-MHz gallium-arsenide processing element for a parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Fouts, Douglas J.; Butner, Steven E.

    1991-01-01

    The design of the processing element of GASP, a GaAs supercomputer with a 500-MHz instruction issue rate and 1-GHz subsystem clocks, is presented. The novel, functionally modular, block data flow architecture of GASP is described. The architecture and design of a GASP processing element is then presented. The processing element (PE) is implemented in a hybrid semiconductor module with 152 custom GaAs ICs of eight different types. The effects of the implementation technology on both the system-level architecture and the PE design are discussed. SPICE simulations indicate that parts of the PE are capable of being clocked at 1 GHz, while the rest of the PE uses a 500-MHz clock. The architecture utilizes data flow techniques at a program block level, which allows efficient execution of parallel programs while maintaining reasonably good performance on sequential programs. A simulation study of the architecture indicates that an instruction execution rate of over 30,000 MIPS can be attained with 65 PEs.

  7. Exploiting desktop supercomputing for three-dimensional electron microscopy reconstructions using ART with blobs.

    PubMed

    Bilbao-Castro, J R; Marabini, R; Sorzano, C O S; García, I; Carazo, J M; Fernández, J J

    2009-01-01

    Three-dimensional electron microscopy allows direct visualization of biological macromolecules close to their native state. The high impact of this technique in the structural biology field is highly correlated with the development of new image processing algorithms. In order to achieve subnanometer resolution, the size and number of images involved in a three-dimensional reconstruction increase and so do computer requirements. New chips integrating multiple processors are hitting the market at a reduced cost. This high-integration, low-cost trend has just begun and is expected to bring real supercomputers to our laboratory desktops in the coming years. This paper proposes a parallel implementation of a computation-intensive algorithm for three-dimensional reconstruction, ART, that takes advantage of the computational power in modern multicore platforms. ART is a sophisticated iterative reconstruction algorithm that has turned out to be well suited for the conditions found in three-dimensional electron microscopy. In view of the performance obtained in this work, these modern platforms are expected to play an important role to face the future challenges in three-dimensional electron microscopy. PMID:18940260
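
    ART updates the reconstruction one projection equation at a time: for the linear system Ax = b assembled from the projection geometry, each measured ray i pulls the current estimate onto its hyperplane. The sketch below is the classical additive-ART (Kaczmarz) iteration in textbook form, with simple pixels standing in for the blob basis functions used in the paper and an illustrative relaxation factor:

        import numpy as np

        def art(A, b, n_sweeps=200, lam=0.5):
            """Additive ART (Kaczmarz) for Ax = b.

            One sweep visits every measurement once; lam in (0, 2) is the
            relaxation factor controlling the step taken per equation.
            """
            x = np.zeros(A.shape[1])
            row_norms = (A * A).sum(axis=1)
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    if row_norms[i] == 0.0:
                        continue
                    x += lam * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x

        # Consistency check on a small random system with a known solution.
        rng = np.random.default_rng(7)
        A = rng.normal(size=(40, 20))
        x_true = rng.normal(size=20)
        x_rec = art(A, A @ x_true)
        print("max reconstruction error:", np.abs(x_rec - x_true).max())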

  8. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    SciTech Connect

    Vranas, P; Soltz, R

    2006-10-19

    In summary, our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling, demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64-rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter is perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16), while the total lattice is of a size that has long been a lattice QCD vision for thermodynamic studies (a total of 128 x 128 x 256 x 32 lattice sites). This speed is about five times the speed we quoted in our submission. As we have pointed out in our paper, QCD is notoriously sensitive to network and memory latencies, has a relatively high communication-to-computation ratio which cannot be overlapped in BGL in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and realize a 30-year-long dream for lattice QCD.
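
    The CG inverter applies standard conjugate gradients to a Hermitian positive-definite operator built from the Dirac matrix; the two global sums per iteration mentioned above are the inner products, which become machine-wide reductions on BG/L. A minimal generic CG (with a random SPD matrix standing in for the lattice operator) looks like this:

        import numpy as np

        def cg(apply_A, b, tol=1e-10, max_iter=500):
            """Conjugate gradients for A x = b, A symmetric positive definite.

            Each iteration requires exactly two inner products -- global
            sums on a parallel machine: p.Ap for the step size and r.r for
            the residual norm.
            """
            x = np.zeros_like(b)
            r = b.copy()
            p = r.copy()
            rr = r @ r
            for _ in range(max_iter):
                Ap = apply_A(p)
                alpha = rr / (p @ Ap)        # global sum 1
                x += alpha * p
                r -= alpha * Ap
                rr_new = r @ r               # global sum 2
                if rr_new < tol * tol:
                    break
                p = r + (rr_new / rr) * p
                rr = rr_new
            return x

        # A random SPD matrix stands in for the lattice operator D^dagger D.
        rng = np.random.default_rng(11)
        M = rng.normal(size=(100, 100))
        A = M @ M.T + 100.0 * np.eye(100)
        b = rng.normal(size=100)
        x = cg(lambda v: A @ v, b)
        print("residual norm:", np.linalg.norm(A @ x - b))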

  9. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    PubMed

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources-the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be-that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own? PMID:24807974

  10. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer.

    PubMed

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-10-13

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ∼30k cores, producing ∼30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach. PMID:26631792

  11. Massively-parallel electrical-conductivity imaging of hydrocarbonsusing the Blue Gene/L supercomputer

    SciTech Connect

    Commer, M.; Newman, G.A.; Carazzone, J.J.; Dickens, T.A.; Green,K.E.; Wahrmund, L.A.; Willen, D.E.; Shiu, J.

    2007-05-16

    Large-scale controlled source electromagnetic (CSEM) three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. To cope with the typically large computational requirements of the 3D CSEM imaging problem, our strategies exploit computational parallelism and optimized finite-difference meshing. We report on an imaging experiment utilizing 32,768 tasks/processors on the IBM Watson Research Blue Gene/L (BG/L) supercomputer. Over a 24-hour period, we were able to image a large-scale marine CSEM field data set that previously required over four months of computing time on distributed clusters utilizing 1024 tasks on an Infiniband fabric. The total initial data misfit could be decreased by 67 percent within 72 completed inversion iterations, indicating an electrically resistive region in the southern survey area below a depth of 1500 m below the seafloor. The major part of the residual misfit stems from transmitter-parallel receiver components that have an offset from the transmitter sail line (broadside configuration). Modeling confirms that improved broadside data fits can be achieved by considering anisotropic electrical conductivities. While delivering a satisfactory gross-scale image for the depths of interest, the experiment provides important evidence for the necessity of discriminating between horizontal and vertical conductivities for maximally consistent 3D CSEM inversions.

  12. Major Programs | Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention supports major scientific collaborations, research networks, investigator-initiated grants, postdoctoral training, and specialized resources across the United States. |

  13. Children's Inventions for Multidigit Multiplication and Division.

    ERIC Educational Resources Information Center

    Caliandro, Christine Koller

    2000-01-01

    Describes an informal research activity in which third grade students invent their own algorithms for multidigit multiplication and division. Discusses teaching implications and action research ideas. (ASK)

  14. Photobase generator assisted pitch division

    NASA Astrophysics Data System (ADS)

    Gu, Xinyu; Bates, Christopher M.; Cho, Younjin; Kawakami, Takanori; Nagai, Tomoki; Ogata, Toshiyuki; Sundaresan, Arunkumar K.; Turro, Nicholas J.; Bristol, Robert; Zimmerman, Paul; Willson, C. Grant

    2010-04-01

    The drive to sustain the improvements in productivity that derive from following Moore's law has led the semiconductor industry to explore new technologies that enable production of smaller and smaller features on semiconductor devices. Pitch division techniques and double exposure lithography are approaches that print features beyond the fundamental resolution limit of state-of-the-art lenses by modifying the lithographic process. This paper presents a new technique that enables pitch division in the printing of gratings using only a single exposure and that is fully compatible with current manufacturing tools. This technique employs a classical photoresist polymer together with a photoactive system that incorporates both a photoacid generator (PAG) and a photobase generator (PBG). The PBG is added to the resist formulation in higher molar concentration than the PAG, but has a base production rate that is slower than the acid production rate of the PAG. The PBG functions as a dose-dependent base quencher, which neutralizes the acid in high dose exposure regions but not in the low dose regions. This photoactive system can be exploited in the design of both positive tone and negative tone resist formulations that provide a developed image of a grating at twice the frequency of the grating on the mask. A simulation of this process was performed for a 52 nm line and space pattern using PROLITH and customized codes. The results showed generation of a 26 nm half pitch relief image after development. Through this new technique, a 45 nm half pitch line and space pattern was experimentally achieved with a mask that produces a 90 nm half pitch aerial image. This corresponds to a k1 factor of 0.13. The principles, the materials design, and the first lithographic evaluations of this system are reported.
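
    The dose-dependent quenching mechanism lends itself to a toy one-dimensional model: treat acid and base concentrations as saturating functions of the aerial-image dose, with the base generated more slowly but loaded in excess, and print wherever the net acid exceeds a threshold. Net acid then survives only on the flanks of each aerial-image peak, so each mask period yields two printed lines, which is the frequency doubling described. All rates, loadings, and thresholds below are invented for illustration and are not the paper's values:

        import numpy as np

        period = 180.0                        # aerial-image period (nm): 90 nm half pitch
        x = np.linspace(0.0, 4 * period, 4000)
        dose = 0.5 * (1 + np.cos(2 * np.pi * x / period))   # normalized aerial image

        # First-order photogeneration: acid (from the PAG) appears quickly,
        # base (from the PBG) more slowly; the PBG loading is higher, so the
        # base overtakes the acid only in the high-dose regions.
        k_acid, k_base = 3.0, 1.0             # relative generation rates (illustrative)
        pag, pbg = 1.0, 1.6                   # relative loadings, PBG in excess (illustrative)
        acid = pag * (1 - np.exp(-k_acid * dose))
        base = pbg * (1 - np.exp(-k_base * dose))
        net_acid = np.maximum(acid - base, 0.0)

        printed = net_acid > 0.05             # deprotection threshold (illustrative)
        rising_edges = np.flatnonzero(np.diff(printed.astype(int)) == 1)
        print("aerial-image periods:", 4, "-> printed lines:", len(rising_edges))

    Running the sketch reports eight printed lines for the four aerial-image periods, i.e., a grating at twice the mask frequency.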

  15. Advanced battery development

    NASA Astrophysics Data System (ADS)

    In order to promote national security by ensuring that the United States has an adequate supply of safe, assured, affordable, and environmentally acceptable energy, the Storage Batteries Division at Sandia National Laboratories (SNL), Albuquerque, is responsible for engineering development of advanced rechargeable batteries for energy applications. This effort is conducted within the Exploratory Battery Technology Development and Testing (ETD) Lead center, whose activities are coordinated by staff within the Storage Batteries Division. The ETD Project, directed by SNL, is supported by the U.S. Department of Energy, Office of Energy Systems Research, Energy Storage and Distribution Division (DOE/OESD). SNL is also responsible for technical management of the Electric Vehicle Advanced Battery Systems (EV-ABS) Development Project, which is supported by the U.S. Department of Energy's Office of Transportation Systems (OTS). The ETD Project is operated in conjunction with the Technology Base Research (TBR) Project, which is under the direction of Lawrence Berkeley Laboratory. Together these two projects seek to establish the scientific feasibility of advanced electrochemical energy storage systems, and conduct the initial engineering development on systems suitable for mobile and stationary commercial applications.

  16. Energy and Environmental Systems Division 1981 research review

    SciTech Connect

    Not Available

    1982-04-01

    To effectively manage the nation's energy and natural resources, government and industry leaders need accurate information regarding the performance and economics of advanced energy systems and the costs and benefits of public-sector initiatives. The Energy and Environmental Systems Division (EES) of Argonne National Laboratory conducts applied research and development programs that provide such information through systems analysis, geophysical field research, and engineering studies. During 1981, the division: analyzed the production economics of specific energy resources, such as biomass and tight sands gas; developed and transferred to industry economically efficient techniques for addressing energy-related resource management and environmental protection problems, such as the reclamation of strip-mined land; determined the engineering performance and cost of advanced energy-supply and pollution-control systems; analyzed future markets for district heating systems and other emerging energy technologies; determined, in strategic planning studies, the availability of resources needed for new energy technologies, such as the imported metals used in advanced electric-vehicle batteries; evaluated the effectiveness of strategies for reducing scarce-fuel consumption in the transportation sector; identified the costs and benefits of measures designed to stabilize the financial condition of US electric utilities; estimated the costs of nuclear reactor shutdowns and evaluated geologic conditions at potential sites for permanent underground storage of nuclear waste; evaluated the cost-effectiveness of environmental regulations, particularly those affecting coal combustion; and identified the environmental effects of energy technologies and transportation systems.

  17. Analytical Chemistry Division annual progress report for period ending December 31, 1988

    SciTech Connect

    Not Available

    1988-05-01

    The Analytical Chemistry Division of Oak Ridge National Laboratory (ORNL) is a large and diversified organization. As such, it serves a multitude of functions for a clientele that exists both in and outside of ORNL. These functions fall into the following general categories: (1) Analytical Research, Development, and Implementation. The division maintains a program to conceptualize, investigate, develop, assess, improve, and implement advanced technology for chemical and physicochemical measurements. Emphasis is on problems and needs identified with ORNL and Department of Energy (DOE) programs; however, attention is also given to advancing the analytical sciences themselves. (2) Programmatic Research, Development, and Utilization. The division carries out a wide variety of chemical work that typically involves analytical research and/or development plus the utilization of analytical capabilities to expedite programmatic interests. (3) Technical Support. The division performs chemical and physicochemical analyses of virtually all types. The Analytical Chemistry Division is organized into four major sections, each of which may carry out any of the three types of work mentioned above. Chapters 1 through 4 of this report highlight progress within the four sections during the period January 1 to December 31, 1988. A brief discussion of the division's role in an especially important environmental program is given in Chapter 5. Information about quality assurance, safety, and training programs is presented in Chapter 6, along with a tabulation of analyses rendered. Publications, oral presentations, professional activities, educational programs, and seminars are cited in Chapters 7 and 8.

  18. Isotope and Nuclear Chemistry Division annual report, FY 1983

    SciTech Connect

    Heiken, J.H.; Lindberg, H.A.

    1984-05-01

    This report describes progress in the major research and development programs carried out in FY 1983 by the Isotope and Nuclear Chemistry Division. It covers radiochemical diagnostics of weapons tests; weapons radiochemical diagnostics research and development; other unclassified weapons research; stable and radioactive isotope production, separation, and applications (including biomedical applications); element and isotope transport and fixation; actinide and transition metal chemistry; structural chemistry, spectroscopy, and applications; nuclear structure and reactions; irradiation facilities; advanced analytical techniques; development and applications; atmospheric chemistry and transport; and earth and planetary processes.

  19. Accelerator and Fusion Research Division: summary of activities, 1983

    SciTech Connect

    Not Available

    1984-08-01

    The activities described in this summary of the Accelerator and Fusion Research Division are diverse, yet united by a common theme: it is our purpose to explore technologically advanced techniques for the production, acceleration, or transport of high-energy beams. These beams may be the heavy ions of interest in nuclear science, medical research, and heavy-ion inertial-confinement fusion; they may be beams of deuterium and hydrogen atoms, used to heat and confine plasmas in magnetic fusion experiments; they may be ultrahigh-energy protons for the next high-energy hadron collider; or they may be high-brilliance, highly coherent, picosecond pulses of synchrotron radiation.

  20. Deepening Students' Understanding of Multiplication and Division by Exploring Divisibility by Nine

    ERIC Educational Resources Information Center

    Young-Loveridge, Jenny; Mills, Judith

    2012-01-01

    This article explores how a focus on understanding divisibility rules can be used to help deepen students' understanding of multiplication and division with whole numbers. It is based on research with seven Year 7-8 teachers who were observed teaching a group of students a rule for divisibility by nine. As part of the lesson, students were shown a…
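
    The rule itself rests on the fact that 10 ≡ 1 (mod 9), so every number is congruent to its digit sum modulo 9. A short sketch of the check students are led to discover (illustrative only, not code from the article):

    ```python
    def divisible_by_nine(n: int) -> bool:
        # 10 = 9 + 1, so each digit contributes itself mod 9: a number is
        # divisible by 9 exactly when its digit sum is. Repeat to one digit.
        while n >= 10:
            n = sum(int(d) for d in str(n))
        return n in (0, 9)

    print(divisible_by_nine(738), 738 % 9 == 0)    # True True  (738 = 9 * 82)
    print(divisible_by_nine(7381), 7381 % 9 == 0)  # False False
    ```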

  1. Moral Reasoning of Division III and Division I Athletes: Is There a Difference?

    ERIC Educational Resources Information Center

    Stoll, Sharon Kay; And Others

    This study sought to examine the potentially corrupting influences of media attention, money, and the accompanying stress on the moral reasoning of student athletes at both Division I and Division III National College Athletics Association (NCAA) schools. Subjects were 718 nonathletes and 277 randomly selected athletes at a Division I school and…

  2. 76 FR 4724 - Emerson Transportation Division, a Division of Emerson Electric, Including Workers Located...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-26

    ... Transportation Division, a division of Emerson Electric, Bridgeton, Missouri. The notice was published in the Federal Register on December 16, 2010 (75 FR 75701). At the request of a State of Arkansas agent, the... Division lived throughout the United States, including Arkansas, but report to the Bridgeton,...

  3. 25 CFR 213.29 - Division orders.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Division orders. 213.29 Section 213.29 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF RESTRICTED LANDS OF MEMBERS OF FIVE CIVILIZED TRIBES, OKLAHOMA, FOR MINING Rents and Royalties § 213.29 Division orders. (a)...

  4. Teaching Cell Division: Basics and Recommendations.

    ERIC Educational Resources Information Center

    Smith, Mike U.; Kindfield, Ann C. H.

    1999-01-01

    Presents a concise overview of cell division that includes only the essential concepts necessary for understanding genetics and evolution. Makes recommendations based on published research and teaching experiences that can be used to judge the merits of potential activities and materials for teaching cell division. Makes suggestions regarding the…

  5. Polarized Cell Division of Chlamydia trachomatis.

    PubMed

    Abdelrahman, Yasser; Ouellette, Scot P; Belland, Robert J; Cox, John V

    2016-08-01

    Bacterial cell division predominantly occurs by a highly conserved process, termed binary fission, that requires the bacterial homologue of tubulin, FtsZ. Other mechanisms of bacterial cell division that are independent of FtsZ are rare. Although the obligate intracellular human pathogen Chlamydia trachomatis, the leading bacterial cause of sexually transmitted infections and trachoma, lacks FtsZ, it has been assumed to divide by binary fission. We show here that Chlamydia divides by a polarized cell division process similar to the budding process of a subset of the Planctomycetes that also lack FtsZ. Prior to cell division, the major outer-membrane protein of Chlamydia is restricted to one pole of the cell, and the nascent daughter cell emerges from this pole by an asymmetric expansion of the membrane. Components of the chlamydial cell division machinery accumulate at the site of polar growth prior to the initiation of asymmetric membrane expansion, and inhibitors that disrupt the polarity of C. trachomatis prevent cell division. The polarized cell division of C. trachomatis is the result of the unipolar growth and FtsZ-independent fission of this coccoid organism. This mechanism of cell division has not been documented in other human bacterial pathogens, suggesting the potential for developing Chlamydia-specific therapeutic treatments. PMID:27505160

  6. "American Gothic" and the Division of Labor.

    ERIC Educational Resources Information Center

    Saunders, Robert J.

    1987-01-01

    Provides historical review of gender-based division of labor. Argues that gender-based division of labor served a purpose in survival of tribal communities but has lost meaning today and may be a handicap to full use of human talent and ability in the arts. There is nothing in various art forms which make them more appropriate for males or…

  7. Friday's Agenda | Division of Cancer Prevention

    Cancer.gov

    8:00 am - 8:10 am: Welcome and Opening Remarks. Leslie Ford, MD, Associate Director for Clinical Research, Division of Cancer Prevention, NCI; Eva Szabo, MD, Chief, Lung and Upper Aerodigestive Cancer Research Group, Division of Cancer Prevention, NCI.
    8:10 am - 8:40 am: Clinical Trials Statistical Concepts for Non-Statisticians.

  8. Materials Sciences Division 1990 annual report

    SciTech Connect

    Not Available

    1990-12-31

    This report is the Materials Sciences Division's annual report. It contains abstracts describing materials research at the National Center for Electron Microscopy, and for research groups in metallurgy, solid-state physics, materials chemistry, electrochemical energy storage, electronic materials, surface science and catalysis, ceramic science, high-Tc superconductivity, polymers, composites, and high performance metals.

  9. Polarized Cell Division of Chlamydia trachomatis

    PubMed Central

    Abdelrahman, Yasser; Ouellette, Scot P.; Belland, Robert J.; Cox, John V.

    2016-01-01

    Bacterial cell division predominantly occurs by a highly conserved process, termed binary fission, that requires the bacterial homologue of tubulin, FtsZ. Other mechanisms of bacterial cell division that are independent of FtsZ are rare. Although the obligate intracellular human pathogen Chlamydia trachomatis, the leading bacterial cause of sexually transmitted infections and trachoma, lacks FtsZ, it has been assumed to divide by binary fission. We show here that Chlamydia divides by a polarized cell division process similar to the budding process of a subset of the Planctomycetes that also lack FtsZ. Prior to cell division, the major outer-membrane protein of Chlamydia is restricted to one pole of the cell, and the nascent daughter cell emerges from this pole by an asymmetric expansion of the membrane. Components of the chlamydial cell division machinery accumulate at the site of polar growth prior to the initiation of asymmetric membrane expansion, and inhibitors that disrupt the polarity of C. trachomatis prevent cell division. The polarized cell division of C. trachomatis is the result of the unipolar growth and FtsZ-independent fission of this coccoid organism. This mechanism of cell division has not been documented in other human bacterial pathogens, suggesting the potential for developing Chlamydia-specific therapeutic treatments. PMID:27505160

  10. Research Networks Map | Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention supports major scientific collaborations and research networks at more than 100 sites across the United States. Five Major Programs' sites are shown on this map.

  11. New Study Designs | Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention is expanding clinical research beyond standard trial designs to find interventions that may play a role in more than one prevalent disease.

  12. Guide to the Division of Research Programs.

    ERIC Educational Resources Information Center

    National Endowment for the Humanities (NFAH), Washington, DC.

    This brief guide to the Research Programs Division of the National Endowment for the Humanities covers basic information, describes programs, and summarizes policies and procedures. An introductory section describes the division and its mission to encourage the development and dissemination of significant knowledge and scholarship in the…

  13. Cognitive and Neural Sciences Division, 1991 Programs.

    ERIC Educational Resources Information Center

    Vaughan, Willard S., Ed.

    This report documents research and development performed under the sponsorship of the Cognitive and Neural Sciences Division of the Office of Naval Research in fiscal year 1991. It provides abstracts (title, principal investigator, project code, objective, approach, progress, and related reports) of projects of three program divisions (cognitive…

  14. The Changing Nature of Division III Athletics

    ERIC Educational Resources Information Center

    Beaver, William

    2014-01-01

    Non-selective Division III institutions often face challenges in meeting their enrollment goals. To ensure their continued viability, these schools recruit large numbers of student athletes. As a result, when compared to FBS (Football Bowl Subdivision) institutions, these schools have a much higher percentage of student athletes on campus and a…

  15. Cognitive and Neural Sciences Division 1990 Programs.

    ERIC Educational Resources Information Center

    Vaughan, Willard S., Jr., Ed.

    Research and development efforts carried out under sponsorship of the Cognitive and Neural Sciences Division of the Office of Naval Research during fiscal year 1990 are described in this compilation of project description summaries. The Division's research is organized in three types of programs: (1) Cognitive Science (the human learner--cognitive…

  16. Gravity and the orientation of cell division

    NASA Technical Reports Server (NTRS)

    Helmstetter, C. E.

    1997-01-01

    A novel culture system for mammalian cells was used to investigate division orientations in populations of Chinese hamster ovary cells and the influence of gravity on the positioning of division axes. The cells were tethered to adhesive sites, smaller in diameter than a newborn cell, distributed over a nonadhesive substrate positioned vertically. The cells grew and divided while attached to the sites, and the angles and directions of elongation during anaphase, projected in the vertical plane, were found to be random with respect to gravity. However, consecutive divisions of individual cells were generally along the same axis or at 90 degrees to the previous division, with equal probability. Thus, successive divisions were restricted to orthogonal planes, but the choice of plane appeared to be random, unlike the ordered sequence of cleavage orientations seen during early embryo development.

  17. Parallel simulation of tsunami inundation on a large-scale supercomputer

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
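
    The load-balancing step described above, in which ranks are allocated to each nested layer in proportion to its grid points before a 1-D decomposition within the layer, can be sketched as follows. The layer sizes and rank count are hypothetical; this is not the authors' code.

    ```python
    def allocate_ranks(grid_points_per_layer, total_ranks):
        # Give every layer at least one rank, the rest proportionally to its
        # grid-point count (assumes total_ranks >= number of layers).
        total_points = sum(grid_points_per_layer)
        ranks = [max(1, round(total_ranks * p / total_points))
                 for p in grid_points_per_layer]
        # Fix rounding drift so the allocation sums to total_ranks.
        while sum(ranks) > total_ranks:
            ranks[ranks.index(max(ranks))] -= 1
        while sum(ranks) < total_ranks:
            ranks[ranks.index(max(ranks))] += 1
        return ranks

    # Hypothetical nesting: a coarse ocean grid plus two finer coastal grids.
    layers = [1200 * 900, 2400 * 1800, 4800 * 3600]
    print(allocate_ranks(layers, 1024))  # [49, 195, 780]
    ```

    Each layer's ranks would then each own a contiguous strip of that layer's rows, giving the 1-D domain decomposition the abstract describes.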

  18. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    NASA Astrophysics Data System (ADS)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio, in cooperation with its Modeling, Analysis, and Prediction program, intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large, such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from different domains, policy makers, and teachers. Another obstacle is that access to the high performance computing (HPC) systems on which the models run can be restrictive, with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with the data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  19. Advanced drilling systems study

    SciTech Connect

    Pierce, K.G.; Livesay, B.J.

    1995-03-01

    This work was initiated as part of the National Advanced Drilling and Excavation Technologies (NADET) Program. It is being performed through joint funding from the Department of Energy Geothermal Division and the Natural Gas Technology Branch, Morgantown Energy Technology Center. Interest in advanced drilling systems is high. The Geothermal Division of the Department of Energy has initiated a multi-year effort in the development of advanced drilling systems; the National Research Council completed a study of drilling and excavation technologies last year; and the MIT Energy Laboratory recently submitted a proposal for a national initiative in advanced drilling and excavation research. The primary reasons for this interest are financial. Worldwide expenditures on oil and gas drilling approach $75 billion per year. Also, drilling and well completion account for 25% to 50% of the cost of producing electricity from geothermal energy. There is incentive to search for methods to reduce the cost of drilling. Work on ideas to improve or replace rotary drilling technology dates back at least to the 1930's. There was a significant amount of work in this area in the 1960's and 1970's, and there has been some continued effort through the 1980's. Undoubtedly there are concepts for advanced drilling systems that have yet to be studied; however, it is almost certain that new efforts to initiate work on advanced drilling systems will build on an idea or a variation of an idea that has already been investigated. Therefore, a review of previous efforts coupled with a characterization of viable advanced drilling systems and the current state of technology as it applies to those systems provide the basis for the current study of advanced drilling.

  20. Scientific Application Performance on Leading Scalar and VectorSupercomputing Platforms

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Ethier, Stephane

    2007-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than implied by peak performance in order to achieve desired performance. The latest generation of custom-built parallel vector systems have the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: magnetic fusion (GTC), plasma physics (LBMHD3D), astrophysics (Cactus), and material science (PARATEC). We compare performance of the vector-based Cray X1, X1E, Earth Simulator, NEC SX-8, with performance of three leading commodity-based superscalar platforms utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our work makes several significant contributions: a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the Teraflop barrier; the introduction of a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence that achieves over 26Tflop/s on 4800 ES processors; the highest per processor performance (by far) achieved by the full-production version of the Cactus ADM-BSSN; and the largest PARATEC cell size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.

  1. Adventures in supercomputing, a K-12 program in computational science: An assessment

    SciTech Connect

    Oliver, C.E.; Hicks, H.R.; Iles-Brechak, K.D.; Honey, M.; McMillan, K.

    1994-10-01

    In this paper, the authors describe only those elements of the Department of Energy Adventures in Supercomputing (AiS) program for high school teachers, such as school selection, which have a direct bearing on assessment. Schools submit an application to participate in the AiS program. They propose a team of at least two teachers to implement the AiS curriculum. The applications are evaluated by selection committees in each of the five participating states to determine which schools are the most qualified to carry out the program and reach a significant number of women, minorities, and economically disadvantaged students, all of whom have historically been underrepresented in the sciences. Typically, selected schools either have a large disadvantaged student population, or the applying teachers propose specific means to attract these segments of their student body into AiS classes. Some areas with AiS schools have significant numbers of minority students, some have economically disadvantaged, usually rural, students, and all areas have the potential to reach a higher proportion of women than technical classes usually attract. This report presents preliminary findings based on three types of data: demographic, student journals, and contextual. Demographic information is obtained for both students and teachers. Students have been asked to maintain journals which include replies to specific questions that are posed each month. An analysis of the answers to these questions helps to form a picture of how students progress through the course of the school year. Onsite visits by assessment professionals conducting student and teacher interviews provide a more in-depth, qualitative basis for understanding student motivations.

  2. Influence of Earth crust composition on continental collision style in Precambrian conditions: Results of supercomputer modelling

    NASA Astrophysics Data System (ADS)

    Zavyalov, Sergey; Zakharov, Vladimir

    2016-04-01

    A number of issues concerning Precambrian geodynamics remain unsolved because of the uncertainty of many physical (thermal regime, lithosphere thickness, crust thickness, etc.) and chemical (mantle composition, crust composition) parameters, which differed considerably from their present-day values. In this work, we show results of numerical supercomputer simulations based on a petrological and thermomechanical 2D model, which simulates the process of collision between two continental plates, each 80-160 km thick, with various convergence rates ranging from 5 to 15 cm/year. In the model, the upper mantle temperature is 150-200 °C higher than the modern value, while the continental crust radiogenic heat production is higher than the present value by a factor of 1.5. These settings correspond to Archean conditions. The present study investigates the dependence of collision style on various continental crust parameters, especially on crust composition. The following three archetypal settings of continental crust composition are examined: 1) completely felsic continental crust; 2) basic lower crust and felsic upper crust; 3) basic upper crust and felsic lower crust (hereinafter referred to as inverted crust). Modeling results show that collision with completely felsic crust is unlikely. In the case of basic lower crust, continental subduction and subsequent continental rock exhumation can take place. Therefore, formation of ultra-high-pressure metamorphic rocks is possible. Continental subduction also occurs in the case of inverted continental crust. However, in the latter case, the exhumation of felsic rocks is blocked by the upper basic layer, and their subsequent interaction depends on their volume ratio. Thus, if the total inverted crust thickness is about 15 km and the thicknesses of the two layers are equal, felsic rocks cannot be exhumed. If the total thickness is 30 to 40 km and that of the felsic layer is 20 to 25 km, it breaks through the basic layer leading to

  3. 49 CFR 1242.03 - Made by accounting divisions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 9 2012-10-01 2012-10-01 false Made by accounting divisions. 1242.03 Section 1242... accounting divisions. The separation shall be made by accounting divisions, where such divisions are maintained, and the aggregate of the accounting divisions reported for the quarter and for the year....

  4. 49 CFR 1242.03 - Made by accounting divisions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Made by accounting divisions. 1242.03 Section 1242... accounting divisions. The separation shall be made by accounting divisions, where such divisions are maintained, and the aggregate of the accounting divisions reported for the quarter and for the year....

  5. [Division of winter wheat yield estimation by remote sensing based on MODIS EVI time series data and spectral angle clustering].

    PubMed

    Zhu, Zai-Chun; Chen, Lian-Qun; Zhang, Jin-Shui; Pan, Yao-Zhong; Zhu, Wen-Quan; Hu, Tan-Gao

    2012-07-01

    Crop yield estimation division is the basis of crop yield estimation; it provides an important scientific basis for estimation research and practice. In this paper, MODIS EVI time-series data covering the winter wheat growth period are selected as the division data, with Jiangsu Province as the study area. A division method combining advanced spectral angle mapping (SAM) with K-means clustering is presented and tested in winter wheat yield estimation by remote sensing. The results show that the spectral angle clustering division method takes full advantage of the crop growth process reflected in the MODIS time-series data and fully captures the regional differences in winter wheat brought about by climatic differences. Compared with the traditional division method, the yield estimation based on the spectral angle clustering division has a higher R² (0.7026 vs 0.6248) and a lower RMSE (343.34 vs 381.34 kg·hm⁻²), reflecting the advantages of the new division method in winter wheat yield estimation. The division method in this paper uses only conveniently obtained low-resolution time-series remote sensing data, yet divides the winter wheat area into similar, well-characterized regions; the accuracy and stability of the yield estimation model are also very good, providing an efficient approach to winter wheat yield estimation by remote sensing. PMID:23016349
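
    The distance underlying the method is the spectral angle between two pixels' EVI time-series vectors, which responds to the shape of the seasonal growth curve rather than its overall brightness; a K-means variant then clusters pixels using this angle in place of Euclidean distance. The sketch below uses synthetic profiles and is not the paper's implementation.

    ```python
    import numpy as np

    def spectral_angle(x, y):
        # Angle (radians) between two time-series profiles: pixels with
        # similar phenology have a small angle regardless of brightness.
        cos_t = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
        return np.arccos(np.clip(cos_t, -1.0, 1.0))

    # Hypothetical 23-point (16-day composite) EVI series for two pixels.
    rng = np.random.default_rng(0)
    profile = np.sin(np.linspace(0, np.pi, 23)) + 0.2
    pixel_a = 1.0 * profile + 0.02 * rng.standard_normal(23)
    pixel_b = 0.6 * profile + 0.02 * rng.standard_normal(23)  # dimmer, same phenology
    print(spectral_angle(pixel_a, pixel_b))  # small angle despite brightness gap
    ```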

  6. Life Sciences Division progress report for CYs 1997-1998 [Oak Ridge National Laboratory

    SciTech Connect

    Mann, Reinhold C.

    1999-06-01

    The mission of the division is to advance science and technology to understand complex biological systems and their relationship with human health and the environment.

  7. Water Reactor Safety Research Division quarterly progress report, January 1-March 31, 1980

    SciTech Connect

    Romano, A.J.

    1980-06-01

    The Water Reactor Safety Research Programs Quarterly Report describes current activities and technical progress in the programs at Brookhaven National Laboratory sponsored by the USNRC Division of Reactor Safety Research. The projects reported each quarter are the following: LWR Thermal Hydraulic Development, Advanced Code Evaluation, TRAC Code Assessment, and Stress Corrosion Cracking of PWR Steam Generator Tubing.

  8. Water Reactor Safety Research Division. Quarterly progress report, April 1-June 30, 1980

    SciTech Connect

    Abuaf, N.; Levine, M.M.; Saha, P.; van Rooyen, D.

    1980-08-01

    The Water Reactor Safety Research Programs quarterly report describes current activities and technical progress in the programs at Brookhaven National Laboratory sponsored by the USNRC Division of Reactor Safety Research. The projects reported each quarter are the following: LWR Thermal Hydraulic Development, Advanced Code Evaluation, TRAC Code Assessment, and Stress Corrosion Cracking of PWR Steam Generator Tubing.

  9. Physics Division annual report, April 1, 1995--March 31, 1996

    SciTech Connect

    Thayer, K.J.

    1996-11-01

    The past year has seen several major advances in the Division's research programs. In heavy-ion physics these include experiments with radioactive beams of interest to nuclear astrophysics, a first exploration of the structure of nuclei situated beyond the proton drip line, the discovery of new proton emitters--the heaviest known, the first unambiguous detection of discrete linking transitions between superdeformed and normal deformed states, and the impact of the APEX results which were the first to report, conclusively, no sign of the previously reported sharp electron-positron sum lines. The medium energy nuclear physics program of the Division has led the first round of experiments at the CEBAF accelerator at the Thomas Jefferson National Accelerator Facility and the study of color transparency in rho meson propagation at the HERMES experiment at DESY, and it has established nuclear polarization in a laser driven polarized hydrogen target. In atomic physics, the non-dipolar contribution to photoionization has been quantitatively established for the first time, the atomic physics beamline at the Argonne 7 GeV Advanced Photon Source was constructed and, by now, first experiments have been successfully performed. The theory program has pushed exact many-body calculations with fully realistic interactions (the Argonne v18 potential) to the seven-nucleon system, and interesting results have been obtained for the structure of deformed nuclei through mean-field calculations and for the structure of baryons with QCD calculations based on the Dyson-Schwinger approach. Brief summaries are given of the individual research programs.

  10. 622-Mbps Orthogonal Frequency Division Multiplexing Modulator Developed

    NASA Technical Reports Server (NTRS)

    Nguyen, Na T.

    1999-01-01

    The Communications Technology Division at the NASA Lewis Research Center is developing advanced electronic technologies for the space communications and remote sensing systems of tomorrow. As part of the continuing effort to advance the state of the art in satellite communications and remote sensing systems, Lewis is developing a programmable Orthogonal Frequency Division Multiplexing (OFDM) modulator card for high-data-rate communication links. The OFDM modulator is particularly suited to high-data-rate downlinks to ground terminals or direct data downlinks from near-Earth science platforms. It can support data rates up to 622 megabits per second (Mbps) and high-order modulation schemes such as 16-ary quadrature amplitude modulation (16-ary QAM) or 8-phase shift keying (8PSK). High-order modulations achieve greater bandwidth efficiency than traditional binary phase shift keying (BPSK) or quadrature phase shift keying (QPSK) modulation schemes. The OFDM modulator architecture can also be precompensated for channel disturbances, alleviating amplitude degradations caused by nonlinear transponder characteristics.
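
    The core of any OFDM modulator is a bank of QAM-modulated subcarriers synthesized with an inverse FFT. A minimal baseband sketch follows; the 64-subcarrier size, Gray-coded 16-QAM table, and cyclic-prefix length are illustrative choices, not the parameters of the Lewis card.

    ```python
    import numpy as np

    N_SUB, CP_LEN = 64, 16  # subcarriers and cyclic-prefix length (assumed)
    QAM16 = {  # Gray-coded 16-QAM: two bits select each axis level
        (0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

    def modulate(bits):
        # Map 4 bits per subcarrier to a complex symbol, then IFFT.
        assert len(bits) == 4 * N_SUB
        symbols = np.array([
            QAM16[tuple(bits[i:i+2])] + 1j * QAM16[tuple(bits[i+2:i+4])]
            for i in range(0, len(bits), 4)])
        time_signal = np.fft.ifft(symbols)  # orthogonal subcarrier synthesis
        return np.concatenate([time_signal[-CP_LEN:], time_signal])  # add CP

    rng = np.random.default_rng(1)
    bits = rng.integers(0, 2, 4 * N_SUB)
    print(modulate(bits).shape)  # (80,) complex baseband samples
    ```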

  11. Earth Sciences Division collected abstracts: 1979

    SciTech Connect

    Henry, A.L.; Schwartz, L.L.

    1980-04-30

    This report is a compilation of abstracts of papers, internal reports, and talks presented during 1979 at national and international meetings by members of the Earth Sciences Division, Lawrence Livermore Laboratory. The arrangement is alphabetical (by author). For a given report, a bibliographic reference appears under the name of each coauthor, but the abstract itself is given only under the name of the first author or the first Earth Sciences Division author. A topical index at the end of the report provides useful cross references, while indicating major areas of research interest in the Earth Sciences Division.

  12. Overview of the Applied Aerodynamics Division

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A major reorganization of the Aeronautics Directorate of the Langley Research Center occurred in early 1989. As a result of this reorganization, the scope of research in the Applied Aeronautics Division is now quite different than that in the past. An overview of the current organization, mission, and facilities of this division is presented. A summary of current research programs and sample highlights of recent research are also presented. This is intended to provide a general view of the scope and capabilities of the division.

  13. Asymmetric cell division in plant development.

    PubMed

    Heidstra, Renze

    2007-01-01

    Plant embryogenesis creates a seedling with a basic body plan. Post-embryonically, the seedling elaborates on this plan with a lifelong ability to develop new tissues and organs. As a result, asymmetric cell divisions serve essential roles during embryonic and postembryonic development to generate cell diversity. This review highlights selected cases of asymmetric division in the model plant Arabidopsis thaliana and describes the current knowledge on fate determinants and mechanisms involved. Common themes that emerge are: 1. the role of the plant hormone auxin and its polar transport machinery; 2. a MAP kinase signaling cascade; and 3. asymmetrically segregating transcription factors that are involved in several asymmetric cell divisions. PMID:17585494

  14. Biology and Medicine Division: Annual report 1986

    SciTech Connect

    Not Available

    1987-04-01

    The Biology and Medicine Division continues to make important contributions in scientific areas in which it has a long-established leadership role. For 50 years the Division has pioneered in the application of radioisotopes and charged particles to biology and medicine. There is a growing emphasis on cellular and molecular applications in the work of all the Division's research groups. The powerful tools of genetic engineering, the use of recombinant products, the analytical application of DNA probes, and the use of restriction fragment length polymorphic DNA are described and proposed for increasing use in the future.

  15. Advanced computational research in materials processing for design and manufacturing

    SciTech Connect

    Zacharia, T.

    1995-04-01

    Advanced mathematical techniques and computer simulation play a major role in providing enhanced understanding of conventional and advanced materials processing operations. Development and application of mathematical models and computer simulation techniques can provide a quantitative understanding of materials processes and will minimize the need for expensive and time-consuming trial-and-error-based product development. As computer simulations and materials databases grow in complexity, high performance computing and simulation are expected to play a key role in supporting the improvements required in advanced material syntheses and processing by lessening the dependence on expensive prototyping and re-tooling. Many of these numerical models are highly compute-intensive. It is not unusual for an analysis to require several hours of computational time on current supercomputers despite the simplicity of the models being studied. For example, to accurately simulate the heat transfer in a 1-m³ block using a simple computational method requires 10¹² arithmetic operations per second of simulated time. For a computer to do the simulation in real time would require a sustained computation rate 1000 times faster than that achievable by current supercomputers. Massively parallel computer systems, which combine several thousand processors able to operate concurrently on a problem, are expected to provide orders of magnitude increase in performance. This paper briefly describes advanced computational research in materials processing at ORNL. Continued development of computational techniques and algorithms utilizing the massively parallel computers will allow the simulation of conventional and advanced materials processes in sufficient generality.
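
    As a back-of-the-envelope version of the estimate above (the grid resolution, per-cell stencil cost, and time-step rate are assumed values, not figures from the report):

    ```python
    # Explicit finite-difference heat transfer in a 1 m^3 block on a 1 mm grid.
    cells = 1000 ** 3          # 10^9 cells at 1 mm resolution (assumption)
    flops_per_cell = 10        # stencil update cost per step (assumption)
    steps_per_second = 100     # stability-limited steps per simulated second
    rate = cells * flops_per_cell * steps_per_second
    print(f"{rate:.0e} operations per simulated second")  # 1e+12
    ```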

  16. 2. JL photographer, summer 1978. View from south of Division ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. JL photographer, summer 1978. View from south of Division Avenue Pumping and Filtration Plant. - Division Avenue Pumping Station & Filtration Plant, West 45th Street and Division Avenue, Cleveland, Cuyahoga County, OH

  17. Chemical and Laser Sciences Division annual report 1989

    SciTech Connect

    Haines, N.

    1990-06-01

    The Chemical and Laser Sciences Division Annual Report includes articles describing representative research and development activities within the Division, as well as major programs to which the Division makes significant contributions.

  18. 3. Oblique view of 215 Division Street, looking southeast, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Oblique view of 215 Division Street, looking southeast, showing rear (west) facade and north side, Fairbanks Company appears at left and 215 Division Street is visible at right - 215 Division Street (House), Rome, Floyd County, GA

  19. 2. Oblique view of 215 Division Street, looking northeast, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. Oblique view of 215 Division Street, looking northeast, showing rear (west) facade and south side, 217 Division Street is visible at left and Fairbanks Company appears at right - 215 Division Street (House), Rome, Floyd County, GA

  20. 3. Oblique view of 213 Division Street, looking northeast, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Oblique view of 213 Division Street, looking northeast, showing rear (west) facade and south side, 215 Division Street is visible at left and Fairbanks Company appears at right - 213 Division Street (House), Rome, Floyd County, GA

  1. 6. Contextual view of Fairbanks Company, looking south along Division ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. Contextual view of Fairbanks Company, looking south along Division Street, showing relationship of factory to surrounding area, 213, 215, & 217 Division Street appear on right side of street - Fairbanks Company, 202 Division Street, Rome, Floyd County, GA

  2. Towards real-time simulation of large space structures: Stabilization of fluid/thermal/structure interactions and implementation on high performance supercomputers

    NASA Technical Reports Server (NTRS)

    Farhat, C.

    1989-01-01

    Within the Center for Space Construction, the SIMSTRUC project's objectives center on the development of simulation tools for the realistic analysis of large space structures. The word 'tools' is used in the broad sense; it designates mathematical models, finite element/finite difference formulations, computational algorithms, implementations on advanced computer architectures, and visualization capabilities. The results of our activities during the first year of the SIMSTRUC project are reported. On the modeling side, we describe an alternative approach to fluid/thermal/structure interaction analysis that departs from the 'loosely coupled' and 'unified' approaches currently practiced. The advantages of our approach in terms of both accuracy and computational efficiency are demonstrated. On the computational side, a software architecture for parallel/vector and massively parallel supercomputers that speeds up finite element and finite difference computations by several orders of magnitude is presented. As an example, the simulation of the deployment of a space structure that used to require over six hours on a workstation using conventional finite element software now runs on a multiprocessor, using a parallel computation strategy, in less than three seconds. In order to promote physical understanding of the simulation behavior, a real-time visualization capability on the Connection Machine, which allows the analyst to watch the graphical animation of the results at the same time they are generated, was also developed. We believe that by combining efficient analytical formulations with state-of-the-art high performance computer implementations and superfast visualization capabilities, SIMSTRUC is moving fast towards the real-time simulation of large space structures. Designers as well as researchers will certainly benefit from this technology.

  3. Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer

    PubMed Central

    Hines, Michael; Kumar, Sameer; Schürmann, Felix

    2011-01-01

    For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8–128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4·10¹⁰ connections (K is 1024, M is 1024², and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method either implemented via non-blocking MPI_Isend, or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend due to the high overhead of initiating a spike communication. The two best performing methods—the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework DCMF_Multicast; and a two-phase multisend in which a DCMF_Multicast is used to first send to a subset of phase one destination cores, which then pass it on to their subset of phase two destination cores—had similar performance with very low overhead for the initiation of spike communication. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in number of cells that fire on each processor in the interval between synchronization. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will be ultimately limited by imbalance between incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect
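
    The baseline method in the comparison, MPI_Allgather, simply has every rank gather every other rank's spike list once per synchronization interval. A minimal sketch using mpi4py follows (an assumed stand-in; the study's Multisend/DCMF variants are BG/P-specific C implementations):

    ```python
    from mpi4py import MPI
    import random

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Spikes fired locally this interval: (global_cell_id, spike_time) pairs.
    # The id scheme is hypothetical, for illustration only.
    local_spikes = [(rank * 1000 + i, random.random()) for i in range(3)]

    # Collective exchange: every rank receives every rank's spike list.
    all_spikes = comm.allgather(local_spikes)
    incoming = [s for spikes in all_spikes for s in spikes]
    incoming.sort(key=lambda s: s[1])  # deliver in time order
    if rank == 0:
        print(f"{len(incoming)} spikes delivered to every rank")
    ```

    Run with, e.g., mpiexec -n 8 python spikes.py; each rank would then filter the incoming list down to the spikes whose source cells project to its local cells.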

  4. About DCP | Division of Cancer Prevention

    Cancer.gov

    The Division of Cancer Prevention (DCP) is the primary unit of the National Cancer Institute devoted to cancer prevention research. DCP provides funding and administrative support to clinical and laboratory researchers, community and multidisciplinary teams, and collaborative scientific networks.

  5. Chemical Sciences Division: Annual report 1992

    SciTech Connect

    Not Available

    1993-10-01

    The Chemical Sciences Division (CSD) is one of twelve research Divisions of the Lawrence Berkeley Laboratory, a Department of Energy National Laboratory. The CSD is composed of individual groups and research programs that are organized into five scientific areas: Chemical Physics, Inorganic/Organometallic Chemistry, Actinide Chemistry, Atomic Physics, and Physical Chemistry. This report describes progress by the CSD for 1992. Also included are remarks by the Division Director, a description of work for others (United States Office of Naval Research), and appendices of the Division personnel and an index of investigators. Research reports are grouped as Fundamental Interactions (Photochemical and Radiation Sciences, Chemical Physics, Atomic Physics) or Processes and Techniques (Chemical Energy, Heavy-Element Chemistry, and Chemical Engineering Sciences).

  6. Nanoengineering: Super symmetry in cell division

    NASA Astrophysics Data System (ADS)

    Huang, Kerwyn Casey

    2015-08-01

    Bacterial cells can be sculpted into different shapes using nanofabricated chambers and then used to explore the spatial adaptation of protein oscillations that play an important role in cell division.

  7. Division Xii: Union-Wide Activities

    NASA Astrophysics Data System (ADS)

    Trimble, Virginia L.; Andersen, Johannes; Aksnes, Kaare; Genova, Françoise; Gurshtein, Alexander A.; Johansson, Sveneric; Pasachoff, Jay M.; Smith, Malcolm G.

    2007-12-01

    Division XII consists of Commissions that formerly were organized under the Executive Committee, that concern astronomers across a wide range of scientific sub-disciplines and provide interactions with scientists in a wider community, including governmental organizations, outside the IAU.

  8. Division XII: Union-Wide Activities

    NASA Astrophysics Data System (ADS)

    Smith, Malcolm G.; Genova, Françoise; Andersen, Johannes; Federman, Steven R.; Gilmore, Alan C.; Nha, Il-Seong; Norris, Raymond P.; Robson, Ian E.; Stavinschi, Magda G.; Trimble, Virginia L.; Wainscoat, Richard J.; Christensen, Lars Lindberg

    Division XII consists of Commissions that formerly were organized under the Executive Committee, that concern astronomers across a wide range of scientific sub-disciplines and provide interactions with scientists in a wider community, including governmental organizations, outside the IAU.

  9. Data communication requirements for the advanced NAS network

    NASA Technical Reports Server (NTRS)

    Levin, Eugene; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of the Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations, and by remote communications to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. In the 1987/1988 time period it is anticipated that a computer with 4 times the processing speed of a Cray 2 will be obtained and by 1990 an additional supercomputer with 16 times the speed of the Cray 2. The implications of this 20-fold increase in processing power on the data communications requirements are described. The analysis was based on models of the projected workload and system architecture. The results are presented together with the estimates of their sensitivity to assumptions inherent in the models.

  10. 15 CFR 950.8 - Satellite Data Services Division (SDSD).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...., weather forecasting) have been satisfied. The division also provides photographs collected during NASA's... to: Satellite Data Services Division, World Weather Building, Room 606, Washington, DC 20233,...

  11. 15 CFR 950.8 - Satellite Data Services Division (SDSD).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...., weather forecasting) have been satisfied. The division also provides photographs collected during NASA's... to: Satellite Data Services Division, World Weather Building, Room 606, Washington, DC 20233,...

  12. 15 CFR 950.8 - Satellite Data Services Division (SDSD).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...., weather forecasting) have been satisfied. The division also provides photographs collected during NASA's... to: Satellite Data Services Division, World Weather Building, Room 606, Washington, DC 20233,...

  13. 15 CFR 950.8 - Satellite Data Services Division (SDSD).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...., weather forecasting) have been satisfied. The division also provides photographs collected during NASA's... to: Satellite Data Services Division, World Weather Building, Room 606, Washington, DC 20233,...

  14. 15 CFR 950.8 - Satellite Data Services Division (SDSD).

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...., weather forecasting) have been satisfied. The division also provides photographs collected during NASA's... to: Satellite Data Services Division, World Weather Building, Room 606, Washington, DC 20233,...

  15. 3. Perspective view of Express Building looking northeast, with Division ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Perspective view of Express Building looking northeast, with Division Street in foreground - American Railway Express Company Freight Building, 1060 Northeast Division Street, Bend, Deschutes County, OR

  16. Energy Division annual progress report for period ending September 30, 1991

    SciTech Connect

    Stone, J.N.

    1992-04-01

    The Energy Division is one of 17 research divisions at Oak Ridge National Laboratory. Its goals and accomplishments are described in this annual progress report for FY 1991. The division's total expenditures in FY 1991 were $39.1 million. The work is supported by the US Department of Energy, US Department of Defense, many other federal agencies, and some private organizations. Disciplines of the 124 technical staff members include engineering, social sciences, physical and life sciences, and mathematics and statistics. The Energy Division's programmatic activities focus on three major areas: (1) analysis and assessment, (2) energy conservation technologies, and (3) military transportation systems. Analysis and assessment activities cover energy and resource analysis, the preparation of environmental assessments and impact statements, research on waste management, analysis of emergency preparedness for natural and technological disasters, analysis of the energy and environmental needs of developing countries, technology transfer, and analysis of civilian transportation. Energy conservation technologies include electric power systems, building equipment (thermally activated heat pumps, advanced refrigeration systems, novel cycles), building envelopes (walls, foundations, roofs, attics, and materials), and technical issues for improving energy efficiency in existing buildings. Military transportation systems concentrate on research for sponsors within the US military on improving the efficiency of military deployment, scheduling, and transportation coordination.

  17. Nuclear Science Division: 1993 Annual report

    SciTech Connect

    Myers, W.D.

    1994-06-01

    This report describes the activities of the Nuclear Science Division for the 1993 calendar year. This was another significant year in the history of the Division with many interesting and important accomplishments. Activities for the following programs are covered here: (1) nuclear structure and reactions program; (2) the Institute for Nuclear and Particle Astrophysics; (3) relativistic nuclear collisions program; (4) nuclear theory program; (5) nuclear data evaluation program, isotope project; and (6) 88-inch cyclotron operations.

  18. Earth Sciences Division annual report 1989

    SciTech Connect

    Not Available

    1990-06-01

    This Annual Report presents summaries of selected representative research activities from Lawrence Berkeley Laboratory grouped according to the principal disciplines of the Earth Sciences Division: Reservoir Engineering and Hydrology, Geology and Geochemistry, and Geophysics and Geomechanics. We are proud to be able to bring you this report, which we hope will convey not only a description of the Division's scientific activities but also a sense of the enthusiasm and excitement present today in the Earth Sciences.

  19. Multivariate linear recurrences and power series division

    PubMed Central

    Hauser, Herwig; Koutschan, Christoph

    2012-01-01

    Bousquet-Mélou and Petkovšek investigated the generating functions of multivariate linear recurrences with constant coefficients. We will give a reinterpretation of their results by means of division theorems for formal power series, which clarifies the structural background and provides short, conceptual proofs. In addition, extending the division to the context of differential operators, the case of recurrences with polynomial coefficients can be treated in an analogous way. PMID:23482936
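
    To make the objects concrete, here is the standard univariate special case (an illustration of the general idea, not an example taken from the paper, which treats the multivariate setting): a linear recurrence with constant coefficients has a rational generating function obtained by dividing formal power series. For the Fibonacci numbers f_0 = 0, f_1 = 1, f_n = f_{n-1} + f_{n-2},

        \[ F(x) = \sum_{n \ge 0} f_n x^{n} = \frac{x}{1 - x - x^{2}}, \]

    i.e., F(x) is the quotient of x by 1 - x - x^2 in the ring of formal power series.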

  20. Division II: Commission 10: Solar Activity

    NASA Astrophysics Data System (ADS)

    van Driel-Gesztelyi, Lidia; Scrijver, Karel J.; Klimchuk, James A.; Charbonneau, Paul; Fletcher, Lyndsay; Hasan, S. Sirajul; Hudson, Hugh S.; Kusano, Kanya; Mandrini, Cristina H.; Peter, Hardi; Vršnak, Bojan; Yan, Yihua

    2015-08-01

    The Business Meeting of Commission 10 was held as part of the Business Meeting of Division II (Sun and Heliosphere), chaired by Valentin Martínez-Pillet, the President of the Division. The President of Commission 10 (C10; Solar activity), Lidia van Driel-Gesztelyi, took the chair for the business meeting of C10. She summarised the activities of C10 over the triennium and the election of the incoming OC.

  1. Earth Sciences Division collected abstracts: 1980

    SciTech Connect

    Henry, A.L.; Hornady, B.F.

    1981-10-15

    This report is a compilation of abstracts of papers, reports, and talks presented during 1980 at national and international meetings by members of the Earth Sciences Division, Lawrence Livermore National Laboratory. The arrangement is alphabetical (by author). For a given report, a bibliographic reference appears under the name of each coauthor, but the abstract itself is given only under the name of the first author (indicated in capital letters) or the first Earth Sciences Division author.

  2. Weapons Experiments Division Explosives Operations Overview

    SciTech Connect

    Laintz, Kenneth E.

    2012-06-19

    Presentation covers WX Division programmatic operations with a focus on JOWOG-9 interests. A brief look at DARHT is followed by a high-level overview of explosives research activities currently being conducted within the experimental groups of WX Division. The presentation places particular emphasis on activities and facilities at TA-9, as these efforts have traditionally been aligned with ongoing collaborative explosive exchanges covered under JOWOG-9.

  3. Medical Sciences Division report for 1993

    SciTech Connect

    Not Available

    1993-12-31

    This year's Medical Sciences Division (MSD) Report is organized to show how programs in our division contribute to the core competencies of Oak Ridge Institute for Science and Education (ORISE). ORISE's core competencies in education and training, environmental and safety evaluation and analysis, occupational and environmental health, and enabling research support the overall mission of the US Department of Energy (DOE).

  4. [Construction and application of bioinformatic analysis platform for aquatic pathogen based on the MilkyWay-2 supercomputer].

    PubMed

    Xiang, Fang; Ningqiu, Li; Xiaozhe, Fu; Kaibin, Li; Qiang, Lin; Lihui, Liu; Cunbin, Shi; Shuqin, Wu

    2015-07-01

    As a key component of life science, bioinformatics has been widely applied in genomics, transcriptomics, and proteomics. However, the requirement of high-performance computers rather than common personal computers for constructing a bioinformatics platform has significantly limited the application of bioinformatics in aquatic science. In this study, we constructed a bioinformatic analysis platform for aquatic pathogens based on the MilkyWay-2 supercomputer. The platform consists of three functional modules: genomic and transcriptomic sequencing data analysis, protein structure prediction, and molecular dynamics simulations. To validate the practicability of the platform, we performed bioinformatic analyses on aquatic pathogenic organisms. For example, genes of Flavobacterium johnsoniae M168 were identified and annotated via Blast searches and GO and InterPro annotations. Protein structural models for five small segments of grass carp reovirus HZ-08 were constructed by homology modeling. Molecular dynamics simulations were performed on outer membrane protein A of Aeromonas hydrophila, and changes in system temperature, total energy, root mean square deviation, and loop conformation during equilibration were observed. These results show that a bioinformatic analysis platform for aquatic pathogens has been successfully built on the MilkyWay-2 supercomputer. This study provides insights into the construction of bioinformatic analysis platforms for other subjects. PMID:26351170
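
    As an aside for readers reproducing the trajectory-analysis step: the root mean square deviation (RMSD) tracked during equilibration is conventionally computed from superposed coordinate frames. The following minimal Python sketch shows the calculation under stated assumptions (NumPy arrays of atomic positions already superposed; names such as coords_ref are illustrative, not taken from the platform described above):

        import numpy as np

        def rmsd(coords_ref, coords_frame):
            # coords_ref, coords_frame: (N, 3) arrays of atomic positions,
            # assumed already superposed (translated/rotated onto each other).
            diff = coords_ref - coords_frame
            return np.sqrt((diff * diff).sum() / len(coords_ref))

        # Example: a frame displaced by 0.1 along x for every atom has RMSD 0.1.
        ref = np.zeros((5, 3))
        frame = ref.copy()
        frame[:, 0] += 0.1
        print(rmsd(ref, frame))  # approx. 0.1

    Production MD packages compute the same quantity after an optimal least-squares superposition; the sketch omits that fitting step for brevity.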

  5. Solid State Division progress report for period ending September 30, 1993

    SciTech Connect

    Green, P.H.; Hinton, L.W.

    1994-08-01

    This report covers research progress in the Solid State Division from April 1, 1992, to September 30, 1993. During this period, the division conducted a broad, interdisciplinary materials research program with emphasis on theoretical solid state physics, neutron scattering, synthesis and characterization of materials, ion beam and laser processing, and the structure of solids and surfaces. This research effort was enhanced by new capabilities in atomic-scale materials characterization, new emphasis on the synthesis and processing of materials, and increased partnering with industry and universities. The theoretical effort included a broad range of analytical studies, as well as a new emphasis on numerical simulation stimulated by advances in high-performance computing and by strong interest in related division experimental programs. Superconductivity research continued to advance on a broad front from fundamental mechanisms of high-temperature superconductivity to the development of new materials and processing techniques. The Neutron Scattering Program was characterized by a strong scientific user program and growing diversity represented by new initiatives in complex fluids and residual stress. The national emphasis on materials synthesis and processing was mirrored in division research programs in thin-film processing, surface modification, and crystal growth. Research on advanced processing techniques such as laser ablation, ion implantation, and plasma processing was complemented by strong programs in the characterization of materials and surfaces including ultrahigh resolution scanning transmission electron microscopy, atomic-resolution chemical analysis, synchrotron x-ray research, and scanning tunneling microscopy.

  6. Environmental Sciences Division. Annual progress report for period ending September 30, 1980. [Lead abstract]

    SciTech Connect

    Auerbach, S.I.; Reichle, D.E.

    1981-03-01

    Research conducted in the Environmental Sciences Division for the Fiscal Year 1980 included studies carried out in the following Division programs and sections: (1) Advanced Fossil Energy Program, (2) Nuclear Program, (3) Environmental Impact Program, (4) Ecosystem Studies Program, (5) Low-Level Waste Research and Development Program, (6) National Low-Level Waste Program, (7) Aquatic Ecology Section, (8) Environmental Resources Section, (9) Earth Sciences Section, and (10) Terrestrial Ecology Section. In addition, Educational Activities and the dedication of the Oak Ridge National Environmental Research Park are reported. Separate abstracts were prepared for the 10 sections of this report.

  7. Chemistry Division annual progress report for period ending April 30, 1993

    SciTech Connect

    Poutsma, M.L.; Ferris, L.M.; Mesmer, R.E.

    1993-08-01

    The Chemistry Division conducts basic and applied chemical research on projects important to DOE's missions in sciences, energy technologies, advanced materials, and waste management/environmental restoration; it also conducts complementary research for other sponsors. The research is arranged according to: coal chemistry, aqueous chemistry at high temperatures and pressures, geochemistry, chemistry of advanced inorganic materials, structure and dynamics of advanced polymeric materials, chemistry of transuranium elements and compounds, chemical and structural principles in solvent extraction, surface science related to heterogeneous catalysis, photolytic transformations of hazardous organics, DNA sequencing and mapping, and special topics.

  8. Energy Technology Division research summary - 1999.

    SciTech Connect

    1999-03-31

    The Energy Technology Division provides materials and engineering technology support to a wide range of programs important to the US Department of Energy. As shown on the preceding page, the Division is organized into ten sections, five with concentrations in the materials area and five in engineering technology. Materials expertise includes fabrication, mechanical properties, corrosion, friction and lubrication, and irradiation effects. Our major engineering strengths are in heat and mass flow, sensors and instrumentation, nondestructive testing, transportation, and electromechanics and superconductivity applications. The Division Safety Coordinator, Environmental Compliance Officers, Quality Assurance Representative, Financial Administrator, and Communication Coordinator report directly to the Division Director. The Division Director is personally responsible for cultural diversity and is a member of the Laboratory-wide Cultural Diversity Advisory Committee. The Division's capabilities are generally applied to issues associated with energy production, transportation, utilization, or conservation, or with environmental issues linked to energy. As shown in the organization chart on the next page, the Division reports administratively to the Associate Laboratory Director (ALD) for Energy and Environmental Science and Technology (EEST) through the General Manager for Environmental and Industrial Technologies. While most of our programs are under the purview of the EEST ALD, we also have had programs funded under every one of the ALDs. Some of our research in superconductivity is funded through the Physical Research Program ALD. We also continue to work on a number of nuclear-energy-related programs under the ALD for Engineering Research. Detailed descriptions of our programs on a section-by-section basis are provided in the remainder of this book.

  9. The History of Metals and Ceramics Division

    SciTech Connect

    Craig, D.F.

    1999-01-01

    The division was formed in 1946 at the suggestion of Dr. Eugene P. Wigner to attack the problem of the distortion of graphite in the early reactors due to exposure to reactor neutrons, and the consequent radiation damage. It was called the Metallurgy Division and assembled the metallurgical and solid state physics activities of the time which were not directly related to nuclear weapons production. William A. Johnson, a Westinghouse employee, was named Division Director in 1946. In 1949 he was replaced by John H. Frye, Jr., when the Division consisted of 45 people. Frye was director through most of what are called the Reactor Project Years, serving until his retirement in 1973. During this period the Division evolved into three organizational areas: basic research, applied research in nuclear reactor materials, and reactor programs directly related to a specific reactor or reactors being designed or built. The Division (by then Metals and Ceramics) consisted of 204 staff members in 1973, when James R. Weir, Jr., became Director. This was the period of the oil embargo, the formation of the Energy Research and Development Administration (ERDA) by combining the Atomic Energy Commission (AEC) with the Office of Coal Research, and the subsequent formation of the Department of Energy (DOE). The diversification process continued when James O. Stiegler became Director in 1984, partly as a result of legislation encouraging the national laboratories to work with U.S. industries on their problems. During that time the Division staff grew from 265 to 330. Douglas F. Craig became Director in 1992.

  10. Life Sciences Division Spaceflight Hardware

    NASA Technical Reports Server (NTRS)

    Yost, B.

    1999-01-01

    The Ames Research Center (ARC) is responsible for the development, integration, and operation of non-human life sciences payloads in support of NASA's Gravitational Biology and Ecology (GB&E) program. To help stimulate discussion and interest in the development and application of novel technologies for incorporation within non-human life sciences experiment systems, three hardware system models will be displayed with associated graphics/text explanations. First, an Animal Enclosure Model (AEM) will be shown to communicate the nature and types of constraints physiological researchers must deal with during manned space flight experiments using rodent specimens. Second, a model of the Modular Cultivation System (MCS) under development by ESA will be presented to highlight technologies that may benefit cell-based research, including advanced imaging technologies. Finally, subsystems of the Cell Culture Unit (CCU) in development by ARC will also be shown. A discussion will be provided on candidate technology requirements in the areas of specimen environmental control, biotelemetry, telescience and telerobotics, and in situ analytical techniques and imaging. In addition, an overview of the Center for Gravitational Biology Research facilities will be provided.

  11. Massively parallel simulation with DOE's ASCI supercomputers : an overview of the Los Alamos Crestone project

    SciTech Connect

    Weaver, R. P.; Gittings, M. L.

    2004-01-01

    The Los Alamos Crestone Project is part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative, or ASCI Program. The main goal of this software development project is to investigate the use of continuous adaptive mesh refinement (CAMR) techniques for application to problems of interest to the Laboratory. There are many code development efforts in the Crestone Project, both unclassified and classified codes. In this overview I will discuss the unclassified SAGE and RAGE codes. The SAGE (SAIC adaptive grid Eulerian) code is a one-, two-, and three-dimensional multimaterial Eulerian massively parallel hydrodynamics code for use in solving a variety of high-deformation flow problems. The RAGE CAMR code is built from the SAGE code by adding various radiation packages, improved setup utilities, and graphics packages, and is used for problems in which radiation transport of energy is important. The goal of these massively parallel versions of the codes is to run extremely large problems in a reasonable amount of calendar time. Our target is scalable performance to ~10,000 processors on a 1-billion-cell CAMR problem that requires hundreds of variables per cell, multiple physics packages (e.g., radiation and hydrodynamics), and implicit matrix solves for each cycle. A general description of the RAGE code has been published in [1], [2], [3], and [4]. Currently, the largest simulations we do are three-dimensional, using around 500 million computational cells and running for literally months of calendar time using ~2000 processors. Current ASCI platforms range from several 3-teraOPS supercomputers to one 12-teraOPS machine at Lawrence Livermore National Laboratory, the White machine, and one 20-teraOPS machine installed at Los Alamos, the Q machine. Each machine is a system comprised of many component parts that must perform in unity for the successful run of these simulations. Key features of any massively parallel system
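
    For readers unfamiliar with how adaptive mesh refinement selects cells, the following toy Python sketch illustrates the basic idea of gradient-based refinement flagging. It is entirely illustrative and is not the SAGE/RAGE algorithm, whose refinement criteria are not described in the abstract above:

        import numpy as np

        def flag_for_refinement(u, threshold):
            # u: 1-D array of a cell-centered quantity (e.g., density).
            # Flag a cell when the jump to either neighbor exceeds `threshold`,
            # a simple stand-in for the criteria real CAMR codes use.
            flags = np.zeros(len(u), dtype=bool)
            jumps = np.abs(np.diff(u))       # |u[i+1] - u[i]|, length N-1
            flags[:-1] |= jumps > threshold  # left cell of each steep face
            flags[1:] |= jumps > threshold   # right cell of each steep face
            return flags

        # Example: a density step flags the two cells straddling the jump.
        density = np.array([1.0, 1.0, 1.0, 4.0, 4.0])
        print(flag_for_refinement(density, 1.0))  # [False False True True False]

    In a production CAMR code this flagging drives recursive subdivision of cells (and coarsening where the solution is smooth), which is what keeps billion-cell problems tractable.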

  12. Parkin suppresses Drp1-independent mitochondrial division.

    PubMed

    Roy, Madhuparna; Itoh, Kie; Iijima, Miho; Sesaki, Hiromi

    2016-07-01

    The cycle of mitochondrial division and fusion disconnects and reconnects individual mitochondria in cells to remodel this energy-producing organelle. Although dynamin-related protein 1 (Drp1) plays a major role in mitochondrial division in cells, a reduced level of mitochondrial division still persists even in the absence of Drp1. It is unknown how much Drp1-mediated mitochondrial division accounts for the connectivity of mitochondria. The role of a Parkinson's disease-associated protein, parkin, which biochemically and genetically interacts with Drp1, in mitochondrial connectivity also remains poorly understood. Here, we quantified the number and connectivity of mitochondria using mitochondria-targeted photoactivatable GFP in cells. We show that the loss of Drp1 increases the connectivity of mitochondria by 15-fold in mouse embryonic fibroblasts (MEFs). While a single loss of parkin does not affect the connectivity of mitochondria, the connectivity of mitochondria decreased significantly, compared with a single loss of Drp1, when parkin was lost in the absence of Drp1. Furthermore, the loss of parkin decreased the frequency of depolarization of the mitochondrial inner membrane that is caused by increased mitochondrial connectivity in Drp1-knockout MEFs. Therefore, our data suggest that parkin negatively regulates Drp1-independent mitochondrial division. PMID:27181353

  13. Control of apoptosis by asymmetric cell division.

    PubMed

    Hatzold, Julia; Conradt, Barbara

    2008-04-01

    Asymmetric cell division and apoptosis (programmed cell death) are two fundamental processes that are important for the development and function of multicellular organisms. We have found that the processes of asymmetric cell division and apoptosis can be functionally linked. Specifically, we show that asymmetric cell division in the nematode Caenorhabditis elegans is mediated by a pathway involving three genes, dnj-11 MIDA1, ces-2 HLF, and ces-1 Snail, that directly control the enzymatic machinery responsible for apoptosis. Interestingly, the MIDA1-like protein GlsA of the alga Volvox carteri, as well as the Snail-related proteins Snail, Escargot, and Worniu of Drosophila melanogaster, have previously been implicated in asymmetric cell division. Therefore, C. elegans dnj-11 MIDA1, ces-2 HLF, and ces-1 Snail may be components of a pathway involved in asymmetric cell division that is conserved throughout the plant and animal kingdoms. Furthermore, based on our results, we propose that this pathway directly controls the apoptotic fate in C. elegans, and possibly other animals as well. PMID:18399720

  14. Energy Technology Division research summary -- 1994

    SciTech Connect

    Not Available

    1994-09-01

    Research funded primarily by the NRC is directed toward assessing the roles of cyclic fatigue, intergranular stress corrosion cracking, and irradiation-assisted stress corrosion cracking in failures in light water reactor (LWR) piping systems, pressure vessels, and various core components. In support of the fast reactor program, the Division has responsibility for fuel-performance modeling and irradiation testing. The Division has major responsibilities in several design areas of the proposed International Thermonuclear Experimental Reactor (ITER). The Division supports DOE in ensuring safe shipment of nuclear materials by providing extensive review of the Safety Analysis Reports for Packaging (SARPs). Finally, in the nuclear area, the Division is investigating the safe disposal of spent fuel and waste. In work funded by DOE's Energy Efficiency and Renewable Energy, the high-temperature superconductivity program continues to be a major focal point for industrial interactions. Coatings and lubricants developed in the Division's Tribology Section are intended for use in transportation systems of the future. Continuous-fiber ceramic composites are being developed for high-performance heat engines. Nondestructive testing techniques are being developed to evaluate fiber distribution and to detect flaws. A wide variety of coatings for corrosion protection of metal alloys are being studied; these can increase lifetimes significantly in a wide variety of coal combustion and gasification environments.

  15. Physics Division progress report, January 1, 1984-September 30, 1986

    SciTech Connect

    Keller, W.E.

    1987-10-01

    This report provides brief accounts of significant progress in development activities and research results achieved by Physics Division personnel during the period January 1, 1984, through September 30, 1986. These efforts are representative of the three main areas of experimental research and development in which the Physics Division serves Los Alamos National Laboratory's and the Nation's needs in defense and basic sciences: (1) defense physics, including the development of diagnostic methods for weapons tests, weapons-related high-energy-density physics, and programs supporting the Strategic Defense Initiative; (2) laser physics and applications, especially to high-density plasmas; and (3) fundamental research in nuclear and particle physics, condensed-matter physics, and biophysics. Throughout the report, emphasis is placed on the design, construction, and application of a variety of advanced, often unique, instruments and instrument systems that maintain the Division's position at the leading edge of research and development in the specific fields germane to its mission. A sampling of experimental systems of particular interest includes the relativistic electron-beam accelerator and its applications to high-energy-density plasmas; pulsed-power facilities; directed-energy weapon devices such as free-electron lasers and neutral-particle-beam accelerators; high-intensity ultraviolet and x-ray beam lines at the National Synchrotron Light Source (at Brookhaven National Laboratory); the Aurora KrF ultraviolet laser system for projected use as an inertial fusion driver; the antiproton physics facility at CERN; and several beam developments at the Los Alamos Meson Physics Facility for studying nuclear, condensed-matter, and biological physics, highlighted by progress in establishing the Los Alamos Neutron Scattering Center.

  16. 75 FR 16843 - Core Manufacturing, Multi-Plastics, Inc., Division, Sipco, Inc., Division, Including Leased...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-02

    ... Notice was published in the Federal Register on January 25, 2010 (75 FR 3935). After the certification... Employment and Training Administration Core Manufacturing, Multi-Plastics, Inc., Division, Sipco, Inc..., 2009, applicable to workers of Core Manufacturing, Multi-Plastics, Inc., Division and Sipco,...

  17. Section III, Division 5 - Development And Future Directions

    SciTech Connect

    Morton, Dana K.; Jetter, Robert I; Nestell, James E.; Burchell, Timothy D; Sham, Sam

    2012-01-01

    This paper provides commentary on a new division under Section III of the ASME Boiler and Pressure Vessel (BPV) Code. This new Division 5 has an issuance date of November 1, 2011 and is part of the 2011 Addenda to the 2010 Edition of the BPV Code. The new Division covers the rules for the design, fabrication, inspection and testing of components for high temperature nuclear reactors. Information is provided on the scope and need for Division 5, the structure of Division 5, where the rules originated, the various changes made in finalizing Division 5, and the future near-term and long-term expectations for Division 5 development.

  18. Synthetic cell division system: Controlling equal vs. unequal divisions by design

    PubMed Central

    Sato, Yoichi; Yasuhara, Kazuma; Kikuchi, Jun-ichi; Sato, Thomas N.

    2013-01-01

    Cell division is one of the most fundamental and evolutionarily conserved biological processes. Here, we report a synthetic system in which we can control equal vs. unequal divisions by design. We synthesized a micro-scale inverse amphipathic droplet whose division is triggered by an increase of the surface-to-volume ratio. Using this system, we succeeded in selectively inducing equal vs. unequal divisions of the droplet cells by adjusting the temperature or the viscosity of the solvent outside the droplet cell accordingly. Our synthetic division system may provide a platform for further development into a system in which the intracellular contents of the parent droplet cell could be divided in various ratios between the two daughter droplet cells to control their functions and fates. PMID:24327069
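
    As a point of reference for the surface-to-volume trigger described above (a fact of elementary geometry, not a detail taken from the paper): for a spherical droplet of radius r,

        \[ \frac{S}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r}, \]

    so any process that shrinks the droplet, or deforms it away from a sphere at fixed volume, raises S/V toward the division threshold.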

  19. On-chip multiplexing conversion between wavelength division multiplexing-polarization division multiplexing and wavelength division multiplexing-mode division multiplexing.

    PubMed

    Ye, Mengyuan; Yu, Yu; Zou, Jinghui; Yang, Weili; Zhang, Xinliang

    2014-02-15

    A compact silicon-on-insulator device for conversions between polarization division multiplexing (PDM) and mode division multiplexing (MDM) signals is proposed and experimentally demonstrated, utilizing a structure that combines an improved two-dimensional grating coupler with a two-mode multiplexer. The detailed design of the proposed device is presented, and the results show extinction ratios of 16 and 20 dB for X- and Y-polarized input, respectively. The processing of a 40 Gb/s signal is achieved within the C-band with good performance. The proposed converter is capable of handling multiple wavelengths in wavelength division multiplexing (WDM) networks, enabling conversions between WDM-PDM and WDM-MDM, which is promising for further increasing the throughput at the network interface. PMID:24562199
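
    For readers outside photonics, the extinction ratio quoted above is the standard decibel comparison of the optical power emerging in the desired state to the power leaking into the unwanted one (a textbook definition, not specific to this device):

        \[ \mathrm{ER}_{\mathrm{dB}} = 10 \log_{10} \frac{P_{\mathrm{desired}}}{P_{\mathrm{unwanted}}}, \]

    so the 20 dB figure for Y-polarized input corresponds to a 100:1 power ratio.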

  20. The Astrophysics Science Division Annual Report 2008

    NASA Technical Reports Server (NTRS)

    Oegerle, William; Reddy, Francis; Tyler, Pat

    2009-01-01

    The Astrophysics Science Division (ASD) at Goddard Space Flight Center (GSFC) is one of the largest and most diverse astrophysical organizations in the world, with activities spanning a broad range of topics in theory, observation, and mission and technology development. Scientific research is carried out over the entire electromagnetic spectrum, from gamma rays to radio wavelengths, as well as in particle physics and gravitational radiation. Members of ASD also provide the scientific operations for three orbiting astrophysics missions: WMAP, RXTE, and Swift, as well as the Science Support Center for the Fermi Gamma-ray Space Telescope. A number of key technologies for future missions are also under development in the Division, including X-ray mirrors and new detectors operating at gamma-ray, X-ray, ultraviolet, infrared, and radio wavelengths. This report covers the Division's activities during 2008.