Kriging for Spatial-Temporal Data on the Bridges Supercomputer
NASA Astrophysics Data System (ADS)
Hodgess, E. M.
2017-12-01
Currently, kriging of spatial-temporal data is slow and limited to relatively small vector sizes. We have developed a method on the Bridges supercomputer, at the Pittsburgh Supercomputing Center, which uses a combination of the tools R, Fortran, the Message Passing Interface (MPI), OpenACC, and special R packages for big data. This combination of tools now permits us to complete tasks which previously could not be completed, or took literally hours to complete. We ran simulation studies on a laptop and on the supercomputer, examined "real world" data sets such as the Irish wind data and some weather data, and compared the timings. We note that the timings are surprisingly good.
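For readers unfamiliar with the underlying computation, the following is a minimal, self-contained sketch of ordinary kriging with an exponential variogram in plain NumPy. It illustrates only the interpolation step; the function names, variogram parameters, and toy data are hypothetical, and it does not reflect the authors' actual R/Fortran/MPI/OpenACC implementation on Bridges.

```python
# Minimal ordinary-kriging sketch (illustrative only; not the authors' code).
import numpy as np

def exp_variogram(h, sill=1.0, rng=10.0, nugget=0.0):
    """Exponential variogram model gamma(h)."""
    return nugget + sill * (1.0 - np.exp(-h / rng))

def ordinary_krige(coords, values, targets, **vario):
    """Predict at target locations from observed (coords, values)."""
    n = len(values)
    d_obs = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma_0; 1]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d_obs, **vario)
    A[:n, n] = A[n, :n] = 1.0
    preds = np.empty(len(targets))
    for i, t in enumerate(targets):
        d0 = np.linalg.norm(coords - t, axis=1)
        b = np.append(exp_variogram(d0, **vario), 1.0)
        w = np.linalg.solve(A, b)[:n]
        preds[i] = w @ values
    return preds

# Toy usage with random observations and a small prediction grid.
rs = np.random.default_rng(0)
obs_xy = rs.uniform(0, 100, size=(50, 2))
obs_z = np.sin(obs_xy[:, 0] / 20) + 0.1 * rs.standard_normal(50)
grid = rs.uniform(0, 100, size=(5, 2))
print(ordinary_krige(obs_xy, obs_z, grid, sill=1.0, rng=25.0))
```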
High performance computing for advanced modeling and simulation of materials
NASA Astrophysics Data System (ADS)
Wang, Jue; Gao, Fei; Vazquez-Poletti, Jose Luis; Li, Jianjiang
2017-02-01
The First International Workshop on High Performance Computing for Advanced Modeling and Simulation of Materials (HPCMS 2015) was held in Austin, Texas, USA, on November 18, 2015. HPCMS 2015 was organized by the Computer Network Information Center (Chinese Academy of Sciences), the University of Michigan, Universidad Complutense de Madrid, the University of Science and Technology Beijing, the Pittsburgh Supercomputing Center, the China Institute of Atomic Energy, and Ames Laboratory.
Edison - A New Cray Supercomputer Advances Discovery at NERSC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy
2014-02-06
When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.
Edison - A New Cray Supercomputer Advances Discovery at NERSC
Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija, Borrill, Julian; Draney, Brent; Chen, Jackie
2018-01-16
When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.
Optimization of Supercomputer Use on EADS II System
NASA Technical Reports Server (NTRS)
Ahmed, Ardsher
1998-01-01
The main objective of this research was to optimize supercomputer use to achieve better throughput and utilization of supercomputers and to help facilitate the movement of non-supercomputing (inappropriate for supercomputer) codes to mid-range systems for better use of Government resources at Marshall Space Flight Center (MSFC). This work involved the survey of architectures available on EADS II and monitoring customer (user) applications running on a CRAY T90 system.
NSF Establishes First Four National Supercomputer Centers.
ERIC Educational Resources Information Center
Lepkowski, Wil
1985-01-01
The National Science Foundation (NSF) has awarded support for supercomputer centers at Cornell University, Princeton University, University of California (San Diego), and University of Illinois. These centers are to be the nucleus of a national academic network for use by scientists and engineers throughout the United States. (DH)
VIEW LOOKING NORTHEAST WITH OPEN HEARTH TO THE LEFT, PITTSBURGH ...
VIEW LOOKING NORTHEAST WITH OPEN HEARTH TO THE LEFT, PITTSBURGH & LAKE ERIE RAILROAD TRACKS CENTER. - Pittsburgh Steel Company, Monessen Works, Open Hearth Plant, Donner Avenue, Monessen, Westmoreland County, PA
2012-04-20
Model for a patient-centered comparative effectiveness research center.
Costlow, Monica R; Landsittel, Douglas P; James, A Everette; Kahn, Jeremy M; Morton, Sally C
2015-04-01
This special report describes the systematic approach the University of Pittsburgh and the University of Pittsburgh Medical Center (UPMC) undertook in creating an infrastructure for comparative effectiveness and patient-centered outcomes research resources. We specifically highlight the administrative structure, communication and training opportunities, stakeholder engagement resources, and support services offered. © 2015 Wiley Periodicals, Inc.
Breakthrough: NETL's Simulation-Based Engineering User Center (SBEUC)
Guenther, Chris
2018-05-23
The National Energy Technology Laboratory relies on supercomputers to develop many novel ideas that become tomorrow's energy solutions. Supercomputers provide a cost-effective, efficient platform for research and usher technologies into widespread use faster to bring benefits to the nation. In 2013, Secretary of Energy Dr. Ernest Moniz dedicated NETL's new supercomputer, the Simulation Based Engineering User Center, or SBEUC. The SBEUC is dedicated to fossil energy research and is a collaborative tool for all of NETL and our regional university partners.
Breakthrough: NETL's Simulation-Based Engineering User Center (SBEUC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guenther, Chris
The National Energy Technology Laboratory relies on supercomputers to develop many novel ideas that become tomorrow's energy solutions. Supercomputers provide a cost-effective, efficient platform for research and usher technologies into widespread use faster to bring benefits to the nation. In 2013, Secretary of Energy Dr. Ernest Moniz dedicated NETL's new supercomputer, the Simulation Based Engineering User Center, or SBEUC. The SBEUC is dedicated to fossil energy research and is a collaborative tool for all of NETL and our regional university partners.
A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania.
Bradford, Kathryn; Abrahams, Leslie; Hegglin, Miriam; Klima, Kelly
2015-10-06
With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare data sets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.
A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania
NASA Astrophysics Data System (ADS)
Klima, K.; Abrahams, L.; Bradford, K.; Hegglin, M.
2015-12-01
With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/ Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare datasets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.
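As an aside for readers curious about the mechanics of such an index, here is a hedged, toy sketch of the general factor-analysis approach: standardize tract-level indicators and combine factor scores into a single vulnerability score. The column names, data, and scoring rule are hypothetical and are not the variables or method details of the Pittsburgh study.

```python
# Toy heat-vulnerability-index sketch (hypothetical variables and data).
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Stand-in for census-tract data (one row per tract).
rs = np.random.default_rng(1)
tracts = pd.DataFrame({
    "pct_over_65": rs.uniform(5, 30, 200),
    "pct_below_poverty": rs.uniform(2, 40, 200),
    "pct_living_alone": rs.uniform(10, 50, 200),
    "pct_no_vehicle": rs.uniform(0, 35, 200),
    "impervious_surface_pct": rs.uniform(20, 95, 200),
})

X = StandardScaler().fit_transform(tracts)
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)            # per-tract factor scores

# Simple illustrative index: sum of factor scores per tract.
tracts["heat_vulnerability_index"] = scores.sum(axis=1)
print(tracts.nlargest(5, "heat_vulnerability_index"))
```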
Library Services in a Supercomputer Center.
ERIC Educational Resources Information Center
Layman, Mary
1991-01-01
Describes library services that are offered at the San Diego Supercomputer Center (SDSC), which is located at the University of California at San Diego. Topics discussed include the user population; online searching; microcomputer use; electronic networks; current awareness programs; library catalogs; and the slide collection. A sidebar outlines…
Bug Distribution and Pattern Classification.
1985-07-15
The Center on Race and Social Problems at the University of Pittsburgh
ERIC Educational Resources Information Center
Davis, Larry E.; Bangs, Ralph L.
2007-01-01
In 2002, the School of Social Work at the University of Pittsburgh established the Center on Race and Social Problems (CRSP). CRSP, which is the first race research center to be housed in a school of social work, has six foci: economic disparities; educational disparities; interracial group relations; mental health; youth, families, and elderly;…
NSF Says It Will Support Supercomputer Centers in California and Illinois.
ERIC Educational Resources Information Center
Strosnider, Kim; Young, Jeffrey R.
1997-01-01
The National Science Foundation will increase support for supercomputer centers at the University of California, San Diego and the University of Illinois, Urbana-Champaign, while leaving unclear the status of the program at Cornell University (New York) and a cooperative Carnegie-Mellon University (Pennsylvania) and University of Pittsburgh…
NSF Commits to Supercomputers.
ERIC Educational Resources Information Center
Waldrop, M. Mitchell
1985-01-01
The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)
TOP500 Supercomputers for November 2003
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack
2003-11-16
22nd Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.
Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data
NASA Astrophysics Data System (ADS)
Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.
2018-03-01
One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, slows scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach analyzes computing resource utilization statistics, which makes it possible to identify typical classes of programs, to explore the structure of the supercomputer job flow, and to track overall trends in the supercomputer's behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire machine. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
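To make the third approach concrete, here is an illustrative sketch (not the MSU monitoring system itself) of flagging abnormal jobs from per-job monitoring averages with a robust z-score; the metrics, threshold, and data are assumptions for demonstration only.

```python
# Illustrative abnormal-job detection from per-job monitoring averages.
import numpy as np

def abnormal_jobs(metrics, threshold=3.5):
    """metrics: (n_jobs, n_features) array of per-job averages
    (e.g. CPU load, memory bandwidth, network traffic).
    Returns indices of jobs that look abnormal on any feature."""
    median = np.median(metrics, axis=0)
    mad = np.median(np.abs(metrics - median), axis=0) + 1e-12
    robust_z = 0.6745 * (metrics - median) / mad
    return np.where(np.any(np.abs(robust_z) > threshold, axis=1))[0]

# Toy example: 1000 jobs, 3 monitored metrics, a few injected outliers.
rs = np.random.default_rng(2)
jobs = rs.normal(loc=[0.7, 40.0, 5.0], scale=[0.1, 5.0, 1.0], size=(1000, 3))
jobs[[10, 500]] = [[0.01, 40.0, 5.0], [0.7, 40.0, 60.0]]  # idle CPU / odd traffic
print(abnormal_jobs(jobs))
```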
2017-12-08
Two rows of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS) contain more than 4,000 computer processors. Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo. To learn more about NCCS, go to www.nasa.gov/topics/earth/features/climate-sim-center.html. NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.
2017-12-08
This close-up view highlights one row—approximately 2,000 computer processors—of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS). Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo. To learn more about NCCS, go to www.nasa.gov/topics/earth/features/climate-sim-center.html. NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.
NASA Technical Reports Server (NTRS)
Kramer, Williams T. C.; Simon, Horst D.
1994-01-01
This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.
Automatic discovery of the communication network topology for building a supercomputer model
NASA Astrophysics Data System (ADS)
Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim
2016-10-01
The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
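The following toy sketch shows the general idea of representing discovered components and interconnections as a graph, in the spirit of (but not identical to) the Octotron model; the device names, link list, and attributes are hypothetical, and the actual discovery step (e.g., querying switches over the network) is omitted.

```python
# Toy graph model built from a hypothetical list of discovered links.
import networkx as nx

discovered_links = [
    ("switch-a", "node-001"), ("switch-a", "node-002"),
    ("switch-b", "node-003"), ("switch-b", "node-004"),
    ("core-1", "switch-a"), ("core-1", "switch-b"),
]

topo = nx.Graph()
for a, b in discovered_links:
    for dev in (a, b):
        # Classify devices by a naming convention (assumption for this sketch).
        topo.add_node(dev, kind="node" if dev.startswith("node") else "switch")
    topo.add_edge(a, b, link="ethernet")

# The graph can then be checked against monitoring rules, e.g. connectivity:
print(nx.shortest_path(topo, "node-001", "node-004"))
```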
Scientific Visualization in High Speed Network Environments
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kutler, Paul (Technical Monitor)
1997-01-01
In several cases, new visualization techniques have vastly increased the researcher's ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical and continues to grow at a faster rate than the increase in the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks, which together provide an important link in supporting efficient supercomputing, is identified. The two technologies are driven by the increasing complexity and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is given. Current capabilities supported by high-speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation Facility (NAS) at NASA Ames Research Center are described. Applied research in providing a supercomputer visualization environment to support future computational requirements is summarized.
Integration of Panda Workload Management System with supercomputers
NASA Astrophysics Data System (ADS)
De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.
2016-09-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
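The "light-weight MPI wrapper" idea mentioned above can be illustrated with a short mpi4py sketch in which each rank launches one single-threaded payload for its own work unit, so a multi-core worker node processes many work units in parallel. The executable name, file layout, and reporting below are hypothetical; this is not PanDA's actual pilot code.

```python
# Hedged sketch of a light-weight MPI wrapper around single-threaded payloads.
import subprocess
import sys
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

inputs = [f"events_{i:04d}.in" for i in range(size)]   # one work unit per rank
my_input = inputs[rank]

# Run the (single-threaded, hypothetical) simulation executable for this rank.
result = subprocess.run(
    ["./simulate_events", my_input, "--output", f"out_{rank:04d}.root"],
    capture_output=True, text=True,
)

# Gather return codes on rank 0 so the wrapper can report overall success.
codes = comm.gather(result.returncode, root=0)
if rank == 0:
    failed = [i for i, c in enumerate(codes) if c != 0]
    print(f"{size - len(failed)}/{size} payloads succeeded; failed ranks: {failed}")
    sys.exit(1 if failed else 0)
```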
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Klimentov, A
2016-01-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities' infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
2017-12-08
The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo. To learn more about NCCS, go to www.nasa.gov/topics/earth/features/climate-sim-center.html. NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.
Modelling sodium cobaltate by mapping onto magnetic Ising model
NASA Astrophysics Data System (ADS)
Gemperline, Patrick; Morris, David Jonathan Pryce
Fast ion conductors are a class of crystals frequently used as battery materials, especially in smart phones, laptops, and other portable devices. Sodium cobalt oxide, NaxCoO2, falls into this class of crystals but is unique in that it can act as a thermoelectric material and as a superconductor at different concentrations of Na+. The crystal lattice is mapped onto an Ising magnetic spin model, and a Monte Carlo simulation is used to find the most energetically favorable configuration of spins. This spin configuration is mapped back to the crystal lattice, yielding the most stable crystal structure of sodium cobalt oxide at various concentrations. Knowing the atomic structures of the crystals will aid research into the material's capabilities and its possible commercial uses. Supported by the Ohio Supercomputer Center (Ohio Supercomputer Center. 1987. Columbus, OH: Ohio Supercomputer Center.) and the John Hauck Foundation.
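For illustration of the simulation technique named above, here is a generic Metropolis Monte Carlo sketch for a 2D Ising-like lattice; the Hamiltonian, couplings, and parameters are placeholders and do not encode the actual Na-ordering model used for NaxCoO2.

```python
# Generic Metropolis Monte Carlo on a 2D Ising lattice (illustrative only).
import numpy as np

def metropolis_ising(L=32, J=1.0, beta=2.0, steps=200_000, seed=0):
    rs = np.random.default_rng(seed)
    spins = rs.choice([-1, 1], size=(L, L))
    for _ in range(steps):
        i, j = rs.integers(L, size=2)
        # Energy change from flipping spin (i, j) with periodic neighbours.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn
        if dE <= 0 or rs.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

config = metropolis_ising()
print("magnetization per site:", config.mean())
```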
Color graphics, interactive processing, and the supercomputer
NASA Technical Reports Server (NTRS)
Smith-Taylor, Rudeen
1987-01-01
The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.
Automated Help System For A Supercomputer
NASA Technical Reports Server (NTRS)
Callas, George P.; Schulbach, Catherine H.; Younkin, Michael
1994-01-01
Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.
Reduced-Order Modeling for Optimization and Control of Complex Flows
2010-11-30
Variable Selection in Logistic Regression.
1987-06-01
NASA Astrophysics Data System (ADS)
Sandford, Scott
The Antarctic Search for Meteorites program (ANSMET), under the overall direction of W. A. Cassidy (University of Pittsburgh, Pittsburgh, Pa.), continued its work of past years by conducting an expedition to southern Victoria Land during the 1984-1985 austral summer. Party members included Cassidy, Catherine King-Frazier (James Madison University, Harrisonburg, Va.), Scott Sandford (Washington University, St. Louis, Mo.), John Schutt (University of Pittsburgh), Roberta Score (National Aeronautics and Space Administration/Johnson Space Center, Houston, Tex.), Carl Thompson (a freelance mountaineer from Canterbury, New Zealand), and Robert Walker (Washington University).
Alternative Fuels Data Center: Pittsburgh Livery Company Transports Customers in Alternative Fuel Vehicles
A Pittsburgh livery company uses hybrid, propane, and natural gas vehicles to transport customers.
The ChemViz Project: Using a Supercomputer To Illustrate Abstract Concepts in Chemistry.
ERIC Educational Resources Information Center
Beckwith, E. Kenneth; Nelson, Christopher
1998-01-01
Describes the Chemistry Visualization (ChemViz) Project, a Web venture maintained by the University of Illinois National Center for Supercomputing Applications (NCSA) that enables high school students to use computational chemistry as a technique for understanding abstract concepts. Discusses the evolution of computational chemistry and provides a…
A mass storage system for supercomputers based on Unix
NASA Technical Reports Server (NTRS)
Richards, J.; Kummell, T.; Zarlengo, D. G.
1988-01-01
The authors present the design, implementation, and utilization of a large mass storage subsystem (MSS) for the numerical aerodynamics simulation. The MSS supports a large networked, multivendor Unix-based supercomputing facility. The MSS at Ames Research Center provides all processors on the numerical aerodynamics system processing network, from workstations to supercomputers, the ability to store large amounts of data in a highly accessible, long-term repository. The MSS uses Unix System V and is capable of storing hundreds of thousands of files ranging from a few bytes to 2 Gb in size.
NASA Astrophysics Data System (ADS)
Schulthess, Thomas C.
2013-03-01
The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.
Control Charts When the Observations Are Correlated.
1987-05-01
The TESS science processing operations center
NASA Astrophysics Data System (ADS)
Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; Smith, Jeffrey C.; Caldwell, Douglas A.; Chacon, A. D.; Henze, Christopher; Heiges, Cory; Latham, David W.; Morgan, Edward; Swade, Daryl; Rinehart, Stephen; Vanderspek, Roland
2016-08-01
The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover 1,000 small planets with Rp < 4 R⊕ and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
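As a rough illustration of what a periodic transit search does, the sketch below phase-folds a synthetic light curve over trial periods and scores the deepest in-transit box. It is a toy with assumed parameters and bears no relation to the actual SPOC/Kepler pipeline algorithms.

```python
# Toy box-transit search over a synthetic light curve (illustrative only).
import numpy as np

def box_search(time, flux, periods, duration=0.1):
    """Return the trial period whose folded light curve shows the deepest
    mean dip inside a box of width `duration` (same units as time)."""
    best_period, best_depth = None, 0.0
    for p in periods:
        phase = (time % p) / p
        for t0 in np.linspace(0.0, 1.0, 50, endpoint=False):
            in_box = np.abs(((phase - t0 + 0.5) % 1.0) - 0.5) < (duration / p) / 2
            if in_box.sum() < 3:
                continue
            depth = flux[~in_box].mean() - flux[in_box].mean()
            if depth > best_depth:
                best_period, best_depth = p, depth
        # (a real search would also fit the duration and assess significance)
    return best_period, best_depth

# Synthetic ~27-day light curve with a 3.2-day transit of 1% depth.
rs = np.random.default_rng(3)
t = np.arange(0, 27.0, 2.0 / 60 / 24)           # 2-minute cadence, in days
f = 1.0 + 5e-4 * rs.standard_normal(t.size)
f[((t % 3.2) / 3.2) < 0.02] -= 0.01
print(box_search(t, f, periods=np.linspace(1.0, 10.0, 181)))
```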
Open Skies Project Computational Fluid Dynamic Analysis
1994-03-01
Bridge: Intelligent Tutoring with Intermediate Representations
1988-05-01
Processing the CONSOL Energy, Inc. Mine Maps and Records Collection at the University of Pittsburgh
ERIC Educational Resources Information Center
Rougeux, Debora A.
2011-01-01
This article describes the efforts of archivists and student assistants at the University of Pittsburgh's Archives Service Center to organize, describe, store, and provide timely and efficient access to over 8,000 maps of underground coal mines in southwestern Pennsylvania, as well the records that accompanied them, donated by CONSOL Energy, Inc.…
NASA Astrophysics Data System (ADS)
Turnshek, Diane
2015-08-01
Riding on the Pittsburgh mayor’s keen interest in astronomy and the ongoing change of 40,000 city lights from mercury and sodium vapor to shielded LEDs, we organized a series of city-wide celestial art projects to bring attention to the skies over Pittsburgh. Light pollution public talks were held at the University of Pittsburgh’s Allegheny Observatory and other colleges. Earth Hour celebrations kicked off an intensive year of astronomy outreach in the city. Lights went out on March 28, 2015 from 8:30 to 9:30 pm in over fifty buildings downtown and in Oakland (the “Eds and Meds” center, where many Pittsburgh universities and hospitals are located). Our art contest was announced at the De-Light Pittsburgh celebration at the Carnegie Science Center during Astronomy Weekend. “Our Pittsburgh Constellation” is an interactive Google map of all things astronomical in the city. Different colored stars mark locations of planetariums, star parties, classes, observatories, lecture series, museums, telescope manufacturers and participating art galleries. Contest entrants submitted artwork depicting their vision of the constellation figure that incorporates and connects all the “stars” in our custom city map. Throughout the year, over a dozen artists ran workshops on painting star clusters, galaxies, nebulae, comets, planets and aurorae with discussions of light pollution solutions and scientific explanations of what the patrons were painting, including demonstrations with emission tubes and diffraction grating glasses. We will display the celestial art created in this International Year of Light at an art gallery as part of the City’s Department of Innovation & Performance March 2016 Earth Hour gala. We are thankful for the Astronomical Footprint grant from the Heinz Endowments, which allowed us to bring the worlds of science and art together to enact social change.
The TESS Science Processing Operations Center
NASA Technical Reports Server (NTRS)
Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd;
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with Rp < 4 Earth radii and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
2009-09-15
Obama Administration launches Cloud Computing Initiative at Ames Research Center. Vivek Kundra, White House Chief Federal Information Officer (right), and Lori Garver, NASA Deputy Administrator (left), get a tour and demo of NASA's Supercomputing Center Hyperwall.
TOP500 Supercomputers for June 2003
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack
2003-06-23
21st Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.
Characterizing output bottlenecks in a supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Bing; Chase, Jeffrey; Dillow, David A
2012-01-01
Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
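The statistical flavor of such a characterization can be illustrated with a small synthetic example: sample delivered write bandwidth across many intervals, look at the distribution rather than the mean, and note how stragglers gate coupled (striped) output. The numbers below are synthetic assumptions, not Jaguar measurements.

```python
# Synthetic illustration of bandwidth distributions and straggler effects.
import numpy as np

rs = np.random.default_rng(4)
# Per-sample delivered write bandwidth (MB/s): mostly near peak,
# with a contended/straggler tail.
samples = np.concatenate([
    rs.normal(900, 60, 4000),            # healthy transfers
    rs.normal(250, 120, 600).clip(10),   # contended transfers / stragglers
])

pcts = np.percentile(samples, [5, 25, 50, 75, 95])
print("bandwidth percentiles (MB/s):", np.round(pcts, 1))

# For coupled (striped) output, the slowest stripe bounds the per-stripe rate.
stripe_width = 8
coupled = samples[: (samples.size // stripe_width) * stripe_width]
slowest_per_write = coupled.reshape(-1, stripe_width).min(axis=1)
print("median rate of the slowest stripe in an 8-wide write:",
      round(float(np.median(slowest_per_write)), 1), "MB/s")
```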
NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)
None
2018-02-07
NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.
NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-09-30
NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.
ERIC Educational Resources Information Center
Heathers, Glen
The Learning Research and Development Center at the University of Pittsburgh, as part of a consortium of 15 educational agencies, is the prime contractor for a project to design, conduct, and diffuse training programs for educational R & D personnel. Four training programs in the areas of curriculum development and the design and conduct of local…
NASA Astrophysics Data System (ADS)
Landgrebe, Anton J.
1987-03-01
An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.
NASA Technical Reports Server (NTRS)
Landgrebe, Anton J.
1987-01-01
An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.
2009-09-15
Obama Administration launches Cloud Computing Initiative at Ames Research Center. Vivek Kundra, White House Chief Federal Information Officer (right), and Lori Garver, NASA Deputy Administrator (left), get a tour and demo of NASA's Supercomputing Center Hyperwall by Chris Kemp.
Levine, Arthur S; McDonald, Margaret C; Bogosta, Charles E
2017-10-01
In 2011, the University of Pittsburgh School of Medicine (UPSOM) and Tsinghua University formed a partnership to further the education of Tsinghua medical students. These students come to UPSOM as visiting research scholars for two years of their eight-year MD curriculum. During this time, the students, who have completed four years at Tsinghua, work full-time in medical school laboratories and research programs of their choice, essentially functioning as graduate students. In their first two months in Pittsburgh, the scholars have a one-week orientation to biomedical research, followed by two-week rotations in four labs selected on the basis of the scholars' scientific interests, after which they choose one of these labs for the remainder of the two years. Selected labs may be in basic science departments, basic science divisions of clinical departments, or specialized centers that focus on approaches like simulation and modeling. The Tsinghua students also have a brief exposure to clinical medicine. UPSOM has also formed a similar partnership with Central South University Xiangya School of Medicine in Changsha, Hunan Province. The Xiangya students come to UPSOM for two years of research training after their sixth year and, thus, unlike the Tsinghua students, have already completed their clinical rotations. UPSOM faculty members have also paved the way for UPMC (University of Pittsburgh Medical Center), UPSOM's clinical partner, to engage with clinical centers in China. Major relationships involving advisory, training, managerial, and/or equity roles exist with Xiangya International Medical Center, KingMED Diagnostics, First Chengmei Medical Industry Group, and Macare Women's Hospital. Both UPSOM and UPMC are actively exploring other clinical and academic opportunities in China.
NAS technical summaries: Numerical aerodynamic simulation program, March 1991 - February 1992
NASA Technical Reports Server (NTRS)
1992-01-01
NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefiting other supercomputer centers in Government and industry. This report contains selected scientific results from the 1991-92 NAS Operational Year, March 4, 1991 to March 3, 1992, which is the fifth year of operation. During this year, the scientific community was given access to a Cray-2 and a Cray Y-MP. The Cray-2, the first generation supercomputer, has four processors, 256 megawords of central memory, and a total sustained speed of 250 million floating point operations per second. The Cray Y-MP, the second generation supercomputer, has eight processors and a total sustained speed of one billion floating point operations per second. Additional memory was installed this year, doubling capacity from 128 to 256 megawords of solid-state storage-device memory. Because of its higher performance, the Cray Y-MP delivered approximately 77 percent of the total number of supercomputer hours used during this year.
NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2014-09-01
NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.
NAS Technical Summaries, March 1993 - February 1994
NASA Technical Reports Server (NTRS)
1995-01-01
NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1993-94 operational year concluded with 448 high-speed processor projects and 95 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.
NAS technical summaries. Numerical aerodynamic simulation program, March 1992 - February 1993
NASA Technical Reports Server (NTRS)
1994-01-01
NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1992-93 operational year concluded with 399 high-speed processor projects and 91 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.
INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Maeno, T
Abstract: The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
72. VISITOR'S CENTER, MODEL OF BOILER CHAMBER, AUXILIARY CHAMBER, REACTOR ...
72. VISITOR'S CENTER, MODEL OF BOILER CHAMBER, AUXILIARY CHAMBER, REACTOR AND CANAL (LOCATION T) - Shippingport Atomic Power Station, On Ohio River, 25 miles Northwest of Pittsburgh, Shippingport, Beaver County, PA
NASA Astrophysics Data System (ADS)
Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.
2010-12-01
In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations is getting higher and higher because of the tremendous advancement of supercomputers. A more advanced technology is Grid computing, which integrates distributed computational resources to provide scalable computing resources. In simulation research, it is effective for researchers to design their own physical models, perform calculations on a supercomputer, and analyze and visualize the results with familiar methods. A supercomputer, however, is far from an analysis and visualization environment. In general, a researcher analyzes and visualizes on a locally managed workstation (WS), because installing and operating software on a WS is easy. It is therefore necessary to copy data from the supercomputer to the WS manually, and the time needed for data transfer over a long-delay network in practice hampers high-accuracy simulations. In terms of usefulness, it is important to integrate a supercomputer and an analysis and visualization environment seamlessly, using methods researchers are familiar with. NICT has been developing a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and near the WSs used for data analysis and visualization. They are connected to JGN2plus, a high-speed network for research and development. Distributed virtual high-capacity storage is also constructed with Grid Datafarm (Gfarm v2). Huge data sets output from the supercomputer are transferred to the virtual storage through JGN2plus. A researcher can thus concentrate on the research, using familiar methods, without regard to the distance between the supercomputer and the analysis and visualization environment. Currently, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected over JGN2plus and constitute a 1 PB (physical size) virtual storage with Gfarm v2. These disk servers are connected with the supercomputers of NICT and Osaka University. A system in which data output from the supercomputers is automatically transferred to the virtual storage has been built; the measured transfer rate is about 50 GB/hr. This performance is estimated to be reasonable for a certain simulation and analysis for reconstruction of coronal magnetic fields. This research serves as an experiment with the system, and verification of its practicality is proceeding in parallel. Herein we introduce an overview of the space weather cloud system we have developed so far. We also demonstrate several scientific results obtained with the space weather cloud system and introduce several web applications of the cloud, provided as a service named "e-SpaceWeather" (e-SW). The e-SW provides a variety of online space weather services covering many aspects.
NASA Astrophysics Data System (ADS)
Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.
2016-10-01
The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCF's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
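The "light-weight MPI wrapper" mentioned in both abstracts can be illustrated with a minimal sketch: each MPI rank launches one independent single-threaded payload, so a single batch job fills a node's cores with many Monte-Carlo instances. This is a sketch of the idea only, not the PanDA pilot code; the payload script name and its options are hypothetical.

```python
# Minimal sketch of an MPI wrapper that runs many single-threaded payloads in
# parallel: one payload per MPI rank, so one batch job fills a multi-core node.
# Assumes mpi4py; "run_payload.sh" and its options are hypothetical placeholders.
from mpi4py import MPI
import os
import subprocess
import sys

comm = MPI.COMM_WORLD
rank = comm.Get_rank()                       # index of this payload instance
size = comm.Get_size()                       # total payloads launched by the job

workdir = f"payload_{rank:04d}"              # each rank works in its own directory
os.makedirs(workdir, exist_ok=True)

payload = os.path.abspath("run_payload.sh")  # hypothetical single-threaded payload
cmd = ["bash", payload, "--seed", str(rank), "--events", "1000"]
ret = subprocess.call(cmd, cwd=workdir)      # occupies one core for the job's duration

# Collect exit codes on rank 0 so the wrapper reports a single status to the pilot.
codes = comm.gather(ret, root=0)
if rank == 0 and any(code != 0 for code in codes):
    sys.exit(1)
```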
NASA's Pleiades Supercomputer Crunches Data For Groundbreaking Analysis and Visualizations
2016-11-23
The Pleiades supercomputer at NASA's Ames Research Center, recently named the 13th fastest computer in the world, provides scientists and researchers with high-fidelity numerical modeling of complex systems and processes. By using detailed analyses and visualizations of large-scale data, Pleiades is helping to advance human knowledge and technology, from designing the next generation of aircraft and spacecraft to understanding the Earth's climate and the mysteries of our galaxy.
Military Review: The Professional Journal of the U.S. Army, September-October 2008
2008-10-01
Lovell, “Pittsburgh Innovates: Pitt concussion study shows fMRI and ImPACT improves safe-to-play decisions,” University of Pittsburgh Medical Center...Director, School of Advanced Military Studies Gregory Fontenot, Director, University of Foreign Military and Cultural Studies Lester W. Grau Foreign...Military Studies Office COL Eric Nelson, Director, Battle Command Integration Directorate William G. Robertson Director, Combat Studies Institute COL
Holistic Approach to Data Center Energy Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Steven W
This presentation discusses NREL's Energy Systems Integration Facility and NREL's holistic design approach to sustainable data centers that led to the world's most energy-efficient data center. It describes Peregrine, a warm-water liquid-cooled supercomputer, waste heat reuse in the data center, demonstrated PUE and ERE, and lessons learned during four years of operation.
Establishing a clinical service for the management of sports-related concussions.
Reynolds, Erin; Collins, Michael W; Mucha, Anne; Troutman-Ensecki, Cara
2014-10-01
The clinical management of sports-related concussions is a specialized area of interest with a lack of empirical findings regarding best practice approaches. The University of Pittsburgh Medical Center Sports Concussion Program was the first of its kind; 13 years after its inception, it remains a leader in the clinical management and research of sports-related concussions. This article outlines the essential components of a successful clinical service for the management of sports-related concussions, using the University of Pittsburgh Medical Center Sports Concussion Program as a case example. Drawing on both empirical evidence and anecdotal conclusions from this high-volume clinical practice, this article provides a detailed account of the inner workings of a multidisciplinary concussion clinic with a comprehensive approach to the management of sports-related concussions. A detailed description of the evaluation process and an in-depth analysis of targeted clinical pathways and subtypes of sports-related concussions effectively set the stage for a comprehensive understanding of the assessment, treatment, and rehabilitation model used in Pittsburgh today.
4. VIEW LOOKING NORTHWEST OF FUEL HANDLING BUILDING (CENTER), REACTOR ...
4. VIEW LOOKING NORTHWEST OF FUEL HANDLING BUILDING (CENTER), REACTOR SERVICE BUILDING (RIGHT), MACHINE SHOP (LEFT) - Shippingport Atomic Power Station, On Ohio River, 25 miles Northwest of Pittsburgh, Shippingport, Beaver County, PA
Richard P. Feynman Center for Innovation
Richard P. Feynman Center for Innovation, Los Alamos National Laboratory ("Innovation protecting tomorrow"). Highlights include a self-healing, self-forming mesh network of long-range radios and Los Alamos supercomputing.
Internal computational fluid mechanics on supercomputers for aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Andersen, Bernhard H.; Benson, Thomas J.
1987-01-01
The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helland, B.; Summers, B.G.
1996-09-01
As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.
US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-01-01
The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.
Green Supercomputing at Argonne
Beckman, Pete
2018-02-07
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), talks about Argonne National Laboratory's green supercomputing: everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers' Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/
Long-Term file activity patterns in a UNIX workstation environment
NASA Technical Reports Server (NTRS)
Gibson, Timothy J.; Miller, Ethan L.
1998-01-01
As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.
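A user-space collector of the kind described, requiring no kernel modification, can be approximated by periodically walking the file system and recording per-file metadata. The sketch below illustrates that idea only; it is not the authors' statistics package, which also logs operations (creations, deletions, modifications) as they occur.

```python
# Simplified user-space sketch of per-file statistics collection (no kernel changes):
# walk a directory tree and record size and timestamps for later analysis.
import csv
import os
import time

def snapshot(root, out_csv):
    """Write one CSV row per file with size and access/modification times."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "size_bytes", "mtime", "atime"])
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue                 # file vanished between walk and stat
                writer.writerow([path, st.st_size, st.st_mtime, st.st_atime])

if __name__ == "__main__":
    snapshot(os.path.expanduser("~"), f"fs_snapshot_{int(time.time())}.csv")
```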
2014-09-01
simulation time frame from 30 days to one year. This was enabled by porting the simulation to the Pleiades supercomputer at NASA Ames Research Center, a...including the motivation for changes to our past approach. We then present the software implementation (3) on the NASA Ames Pleiades supercomputer...significantly updated since last year’s paper [25]. The main incentive for that was the shift to a highly parallel approach in order to utilize the Pleiades
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack
20th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer, installed earlier this year at the Earth Simulator Center in Yokohama, Japan, retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second). The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems are built by Hewlett-Packard and based on the AlphaServer SC computer system.
Interactive 3D visualization speeds well, reservoir planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petzet, G.A.
1997-11-24
Texaco Exploration and Production has begun making expeditious analyses and drilling decisions that result from interactive, large screen visualization of seismic and other three dimensional data. A pumpkin shaped room or pod inside a 3,500 sq ft, state-of-the-art facility in Southwest Houston houses a supercomputer and projection equipment Texaco said will help its people sharply reduce 3D seismic project cycle time, boost production from existing fields, and find more reserves. Oil and gas related applications of the visualization center include reservoir engineering, plant walkthrough simulation for facilities/piping design, and new field exploration. The center houses a Silicon Graphics Onyx2 InfiniteReality supercomputer configured with 8 processors, 3 graphics pipelines, and 6 gigabytes of main memory.
NASA Astrophysics Data System (ADS)
Xue, Wenhua; Dang, Hongli; Liu, Yingdi; Jentoft, Friederike; Resasco, Daniel; Wang, Sanwu
2014-03-01
In the study of catalytic reactions of biomass, furfural conversion over metal catalysts with the presence of hydrogen has attracted wide attention. We report ab initio molecular dynamics simulations for furfural and hydrogen on the Pd(111) surface at finite temperatures. The simulations demonstrate that the presence of hydrogen is important in promoting furfural conversion. In particular, hydrogen molecules dissociate rapidly on the Pd(111) surface. As a result of such dissociation, atomic hydrogen participates in the reactions with furfural. The simulations also provide detailed information about the possible reactions of hydrogen with furfural. Supported by DOE (DE-SC0004600). This research used the supercomputer resources of the XSEDE, the NERSC Center, and the Tandy Supercomputing Center.
NASA Astrophysics Data System (ADS)
Dang, Hongli; Xue, Wenhua; Liu, Yingdi; Jentoft, Friederike; Resasco, Daniel; Wang, Sanwu
2014-03-01
We report first-principles density-functional calculations and ab initio molecular dynamics (MD) simulations for the reactions involving furfural, which is an important intermediate in biomass conversion, at the catalytic liquid-solid interfaces. The different dynamic processes of furfural at the water-Cu(111) and water-Pd(111) interfaces suggest different catalytic reaction mechanisms for the conversion of furfural. Simulations for the dynamic processes with and without hydrogen demonstrate the importance of the liquid-solid interface as well as the presence of hydrogen in possible catalytic reactions including hydrogenation and decarbonylation of furfural. Supported by DOE (DE-SC0004600). This research used the supercomputer resources of the XSEDE, the NERSC Center, and the Tandy Supercomputing Center.
An analysis of file migration in a UNIX supercomputing environment
NASA Technical Reports Server (NTRS)
Miller, Ethan L.; Katz, Randy H.
1992-01-01
The supercomputer center at the National Center for Atmospheric Research (NCAR) migrates large numbers of files to and from its mass storage system (MSS) because there is insufficient space to store them on the Cray supercomputer's local disks. This paper presents an analysis of file migration data collected over two years. The analysis shows that requests to the MSS are periodic, with one day and one week periods. Read requests to the MSS account for the majority of the periodicity, while write requests are relatively constant over the course of a week. Additionally, reads show a far greater fluctuation than writes over a day and week, since reads are driven by human users while writes are machine-driven.
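The one-day and one-week periods reported above can be recovered from such traces with a simple spectral analysis of hourly request counts. The sketch below demonstrates the general technique on synthetic data; it is not the authors' analysis, and the counts are made up.

```python
# Sketch: detect dominant periods (daily, weekly) in hourly MSS request counts.
# The counts here are synthetic; in the study they would come from migration logs.
import numpy as np

hours = np.arange(24 * 7 * 8)                          # eight weeks of hourly bins
counts = (100
          + 40 * np.sin(2 * np.pi * hours / 24)        # daily cycle
          + 20 * np.sin(2 * np.pi * hours / (24 * 7))  # weekly cycle
          + np.random.default_rng(0).normal(0, 5, hours.size))

spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(counts.size, d=1.0)            # cycles per hour

strongest = np.argsort(spectrum)[-2:]                  # two largest components
for idx in sorted(strongest):
    print(f"dominant period ~ {1.0 / freqs[idx]:.1f} hours")
```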
Measurement, Standards, and Peer Benchmarking: One Hospital's Journey.
Martin, Brian S; Arbore, Mark
2016-04-01
Peer-to-peer benchmarking is an important component of rapid-cycle performance improvement in patient safety and quality-improvement efforts. Institutions should carefully examine critical success factors before engagement in peer-to-peer benchmarking in order to maximize growth and change opportunities. Solutions for Patient Safety has proven to be a high-yield engagement for Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, with measurable improvement in both organizational process and culture. Copyright © 2016 Elsevier Inc. All rights reserved.
Building diversity in a complex academic health center.
South-Paul, Jeannette E; Roth, Loren; Davis, Paula K; Chen, Terence; Roman, Anna; Murrell, Audrey; Pettigrew, Chenits; Castleberry-Singleton, Candi; Schuman, Joel
2013-09-01
For 30 years, the many diversity-related health sciences programs targeting the University of Pittsburgh undergraduate campus, school of medicine, schools of the health sciences, clinical practice plan, and medical center were run independently and remained separate within the academic health center (AHC). This lack of coordination hampered their overall effectiveness in promoting diversity and inclusion. In 2007, a group of faculty and administrators from the university and the medical center recognized the need to improve institutional diversity and to better address local health disparities. In this article, the authors describe the process of linking the efforts of these institutions in a way that would be successful locally and applicable to other academic environments. First, they engaged an independent consultant to conduct a study of the AHC's diversity climate, interviewing current and former faculty and trainees to define the problem and identify areas for improvement. Next, they created the Physician Inclusion Council to address the findings of this study and to coordinate future efforts with institutional leaders. Finally, they formed four working committees to address (1) communications and outreach, (2) cultural competency, (3) recruitment, and (4) mentoring and retention. These committees oversaw the strategic development and implementation of all diversity and inclusion efforts. Together these steps led to structural changes within the AHC and the improved allocation of resources that have positioned the University of Pittsburgh to achieve not only diversity but also inclusion and to continue to address the health disparities in the Pittsburgh community.
Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, E. Wes; van Rosendale, John; Southard, Dale
2010-12-01
Supercomputing Centers (SCs) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys does not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron?" "What sort of capabilities does it need to have?" Related questions concern the size of visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"
30 CFR 780.21 - Hydrologic information.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Hydrologic information. 780.21 Section 780.21... Hydrologic information. (a) Sampling and analysis methodology. All water-quality analyses performed to meet... Eastern Technical Service Center, U.S. Department of the Interior, Building 10, Parkway Center, Pittsburgh...
30 CFR 780.21 - Hydrologic information.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Hydrologic information. 780.21 Section 780.21... Hydrologic information. (a) Sampling and analysis methodology. All water-quality analyses performed to meet... Eastern Technical Service Center, U.S. Department of the Interior, Building 10, Parkway Center, Pittsburgh...
Next Generation Security for the 10,240 Processor Columbia System
NASA Technical Reports Server (NTRS)
Hinke, Thomas; Kolano, Paul; Shaw, Derek; Keller, Chris; Tweton, Dave; Welch, Todd; Liu, Wen (Betty)
2005-01-01
This presentation includes a discussion of the Columbia 10,240-processor system located at the NASA Advanced Supercomputing (NAS) division at the NASA Ames Research Center which supports each of NASA's four missions: science, exploration systems, aeronautics, and space operations. It is comprised of 20 Silicon Graphics nodes, each consisting of 512 Itanium II processors. A 64 processor Columbia front-end system supports users as they prepare their jobs and then submits them to the PBS system. Columbia nodes and front-end systems use the Linux OS. Prior to SC04, the Columbia system was used to attain a processing speed of 51.87 TeraFlops, which made it number two on the Top 500 list of the world's supercomputers and the world's fastest "operational" supercomputer since it was fully engaged in supporting NASA users.
Merging the Machines of Modern Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Laura; Collins, Jim
Two recent projects have harnessed supercomputing resources at the US Department of Energy’s Argonne National Laboratory in a novel way to support major fusion science and particle collider experiments. Using leadership computing resources, one team ran fine-grid analysis of real-time data to make near-real-time adjustments to an ongoing experiment, while a second team is working to integrate Argonne’s supercomputers into the Large Hadron Collider/ATLAS workflow. Together these efforts represent a new paradigm of the high-performance computing center as a partner in experimental science.
NASA Technical Reports Server (NTRS)
Cohen, Jarrett
1999-01-01
Parallel computers built out of mass-market parts are cost-effectively performing data processing and simulation tasks. The Supercomputing (now known as "SC") series of conferences celebrated its 10th anniversary last November. While vendors have come and gone, the dominant paradigm for tackling big problems still is a shared-resource, commercial supercomputer. Growing numbers of users needing a cheaper or dedicated-access alternative are building their own supercomputers out of mass-market parts. Such machines are generally called Beowulf-class systems after the 11th century epic. This modern-day Beowulf story began in 1994 at NASA's Goddard Space Flight Center. A laboratory for the Earth and space sciences, computing managers there threw down a gauntlet to develop a $50,000 gigaFLOPS workstation for processing satellite data sets. Soon, Thomas Sterling and Don Becker were working on the Beowulf concept at the Universities Space Research Association (USRA)-run Center of Excellence in Space Data and Information Sciences (CESDIS). Beowulf clusters mix three primary ingredients: commodity personal computers or workstations, low-cost Ethernet networks, and the open-source Linux operating system. One of the larger Beowulfs is Goddard's Highly-parallel Integrated Virtual Environment, or HIVE for short.
Oral Health in a Sample of Pregnant Women from Northern Appalachia (2011–2015)
Neiswanger, Katherine; McNeil, Daniel W.; Foxman, Betsy; Govil, Manika; Cooper, Margaret E.; Weyant, Robert J.; Shaffer, John R.; Crout, Richard J.; Simhan, Hyagriv N.; Beach, Scott R.; Chapman, Stella; Zovko, Jayme G.; Brown, Linda J.; Strotmeyer, Stephen J.; Maurer, Jennifer L.; Marazita, Mary L.
2015-01-01
Background. Chronic poor oral health has a high prevalence in Appalachia, a large region in the eastern USA. The Center for Oral Health Research in Appalachia (COHRA) has been enrolling pregnant women and their babies since 2011 in the COHRA2 study of genetic, microbial, and environmental factors involved in oral health in Northern Appalachia. Methods. The COHRA2 protocol is presented in detail, including inclusion criteria (healthy, adult, pregnant, US Caucasian, English speaking, and nonimmunocompromised women), recruiting (two sites: Pittsburgh, Pennsylvania, and West Virginia, USA), assessments (demographic, medical, dental, psychosocial/behavioral, and oral microbial samples and DNA), timelines (longitudinal from pregnancy to young childhood), quality control, and retention rates. Results. Preliminary oral health and demographic data are presented in 727 pregnant women, half from the greater Pittsburgh region and half from West Virginia. Despite similar tooth brushing and flossing habits, COHRA2 women in West Virginia have significantly worse oral health than the Pittsburgh sample. Women from Pittsburgh are older and more educated and have less unemployment than the West Virginia sample. Conclusions. We observed different prevalence of oral health and demographic variables between pregnant women from West Virginia (primarily rural) and Pittsburgh (primarily urban). These observations suggest site-specific differences within Northern Appalachia that warrant future studies. PMID:26089906
30 CFR 784.14 - Hydrologic information.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Hydrologic information. 784.14 Section 784.14... Hydrologic information. (a) Sampling and analysis. All water quality analyses performed to meet the... Center, U.S. Department of the Interior, Building 10, Parkway Center, Pittsburgh, Pa.; at the OSM Western...
30 CFR 784.14 - Hydrologic information.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Hydrologic information. 784.14 Section 784.14... Hydrologic information. (a) Sampling and analysis. All water quality analyses performed to meet the... Center, U.S. Department of the Interior, Building 10, Parkway Center, Pittsburgh, Pa.; at the OSM Western...
NASA's Participation in the National Computational Grid
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)
1998-01-01
Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases and on-line experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several events in recent times are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI), which is built around the idea of distributed high performance computing. The Alliance, led by the National Computational Science Alliance (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputer Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high performance computing research programs to concentrate on distributed high performance computing and has banded together with the PACI centers to address the research agenda in common.
Learning Research and Development Center Publications List Update, 1995.
ERIC Educational Resources Information Center
Pittsburgh Univ., PA. Learning Research and Development Center.
This document presents an annotated listing of articles, conference papers, book chapters, papers, and books published in 1995 as a result of investigations carried on at the University of Pittsburgh's Learning Research and Development Center (LRDC). The publications are organized alphabetically by author and chronologically within each author's…
Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer
NASA Astrophysics Data System (ADS)
Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division
2016-06-01
Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the "shallow" FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in "Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments" by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
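The sizing argument in the abstract (roughly 16 injections per core-hour, about 2,000 injections per target, 16% of the Kepler targets in about 200 hours) implies a particular core count. The sketch below reproduces that arithmetic; the total number of Kepler target stars (~200,000) is an assumption used for illustration, not a figure from the abstract.

```python
# Reproduce the "shallow" FLTI sizing arithmetic described in the abstract.
# Injection rate, injections per star, target fraction and wall-clock budget come
# from the text; the total Kepler target count (~200,000) is assumed here.
injections_per_core_hour = 16
injections_per_star = 2000
kepler_targets_total = 200_000        # assumed, for illustration
target_fraction = 0.16
wall_clock_hours = 200

stars = kepler_targets_total * target_fraction
core_hours = stars * injections_per_star / injections_per_core_hour
cores_needed = core_hours / wall_clock_hours
print(f"{stars:,.0f} stars -> {core_hours:,.0f} core-hours "
      f"-> ~{cores_needed:,.0f} cores busy for {wall_clock_hours} hours")
```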
1998-04-25
MA David Cooper, M.D. National Center for HIV Epidemiology and Clinical Research Sydney, NSW, Australia Stephen Follansbee, M.D. Davies...National Association of People with AIDS Washington, DC David Barr, J.D. Forum for Collaborative HIV Research Washington, DC Samuel Bozzette, M.D...Mellors, M.D. University of Pittsburgh Pittsburgh, PA David Nash, M.D. Thomas Jefferson University Philadelphia, PA Sallie Perryman New York
Building black holes: supercomputer cinema.
Shapiro, S L; Teukolsky, S A
1988-07-22
A new computer code can solve Einstein's equations of general relativity for the dynamical evolution of a relativistic star cluster. The cluster may contain a large number of stars that move in a strong gravitational field at speeds approaching the speed of light. Unstable star clusters undergo catastrophic collapse to black holes. The collapse of an unstable cluster to a supermassive black hole at the center of a galaxy may explain the origin of quasars and active galactic nuclei. By means of a supercomputer simulation and color graphics, the whole process can be viewed in real time on a movie screen.
NASA Center for Climate Simulation (NCCS) Presentation
NASA Technical Reports Server (NTRS)
Webster, William P.
2012-01-01
The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.
San Diego Supercomputer Center
Research highlights: West Nile and Zika virus; Variants in Non-Coding DNA Contribute to Inherited Autism Risk (gene mutations appearing for the first time contribute to approximately one-third of cases of autism spectrum disorder).
Real World Uses For Nagios APIs
NASA Technical Reports Server (NTRS)
Singh, Janice
2014-01-01
This presentation describes the Nagios 4 APIs and how the NASA Advanced Supercomputing (NAS) Division at Ames Research Center is employing them to upgrade its graphical status display (the HUD), and explains why it's worth trying to use them yourselves.
AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2
2011-01-01
area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ...Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at...atomic and molecular level, he said. He noted that “every general would like to have” a Star Trek -like holodeck, where holographic avatars could
Monitoring Object Library Usage and Changes
NASA Technical Reports Server (NTRS)
Owen, R. K.; Craw, James M. (Technical Monitor)
1995-01-01
The NASA Ames Numerical Aerodynamic Simulation program Aeronautics Consolidated Supercomputing Facility (NAS/ACSF) supercomputing center services over 1600 users, and has numerous analysts with root access. Several tools have been developed to monitor object library usage and changes. Some of the tools do "noninvasive" monitoring and other tools implement run-time logging even for object-only libraries. The run-time logging identifies who, when, and what is being used. The benefits are that real usage can be measured, unused libraries can be discontinued, training and optimization efforts can be focused at those numerical methods that are actually used. An overview of the tools will be given and the results will be discussed.
Particle simulation on heterogeneous distributed supercomputers
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.; Dagum, Leonardo
1993-01-01
We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.
The transition of a real-time single-rotor helicopter simulation program to a supercomputer
NASA Technical Reports Server (NTRS)
Martinez, Debbie
1995-01-01
This report presents the conversion effort and results of a real-time flight simulation application transition to a CONVEX supercomputer. Enclosed is a detailed description of the conversion process and a brief description of the Langley Research Center's (LaRC) flight simulation application program structure. Currently, this simulation program may be configured to represent a Sikorsky S-61 helicopter (a five-blade, single-rotor, commercial passenger-type helicopter) or an Army Cobra helicopter (either the AH-1G or AH-1S model). This report refers to the Sikorsky S-61 simulation program since it is the most frequently used configuration.
Computational Nanotechnology at NASA Ames Research Center, 1996
NASA Technical Reports Server (NTRS)
Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) Simulate a hypothetical programmable molecular machine replicating itself and building other products. (2) Develop molecular manufacturing CAD (computer aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components. (3) Characterize nanotechnologically accessible materials of aerospace interest. Such materials may have excellent strength and thermal properties. (4) Collaborate with experimentalists. Current in-house activities include: (1) Development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes. Early work focuses on gears. (2) A design for high density atomically precise memory. (3) Design of nanotechnology systems based on biology. (4) Characterization of diamondoid mechanosynthetic pathways. (5) Studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity. (6) Studies of entropic effects during self-assembly. Characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) supercomputer division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996 held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at Caltech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.
COOP 3D ARPA Experiment 109 National Center for Atmospheric Research
NASA Technical Reports Server (NTRS)
1998-01-01
Coupled atmospheric and hydrodynamic forecast models were executed on the supercomputing resources of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, and the Ohio Supercomputer Center (OSC) in Columbus, Ohio, respectively. The interoperation of the forecast models on these geographically diverse, high performance Cray platforms required the transfer of large three dimensional data sets at very high information rates. High capacity, terrestrial fiber optic transmission system technologies were integrated with those of an experimental high speed communications satellite in Geosynchronous Earth Orbit (GEO) to test the integration of the two systems. Operation over a spacecraft in GEO orbit required modification of the standard configuration of legacy data communications protocols to facilitate their ability to perform efficiently in the changing environment characteristic of a hybrid network. The success of this performance tuning enabled the use of such an architecture to facilitate high data rate, fiber optic quality data communications between high performance systems not accessible to standard terrestrial fiber transmission systems, thus obviating the performance degradation often found in contemporary earth/satellite hybrids.
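The protocol tuning described above is largely a bandwidth-delay-product problem: a geosynchronous hop adds roughly half a second of round-trip delay, so a TCP sender must keep far more data in flight than terrestrial defaults allow. The sketch below illustrates the calculation; the link rate is an assumed example, not a figure from the experiment.

```python
# Bandwidth-delay product for a GEO satellite hop: the TCP window must cover at
# least this many bytes in flight to keep the link full.  The 155 Mbps link rate
# is an assumed OC-3-class example; ~550 ms is a typical GEO round-trip time.
link_rate_mbps = 155.0
rtt_seconds = 0.55

bdp_bytes = (link_rate_mbps * 1e6 / 8) * rtt_seconds
print(f"bandwidth-delay product ~ {bdp_bytes / 1e6:.1f} MB in flight "
      f"(versus a classic 64 KB default TCP window)")
```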
Integration of the Chinese HPC Grid in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Filipčič, A.;
2017-10-01
Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of the most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through an interface based on a RESTful API called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.
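As a purely illustrative sketch of the kind of bridge described above, the snippet below posts a job description to a RESTful HPC front-end. The endpoint, payload fields, and token handling are hypothetical; they do not describe the actual SCEAPI or ARC-CE interfaces.

```python
# Hypothetical sketch of submitting a job description to a RESTful HPC front-end.
# The URL, payload fields, and token scheme are invented for illustration; they
# are not the real SCEAPI or ARC-CE interfaces.
import requests

def submit_job(base_url, token, job_spec):
    """POST a job description and return the server-assigned job identifier."""
    resp = requests.post(
        f"{base_url}/jobs",                              # hypothetical endpoint
        json=job_spec,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]                         # hypothetical response field

if __name__ == "__main__":
    spec = {"executable": "run_sim.sh",                  # hypothetical payload
            "cores": 24,
            "walltime_minutes": 120}
    print("submitted", submit_job("https://hpc.example.org/api", "TOKEN", spec))
```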
Facilitation of Scientific Concept Learning by Interpretation Procedures and Diagnosis.
1986-08-01
San Diego, CA 92152-6800 Development and Studies Educational Technology Center OP 01 B7 337 Gutman Library Director, Human Factors Washington, DC...Department of Psychology Dr. Stellan Ohlsson Bank Street College of Boulder, CO 80309 Learning R & D Center Education University of Pittsburgh 610 W. 112th...CenterScience, Education , and 1040 Cathcart Way San Diego, CA 92152-6800Transportation Program Stanford. CA 9Q05 Office of Technology Assessment Dr
Medical Research Pays Off for All Americans
... the "March of Dimes." It helped develop two vaccines. The first, by Dr. Jonas Salk at the University of Pittsburgh, in 1955, and the second, in 1962, by Dr. Albert Sabin, at the Cincinnati Children's Hospital Medical Center – have ...
Economics of data center optics
NASA Astrophysics Data System (ADS)
Huff, Lisa
2016-03-01
Traffic to and from data centers is now reaching zettabytes per year. Even the smallest of businesses now rely on data centers for revenue generation. And the largest data centers today are orders of magnitude larger than the supercomputing centers of a few years ago. Until quite recently, for most data center managers, optical data centers were nice to dream about, but not really essential. Today, the all-optical data center, perhaps even an all-single-mode-fiber (SMF) data center, is something that even managers of medium-sized data centers should be considering. Economical transceivers are the key to increased adoption of data center optics. An analysis of current and near future data center optics economics will be discussed in this paper.
The NSFNET: Beginnings of a National Research Internet.
ERIC Educational Resources Information Center
Catlett, Charles E.
1989-01-01
Describes the development, current status, and possible future of NSFNET, which is a backbone network designed to connect five national supercomputer centers established by the National Science Foundation. The discussion covers the implications of this network for research and national networking needs. (CLB)
8. VIEW FROM NORTHWEST OF CONDENSATE STORAGE TANK (LEFT), PRIMARY ...
8. VIEW FROM NORTHWEST OF CONDENSATE STORAGE TANK (LEFT), PRIMARY WATER STORAGE TANK (CENTER), CANAL WATER STORAGE TANK (RIGHT) (LOCATIONS E,F,D) - Shippingport Atomic Power Station, On Ohio River, 25 miles Northwest of Pittsburgh, Shippingport, Beaver County, PA
High Efficiency Photonic Switch for Data Centers
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaComb, Lloyd J.; Bablumyan, Arkady; Ordyan, Armen
2016-12-06
The worldwide demand for instant access to information is driving internet growth rates above 50% annually. This rapid growth is straining the resources and architectures of existing data centers, metro networks and high performance computer centers. If the current business-as-usual model continues, data centers alone will require 400 TWh of electricity by 2020. In order to meet the challenges of faster and more cost-effective data centers, metro networks and supercomputing facilities, we have demonstrated a new type of optical switch that will support transmission speeds up to 1 Tb/s and requires significantly less energy per bit than
Saving Water at Los Alamos National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Andy
Los Alamos National Laboratory decreased its water usage by 26 percent in 2014, with about one-third of the reduction attributable to using reclaimed water to cool a supercomputing center. The Laboratory's goal during 2014 was to use only re-purposed water to support the mission at the Strategic Computing Complex. Using reclaimed water from the Sanitary Effluent Reclamation Facility, or SERF, substantially decreased water usage and supported the overall mission. SERF collects industrial wastewater and treats it for reuse. The reclamation facility contributed more than 27 million gallons of re-purposed water to the Laboratory's computing center, a secured supercomputing facility that supports the Laboratory's national security mission and is one of the institution's larger water users. In addition to the strategic water reuse program at SERF, the Laboratory reduced water use in 2014 by focusing conservation efforts on areas that use the most water, upgrading to water-conserving fixtures, and repairing leaks identified in a biennial survey.
Saving Water at Los Alamos National Laboratory
Erickson, Andy
2018-01-16
Los Alamos National Laboratory decreased its water usage by 26 percent in 2014, with about one-third of the reduction attributable to using reclaimed water to cool a supercomputing center. The Laboratory's goal during 2014 was to use only re-purposed water to support the mission at the Strategic Computing Complex. Using reclaimed water from the Sanitary Effluent Reclamation Facility, or SERF, substantially decreased water usage and supported the overall mission. SERF collects industrial wastewater and treats it for reuse. The reclamation facility contributed more than 27 million gallons of re-purposed water to the Laboratory's computing center, a secured supercomputing facility that supports the Laboratory's national security mission and is one of the institution's larger water users. In addition to the strategic water reuse program at SERF, the Laboratory reduced water use in 2014 by focusing conservation efforts on areas that use the most water, upgrading to water-conserving fixtures, and repairing leaks identified in a biennial survey.
Adventures in supercomputing: An innovative program for high school teachers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, C.E.; Hicks, H.R.; Summers, B.G.
1994-12-31
Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode." Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).
Research and development targeted at identifying and mitigating Internet security threats require current network data. To fulfill this need... researchers working for the Center for Applied Internet Data Analysis (CAIDA), a program at the San Diego Supercomputer Center (SDSC) which is based at the...vetted network and security researchers using the PREDICT/IMPACT portal and legal framework. We have also contributed to community building efforts that
Picture Wall (Glass Structures)
NASA Technical Reports Server (NTRS)
1978-01-01
Photo shows a subway station in Toronto, Ontario, which is entirely glass-enclosed. The all-glass structure was made possible by a unique glazing concept developed by PPG Industries, Pittsburgh, Pennsylvania, one of the largest U.S. manufacturers of flat glass. In the TVS glazing system, transparent glass "fins" replace conventional vertical support members used to provide support for wind load resistance. For stiffening, silicone sealant bonds the fins to adjacent glass panels. At its glass research center near Pittsburgh, PPG Industries uses the NASTRAN computer program to analyze the stability of enclosures made entirely of glass. The company also uses NASTRAN to simulate stresses on large containers of molten glass and to analyze stress effects of solar heating on flat glass.
Sign: large-scale gene network estimation environment for high performance computing.
Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .
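One of the five model classes listed above, the graphical Gaussian model, can be estimated at small scale with off-the-shelf tools. The sketch below uses scikit-learn's graphical lasso on synthetic expression data; it is not SiGN itself, which targets networks far too large for this approach.

```python
# Small-scale sketch of one model family mentioned above: a graphical Gaussian
# model estimated with the graphical lasso.  Synthetic data stands in for gene
# expression; SiGN itself is built for networks far beyond this scale.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 30
X = rng.normal(size=(n_samples, n_genes))   # synthetic expression matrix
X[:, 1] += 0.8 * X[:, 0]                    # inject one dependency so an edge appears

model = GraphicalLasso(alpha=0.2).fit(X)
precision = model.precision_                # sparse inverse covariance matrix

# Non-zero off-diagonal entries of the precision matrix are candidate network edges.
edges = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
         if abs(precision[i, j]) > 1e-6]
print(f"{len(edges)} candidate edges among {n_genes} genes")
```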
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.
The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.
Performance Assessment Institute-NV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lombardo, Joesph
2012-12-31
The National Supercomputing Center for Energy and the Environment’s intention is to purchase a multi-purpose computer cluster in support of the Performance Assessment Institute (PA Institute). The PA Institute will serve as a research consortium located in Las Vegas, Nevada, with membership that includes: national laboratories, universities, industry partners, and domestic and international governments. This center will provide a one-of-a-kind centralized facility for the accumulation of information for use by Institutions of Higher Learning, the U.S. Government, and Regulatory Agencies and approved users. This initiative will enhance and extend High Performance Computing (HPC) resources in Nevada to support critical national and international needs in "scientific confirmation". The PA Institute will be promoted as the leading Modeling, Learning and Research Center worldwide. The program proposes to utilize the existing supercomputing capabilities and alliances of the University of Nevada Las Vegas as a base, and to extend these resources and capabilities through a collaborative relationship with its membership. The PA Institute will provide an academic setting for interactive sharing, learning, mentoring and monitoring of multi-disciplinary performance assessment and performance confirmation information. The role of the PA Institute is to facilitate research, knowledge-increase, and knowledge-sharing among users.
Argonne wins four R&D 100 Awards | Argonne National Laboratory
Winners include a High-Energy Concentration-Gradient Cathode Material for Plug-in Hybrids and All-Electric Vehicles and Globus, used by scientific facilities (such as supercomputing centers and high energy physics experiments) and cloud storage, reflecting the laboratory's work "converting discovery science into innovative, high-impact products, processes and systems."
Ferraro, Jeffrey P; Ye, Ye; Gesteland, Per H; Haug, Peter J; Tsui, Fuchiang Rich; Cooper, Gregory F; Van Bree, Rudy; Ginter, Thomas; Nowalk, Andrew J; Wagner, Michael
2017-05-31
This study evaluates the accuracy and portability of a natural language processing (NLP) tool for extracting clinical findings of influenza from clinical notes across two large healthcare systems. Effectiveness is evaluated on how well NLP supports downstream influenza case-detection for disease surveillance. We independently developed two NLP parsers, one at Intermountain Healthcare (IH) in Utah and the other at University of Pittsburgh Medical Center (UPMC), using local clinical notes from emergency department (ED) encounters of influenza. We measured NLP parser performance for the presence and absence of 70 clinical findings indicative of influenza. We then developed Bayesian network models from NLP-processed reports and tested their ability to discriminate among cases of (1) influenza, (2) non-influenza influenza-like illness (NI-ILI), and (3) 'other' diagnosis. On Intermountain Healthcare reports, recall and precision of the IH NLP parser were 0.71 and 0.75, respectively, and of the UPMC NLP parser, 0.67 and 0.79. On University of Pittsburgh Medical Center reports, recall and precision of the UPMC NLP parser were 0.73 and 0.80, respectively, and of the IH NLP parser, 0.53 and 0.80. Bayesian case-detection performance, measured by AUROC for influenza versus non-influenza on Intermountain Healthcare cases, was 0.93 (using the IH NLP parser) and 0.93 (using the UPMC NLP parser). Case-detection on University of Pittsburgh Medical Center cases was 0.95 (using the UPMC NLP parser) and 0.83 (using the IH NLP parser). For influenza versus NI-ILI on Intermountain Healthcare cases, performance was 0.70 (using the IH NLP parser) and 0.76 (using the UPMC NLP parser); on University of Pittsburgh Medical Center cases, 0.76 (using the UPMC NLP parser) and 0.65 (using the IH NLP parser). In all but one instance (influenza versus NI-ILI using IH cases), local parsers were more effective at supporting case-detection, although the performance of non-local parsers was reasonable.
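As a hedged illustration of the evaluation metrics quoted in this abstract (recall, precision, and AUROC for binary case-detection), the following Python sketch computes them with scikit-learn. The labels, parser outputs, and posterior scores are invented toy values, not data from the study.

```python
# Toy illustration of recall, precision, and AUROC for influenza case-detection.
# All values below are made up; they are not taken from the IH/UPMC study.
from sklearn.metrics import precision_score, recall_score, roc_auc_score

y_true  = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]   # reference-standard labels (1 = influenza)
y_pred  = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # binary output of a hypothetical NLP parser
y_score = [0.9, 0.4, 0.2, 0.8, 0.3, 0.6, 0.7, 0.1, 0.95, 0.25]  # Bayesian P(influenza)

print("recall    =", recall_score(y_true, y_pred))     # fraction of true cases found
print("precision =", precision_score(y_true, y_pred))  # fraction of flagged cases that are true
print("AUROC     =", roc_auc_score(y_true, y_score))   # discrimination of the case-detection model
```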
1987-09-01
[Report front-matter fragments; recoverable information: Stellan Ohlsson, Learning Research and Development Center, University of Pittsburgh; topics include intensive cognitive diagnosis and its implications for testing, and trace analysis of knowledge and understanding in human learning.]
Pronominalization: A Device for Unifying Sentences in Memory
ERIC Educational Resources Information Center
Lesgold, Alan M.
1972-01-01
Based on the author's doctoral dissertation. Experiments supported by a National Science Foundation Predoctoral Fellowship and a National Institute of Mental Health grant; paper preparation supported by the Learning Research and Development Center, University of Pittsburgh, through the U.S. Office of Education. (VM)
9. Photocopy of original construction drawing, dated February 1932 (original ...
9. Photocopy of original construction drawing, dated February 1932 (original print in possession of Veterans Administration, Oakland Branch, Pittsburgh, Pennsylvania). DRAWING 103-26 -- PLOT PLAN -- SHOWING NEW CONSTRUCTION - VA Medical Center, Aspinwall Division, 5103 Delafield Avenue (O'Hara Township), Aspinwall, Allegheny County, PA
10 CFR 600.10 - Form and content of applications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Energy Technology Center, Attn: Unsolicited Proposal Manager, Post Office Box 10940, Pittsburgh, PA...: (1) A facesheet containing basic identifying information. The facesheet shall be the Standard Form... electronically, by an official authorized to bind the applicant; or (2) Omits any information or documentation...
RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade
2015-09-30
[Text fragments; recoverable information: collaborators include colleagues at NOAA, Robin Hogan (ECMWF), a number of colleagues at the Max-Planck Institute, and Will Sawyer and Marcus Wetzstein (Swiss Supercomputer Center); RRTMGP_LW_v0 has been completed and distributed to selected colleagues at modeling centers, including NOAA, NCAR, and CSCS.]
Li, Weizhong
2018-02-12
San Diego Supercomputer Center's Weizhong Li on "Effective Analysis of NGS Metagenomic Data with Ultra-fast Clustering Algorithms" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.
NSF Director Bloch Stresses Effectiveness and Efficiency.
ERIC Educational Resources Information Center
Lepkowski, Wil
1985-01-01
The text of an interview with Erich Bloch, National Science Foundation (NSF) director, is provided. Among the topics/issues explored are NSF's role in policy research, mission and goals of NSF, establishment of NSF Engineering Research Centers, and national security issues involving access to supercomputers in universities that NSF is funding. (JN)
A One-of-a-Kind Technology Expansion.
ERIC Educational Resources Information Center
Wiens, Janet
2002-01-01
Describes the design of the expansion of the National Center for Supercomputing Applications (NCSA) Advanced Computation Building at the University of Illinois, Champaign. Discusses how the design incorporated column-free space for flexibility, cooling capacity, a freight elevator, and a 6-foot raised access floor to neatly house airflow, wiring,…
When Rural Reality Goes Virtual.
ERIC Educational Resources Information Center
Husain, Dilshad D.
1998-01-01
In rural towns where sparse population and few businesses are barriers, virtual reality may be the only way to bring work-based learning to students. A partnership between a small-town high school, the Ohio Supercomputer Center, and a high-tech business will enable students to explore the workplace using virtual reality. (JOW)
The NAS kernel benchmark program
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barton, J. T.
1985-01-01
A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.
Refining and end use study of coal liquids : quarterly report, July-September 1996.
DOT National Transportation Integrated Search
1996-01-01
Bechtel, with Southwest Research Institute, Amoco Oil R&D, and the M.W. Kellog Co. as subcontractors, initiated a study on November 1, 1993 for the US Department of Energy's (DOE's) Pittsburgh Energy Technology Center (PETC) to determine the most cos...
Refining and end use study of coal liquids : quarterly report, April-June 1997.
DOT National Transportation Integrated Search
1997-01-01
Bechtel, with Southwest Research Institute, Amoco Oil R&D, and the M.W. Kellog Co. as subcontractors, initiated a study on November 1, 1993 for the US Department of Energy's (DOE's) Pittsburgh Energy Technology Center (PETC) to determine the most cos...
Refining and end use study of coal liquids : quarterly report, October-December 1996.
DOT National Transportation Integrated Search
1996-01-01
Bechtel, with Southwest Research Institute, Amoco Oil R&D, and the M.W. Kellog Co. as subcontractors, initiated a study on November 1, 1993 for the US Department of Energy's (DOE's) Pittsburgh Energy Technology Center (PETC) to determine the most cos...
Refining and end use study of coal liquids : quarterly report, January-March 1997.
DOT National Transportation Integrated Search
1997-01-01
Bechtel, with Southwest Research Institute, Amoco Oil R&D, and the M.W. Kellog Co. as subcontractors, initiated a study on November 1, 1993 for the US Department of Energy's (DOE's) Pittsburgh Energy Technology Center (PETC) to determine the most cos...
77 FR 27863 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
... Therapy (CWT), HUD/VA Prevention pilot, and Supportive Services for Veterans and Families (SSVF). This... Veterans and their immediate family members, members of the armed services, current and former employees... Location Addiction Severity Index Veteran Affairs Medical Center, 7180 Highland Drive, Pittsburgh, PA 15206...
Refining and end use study of coal liquids : quarterly report, July-September 1997.
DOT National Transportation Integrated Search
1997-01-01
Bechtel, with Southwest Research Institute, Amoco Oil R&D, and the M.W. Kellog Co. as subcontractors, initiated a study on November 1, 1993 for the US Department of Energy's (DOE's) Pittsburgh Energy Technology Center (PETC) to determine the most cos...
30 CFR 90.209 - Respirable dust samples; transmission by operator.
Code of Federal Regulations, 2014 CFR
2014-07-01
... operator. 90.209 Section 90.209 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-COAL MINERS WHO HAVE EVIDENCE OF THE... cassette to: Respirable Dust Processing Laboratory, Pittsburgh Safety and Health Technology Center, Cochran...
30 CFR 70.209 - Respirable dust samples; transmission by operator.
Code of Federal Regulations, 2014 CFR
2014-07-01
... operator. 70.209 Section 70.209 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-UNDERGROUND COAL MINES Sampling Procedures... Laboratory, Pittsburgh Safety and Health Technology Center, Cochran Mill Road, Building 38, P.O. Box 18179...
30 CFR 70.209 - Respirable dust samples; transmission by operator.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operator. 70.209 Section 70.209 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-UNDERGROUND COAL MINES Sampling Procedures... Laboratory, Pittsburgh Safety and Health Technology Center, Cochran Mill Road, Building 38, P.O. Box 18179...
30 CFR 90.209 - Respirable dust samples; transmission by operator.
Code of Federal Regulations, 2012 CFR
2012-07-01
... operator. 90.209 Section 90.209 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-COAL MINERS WHO HAVE EVIDENCE OF THE... cassette to: Respirable Dust Processing Laboratory, Pittsburgh Safety and Health Technology Center, Cochran...
30 CFR 70.209 - Respirable dust samples; transmission by operator.
Code of Federal Regulations, 2012 CFR
2012-07-01
... operator. 70.209 Section 70.209 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-UNDERGROUND COAL MINES Sampling Procedures... Laboratory, Pittsburgh Safety and Health Technology Center, Cochran Mill Road, Building 38, P.O. Box 18179...
30 CFR 90.209 - Respirable dust samples; transmission by operator.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operator. 90.209 Section 90.209 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-COAL MINERS WHO HAVE EVIDENCE OF THE... cassette to: Respirable Dust Processing Laboratory, Pittsburgh Safety and Health Technology Center, Cochran...
A History of High-Performance Computing
NASA Technical Reports Server (NTRS)
2006-01-01
Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.
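A quick back-of-the-envelope check of the speedup quoted above, as a minimal Python sketch; the flop rates are taken from the abstract and the comparison simply assumes the stated units (teraflop/s versus gigaflop/s).

```python
# Sanity-check the quoted speedup: 51.9 teraflop/s vs. ~1 gigaflop/s circa 1984.
columbia_flops = 51.9e12   # Columbia benchmark rating, flop/s
old_ames_flops = 1.0e9     # most powerful Ames machine 20 years earlier, flop/s
print(columbia_flops / old_ames_flops)   # ~51,900, consistent with "50,000 times faster"
```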
NASA Astrophysics Data System (ADS)
Zipf, Edward C.; Erdman, Peeter W.
1994-08-01
The University of Pittsburgh Space Physics Group, in collaboration with the Army Research Office (ARO) modeling team, has completed a systematic organization of the shock and plume spectral data and the electron temperature and density measurements obtained during the Bow Shock I and II rocket flights, which have been submitted to the AEDC Data Center; has verified the presence of CO Cameron band emission during the Antares engine burn and for an extended period of time in the post-burn plume; and has adapted 3-D radiation entrapment codes, developed by the University of Pittsburgh to study aurora and other atmospheric phenomena that involve significant spatial effects, to investigate the vacuum ultraviolet (VUV) and extreme ultraviolet (EUV) envelope surrounding the re-entry, which creates an extensive plasma cloud by photoionization.
Magnetic Ultrathin Films: Multilayers and Surfaces, Interfaces and Characterization
1993-04-01
[Front-matter and text fragments; recoverable information: published by the Materials Research Society, McKnight Road, Pittsburgh, Pennsylvania 15237, with permissions through the Copyright Clearance Center, Salem, Massachusetts; a text fragment defines the resistivities for parallel and antiparallel magnetization alignments.]
The Professional Educator: Pittsburgh's Winning Partnership
ERIC Educational Resources Information Center
Hamill, Sean D.
2011-01-01
Professional educators--whether in the classroom, library, counseling center, or anywhere in between--share one overarching goal: seeing all students succeed in school and life. In this regular feature, the work of professional educators is explored--not just their accomplishments, but also their challenges--so that the lessons they have learned…
ERIC Educational Resources Information Center
Champagne, Audrey B.
The paper related what has been accomplished during the past six years in the development of IPI Science at the Learning Research and Development Center, University of Pittsburgh. There have been three major accomplishments: (1) Goals for the program have been established, (2) The serious problems that face a group of curriculum developers…
1. EAST AND SOUTH SIDES OF STATION, SHOWING (LEFT BACKGROUND ...
1. EAST AND SOUTH SIDES OF STATION, SHOWING (LEFT BACKGROUND TO CENTER FOREGROUND) SOUTH CANOPY, OPEN CONCOURSE ROOF, AND CONCOURSE ROOF EXTENSION (SMALL BUILDING UNDER CONCOURSE ROOF IS TEMPORARY AMTRAK STATION) - Pennsylvania Railroad Station, Open Concourse & Concourse Roof Extension, 1101 Liberty Avenue, Pittsburgh, Allegheny County, PA
13. Photocopy of original construction drawing, dated November 1932 (original ...
13. Photocopy of original construction drawing, dated November 1932 (original print in possession of Veterans Administration, Oakland Branch, Pittsburgh, Pennsylvania). DRAWING 32-3 -- ADMINISTRATION BUILDING, BUILDING NO. 32 -- FIRST AND SECOND FLOOR PLANS. - VA Medical Center, Aspinwall Division, Administration Building, 5103 Delafield Avenue, Aspinwall, Allegheny County, PA
12. Photocopy of original construction drawing, dated November 1932 (original ...
12. Photocopy of original construction drawing, dated November 1932 (original print in possession of Veterans Administration, Oakland Branch, Pittsburgh, Pennsylvania). DRAWING 32-2 -- ADMINISTRATION BUILDING, BUILDING NO. 32 -- BASEMENT PLAN AND CUPOLA DETAIL. - VA Medical Center, Aspinwall Division, Administration Building, 5103 Delafield Avenue, Aspinwall, Allegheny County, PA
Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center
NASA Astrophysics Data System (ADS)
Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.
2012-12-01
Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM&MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.
BigData and computing challenges in high energy and nuclear physics
NASA Astrophysics Data System (ADS)
Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.
2017-06-01
In this contribution we discuss various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve as the LHC moves to the HL-LHC roughly ten years from now, when the already exascale levels of data being processed could increase by a further order of magnitude. The distributed computing environment has been a great success, and the inclusion of new supercomputing facilities, cloud computing, and volunteer computing is a big challenge for the future, which we are successfully mastering with considerable contributions from many supercomputing centres around the world and from academic and commercial cloud providers. We also discuss R&D computing projects started recently at the National Research Center "Kurchatov Institute".
NASA Astrophysics Data System (ADS)
Xue, Wenhua; Borja, Miguel Gonzalez; Resasco, Daniel E.; Wang, Sanwu
2015-03-01
In the study of catalytic reactions of biomass, furfural conversion over metal catalysts in the presence of water has attracted wide attention. Recent experiments showed that the proportion of alcohol product from catalytic furfural conversion over palladium in the presence of water is significantly increased compared with other solvents, including dioxane, decalin, and ethanol. We investigated the microscopic mechanism of the reactions based on first-principles quantum-mechanical calculations. We particularly identified the important role of water and the liquid/solid interface in furfural conversion. Our results provide atomic-scale details for the catalytic reactions. Supported by DOE (DE-SC0004600). This research used supercomputer resources at NERSC, XSEDE, TACC, and the Tandy Supercomputing Center.
A Look at the Impact of High-End Computing Technologies on NASA Missions
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart
2012-01-01
From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state-of-the-art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulation and analyses critical to design safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission's discovery of new planets now capturing the world's imagination.
A new understanding of inert gas narcosis
NASA Astrophysics Data System (ADS)
Meng, Zhang; Yi, Gao; Haiping, Fang
2016-01-01
Anesthetics are extremely important in modern surgery, greatly reducing the patient's pain. Understanding anesthesia at the molecular level is the preliminary step for applying anesthetics safely and effectively in the clinic. Inert gases, with low chemical activity, have been known to cause anesthesia for centuries, but the mechanism is still unclear. In this review, we first summarize the progress of theories about general anesthesia, especially inert gas narcosis, and then propose a new hypothesis that the aggregated, rather than the dispersed, inert gas molecules are the key to triggering the narcosis, explaining the steep dose-response relationship of anesthesia. Project supported by the Supercomputing Center of Chinese Academy of Sciences in Beijing, China, the Shanghai Supercomputer Center, China, the National Natural Science Foundation of China (Grant Nos. 21273268, 11290164, and 11175230), the Startup Funding from Shanghai Institute of Applied Physics, Chinese Academy of Sciences (Grant No. Y290011011), the "Hundred People Project" from the Chinese Academy of Sciences, and the "Pu-jiang Rencai Project" from the Science and Technology Commission of Shanghai Municipality, China (Grant No. 13PJ1410400).
SiGN-SSM: open source parallel software for estimating gene networks with state space models.
Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru
2011-04-15
SiGN-SSM is open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), a statistical dynamic model suitable for analyzing short and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint that is effective for stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical amount of time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public License (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. Pre-compiled binaries for some architectures are available in addition to the source code, and pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information for SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.
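To make the "state space model" concrete, here is a minimal NumPy sketch of a generic linear-Gaussian SSM update (one Kalman filter predict/update step). This is not SiGN-SSM's algorithm or API; the matrices, dimensions, and data are purely illustrative assumptions.

```python
# Generic linear-Gaussian state space model: one Kalman filter predict/update step.
# Illustrative only -- not SiGN-SSM code; all matrices and data are invented.
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One cycle for the model x_t = F x_{t-1} + w_t,  y_t = H x_t + v_t."""
    # Predict the hidden state forward one step
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the observed expression vector y
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy setup: 2 hidden factors driving 3 observed genes
rng = np.random.default_rng(0)
F = np.array([[0.9, 0.1], [0.0, 0.8]])    # state transition
H = rng.normal(size=(3, 2))               # observation (loading) matrix
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(3)  # system and observation noise
x, P = np.zeros(2), np.eye(2)             # initial state estimate
y = np.array([0.5, -0.2, 0.1])            # one time point of expression data
x, P = kalman_step(x, P, y, F, H, Q, R)
print("filtered state:", x)
```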
Genetic and Environmental Pathways in Type 1 Diabetes Complications
2010-09-26
In the event that the project identifies a set of strongly predictive biomarkers, an appropriate next step would be to approach TrialNet (see http://www.diabetestrialnet.org). TrialNet is a multicenter study with the goal of identifying subjects for T1D prevention and intervention trials. Children's Hospital of Pittsburgh (CHP) is already acting as a clinical center for the TrialNet natural history study (Mahon et al.).
The Eighth Data Release Of The Sloan Digital Sky Survey: First Data From SDSS-3
2011-04-01
[Author affiliation fragments; recoverable affiliations include Sunspot, NM; the Center for Cosmology and Particle Physics, New York University; the Institute of Cosmology and Gravitation (ICG), University of Portsmouth; a cosmology center at Carnegie Mellon University, Pittsburgh, PA; and the Yale Center for Astronomy and Astrophysics, Yale University, New Haven, CT.]
2017-12-08
A NASA Center for Climate Simulation supercomputer model that shows the flow of #Blizzard2016 thru Sunday. Learn more here: go.nasa.gov/1WBm547
Harrison Ford Tapes Climate Change Show at Ames (Reporter Package)
2014-04-11
Hollywood legend Harrison Ford made a special visit to NASA's Ames Research Center to shoot an episode for a new documentary series about climate change called 'Years of Living Dangerously.' After being greeted by Center Director Pete Worden, Ford was filmed meeting with NASA climate scientists and discussed global temperature prediction data processed using one of the world's fastest supercomputers at Ames. Later he flew in the co-pilot seat in a jet used to gather data for NASA air quality studies.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-11
... announced below concerns Occupational Safety and Health Education and Research Centers (ERC) PAR 10-217... evaluation of applications received in response to ``Occupational Safety and Health Education and Research... Officer, CDC/NIOSH, 626 Cochrans Mill Road, Mailstop P-05, Pittsburgh, Pennsylvania 15236, Telephone: (412...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-27
... custodians. I also determine that the exhibition or display of the exhibit objects at the Missouri History... Senator John Heinz History Center, Pittsburgh, Pennsylvania, from on or about October 2, 2010, until on or about January 9, 2011, the Museum of Art, Nova Southeastern University, Fort Lauderdale, Florida, from...
2016-10-27
[Report front-matter fragments; recoverable information: Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213; copyright 2016 Carnegie Mellon University; Distribution Statement A, approved for public release and unlimited distribution; the Software Engineering Institute is a federally funded research and development center operated by Carnegie Mellon University.]
Toward the Thinking Curriculum: Current Cognitive Research. 1989 ASCD Yearbook.
ERIC Educational Resources Information Center
Resnick, Lauren B., Ed.; Klopfer, Leopold E., Ed.
A project of the Center for the Study of Learning at the University of Pittsburgh, this yearbook combines the two major trends/concerns impacting the future of educational development for the next decade: knowledge and thinking. The yearbook comprises the following chapters: (1) "Toward the Thinking Curriculum: An Overview" (Lauren B.…
Light field otoscope design for 3D in vivo imaging of the middle ear
Bedard, Noah; Shope, Timothy; Hoberman, Alejandro; Haralam, Mary Ann; Shaikh, Nader; Kovačević, Jelena; Balram, Nikhil; Tošić, Ivana
2016-01-01
We present a light field digital otoscope designed to measure three-dimensional shape of the tympanic membrane. This paper describes the optical and anatomical considerations we used to develop the prototype, along with the simulation and experimental measurements of vignetting, field curvature, and lateral resolution. Using an experimental evaluation procedure, we have determined depth accuracy and depth precision of our system to be 0.05–0.07 mm and 0.21–0.44 mm, respectively. To demonstrate the application of our light field otoscope, we present the first three-dimensional reconstructions of tympanic membranes in normal and otitis media conditions, acquired from children who participated in a feasibility study at the Children’s Hospital of Pittsburgh of the University of Pittsburgh Medical Center. PMID:28101416
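As a rough, purely illustrative sketch of how depth accuracy (bias against a known target depth) and depth precision (repeatability) such as the figures above might be summarized, the following Python snippet assumes repeated measurements of a target at a known depth; the numbers and procedure are invented, not the paper's evaluation protocol.

```python
# Illustrative summary of depth accuracy (bias) and precision (repeatability).
# All values are invented; this is not the otoscope study's actual data or method.
import numpy as np

true_depth = 5.00                                        # mm, known target depth
measured   = np.array([5.03, 4.96, 5.08, 5.01, 4.94])    # mm, repeated depth estimates

accuracy  = abs(measured.mean() - true_depth)            # systematic error (bias)
precision = measured.std(ddof=1)                         # spread of repeated measurements
print(f"accuracy  ~ {accuracy:.3f} mm")
print(f"precision ~ {precision:.3f} mm")
```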
Chapter 11: City-Wide Collaborations for Urban Climate Education
NASA Technical Reports Server (NTRS)
Snyder, Steven; Hoffstadt, Rita Mukherjee; Allen, Lauren B.; Crowley, Kevin; Bader, Daniel A.; Horton, Radley M.
2014-01-01
Although cities cover only 2 percent of the Earth's surface, more than 50 percent of the world's people live in urban environments, collectively consuming 75 percent of the Earth's resources. Because of their population densities, reliance on infrastructure, and role as centers of industry, cities will be greatly impacted by, and will play a large role in, the reduction or exacerbation of climate change. However, although urban dwellers are becoming more aware of the need to reduce their carbon usage and to implement adaptation strategies, education efforts on these strategies have not been comprehensive. To meet the needs of an informed and engaged urban population, a more systemic, multiplatform and coordinated approach is necessary. The Climate and Urban Systems Partnership (CUSP) is designed to explore and address this challenge. Spanning four cities-Philadelphia, New York, Pittsburgh, and Washington, DC-the project is a partnership between the Franklin Institute, the Columbia University Center for Climate Systems Research, the University of Pittsburgh Learning Research and Development Center, Carnegie Museum of Natural History, New York Hall of Science, and the Marian Koshland Science Museum of the National Academy of Sciences. The partnership is developing a comprehensive, interdisciplinary network to educate urban residents about climate science and the urban impacts of climate change.
Supercomputer applications in molecular modeling.
Gund, T M
1988-01-01
An overview of the functions performed by molecular modeling is given. Molecular modeling techniques that benefit from supercomputing are described, namely conformational search, deriving bioactive conformations, pharmacophoric pattern searching, receptor mapping, and electrostatic properties. The use of supercomputers for problems that are computationally intensive, such as protein structure prediction, protein dynamics and reactivity, protein conformations, and energetics of binding, is also examined. The current status of supercomputing and supercomputer resources are discussed.
ERIC Educational Resources Information Center
Girill, T. R.; And Others
1991-01-01
Describes enhancements made to a hypertext information retrieval system at the National Energy Research Supercomputer Center (NERSC) called DFT (Document, Find, and Theseus). The enrichment of DFT's entry vocabulary is described, DFT and other hypertext systems are compared, and problems that occur due to the need for frequent updates are…
None
2018-01-16
The Red Sky/Red Mesa supercomputing platform dramatically reduces the time required to simulate complex fuel models, from 4-6 months to just 4 weeks, allowing researchers to accelerate the pace at which they can address these complex problems. Its speed also reduces the need for laboratory and field testing, allowing for energy reduction far beyond data center walls.
The Central Asian Journal of Global Health to Increase Scientific Productivity.
Freese, Kyle; Shubnikov, Eugene; LaPorte, Ron; Adambekov, Shalkar; Askarova, Sholpan; Zhumadilov, Zhaxybay; Linkov, Faina
2013-01-01
The WHO Collaborating Center at the University of Pittsburgh, USA, partnering with Nazarbayev University, developed the Central Asian Journal of Global Health (CAJGH, cajgh.pitt.edu) in order to increase scientific productivity in Kazakhstan and Central Asia. Scientists in this region often have difficulty publishing in upper-tier English-language scientific journals due to language barriers, high publication fees, and a lack of access to mentoring services. CAJGH seeks to help scientists overcome these challenges by providing peer-reviewed publication free of charge, with English and research mentoring services available to selected authors. CAJGH began as a way to expand the Supercourse scientific network (www.pitt.edu/~super1) in the Central Asian region in order to rapidly disseminate educational materials. The network began with approximately 60 individuals in five Central Asian countries and has grown to over 1,300 in a few short years. The CAJGH website receives nearly 900 visits per month. The University of Pittsburgh's "open access publishing system" was utilized to create CAJGH in 2012. There are two branches of the CAJGH editorial board: Astana (at the Center for Life Sciences, Nazarbayev University) and Pittsburgh (WHO Collaborating Center). Both are comprised of leading scientists and expert staff who work together throughout the review and publication process. Two complete issues have been published since 2012 and a third is now underway. Even though CAJGH is a new journal, the editorial board uses a rigorous review process; fewer than 50% of all submitted articles are forwarded to peer review or accepted for publication. Furthermore, in 2014, CAJGH will apply to be cross-referenced in PubMed and Scopus. CAJGH is one of the first English-language journals in the Central Asian region that reaches a large number of scientists. This journal fills a unique niche that will assist scientists in Kazakhstan and Central Asia in publishing their research findings and sharing their knowledge with others around the region and the world.
Guidon-Watch: A Graphic Interface for Viewing a Knowledge-Based System.
1985-08-01
Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that a high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.
Role of High-End Computing in Meeting NASA's Science and Engineering Challenges
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Tu, Eugene L.; Van Dalsem, William R.
2006-01-01
Two years ago, NASA was on the verge of dramatically increasing its HEC capability and capacity. With the 10,240-processor supercomputer, Columbia, now in production for 18 months, HEC has an even greater impact within the Agency and extends to partner institutions. Advanced science and engineering simulations in space exploration, shuttle operations, Earth sciences, and fundamental aeronautics research are occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. This talk describes how the integrated production environment fostered at the NASA Advanced Supercomputing (NAS) facility at Ames Research Center is accelerating scientific discovery, achieving parametric analyses of multiple scenarios, and enhancing safety for NASA missions. We focus on Columbia's impact on two key engineering and science disciplines: aerospace and climate. We also discuss future mission challenges and plans for NASA's next-generation HEC environment.
The role of graphics super-workstations in a supercomputing environment
NASA Technical Reports Server (NTRS)
Levin, E.
1989-01-01
A new class of very powerful workstations has recently become available which integrates near-supercomputer computational performance with very powerful, high-quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include off-loading the supercomputer (by serving as stand-alone processors, by post-processing the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding and communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).
48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...
48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...
48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...
48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...
48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...
Data-intensive computing on numerically-insensitive supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, James P; Fasel, Patricia K; Habib, Salman
2010-12-03
With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.
Automated Laser Depainting of Aircraft Survey of Enabling Technologies
1991-01-01
The Beliefs and Behaviors of Pupils in an Experimental School: The Science Lab.
ERIC Educational Resources Information Center
Lancy, David F.
This booklet, the second in a series, reports on the results of a year-long research project conducted in an experimental school associated with the Learning Research and Development Center, University of Pittsburgh. Specifically, this is a report of findings pertaining to one major setting in the experimental school, the science lab. The science…
Computer Electromagnetics and Supercomputer Architecture
NASA Technical Reports Server (NTRS)
Cwik, Tom
1993-01-01
The dramatic increase in performance over the last decade for microprocessor computations is compared with that for supercomputer computations. This performance, the projected performance, and a number of other issues such as cost and the inherent physical limitations in current supercomputer technology have naturally led to parallel supercomputers and ensembles of interconnected microprocessors.
75 FR 9867 - University of Pittsburgh, et al
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-04
... DEPARTMENT OF COMMERCE International Trade Administration University of Pittsburgh, et al.; Notice of Consolidated Decision on Applications for Duty-Free Entry of Electron Microscopes This is a...: University of Pittsburgh, Pittsburgh, PA 15260. Instrument: Electron Microscope. Manufacturer: JEOL, Ltd...
NASA Astrophysics Data System (ADS)
Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.
2015-12-01
The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run 2). The need for simulation, data processing, and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at Kurchatov Institute (NRC-KI) in Moscow is a part of WLCG and will process, simulate, and store up to 10% of the total data obtained from the ALICE, ATLAS, and LHCb experiments. In addition, Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. The delegation of even a fraction of these supercomputing resources to LHC computing will notably increase the total capacity. In 2014, development of a portal combining the Tier-1 and a supercomputer at Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences such as biology, with genome sequencing analysis, and astrophysics, with cosmic ray analysis and antimatter and dark matter searches.
Use of high performance networks and supercomputers for real-time flight simulation
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1993-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.
Comparisons and Challenges of Modern Neutrino Scattering Experiments (TENSIONS2016 Report)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Betancourt, M.; et al.
Over the last decade, there has been enormous effort to measure neutrino interaction cross sections important to oscillation experiments. However, a number of results from modern experiments appear to be in tension with each other, despite purporting to measure the same processes. The TENSIONS2016 workshop was held at the University of Pittsburgh July 24-31, 2016 and was sponsored by the Pittsburgh High Energy Physics, Astronomy, and Cosmology Center (PITT-PACC). The focus was on bringing experimentalists from three experiments together to compare results in detail and try to find the source of tension by clarifying and comparing signal definitions and the analysis strategies used for each measurement. A set of comparisons between the measurements using a consistent set of models was also made. This paper summarizes the main conclusions of that work.
PERSPECTIVE VIEW FROM NORTHWEST OF PITTSBURGH HIGH SCHOOL FOR THE ...
PERSPECTIVE VIEW FROM NORTHWEST OF PITTSBURGH HIGH SCHOOL FOR THE CREATIVE AND PERFORMING ARTS, BUILT 2003 BY THE FIRM OF MACLACHLAN CORNELIUS AND FILONI. - Pittsburgh High School for the Creative & Performing Arts, 111 Ninth Street, Pittsburgh, Allegheny County, PA
NASA Technical Reports Server (NTRS)
VanZandt, John
1994-01-01
The usage model of supercomputers for scientific applications, such as computational fluid dynamics (CFD), has changed over the years. Scientific visualization has moved scientists away from looking at numbers to looking at three-dimensional images, which capture the meaning of the data. This change has impacted the system models for computing. This report details the model which is used by scientists at NASA's research centers.
Tools for 3D scientific visualization in computational aerodynamics
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high-speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high-speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during the simulation. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results of the simulation are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as descriptions of other hardware for digital video and film recording.
Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.
Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S
2011-01-01
Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources-the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be-that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?
Application of technology developed for flight simulation at NASA. Langley Research Center
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1991-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations including mathematical model computation and data input/output to the simulators must be deterministic and be completed in as short a time as possible. Personnel at NASA's Langley Research Center are currently developing the use of supercomputers for simulation mathematical model computation for real-time simulation. This, coupled with the use of an open systems software architecture, will advance the state-of-the-art in real-time flight simulation.
3. Photocopy of original drawing belonging to the Pittsburgh Department ...
3. Photocopy of original drawing belonging to the Pittsburgh Department of Public Works, (n.d.). DRAWING NO. 1963: STRESS AND SECTION SHEET FOR 531' STEEL SPANS. - North Side Point Bridge, Spanning Allegheny River at Point of Pittsburgh, Pittsburgh, Allegheny County, PA
Spatiotemporal modeling of node temperatures in supercomputers
Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; ...
2016-06-10
Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus a generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers as well.
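For readers unfamiliar with the marginal model described above (a Normal body with a generalized Pareto upper tail), the following Python sketch fits that two-part marginal to simulated node temperatures with SciPy. The threshold choice, data, and parameter values are invented assumptions for illustration; this is not the authors' fitting code, which additionally includes the copula and GMRF components.

```python
# Sketch of a Normal-body / generalized-Pareto-tail marginal fit to simulated
# node temperatures. Purely illustrative; not the LANL analysis code or data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
temps = rng.normal(loc=45, scale=3, size=5000)       # simulated node temperatures (deg C)

u = np.quantile(temps, 0.95)                         # tail threshold (assumed 95th percentile)
body, tail = temps[temps <= u], temps[temps > u]

mu, sigma = stats.norm.fit(body)                     # Normal fit to the bulk of the distribution
xi, _, beta = stats.genpareto.fit(tail - u, floc=0)  # GPD fit to exceedances over the threshold

print(f"body:  N(mu={mu:.2f}, sigma={sigma:.2f})")
print(f"tail:  GPD(xi={xi:.3f}, beta={beta:.2f}) above u={u:.2f}")
```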
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-04
... CFR 51.918), a final determination of attainment suspends the CAA requirements for the Pittsburgh Area... Ozone Standard for the Pittsburgh-Beaver Valley Moderate Nonattainment Area AGENCY: Environmental... determinations regarding the Pittsburgh-Beaver Valley 1997 8-hour ozone nonattainment area (the Pittsburgh Area...
ERIC Educational Resources Information Center
Wang, Yuanyuan
2013-01-01
This study investigates the impact of students' participation in the certificate program offered by the Asian Studies Center (ASC) at the University of Pittsburgh on their perception of global competency and skills development for international careers. Undergraduate and graduate students who were enrolled in the ASC's certificate program as of…
2016-07-01
[Report front-matter fragments; recoverable information: ARL-TR-7729, July 2016, US Army Research Laboratory (ARL) Robotics Collaborative Technology Alliance 2014 Capstone; contributors include the National Robotics Engineering Center, Pittsburgh, PA, and Robert Dean, Terence Keegan, and Chip Diberardino of General Dynamics Land Systems, Westminster.]
ERIC Educational Resources Information Center
Stein, Joan, Ed.; Kyrillidou, Martha, Ed.; Davis, Denise, Ed.
This Fourth Northumbria International Conference on Performance Measurement in Libraries and Information Services centered on the theme of "meaningful measures for emerging realities" and contributors surveyed the field of performance measurement from that perspective. The proceedings begins with seven keynote and invited papers from…
ERIC Educational Resources Information Center
Lesgold, Alan M., Ed.; Reif, Frederick, Ed.
Provided here are the full proceedings of a conference of 40 teachers, educational researchers, and scientists from both the public and private sectors that centered on the future of computers in education and the research required to realize the computer's educational potential. A summary of the research issues considered and suggested means for…
Robert Shoemaker, PhD | Division of Cancer Prevention
Dr. Robert Shoemaker obtained his PhD in human genetics from the Graduate School of Public Health of the University of Pittsburgh in 1975. Following postdoctoral experience at the Armed Forces Institute of Pathology he moved to the Children's Hospital Medical Center of Akron. His research on pediatric tumors led to an interest in the genetics of drug resistance and new drug
48 CFR 225.7012 - Restriction on supercomputers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...
48 CFR 225.7012 - Restriction on supercomputers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...
48 CFR 225.7012 - Restriction on supercomputers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...
48 CFR 225.7012 - Restriction on supercomputers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...
48 CFR 225.7012 - Restriction on supercomputers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...
NASA Astrophysics Data System (ADS)
Zhao, Liang; Xu, Shun; Tu, Yu-Song; Zhou, Xin
2017-06-01
Project supported by the National Natural Science Foundation for Outstanding Young Scholars, China (Grant No. 11422542), the National Natural Science Foundation of China (Grant Nos. 11605151 and 11675138), and the Shanghai Supercomputer Center of China and Special Program for Applied Research on Super Computation of the NSFC-Guangdong Joint Fund (the second phase).
This report summarizes work conducted at the United States Army Corps of Engineers (USACE) Pittsburgh Engineering Warehouse and Repair Station (PEWARS) and Emsworth Locks and Dams in Pittsburgh, Pennsylvania under the U.S. Environmental Protection Agency's (EPA's) Waste Reduction...
PNNL streamlines energy-guzzling computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, Mary T.; Marquez, Andres
In a room the size of a garage, two rows of six-foot-tall racks holding supercomputer hard drives sit back-to-back. Thin tubes and wires snake off the hard drives, slithering into the corners. Stepping between the rows, a rush of heat whips around you -- the air from fans blowing off processing heat. But walk farther in, between the next racks of hard drives, and the temperature drops noticeably. These drives are being cooled by a non-conducting liquid that runs right over the hardworking processors. The liquid carries the heat away in tubes, saving the air a few degrees. This is the Energy Smart Data Center at Pacific Northwest National Laboratory. The bigger, faster, and meatier supercomputers get, the more energy they consume. PNNL's Andres Marquez has developed this test bed to learn how to train the behemoths in energy efficiency. The work will help supercomputers perform better as well. Processors have to keep cool or suffer from "thermal throttling," says Marquez. "That's the performance threshold where the computer is too hot to run well. That threshold is an industry secret." The center at EMSL, DOE's national scientific user facility at PNNL, harbors several ways of experimenting with energy usage. For example, the room's air conditioning is isolated from the rest of EMSL -- pipes running beneath the floor carry temperature-controlled water through heat exchangers to cooling towers outside. "We can test whether it's more energy efficient to cool directly on the processing chips or out in the water tower," says Marquez. The hard drives feed energy and temperature data to a network server running specially designed software that controls and monitors the data center. To test the center's limits, the team runs the processors flat out -- not only on carefully controlled test programs in the Energy Smart computers, but also on real world software from other EMSL research, such as regional weather forecasting models. Marquez's group is also developing "power aware computing", where the computer programs themselves perform calculations more energy efficiently. Maybe once computers get smart about energy, they'll have tips for their users.
Telemedicine in pediatric cardiac critical care.
Munoz, Ricardo A; Burbano, Nelson H; Motoa, María V; Santiago, Gabriel; Klevemann, Matthew; Casilli, Jeanne
2012-03-01
To describe our international telemedicine experience in pediatric cardiac critical care. This is a case series of pediatric patients teleassisted from the Cardiac Intensive Care Unit (CICU) at Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, Pittsburgh, PA, to the CICU at Hospital Valle del Lili, Cali, Valle, Colombia, between March and December 2010. An attending intensivist from the CICU in Pittsburgh reviewed cases, monitored real-time vital signs, and gave formal medical advice as requested by the attending physician in Cali. The network connection is a Cisco (San Jose, CA)-based Secure Sockets Layer virtual private network via the Internet that allows access to the web-based interface of the Dräger(®) (Lübeck, Germany) physiological monitor system. The videoconferencing equipment consists of a standard component on a custom-made mobile cart that uses an APC(®) (West Kingston, RI) uninterruptible power supply for portable power and 3Com(®) (Hewlett-Packard, Palo Alto, CA) for wireless connectivity. A post-intervention survey regarding satisfaction with the telemedicine service was conducted. Seventy-one recommendations were given regarding 53 patients. Median age and weight were 10 months and 7.1 kg, respectively. Ventricular septal defect, transposition of the great vessels, and single ventricle accounted for most cases. The most frequent recommendations were related to surgical conduct, management of arrhythmias, and performance of cardiac catheterization studies. No technical difficulties were experienced during the monitoring of the patients. Satisfaction rates were equally high for technical and medical aspects of telemedicine service. Telemedicine is a feasible option for pediatric intensivists seeking experienced assistance in the management of complex cardiac patients. Real-time remote assistance may improve the medical care of pediatric cardiac patients treated in developing countries.
Aging and space flight: findings from the University of Pittsburgh
NASA Technical Reports Server (NTRS)
Monk, T. H.
1999-01-01
For more than a decade, the Sleep and Chronobiology Center (SCC) at the University of Pittsburgh has received funding from the National Institute on Aging (NIA), the National Institute of Mental Health (NIMH) and the National Aeronautics and Space Administration (NASA) in order to study the sleep and circadian rhythms of healthy older people, as well as the sleep and circadian rhythms of astronauts and cosmonauts. We have always been struck by the strong synergism between the two endeavors. What happens to the sleep and circadian rhythms of people removed from the terrestrial time cues of Earth is in many ways similar to what happens to people who are advancing in years. Most obviously, sleep is shorter and sleep depth is reduced, but there are also more subtle similarities between the two situations, both in circadian rhythms and in sleep, and in the adaptive strategies needed to enhance 24h zeitgebers.
TOP500 Supercomputers for June 2004
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack
2004-06-23
23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.;&BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.
Automotive applications of supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginsberg, M.
1987-01-01
These proceedings compile papers on supercomputers in the automobile industry. Titles include: An automotive engineer's guide to the effective use of scalar, vector, and parallel computers; fluid mechanics, finite elements, and supercomputers; and Automotive crashworthiness performance on a supercomputer.
Understanding Lustre Internals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Feiyi; Oral, H Sarp; Shipman, Galen M
2009-04-01
Lustre was initiated and funded, almost a decade ago, by the U.S. Department of Energy (DoE) Office of Science and National Nuclear Security Administration laboratories to address the need for an open-source, highly scalable, high-performance parallel filesystem on then-present and future supercomputing platforms. Throughout the last decade, it was deployed over numerous medium-to-large-scale supercomputing platforms and clusters, and it performed and met the expectations of the Lustre user community. As of the time of writing, according to the Top500 list, 15 of the top 30 supercomputers in the world use the Lustre filesystem. This report aims to present a streamlined overview of how Lustre works internally in reasonable detail, including the relevant data structures, APIs, protocols, and algorithms for the Lustre version 1.6 source code base. More importantly, it tries to explain how the various components interconnect and function as a system. Portions of this report are based on discussions with Oak Ridge National Laboratory Lustre Center of Excellence team members, and portions are based on our own understanding of how the code works. We, as the author team, bear all responsibility for errors and omissions in this document. We can only hope it helps current and future Lustre users and Lustre code developers as much as it helped us understand the Lustre source code and its internal workings.
Geothermal technology publications and related reports: a bibliography, January 1977-December 1980
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudson, S.R.
1981-04-01
This bibliography lists titles, authors, abstracts, and reference information for publications which have been published in the areas of drilling technology, logging instrumentation, and magma energy during the period 1977-1980. These publications are the results of work carried out at Sandia National Laboratories and their subcontractors. Some work was also done in conjunction with the Morgantown, Bartlesville, and Pittsburgh Energy Technology Centers.
Materials Research Center, University of Pittsburgh
1994-04-29
porosities could be synthesized with active protein contained within the material. The incorporation of proteins into polyacrylates serves as a model system...enzyme with PEG and incorporate the modified enzyme into polyacrylates (Figure IV.D.3.1). The activity and stability of the functionalized enzyme have been... polyacrylate polymer. By varying the ratio of solvent (chloroform) and non-solvent (carbon tetrachloride) during the free-radical initiated
NASA Technical Reports Server (NTRS)
Salmon, Ellen
1996-01-01
The data storage and retrieval demands of space and Earth sciences researchers have made the NASA Center for Computational Sciences (NCCS) Mass Data Storage and Delivery System (MDSDS) one of the world's most active Convex UniTree systems. Science researchers formed the NCCS's Computer Environments and Research Requirements Committee (CERRC) to relate their projected supercomputing and mass storage requirements through the year 2000. Using the CERRC guidelines and observations of current usage, some detailed projections of requirements for MDSDS network bandwidth and mass storage capacity and performance are presented.
NASA Technical Reports Server (NTRS)
Gillian, Ronnie E.; Lotts, Christine G.
1988-01-01
The Computational Structural Mechanics (CSM) Activity at Langley Research Center is developing methods for structural analysis on modern computers. To facilitate that research effort, an applications development environment has been constructed to insulate the researcher from the many computer operating systems of a widely distributed computer network. The CSM Testbed development system was ported to the Numerical Aerodynamic Simulator (NAS) Cray-2, at the Ames Research Center, to provide a high end computational capability. This paper describes the implementation experiences, the resulting capability, and the future directions for the Testbed on supercomputers.
The TESS Science Processing Operations Center
NASA Technical Reports Server (NTRS)
Jenkins, Jon; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd;
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth’s closest cousins starting in late 2017. TESS will discover approx.1,000 small planets and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NAS Pleiades supercomputer. The SPOC will search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes.
Mass Storage System Upgrades at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Tarshish, Adina; Salmon, Ellen; Macie, Medora; Saletta, Marty
2000-01-01
The NASA Center for Computational Sciences (NCCS) provides supercomputing and mass storage services to over 1200 Earth and space scientists. During the past two years, the mass storage system at the NCCS went through a great deal of changes both major and minor. Tape drives, silo control software, and the mass storage software itself were upgraded, and the mass storage platform was upgraded twice. Some of these upgrades were aimed at achieving year-2000 compliance, while others were simply upgrades to newer and better technologies. In this paper we will describe these upgrades.
Pittsburgh-Tuskegee Prostate Training Program
2013-05-01
February 2012: Tuskegee University sophomore trainees will be selected as "Prostate Cancer Scholars" for summer internship at the University of Pittsburgh. This is being reported on as the second group of trainees selected for internship at the University of Pittsburgh. February – April 2012: Trainees will be selectively paired with University of Pittsburgh Faculty
Thermomechanical Transitions in Polyphosphazenes.
1980-08-08
ADA088119: Thermomechanical Transitions in Polyphosphazenes (U). I. C. Choy and J. H. Magill, Dept. of Metallurgical and Materials Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15261. August 1980.
Improved Access to Supercomputers Boosts Chemical Applications.
ERIC Educational Resources Information Center
Borman, Stu
1989-01-01
Supercomputing is described in terms of computing power and abilities. The increase in availability of supercomputers for use in chemical calculations and modeling are reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)
Using Queue Time Predictions for Processor Allocation
1997-01-01
Diego Supercomputer Center, 1996. [15] Vijay K. Naik, Sanjeev K. Setia, and Mark S. Squillante. Performance analysis of job scheduling policies in... Processing, pages 101–111, 1995. [19] Sanjeev K. Setia and Satish K. Tripathi. An analysis of several processor partitioning policies for parallel... computers. Technical Report CS-TR-2684, University of Maryland, May 1991. [20] Sanjeev K. Setia and Satish K. Tripathi. A comparative analysis of static
Towards Efficient Supercomputing: Searching for the Right Efficiency Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Chung-Hsing; Kuehn, Jeffery A; Poole, Stephen W
2012-01-01
The efficiency of supercomputing has traditionally been measured in terms of execution time. In the early 2000s, the concept of total cost of ownership was re-introduced, with efficiency measures extended to include aspects such as energy and space. Yet the supercomputing community has never agreed upon a metric that can cover these aspects altogether and also provide a fair basis for comparison. This paper examines the metrics that have been proposed in the past decade, and proposes a vector-valued metric for efficient supercomputing. Using this metric, the paper presents a study of where the supercomputing industry has been and how it stands today with respect to efficient supercomputing.
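To make the notion of a vector-valued efficiency metric concrete, here is a small, hedged sketch in Python. The particular components (time-to-solution, energy, floor space) and the numbers are invented for illustration and are not the metric actually proposed in the paper.

```python
# Hypothetical sketch of a vector-valued efficiency metric for comparing systems.
# Component choices and all numbers are illustrative, not the metric from the paper.
from dataclasses import dataclass

@dataclass
class Efficiency:
    time_to_solution_s: float   # execution time for a fixed workload
    energy_kwh: float           # energy consumed by that run
    floor_space_m2: float       # machine-room footprint

    def as_tuple(self):
        return (self.time_to_solution_s, self.energy_kwh, self.floor_space_m2)

    def dominates(self, other: "Efficiency") -> bool:
        """True if this system is at least as good in every component and strictly better in one."""
        a, b = self.as_tuple(), other.as_tuple()
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

system_a = Efficiency(3600.0, 250.0, 40.0)
system_b = Efficiency(3200.0, 300.0, 40.0)
# Neither dominates the other: the vector metric exposes a genuine time-versus-energy trade-off.
print(system_a.dominates(system_b), system_b.dominates(system_a))
```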
Monitoring volcanic threats using ASTER satellite data
Duda, K.A.; Wessels, R.; Ramsey, M.; Dehn, J.
2008-01-01
This document summarizes ongoing activities associated with a research project funded by the National Aeronautics and Space Administration (NASA) focusing on volcanic change detection through the use of satellite imagery. This work includes systems development as well as improvements in data analysis methods. Participating organizations include the NASA Land Processes Distributed Active Archive Center (LP DAAC) at the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS), the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Science Team, the Alaska Volcano Observatory (AVO) at the USGS Alaska Science Center, the Jet Propulsion Laboratory/California Institute of Technology (JPL/CalTech), the University of Pittsburgh, and the University of Alaska Fairbanks. ?? 2007 IEEE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santore, R.R.; Friedman, S.; Reiss, J.
1993-12-31
Since its beginning, Pittsburgh Energy Technology Center's (PETC) primary function has been to study and conduct research on coal and its uses and applications. PETC has also been investigating ways in which natural gas can be employed to enhance the use of coal and to convert natural gas into liquid products that can be more readily transported and stored. This review contains five articles which reflect PETC's mission: State-of-the-Art High Performance Power Systems [HIPPS]; Unconventional Fuel Uses of Natural Gas; Micronized Magnetite -- Beneficiation and Benefits; Reburning for NOx Reduction; and An Update of PETC's Process Research Facility.
A restored NACA P-51D Mustang in flight
2000-09-15
Bill Allmon of Las Vegas, Nevada, brought his restored NACA P-51D to a reunion of former NACA employees at the NASA Dryden Flight Research Center located at Edwards Air Force Base, Calif., on Sept. 15, 2000. Allmon's award-winning restoration is a genuine former NACA testbed that saw service at the Langley Research Center in Virginia in the late 1940s. Later this Mustang was put on outdoor static display as an Air National Guard monument in Pittsburgh, Pa., where exposure to the elements ravaged its metal structure, necessitating an extensive four-year rebuild.
NASA's supercomputing experience
NASA Technical Reports Server (NTRS)
Bailey, F. Ron
1990-01-01
A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamical Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.
OpenMP Performance on the Columbia Supercomputer
NASA Technical Reports Server (NTRS)
Haoqiang, Jin; Hood, Robert
2005-01-01
This presentation discusses Columbia, a world-class supercomputer that is one of the world's fastest, providing 61 TFLOPs (as of 10/20/04). It was conceived, designed, built, and deployed in just 120 days. Columbia is a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors; it provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.
1988-01-01
…Bruner, J. J. Goodnow, and G. A. Austin. 1956. New York: Wiley. Copyright 1966 by Jerome Bruner. Reprinted by permission. …Brooks (1978) compared a condition in which the sequence, once discovered, the formula can be formed. The subjects were asked to learn names for… Carnegie Mellon University; Learning Research and Development Center, University of Pittsburgh. Approved for public release (March 1990).
Evans, Melanie
2007-10-29
Hospitals enjoyed a surge in profits last year, reporting an aggregate profit margin of 6%. Executives at financially strong systems credit long-term efforts to improve performance for the results. Elizabeth Concordia of the University of Pittsburgh Medical Center system says its efforts stressed ongoing consolidation and integration to wipe out waste and errors.
Zang, TT; Tamimi, N; Haddad, FS
2013-01-01
Our aim was to study the role of the Ottawa and Pittsburgh rules to reduce the unnecessary use of radiographs following knee injury. We prospectively reviewed 106 patients who were referred to our clinic over a 3-month period. The Ottawa and Pittsburgh rules were applied to individual patients to evaluate the need for radiography. One hundred and one patients (95%) had radiography of their knee. Five patients (5%) had a fracture of their knee and in all cases, the Ottawa and Pittsburgh knee rules were fulfilled. Using the Ottawa rules, 27 radiographs (25%) could have been avoided without missing a fracture. Using the Pittsburgh rules, 32 radiographs (30%) could have been avoided. The Ottawa and Pittsburgh rules have a high sensitivity for the detection of knee fractures. Their use can aid efficient clinical evaluation without adverse clinical outcome and may reduce healthcare costs. PMID:23827289
Konan, S; Zang, T T; Tamimi, N; Haddad, F S
2013-04-01
Our aim was to study the role of the Ottawa and Pittsburgh rules to reduce the unnecessary use of radiographs following knee injury. We prospectively reviewed 106 patients who were referred to our clinic over a 3-month period. The Ottawa and Pittsburgh rules were applied to individual patients to evaluate the need for radiography. One hundred and one patients (95%) had radiography of their knee. Five patients (5%) had a fracture of their knee and in all cases, the Ottawa and Pittsburgh knee rules were fulfilled. Using the Ottawa rules, 27 radiographs (25%) could have been avoided without missing a fracture. Using the Pittsburgh rules, 32 radiographs (30%) could have been avoided. The Ottawa and Pittsburgh rules have a high sensitivity for the detection of knee fractures. Their use can aid efficient clinical evaluation without adverse clinical outcome and may reduce healthcare costs.
NASA Technical Reports Server (NTRS)
Shen, B.-W.; Atlas, R.; Reale, O.; Chern, J.-D.; Li, S.-J.; Lee, T.; Chang, J.; Henze, C.; Yeh, K.-S.
2006-01-01
It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. To overcome this limitation, the mesoscale-resolving finite-volume GCM (fvGCM) has been experimentally deployed on the NASA Columbia supercomputer, and its performance is evaluated in this study using hurricane Katrina as an example. In late August 2005 Katrina underwent two stages of rapid intensification and became the sixth most intense hurricane in the Atlantic. Six 5-day simulations of Katrina at both 0.25 deg and 0.125 deg show comparable track forecasts, but the 0.125 deg runs provide much better intensity forecasts, producing center pressure with errors of only +/- 12 hPa. The 0.125 deg runs also simulate better near-eye wind distributions and a more realistic average intensification rate. A convection parameterization (CP) is one of the major limitations in a GCM; the 0.125 deg run with CP disabled produces very encouraging results.
Hurricane Intensity Forecasts with a Global Mesoscale Model on the NASA Columbia Supercomputer
NASA Technical Reports Server (NTRS)
Shen, Bo-Wen; Tao, Wei-Kuo; Atlas, Robert
2006-01-01
It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. The increasing capabilities of high-end computers (e.g., the NASA Columbia Supercomputer) have changed this. In 2004, the finite-volume General Circulation Model at a 1/4 degree resolution, doubling the resolution used by most operational NWP centers at that time, was implemented and run to obtain promising landfall predictions for major hurricanes (e.g., Charley, Frances, Ivan, and Jeanne). In 2005, we successfully implemented the 1/8 degree version and demonstrated its performance on intensity forecasts with hurricane Katrina (2005). It is found that the 1/8 degree model is capable of simulating the radius of maximum wind and near-eye wind structure, and thereby producing promising intensity forecasts. In this study, we will further evaluate the model's performance on intensity forecasts of hurricanes Ivan, Jeanne, and Karl in 2004. Suggestions for further model development will be made at the end.
Automation of a Wave-Optics Simulation and Image Post-Processing Package on Riptide
NASA Astrophysics Data System (ADS)
Werth, M.; Lucas, J.; Thompson, D.; Abercrombie, M.; Holmes, R.; Roggemann, M.
Detailed wave-optics simulations and image post-processing algorithms are computationally expensive and benefit from the massively parallel hardware available at supercomputing facilities. We created an automated system that interfaces with the Maui High Performance Computing Center (MHPCC) Distributed MATLAB® Portal interface to submit massively parallel wave-optics simulations to the IBM iDataPlex (Riptide) supercomputer. This system subsequently post-processes the output images with an improved version of physically constrained iterative deconvolution (PCID) and analyzes the results using a series of modular algorithms written in Python. With this architecture, a single person can simulate thousands of unique scenarios and produce analyzed, archived, and briefing-compatible output products with very little effort. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
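The kind of submit-then-post-process automation loop described above could be sketched roughly as follows. The function names, directory layout, and placeholder metric are hypothetical; this is not the actual MHPCC Portal interface or the PCID code.

```python
# Illustrative automation loop: submit many simulation scenarios, then post-process the results.
# submit_job() and run_pcid() are hypothetical placeholders for the site-specific portal
# interface and the PCID deconvolution step; nothing here is the actual MHPCC workflow.
import json
from pathlib import Path

def submit_job(scenario: dict) -> Path:
    """Stand-in for submitting one wave-optics scenario; returns its output directory."""
    out = Path("results") / scenario["name"]
    out.mkdir(parents=True, exist_ok=True)
    (out / "scenario.json").write_text(json.dumps(scenario, indent=2))
    return out

def run_pcid(output_dir: Path) -> dict:
    """Stand-in for deconvolving the simulated images and returning summary metrics."""
    return {"scenario": output_dir.name, "strehl_estimate": 0.0}   # placeholder metric

scenarios = [{"name": f"case_{i:04d}", "turbulence_cn2": 1e-15 * (i + 1)} for i in range(8)]
reports = [run_pcid(submit_job(s)) for s in scenarios]
Path("summary.json").write_text(json.dumps(reports, indent=2))
```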
Network issues for large mass storage requirements
NASA Technical Reports Server (NTRS)
Perdue, James
1992-01-01
File servers and supercomputing environments need high-performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution that permits both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a server-to-server and server-to-client access network, giving the supercomputing center the following advantages: highest-performance transport-level connections (up to 40 MBytes/sec effective rates); throughput that matches the emerging high-performance disk technologies, such as RAID, parallel head-transfer devices, and software striping; support for standard network and file system applications using a sockets-based application programming interface, such as FTP, rcp, rdump, etc.; access to the Network File System (NFS) and large aggregate bandwidth for heavy NFS usage; access to a distributed, hierarchical data server capability using the DISCOS UniTree product; and support for file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerfler, Douglas; Austin, Brian; Cook, Brandon
There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.
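One simple way to probe the communication side of the MPI-versus-OpenMP trade-off is a ping-pong bandwidth test repeated at different ranks-per-node counts. The mpi4py sketch below is illustrative only and assumes mpi4py is available; it is not the benchmark or the Cray MPI features evaluated in the paper.

```python
# Minimal mpi4py ping-pong bandwidth sketch (illustrative only; assumes mpi4py is installed).
# Running it with different numbers of ranks per node gives a feel for how many ranks are
# needed to saturate the network, e.g.: mpirun -n 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nbytes = 8 * 1024 * 1024                 # 8 MiB message
reps = 50
buf = np.zeros(nbytes, dtype=np.uint8)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    # two messages of nbytes per iteration
    print(f"bandwidth ~ {2 * reps * nbytes / elapsed / 1e9:.2f} GB/s")
```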
NASA Technical Reports Server (NTRS)
Shen, B.-W.; Atlas, R.; Chern, J.-D.; Reale, O.; Lin, S.-J.; Lee, T.; Chang, J.
2005-01-01
The NASA Columbia supercomputer was ranked second on the TOP500 List in November 2004. Such a quantum jump in computing power provides unprecedented opportunities to conduct ultra-high resolution simulations with the finite-volume General Circulation Model (fvGCM). During 2004, the model was run experimentally in real time at 0.25 degree resolution, producing remarkable hurricane forecasts [Atlas et al., 2005]. In 2005, the horizontal resolution was further doubled, which makes the fvGCM comparable to the first mesoscale-resolving General Circulation Model at the Earth Simulator Center [Ohfuchi et al., 2004]. Nine 5-day 0.125 degree simulations of three hurricanes in 2004 are presented first for model validation. Then it is shown how the model can simulate the formation of the Catalina eddies and Hawaiian lee vortices, which are generated by the interaction of the synoptic-scale flow with surface forcing, and have never before been reproduced in a GCM.
Integrated risk/cost planning models for the US Air Traffic system
NASA Technical Reports Server (NTRS)
Mulvey, J. M.; Zenios, S. A.
1985-01-01
A prototype network planning model for the U.S. Air Traffic control system is described. The model encompasses the dual objectives of managing collision risks and transportation costs where traffic flows can be related to these objectives. The underlying structure is a network graph with nonseparable convex costs; the model is solved efficiently by capitalizing on its intrinsic characteristics. Two specialized algorithms for solving the resulting problems are described: (1) truncated Newton, and (2) simplicial decomposition. The feasibility of the approach is demonstrated using data collected from a control center in the Midwest. Computational results with different computer systems are presented, including a vector supercomputer (CRAY-XMP). The risk/cost model has two primary uses: (1) as a strategic planning tool using aggregate flight information, and (2) as an integrated operational system for forecasting congestion and monitoring (controlling) flow throughout the U.S. In the latter case, access to a supercomputer is required due to the model's enormous size.
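For flavor only, the sketch below solves a toy convex-cost network flow with a truncated Newton method (SciPy's TNC), one of the two algorithm families mentioned above. The three-arc network, cost function, and quadratic penalty for flow conservation are invented for illustration and are far simpler than the air-traffic model described in the abstract.

```python
# Toy convex-cost flow on a 3-node network (source 0 -> sink 2), solved with SciPy's
# truncated Newton (TNC). The arcs, costs, demand, and penalty weight are invented.
import numpy as np
from scipy.optimize import minimize

arcs = [(0, 1), (1, 2), (0, 2)]           # x[0]: 0->1, x[1]: 1->2, x[2]: 0->2
demand = 10.0                             # flow required from node 0 to node 2

def cost(x):
    # nonseparable convex cost: quadratic congestion terms plus a small interaction term
    return 0.5 * x[0] ** 2 + 0.4 * x[1] ** 2 + 0.8 * x[2] ** 2 + 0.1 * x[0] * x[2]

def conservation_penalty(x):
    # node-1 balance and total delivered flow, enforced with a quadratic penalty
    return 1e4 * ((x[0] - x[1]) ** 2 + (x[1] + x[2] - demand) ** 2)

res = minimize(lambda x: cost(x) + conservation_penalty(x),
               x0=np.full(len(arcs), demand / 2),
               method="TNC",
               bounds=[(0.0, demand)] * len(arcs))
print({a: round(f, 2) for a, f in zip(arcs, res.x)}, round(cost(res.x), 2))
```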
The Pan-American Center for the WMO Sand and Dust Storm Warning Advisory and Assessment System
NASA Astrophysics Data System (ADS)
Sprigg, W. A.
2013-05-01
A World Meteorological Organization system has been established to coordinate knowledge, data, and information concerning airborne dust, the environmental conditions and storms that generate it, the consequences of it, and the means to mitigate and cope with it. Three nodes, or foci, of collaboration cover the globe: for Asia, administered from the China Meteorological Administration in Beijing; for Africa, Europe and the Middle East, administered from the Barcelona Supercomputing Center; and for Pan-America, administered from Chapman University in Orange, California. Pan-American Center priorities include understanding the sources of windblown dust and particulates, simulating and predicting dust events, and serving the health, safety and environmental communities that may benefit from the WMO system.
Supercomputer networking for space science applications
NASA Technical Reports Server (NTRS)
Edelson, B. I.
1992-01-01
The initial design of a supercomputer network topology including the design of the communications nodes along with the communications interface hardware and software is covered. Several space science applications that are proposed experiments by GSFC and JPL for a supercomputer network using the NASA ACTS satellite are also reported.
Sesquinaries, Magnetics and Atmospheres: Studies of the Terrestrial Moons and Exoplanets
2016-12-01
support provided by Red Sky Research, LLC. Computational support was provided by the NASA Ames Mission Design Division (Code RD) for research... Systems Branch (Code SST), NASA Ames Research Center, provided supercomputer access and computational resources for the work in Chapter 5. I owe a... huge debt of gratitude to Dr. Pete Worden, Dr. Steve Zornetzer, Dr. Alan Weston (NASA), and Col. Carol Welsch, Lt. Col. Joe Nance and Lt. Col. Brian
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
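For readers who want to open such files programmatically, a minimal reading sketch is shown below. It assumes the pyhdf package and a hypothetical dataset name ("ice_concentration"), since the actual variable names on the CD-ROM are not given here.

```python
# Minimal sketch for reading an 8-bit sea-ice concentration grid from an HDF4 file.
# The file name and the dataset name "ice_concentration" are assumptions for illustration;
# list the datasets first to see what the CD-ROM actually provides.
import numpy as np
from pyhdf.SD import SD, SDC

f = SD("nsidc_monthly_average.hdf", SDC.READ)
print(f.datasets())                        # names, shapes, and types of the datasets present

sds = f.select("ice_concentration")        # hypothetical dataset name
ice = np.array(sds[:], dtype=np.uint8)     # 8-bit image, one value per grid cell
print(ice.shape, ice.min(), ice.max())

sds.endaccess()
f.end()
```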
Most Social Scientists Shun Free Use of Supercomputers.
ERIC Educational Resources Information Center
Kiernan, Vincent
1998-01-01
Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…
A fault tolerant spacecraft supercomputer to enable a new class of scientific discovery
NASA Technical Reports Server (NTRS)
Katz, D. S.; McVittie, T. I.; Silliman, A. G., Jr.
2000-01-01
The goal of the Remote Exploration and Experimentation (REE) Project is to move supercomputing into space in a cost-effective manner and to allow the use of inexpensive, state-of-the-art, commercial-off-the-shelf components and subsystems in these space-based supercomputers.
Municipal Fleet Vehicle Electrification and Photovoltaic Power In the City of Pittsburgh.
DOT National Transportation Integrated Search
2016-01-01
This document reports the results of a cost-benefit analysis of potential photovoltaic projects in Pittsburgh and of electrifying the city's light-duty civilian vehicle fleet. Currently the city of Pittsburgh has a civilian passenger vehicle fleet...
Mashima, Jun; Kodama, Yuichi; Fujisawa, Takatomo; Katayama, Toshiaki; Okuda, Yoshihiro; Kaminuma, Eli; Ogasawara, Osamu; Okubo, Kousaku; Nakamura, Yasukazu; Takagi, Toshihisa
2017-01-01
The DNA Data Bank of Japan (DDBJ) (http://www.ddbj.nig.ac.jp) has been providing public data services for thirty years (since 1987). We collect nucleotide sequence data from researchers as a member of the International Nucleotide Sequence Database Collaboration (INSDC, http://www.insdc.org), in collaboration with the US National Center for Biotechnology Information (NCBI) and the European Bioinformatics Institute (EBI). The DDBJ Center also operates the Japanese Genotype-phenotype Archive (JGA), with the National Bioscience Database Center, to collect human-subject data from Japanese researchers. Here, we report our database activities for INSDC and JGA over the past year, and introduce retrieval and analytical services running on our supercomputer system and their recent modifications. Furthermore, with the Database Center for Life Science, the DDBJ Center improves semantic web technologies to integrate and share biological data, providing an RDF version of the sequence data. PMID:27924010
Finn, Olivera J; Salter, Russell D
2006-01-01
The University of Pittsburgh School of Medicine has a long tradition of excellence in immunology research and training. Faculty, students, and postdoctoral fellows walk through hallways that are pictorial reminders of the days when Dr. Jonas Salk worked here to develop the polio vaccine, or when Dr. Niels Jerne chaired the Microbiology Department and worked on perfecting the Jerne Plaque Assay for antibody-producing cells. Colleagues and postdoctoral fellows of Professor Salk are still on the faculty of the University of Pittsburgh Medical School as are graduate students of Professor Jerne. A modern research building, the 17 story high Biomedical Science Tower, is a vivid reminder of the day when Dr. Thomas Starzl arrived in Pittsburgh and started building the most prominent solid-organ-transplant program in the world. The immunology research that developed around the problem of graft rejection and tolerance induction trained numerous outstanding students and fellows. Almost 20 yr ago, the University of Pittsburgh founded the University of Pittsburgh Cancer Institute (UPCI) with the renowned immunologist Dr. Ronald Herberman at its helm. This started a number of new research initiatives in cancer immunology and immunotherapy. A large number of outstanding young investigators, as well as several well-established tumor immunologists, were recruited to Pittsburgh at that time.
Standish, Kristopher A; Carland, Tristan M; Lockwood, Glenn K; Pfeiffer, Wayne; Tatineni, Mahidhar; Huang, C Chris; Lamberth, Sarah; Cherkas, Yauheniya; Brodmerkel, Carrie; Jaeger, Ed; Smith, Lance; Rajagopal, Gunaretnam; Curran, Mark E; Schork, Nicholas J
2015-09-22
Next-generation sequencing (NGS) technologies have become much more efficient, allowing whole human genomes to be sequenced faster and cheaper than ever before. However, processing the raw sequence reads associated with NGS technologies requires care and sophistication in order to draw compelling inferences about phenotypic consequences of variation in human genomes. It has been shown that different approaches to variant calling from NGS data can lead to different conclusions. Ensuring appropriate accuracy and quality in variant calling can come at a computational cost. We describe our experience implementing and evaluating a group-based approach to calling variants on large numbers of whole human genomes. We explore the influence of many factors that may impact the accuracy and efficiency of group-based variant calling, including group size, the biogeographical backgrounds of the individuals who have been sequenced, and the computing environment used. We make efficient use of the Gordon supercomputer cluster at the San Diego Supercomputer Center by incorporating job-packing and parallelization considerations into our workflow while calling variants on 437 whole human genomes generated as part of a large association study. We ultimately find that our workflow resulted in high-quality variant calls in a computationally efficient manner. We argue that studies like ours should motivate further investigations combining hardware-oriented advances in computing systems with algorithmic developments to tackle emerging 'big data' problems in biomedical research brought on by the expansion of NGS technologies.
Charliecloud: Unprivileged containers for user-defined software stacks in HPC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priedhorsky, Reid; Randles, Timothy C.
Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in less than 500 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.
Resource based view of the firm: measures of reputation among health service-sector businesses.
Smith, Alan D
2008-01-01
Application of the strategic leverage of Resource Based View of the Firm (RBV) directly advocates that a company's competitive advantage is derived from its ability to assemble and exploit an appropriate combination of resources (both tangible and intangible assets). The three companies that were selected were Pittsburgh-based companies that were within relatively easy access, representing healthcare service-related industries, and can be reviewed for the principles of the RBV. The particular firms represented a variety of establishments and included Baptist Homes (a long-term care facility), University of Pittsburgh Medical Center (UPMC)(a provider of hospital and other health services), and GlaxoSmithKline, Consumer Healthcare, North America (GSK-CHNA)(a global provider of healthcare products and services). Through the case studies, it was found that not all intangible assets are strategic, and by extension, not all measures of reputation are strategic either. For an intangible asset to be considered strategic, in this case reputation, it must be valuable, rare, imperfectly imitable, and non-substitutable.
Detecting Underground Mine Voids Using Complex Geophysical Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaminski, V. F.; Harbert, W. P.; Hammack, R. W.
2006-12-01
In July 2006, the National Energy Technology Laboratory, in collaboration with the Department of Geology and Planetary Science, University of Pittsburgh, conducted complex ground geophysical surveys of an area known to be underlain by shallow coal mines. Geophysical methods including electromagnetic induction, DC resistivity, and seismic reflection were employed. The purpose of these surveys was to: 1) verify underground mine voids based on a century-old mine map that showed subsurface mine workings, georeferenced to match the present location of the geophysical test site at the Bruceton research center in Pittsburgh, PA; 2) delineate mine workings that may potentially be filled with electrically conductive water filtrate emerging from adjacent groundwater collectors; and 3) establish an equipment calibration site for geophysical instruments. Data from the electromagnetic and resistivity surveys were further processed and inverted using EM1DFM, EMIGMA, or Earthimager 2D capabilities in order to generate conductivity/depth images. Anomaly maps were generated that revealed the locations of potential mine openings.
Distributed user services for supercomputers
NASA Technical Reports Server (NTRS)
Sowizral, Henry A.
1989-01-01
User-service operations at supercomputer facilities are examined. The question is whether a single, possibly distributed, user-services organization could be shared by NASA's supercomputer sites in support of a diverse, geographically dispersed, user community. A possible structure for such an organization is identified as well as some of the technologies needed in operating such an organization.
Learning Theory and the Study of Instruction
1989-02-01
learning theories (e.g., Fitts 1962, Vygotsky 1978), cultural beliefs about learning, and commonsense observations of teaching and tutoring. But... Learning Theory and the Study of Instruction. Robert Glaser and Miriam Bassok, Learning Research and Development Center, University of Pittsburgh, February 1989. Approved for public release.
An Internet-style Approach to Managing Wireless Link Errors
2002-05-01
implementation I used. Jamshid Mahdavi and Matt Mathis, then at the Pittsburgh Supercomputer Center, and Vern Paxson of the Lawrence Berkeley National... Exposition. IEEE CS Press, 2002. [19] P. Bhagwat, P. Bhattacharya, A. Krishna, and S. Tripathi. Enhancing throughput over wireless LANs using channel... performance over wireless networks at the link layer. ACM Mobile Networks and Applications, 5(1):57–71, March 2000. [97] Vern Paxson and Mark Allman
1988-06-01
Cortex of the Cat. John G. Robson, Craik Physiological Laboratory, Cambridge University, Cambridge, England. When tested with spatially-localized stimuli... University, New York, NY; Stanley Klein - School of Optometry, University of California, Berkeley, Berkeley, CA; Jennifer Knight - Neurobiology & Behavior, Cornell University... Village, Poughkeepsie, NY; Jeffrey Lubin - Psychology Department, University of PA, Philadelphia, PA; Jennifer S. Lund - University of Pittsburgh
Improving Memory for Optimization and Learning in Dynamic Environments
2011-07-01
algorithm uses simple, incremental clustering to separate solutions into memory entries. The cluster centers are used as the models in the memory. This is... entire days of traffic with realistic traffic demands and turning ratios on a 32-intersection network modeled on downtown Pittsburgh, Pennsylvania... early/tardy problem. Management Science, 35(2):177–191, 1989. [78] Daniel Parrott and Xiaodong Li. A particle swarm model for tracking multiple peaks in
STARS Conceptual Framework for Reuse Processes (CFRP). Volume 2: application Version 1.0
1993-09-30
...Analysis and Design: DISA/CIM process [DIS93]; Feature-Oriented Domain Analysis (FODA): SEI process [KCH+90]; JIAWG Object-Oriented Domain Analysis: JIAWG... Feature-Oriented Domain Analysis (FODA) Feasibility Study. Technical Report CMU/SEI-90-TR-21, Software Engineering Institute, Carnegie Mellon University, Pittsburgh... Electronic Systems Center, Air Force Materiel Command, USAF, Hanscom AFB, MA 01731-5000. Prepared by: The Boeing Company, IBM, Unisys Corporation, Defense
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-10
... Ozone Standard for the Pittsburgh-Beaver Valley Moderate Nonattainment Area AGENCY: Environmental... independent determinations regarding the Pittsburgh-Beaver Valley 1997 8-hour ozone nonattainment area (the.... Among those nonattainment areas is the Pittsburgh Area, which includes Allegheny, Armstrong, Beaver...
NASA Technical Reports Server (NTRS)
Babrauckas, Theresa
2000-01-01
The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, A.
1986-03-10
Supercomputing software is moving into high gear, spurred by the rapid spread of supercomputers into new applications. The critical challenge is how to develop tools that will make it easier for programmers to write applications that take advantage of vectorizing in the classical supercomputer and the parallelism that is emerging in supercomputers and minisupercomputers. Writing parallel software is a challenge that every programmer must face because parallel architectures are springing up across the range of computing. Cray is developing a host of tools for programmers. Tools to support multitasking (in supercomputer parlance, multitasking means dividing up a single program to run on multiple processors) are high on Cray's agenda. On tap for multitasking is Premult, dubbed a microtasking tool. As a preprocessor for Cray's CFT77 FORTRAN compiler, Premult will provide fine-grain multitasking.
Will Moore's law be sufficient?
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeBenedictis, Erik P.
2004-07-01
It seems well understood that supercomputer simulation is an enabler for scientific discoveries, weapons, and other activities of value to society. It also seems widely believed that Moore's Law will make progressively more powerful supercomputers over time and thus enable more of these contributions. This paper seeks to add detail to these arguments, revealing them to be generally correct but not a smooth and effortless progression. This paper will review some key problems that can be solved with supercomputer simulation, showing that more powerful supercomputers will be useful up to a very high yet finite limit of around 10^21 FLOPS (1 Zettaflops). The review will also show the basic nature of these extreme problems. This paper will review work by others showing that the theoretical maximum supercomputer power is very high indeed, but will explain how a straightforward extrapolation of Moore's Law will lead to technological maturity in a few decades. The power of a supercomputer at the maturity of Moore's Law will be very high by today's standards at 10^16-10^19 FLOPS (100 Petaflops to 10 Exaflops), depending on architecture, but distinctly below the level required for the most ambitious applications. Having established that Moore's Law will not be the last word in supercomputing, this paper will explore the nearer-term issue of what a supercomputer will look like at the maturity of Moore's Law. Our approach will quantify the maximum performance permitted by the laws of physics for extensions of current technology and then find a design that approaches this limit closely. We study a 'multi-architecture' for supercomputers that combines a microprocessor with other 'advanced' concepts and find it can reach the limits as well. This approach should be quite viable in the future because the microprocessor would provide compatibility with existing codes and programming styles while the 'advanced' features would provide a boost to the limits of performance.
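The timing argument can be made concrete with back-of-the-envelope arithmetic. The starting performance, doubling period, and ceiling used below are illustrative assumptions, not figures taken from the paper.

```python
# Back-of-the-envelope Moore's-Law extrapolation (all inputs are illustrative assumptions).
import math

start_year, start_flops = 2004, 1e14    # assumed starting point, ~100 TFLOPS circa 2004
doubling_years = 1.5                    # assumed performance doubling period
ceiling_flops = 1e18                    # assumed maturity level for conventional technology

years_to_ceiling = doubling_years * math.log2(ceiling_flops / start_flops)
print(f"~{years_to_ceiling:.0f} years, i.e. around {start_year + years_to_ceiling:.0f}")
# With these assumptions the ceiling arrives in roughly two decades, consistent with the
# "few decades" in the abstract, yet still short of the ~10^21 FLOPS that the most
# ambitious applications discussed above would require.
```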
Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilk, Todd
The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. Here, this paper discusses supercomputing at LANL, the Green500 benchmark, and notes on our experience meeting the Green500's reporting requirements.
Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL
Yilk, Todd
2018-02-17
The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. Here, this paper discusses supercomputing at LANL, the Green500 benchmark, and notes on our experience meeting the Green500's reporting requirements.
Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubois, David H; Dubois, Andrew J; Boorman, Thomas M
2009-01-01
This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
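For reference, the underlying algorithm is small. The NumPy sketch below implements a standard non-preconditioned Conjugate Gradient iteration; it is illustrative only and bears no relation to the Cell, FPGA, or Opteron implementations measured in these papers.

```python
# Standard non-preconditioned Conjugate Gradient for a symmetric positive-definite system Ax = b.
# Plain NumPy, illustrative only; unrelated to the Cell/FPGA/Opteron implementations above.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# quick check on a small random SPD system
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)   # symmetric positive definite by construction
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))  # should be near zero
```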
Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubois, David H; Dubois, Andrew J; Boorman, Thomas M
2009-03-10
This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
Contract W911NF-09-1-0488 (Rush University Medical Center)
2012-11-23
algorithm. In Proceedings of the 1993 ACM/IEEE Conference on Supercomputing, pages 12–21, New York, 1993. ACM. [8] R. Yokota, T. Hamada, J. P. Bardhan, M... computing gravity anomalies. Geophysical Journal International, 2011, to appear. [13] R. Yokota, T. Hamada, J. P. Bardhan, M. G. Knepley, and L. A. Barba... extension of the petfmm fast multipole library. Presentation at WCCM 2010, Sydney, Australia, 2010. [15] J. P. Bardhan. Interpreting the Coulomb
State University of New York Institute of Technology (SUNYIT) Summer Scholar Program
2009-10-01
Dates covered: March 2007 – April 2009. Title: State University of New York Institute of Technology (SUNYIT) Summer Scholar Program. Even with access to the Arctic Regional Supercomputer Center (ARSC), evolving a 9/7 wavelet with four multi-resolution levels (MRA 4) involves... evaluated over the multiple processing elements in the Cell processor. It was tested on Cell processors in a Sony PlayStation 3 and on an IBM QS20 blade
Golovin, A V; Smirnov, I V; Stepanova, A V; Zalevskiy, A O; Zlobin, A S; Ponomarenko, N A; Belogurov, A A; Knorre, V D; Hurs, E N; Chatziefthimiou, S D; Wilmanns, M; Blackburn, G M; Khomutov, R M; Gabibov, A G
2017-07-01
It is proposed to perform quantum mechanical/molecular dynamics calculations of chemical reactions that are planned to be catalyzed by antibodies and then conduct a virtual screening of the library of potential antibody mutants to select an optimal biocatalyst. We tested the effectiveness of this approach by the example of hydrolysis of organophosphorus toxicant paraoxon using kinetic approaches and X-ray analysis of the antibody biocatalyst designed de novo.
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Information Management and Technology Div.
This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…
Supercomputer Provides Molecular Insight into Cellulose (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2011-02-01
Groundbreaking research at the National Renewable Energy Laboratory (NREL) has used supercomputing simulations to calculate the work that enzymes must do to deconstruct cellulose, which is a fundamental step in biomass conversion technologies for biofuels production. NREL used the new high-performance supercomputer Red Mesa to conduct several million central processing unit (CPU) hours of simulation.
Berger, Rachel Pardes; Pak, Brian J; Kolesnikova, Mariya D; Fromkin, Janet; Saladino, Richard; Herman, Bruce E; Pierce, Mary Clyde; Englert, David; Smith, Paul T; Kochanek, Patrick M
2017-06-05
Abusive head trauma is the leading cause of death from physical abuse. Missing the diagnosis of abusive head trauma, particularly in its mild form, is common and contributes to increased morbidity and mortality. Serum biomarkers may have potential as quantitative point-of-care screening tools to alert physicians to the possibility of intracranial hemorrhage. To identify and validate a set of biomarkers that could be the basis of a multivariable model to identify intracranial hemorrhage in well-appearing infants using the Ziplex System. Binary logistic regression was used to develop a multivariable model incorporating 3 serum biomarkers (matrix metallopeptidase-9, neuron-specific enolase, and vascular cellular adhesion molecule-1) and 1 clinical variable (total hemoglobin). The model was then prospectively validated. Multiplex biomarker measurements were performed using Flow-Thru microarray technology on the Ziplex System, which has potential as a point-of-care system. The model was tested at 3 pediatric emergency departments in level I pediatric trauma centers (Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Primary Children's Hospital, Salt Lake City, Utah; and Lurie Children's Hospital, Chicago, Illinois) among well-appearing infants who presented for care owing to symptoms that placed them at increased risk of abusive head trauma. The study took place from November 2006 to April 2014 at Children's Hospital of Pittsburgh, June 2010 to August 2013 at Primary Children's Hospital, and January 2011 to August 2013 at Lurie Children's Hospital. A mathematical model that can predict acute intracranial hemorrhage in infants at increased risk of abusive head trauma. The multivariable model, Biomarkers for Infant Brain Injury Score, was applied prospectively to 599 patients. The mean (SD) age was 4.7 (3.1) months. Fifty-two percent were boys, 78% were white, and 8% were Hispanic. At a cutoff of 0.182, the model was 89.3% sensitive (95% CI, 87.7-90.4) and 48.0% specific (95% CI, 47.3-48.9) for acute intracranial hemorrhage. Positive and negative predictive values were 21.3% and 95.6%, respectively. The model was neither sensitive nor specific for atraumatic brain abnormalities, isolated skull fractures, or chronic intracranial hemorrhage. The Biomarkers for Infant Brain Injury Score, a multivariable model using 3 serum biomarker concentrations and serum hemoglobin, can identify infants with acute intracranial hemorrhage. Accurate and timely identification of intracranial hemorrhage in infants without a history of trauma in whom trauma may not be part of the differential diagnosis has the potential to decrease morbidity and mortality from abusive head trauma.
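As a hedged illustration of how a multivariable logistic model with a fixed decision cutoff of 0.182 would be applied at the point of care, the sketch below uses invented coefficient values and example inputs; the published Biomarkers for Infant Brain Injury Score coefficients and units are not reproduced here.

import math

# Hypothetical coefficients for illustration only; not the fitted model.
COEF = {"intercept": -2.0, "mmp9": 0.4, "nse": 0.6, "vcam1": 0.3, "hemoglobin": -0.5}
CUTOFF = 0.182  # decision threshold reported in the abstract

def predicted_probability(mmp9, nse, vcam1, hemoglobin):
    # Linear predictor passed through the logistic link.
    z = (COEF["intercept"] + COEF["mmp9"] * mmp9 + COEF["nse"] * nse
         + COEF["vcam1"] * vcam1 + COEF["hemoglobin"] * hemoglobin)
    return 1.0 / (1.0 + math.exp(-z))

def screen_positive(mmp9, nse, vcam1, hemoglobin):
    return predicted_probability(mmp9, nse, vcam1, hemoglobin) >= CUTOFF

print(screen_positive(1.2, 0.8, 1.5, 10.5))  # invented example inputs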
GREEN SUPERCOMPUTING IN A DESKTOP BOX
DOE Office of Scientific and Technical Information (OSTI.GOV)
HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY
2007-01-17
The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
Input/output behavior of supercomputing applications
NASA Technical Reports Server (NTRS)
Miller, Ethan L.
1991-01-01
The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
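The write-behind idea mentioned above (absorb bursty application writes in a fast staging device and flush them lazily to the backing store) can be sketched with a toy buffer like the one below; the class, capacity, and policy are invented for illustration and are not the simulator used in the study.

from collections import deque

class WriteBehindBuffer:
    # Toy write-behind buffer: absorb bursty writes, flush lazily to disk.
    def __init__(self, capacity_blocks=1024):
        self.capacity = capacity_blocks
        self.pending = deque()

    def write(self, block):
        # The application sees a fast return; the block is only queued here.
        if len(self.pending) >= self.capacity:
            self.flush()                # buffer full: force a flush
        self.pending.append(block)

    def flush(self):
        # In a real system this would run in the background during idle time.
        while self.pending:
            self._write_to_disk(self.pending.popleft())

    def _write_to_disk(self, block):
        pass  # placeholder for the slow backing-store write

buf = WriteBehindBuffer()
for i in range(5000):
    buf.write(f"block-{i}")
buf.flush()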
Prehospital Air Medical Plasma (PAMPer) Trial
2015-07-01
Clifford Calloway, MD, PhD, Associate Professor of Emergency Medicine, University of Pittsburgh; Mark Yazer, MD, Medical Director, Centralized...Transfusion Service, University of Pittsburgh; Barbara Early, RN, BSN, CCRC, MACRO CRC Director, University of Pittsburgh; investigators at other
Value-Added Models for the Pittsburgh Public Schools
ERIC Educational Resources Information Center
Johnson, Matthew; Lipscomb, Stephen; Gill, Brian; Booker, Kevin; Bruch, Julie
2012-01-01
At the request of Pittsburgh Public Schools (PPS) and the Pittsburgh Federation of Teachers (PFT), Mathematica has developed value-added models (VAMs) that aim to estimate the contributions of individual teachers, teams of teachers, and schools to the achievement growth of their students. The authors' work in estimating value-added in Pittsburgh…
75 FR 56866 - Special Local Regulation; Monongahela River, Pittsburgh, PA
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-17
... participants of the Pittsburgh Dragon Boat Festival from the hazards imposed by marine traffic. Entry into the... re-scheduling the event is contrary to the public...
Proceedings of the CASE Adoption Workshop Held in Pittsburgh, Pennsylvania on 13-14 November 1990
1992-05-01
position. It is published in the interest of scientific and technical information exchange. Review and Approval: This report has been reviewed and is...available through the Defense Technical Information Center. DTIC provides access to and transfer of scientific and technical information for DoD... Methods and Tools, $1,850; P-Cube Corporation, 572 East Lambert Rd, Brea, CA 92621, (714) 990-3169: CASEbase (a PC-based CASE database), $495; Foresite
Probabilistic, Decision-theoretic Disease Surveillance and Control
Wagner, Michael; Tsui, Fuchiang; Cooper, Gregory; Espino, Jeremy U.; Harkema, Hendrik; Levander, John; Villamarin, Ricardo; Voorhees, Ronald; Millett, Nicholas; Keane, Christopher; Dey, Anind; Razdan, Manik; Hu, Yang; Tsai, Ming; Brown, Shawn; Lee, Bruce Y.; Gallagher, Anthony; Potter, Margaret
2011-01-01
The Pittsburgh Center of Excellence in Public Health Informatics has developed a probabilistic, decision-theoretic system for disease surveillance and control for use in Allegheny County, PA and later in Tarrant County, TX. This paper describes the software components of the system and its knowledge bases. The paper uses influenza surveillance to illustrate how the software components transform data collected by the healthcare system into population level analyses and decision analyses of potential outbreak-control measures. PMID:23569617
The center for causal discovery of biomedical knowledge from big data
Bahar, Ivet; Becich, Michael J; Benos, Panayiotis V; Berg, Jeremy; Espino, Jeremy U; Glymour, Clark; Jacobson, Rebecca Crowley; Kienholz, Michelle; Lee, Adrian V; Lu, Xinghua; Scheines, Richard
2015-01-01
The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers. PMID:26138794
Multi-Model Validation in the Chesapeake Bay Region During Frontier Sentinel 2010
2012-09-28
which a 72-hr forecast took approximately 1 hr. Identical runs were performed on the DoD Supercomputing Resources Center (DSRC) host "DaVinci" at the...performance Navy DSRC host DaVinci. Products of water level and horizontal current maps as well as station time series, identical to those produced by the...forecast meteorological fields. The NCOM simulations were run daily on 128 CPUs at the Navy DSRC host DaVinci and required approximately 5 hrs of wall
2012-02-16
Snapshot from a simulation run on the Pleiades supercomputer. It depicts a fluctuating pressure field on aircraft nose landing gear and fuselage surfaces. The simulation helped scientists better understand the acoustic noise generated by the landing gear. The goal of the study was to improve the current understanding of aircraft nose landing gear noise, which will lead to quieter, more efficient airframe components for future aircraft designs. The visualization was produced with help from the NAS Data Analysis & Visualization group. Investigator: Mehdi Khorrami, NASA Langley Research Center.
Building Columbia from the SysAdmin View
NASA Technical Reports Server (NTRS)
Chan, David
2005-01-01
Project Columbia was built at NASA Ames Research Center in partnership with SGI and Intel. Columbia consists of 20 512-processor Altix machines with 440 TB of storage and achieved 51.87 teraflops, ranking it second fastest on the Top 500 list at SuperComputing 2004. Columbia was delivered, installed, and put into production in 3 months. On average, a new Columbia node was brought into production in less than a week. Columbia's configuration, installation, and future plans will be discussed.
Searching for Baryon Acoustic Oscillations in Intergalactic Absorption: The Expanding Universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This simulation follows the growth of density perturbations in both gas and dark matter components in a volume 1 billion light years on a side beginning shortly after the Big Bang and evolved to half the present age of the universe. Credits: Science: Michael L. Norman, Robert Harkness, Pascal Paschos, Rick Wagner, San Diego Supercomputer Center/University of California, San Diego Visualization: Mark Hereld, Joseph A. Insley, Michael E. Papka, Argonne National Laboratory; Eric C. Olson, University of Chicago
The p- and h-p Versions of the Finite Element Method: An Overview
1989-05-01
and h-p versions was studied in detail for 1 dimension in [48] and for 2 dimensions in [49]. Let us mention some one-dimensional results. We consider...Supercomputing in the Automotive Industry, Oct. 25-28, 1988, Seville, Spain, Report of Noetic Tech., St. Louis, MO 63117. 70. H. Vogelius, An analysis of the p...collaboration with government agencies such as the National Bureau of Standards. "To be an international center of study and research for foreign
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Zhaojun; Scalettar, Richard; Savrasov, Sergey
This report summarizes the accomplishments of the University of California Davis team which is part of a larger SciDAC collaboration including Mark Jarrell of Louisiana State University, Karen Tomko of the Ohio Supercomputer Center, and Eduardo F. D'Azevedo and Thomas A. Maier of Oak Ridge National Laboratory. In this report, we focus on the major UCD accomplishments. As the paper authorship list emphasizes, much of our work is the result of a tightly integrated effort; hence this compendium of UCD efforts of necessity contains some overlap with the work at our partner institutions.
Prospects for Boiling of Subcooled Dielectric Liquids for Supercomputer Cooling
NASA Astrophysics Data System (ADS)
Zeigarnik, Yu. A.; Vasil'ev, N. V.; Druzhinin, E. A.; Kalmykov, I. V.; Kosoi, A. S.; Khodakov, K. A.
2018-02-01
It is shown experimentally that forced-convection boiling of the dielectric coolant Novec 649, subcooled relative to the saturation temperature, makes it possible to remove heat fluxes of up to 100 W/cm2 from the chip interfaces of modern supercomputers. This creates prerequisites for the application of dielectric liquids in the cooling systems of modern supercomputers with increased requirements for operating reliability.
Pittsburgh and the Arts or How My Eye Was Formed.
ERIC Educational Resources Information Center
Roschwalb, Susanne A.
The way the author's experiences of the city of Pittsburgh (Pennsylvania) shaped her visual literacy are explored. Along with the imagery of the steel mills, she experienced some artistic opportunities that helped shape the foundation of her life in art. Although no American city was as extensively industrialized as Pittsburgh, it was the artistic…
ERIC Educational Resources Information Center
Hamill, Sean D.
2011-01-01
This paper documents Pittsburgh's transformation from a typical, adversarial district-union dynamic to one of deep, substantive collaboration over the course of several years. This work has catapulted Pittsburgh to the vanguard of efforts to improve teacher effectiveness, and helped secure more than $80 million in philanthropic and federal grants.…
The Impact of The University of Pittsburgh on the Local Economy.
ERIC Educational Resources Information Center
Pittsburgh Univ., PA. University Urban Interface Program.
One of the projects selected for the University Urban Interface Program at the University of Pittsburgh was that of studying the impact of the university on the city of Pittsburgh. In pursuing this goal, studies were made of university-related local business volume; value of local business property committed to university-related business; credit…
ERIC Educational Resources Information Center
Hamilton, Laura S.; Engberg, John; Steiner, Elizabeth D.; Nelson, Catherine Awsumb; Yuan, Kun
2012-01-01
In 2007, the Pittsburgh Public Schools (PPS) received funding from the U.S. Department of Education's Teacher Incentive Fund (TIF) program to implement the Pittsburgh Urban Leadership System for Excellence (PULSE), a set of reforms designed to improve the quality of school leadership throughout the district. A major component of PULSE is the…
ERIC Educational Resources Information Center
Lipscomb, Stephen; Gill, Brian; Booker, Kevin; Johnson, Matthew
2010-01-01
At the request of Pittsburgh Public Schools (PPS) and the Pittsburgh Federation of Teachers (PFT), Mathematica is developing value-added models (VAMs) that aim to estimate the contributions of individual teachers, teams of teachers, and schools to the achievement growth of their students. The analyses described in this report are intended as an…
NASA Technical Reports Server (NTRS)
Shen, B.-W.; Atlas, R.; Reale, O.; Lin, S.-J.; Chern, J.-D.; Chang, J.; Henze, C.
2006-01-01
Hurricane Katrina was the sixth most intense hurricane in the Atlantic. Forecasting Katrina poses major challenges, the most important of which is its rapid intensification. Hurricane intensity forecasting with General Circulation Models (GCMs) is difficult because of their coarse resolution. In this article, six 5-day simulations with the ultra-high-resolution finite-volume GCM are conducted on the NASA Columbia supercomputer to show the effects of increased resolution on the intensity predictions of Katrina. It is found that the 0.125 degree runs give tracks comparable to the 0.25 degree runs but provide better intensity forecasts, bringing the center pressure much closer to observations, with differences of only plus or minus 12 hPa. In the runs initialized at 1200 UTC 25 August, the 0.125 degree runs simulate a more realistic intensification rate and better near-eye wind distributions. Moreover, the first global 0.125 degree simulation without convection parameterization (CP) produces even better intensity evolution and near-eye winds than the control run with CP.
Large-Scale NASA Science Applications on the Columbia Supercluster
NASA Technical Reports Server (NTRS)
Brooks, Walter
2005-01-01
Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.
PREFACE: WMO/GEO Expert Meeting On An International Sand And Dust Storm Warning System
NASA Astrophysics Data System (ADS)
Pérez, C.; Baldasano, J. M.
2009-03-01
This volume of IOP Conference Series: Earth and Environmental Science presents a selection of papers that were given at the WMO/GEO Expert Meeting on an International Sand and Dust Storm Warning System hosted by the Barcelona Supercomputing Center - Centro Nacional de Supercomputación in Barcelona (Spain) on 7-9 November 2007 (http://www.bsc.es/wmo). A sand and dust storm (SDS) is a meteorological phenomenon common in arid and semi-arid regions and arises when a gust front passes or when the wind force exceeds the threshold value where loose sand and dust are removed from the dry surface. After aeolian uptake, SDS reduce visibility to a few meters in and near source regions, and dust plumes are transported over distances as long as thousands of kilometres. Aeolian dust is unique among aerosol phenomena: (1) with the possible exception of sea-salt aerosol, it is globally the most abundant of all aerosol species, (2) it appears as the dominating component of atmospheric aerosol over large areas of the Earth, (3) it represents a serious hazard for life, health, property, environment and economy (occasionally reaching the grade of disaster or catastrophic event) and (4) its influence, impacts, complex interactions and feedbacks within the Earth System span a wide range of spatial and temporal scales. From a political and societal point of view, the concern for SDS and the need for international cooperation were reflected after a survey conducted in 2005 by the World Meteorological Organization (WMO) in which more than forty WMO Member countries expressed their interest in creating or improving capacities for SDS warning advisory and assessment. In this context, recent major advances in research - including, for example, the development and implementation of advanced observing systems, the theoretical understanding of the mechanisms responsible for sand and dust storm generation and the development of global and regional dust models - represent the basis for developing applications focusing on societal benefit and risk reduction. However, at present there remain interdisciplinary research challenges that must be overcome to reduce current uncertainties and reach full potential. Furthermore, the community of practice for SDS observations, forecasts and analyses is mainly scientifically based and rather disconnected from potential users. This requires the development of interfaces with operational communities at international and national levels, strongly focusing on the needs of people and factors at risk. The WMO has taken the lead with international partners to develop and implement a Sand and Dust Storm Warning Advisory and Assessment System (SDS-WAS). The history of the WMO SDS-WAS development is as follows. On 12-14 September 2004, an International Symposium on Sand and Dust Storms was held in Beijing at the China Meteorological Agency followed by a WMO Experts Workshop on Sand and Dust Storms. The recommendations of that workshop led to a proposal to create a WMO Sand and Dust Storm Project coordinated jointly with the Global Atmosphere Watch (GAW). This was approved by the steering body of the World Weather Research Programme (WWRP) in 2005. Responding to a WMO survey conducted in 2005, more than forty WMO Member countries expressed interest in participating in activities to improve capacities for more reliable sand and dust storm monitoring, forecasting and assessment.
From 31 October to 1 November 2006 in Shanghai, the steering committee of the Sand and Dust Storm Project proposed the development and implementation of a Sand and Dust Storm Warning, Advisory and Assessment System (SDS-WAS). The WMO Secretariat in Geneva formed an ad-hoc Internal Group on SDS-WAS consisting of scientific officers representing WMO research, observations, operational prediction, service delivery and applications programmes such as aviation and agriculture. In May 2007, the 14th WMO Congress endorsed the launching of the SDS-WAS. It also welcomed the strong support of Spain to host a regional centre for the European/African/Middle East node of SDS-WAS and to play a lead role in implementation. In August 2007, the Korean Meteorological Administration hosted the 2nd International Workshop on Sand and Dust Storms highlighting Korean SDS-WAS activities as well as those of Asian regional partners. From 7-9 November 2007, Spain hosted the WMO/GEO Expert Meeting on SDS-WAS at the Barcelona Supercomputing Center. This consultation meeting brought 100 international experts together from research, observation, forecasting and user countries, especially in Africa and the Middle East, to discuss the way forward in SDS-WAS implementation. The general objective of the WMO/GEO Expert Meeting on an International Sand and Dust Storm Warning System was to discuss and recommend actions needed to develop a global routine SDS-WAS based on integrating numerical SDS prediction and observing systems, and on establishing effective cooperation between data producers and user communities in order to provide SDS-WAS products capable of contributing to the reduction of risks from SDS. The specific objectives were: to identify, present and suggest future real-time observations for forecast verification and dust surveillance (satellite, ground-based remote sensing, both passive and active, and in-situ monitoring); to present ongoing forecasting activities; to discuss and identify user needs (health, air quality, air transport operations, ocean, and others); to identify and discuss dust research issues relevant for operational forecast applications; and to present the concept of SDS-WAS and Regional Centers. The meeting was organised around invited presentations and discussions on observations, modelling and users of the SDS-WAS.
C Pérez and J M Baldasano, Editors
International Steering Committee: José María Baldasano (Chairman) - Barcelona Supercomputing Center, Spain; Emilio Cuevas - Instituto Nacional de Meteorología, Spain; Leonard A Barrie - World Meteorological Organisation, Switzerland; Young J Kim - Gwangju Institute of Science and Technology, Korea; Menas Kafatos - George Mason University, USA; Xiaoye Zhang - Chinese Meteorology Administration, China; Slobodan Nickovic - World Meteorological Organisation, Switzerland; Carlos Pérez - Barcelona Supercomputing Center, Spain; William A Sprigg - University of Arizona, USA; Stéphane Alfaro - Université de Paris Val de Marne, France; Ina Tegen - Leibniz Institute for Tropospheric Research, Germany; Mohamed Mahmoud Eissa - Under-secretary of State for Researches, Egypt; Sunling Gong - Environment Canada, Canada; Emily Firth - GEO Secretariat, Switzerland
Local Organising Committee: José María Baldasano - Barcelona Supercomputing Center, Spain; Carlos Pérez - Barcelona Supercomputing Center, Spain; Renata Giménez - Barcelona Supercomputing Center, Spain; Emilio Cuevas - Instituto Nacional de Meteorología, Spain; Slobodan Nickovic - World Meteorological Organisation, Switzerland; J M Marcos - Instituto Nacional de Meteorología, Spain; Manuel Palomares - Instituto Nacional de Meteorología, Spain; Xavier Querol - Consejo Superior de Investigaciones Científicas, Spain
Space Radar Image of Mammoth Mountain, California
1999-05-01
This false-color composite radar image of the Mammoth Mountain area in the Sierra Nevada Mountains, California, was acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar aboard the space shuttle Endeavour on its 67th orbit on October 3, 1994. The image is centered at 37.6 degrees north latitude and 119.0 degrees west longitude. The area is about 39 kilometers by 51 kilometers (24 miles by 31 miles). North is toward the bottom, about 45 degrees to the right. In this image, red was created using L-band (horizontally transmitted/vertically received) polarization data; green was created using C-band (horizontally transmitted/vertically received) polarization data; and blue was created using C-band (horizontally transmitted and received) polarization data. Crawley Lake appears dark at the center left of the image, just above or south of Long Valley. The Mammoth Mountain ski area is visible at the top right of the scene. The red areas correspond to forests, the dark blue areas are bare surfaces and the green areas are short vegetation, mainly brush. The purple areas at the higher elevations in the upper part of the scene are discontinuous patches of snow cover from a September 28 storm. New, very thin snow was falling before and during the second space shuttle pass. In parallel with the operational SIR-C data processing, an experimental effort is being conducted to test SAR data processing using the Jet Propulsion Laboratory's massively parallel supercomputing facility, centered around the Cray Research T3D. These experiments will assess the abilities of large supercomputers to produce high throughput Synthetic Aperture Radar processing in preparation for upcoming data-intensive SAR missions. The image released here was produced as part of this experimental effort. http://photojournal.jpl.nasa.gov/catalog/PIA01746
National Test Facility civilian agency use of supercomputers not feasible
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1994-12-01
Based on interviews with civilian agencies cited in the House report (DOE, DoEd, HHS, FEMA, NOAA), none would be able to make effective use of NTF's excess supercomputing capabilities. These agencies stated they could not use the resources primarily because (1) NTF's supercomputers are older machines whose performance and costs cannot match those of more advanced computers available from other sources and (2) some agencies have not yet developed applications requiring supercomputer capabilities or do not have funding to support such activities. In addition, future support for the hardware and software at NTF is uncertain, making any investment by an outside user risky.
Multiple DNA and protein sequence alignment on a workstation and a supercomputer.
Tajima, K
1988-11-01
This paper describes a multiple alignment method using a workstation and a supercomputer. The method is based on aligning a set of already aligned sequences with a new sequence and applies this alignment step recursively. The alignment executes in reasonable computation time on platforms ranging from a workstation to a supercomputer, both in terms of alignment results and of computational speed achieved through parallel processing. The application of the algorithm is illustrated by several examples of multiple alignment of 12 amino acid and DNA sequences of HIV (human immunodeficiency virus) env genes. Colour graphic programs on a workstation and parallel processing on a supercomputer are discussed.
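As a rough, simplified sketch of the general idea (aligning a new sequence against a set of already aligned sequences), the following scores the new sequence against each alignment column with Needleman-Wunsch-style dynamic programming; the scoring values are arbitrary and the traceback is omitted, so this is an illustration under stated assumptions, not the published method.

import numpy as np

def align_to_profile(aligned_seqs, new_seq, match=1.0, mismatch=-1.0, gap=-2.0):
    # Score new_seq against the columns of an existing alignment (profile).
    ncols, n = len(aligned_seqs[0]), len(new_seq)

    def column_score(col, ch):
        residues = [s[col] for s in aligned_seqs if s[col] != "-"]
        if not residues:
            return gap
        return sum(match if r == ch else mismatch for r in residues) / len(residues)

    F = np.zeros((ncols + 1, n + 1))
    F[:, 0] = gap * np.arange(ncols + 1)   # gaps in the new sequence
    F[0, :] = gap * np.arange(n + 1)       # gaps in the profile
    for i in range(1, ncols + 1):
        for j in range(1, n + 1):
            F[i, j] = max(F[i - 1, j - 1] + column_score(i - 1, new_seq[j - 1]),
                          F[i - 1, j] + gap,
                          F[i, j - 1] + gap)
    return F[ncols, n]  # alignment score only; traceback omitted for brevity

print(align_to_profile(["ACGT-A", "ACG-TA"], "ACGTA"))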
Seismic signal processing on heterogeneous supercomputers
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas
2015-04-01
The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in the recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient, however they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is design of a prototype for such library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that require dedicated HPC solutions. The chosen application is using a wide range of common signal processing methods, which include various IIR filter designs, amplitude and phase correlation, computing the analytic signal, and discrete Fourier transforms. Furthermore, various processing methods specific for seismology, like rotation of seismic traces, are used. Efficient implementation of all these methods on the GPU-accelerated systems represents several challenges. In particular, it requires a careful distribution of work between the sequential processors and accelerators. Furthermore, since the application is designed to process very large volumes of data, special attention had to be paid to the efficient use of the available memory and networking hardware resources in order to reduce intensity of data input and output. In our contribution we will explain the software architecture as well as principal engineering decisions used to address these challenges. We will also describe the programming model based on C++ and CUDA that we used to develop the software. Finally, we will demonstrate performance improvements achieved by using the heterogeneous computing architecture. This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID d26.
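One of the building blocks named above, cross-correlation of ambient noise records, can be illustrated with a frequency-domain sketch like the one below (plain NumPy, no GPU offloading); the preprocessing, windowing, and stacking used in the actual library are not reproduced, and the synthetic traces are invented.

import numpy as np

def cross_correlate(trace_a, trace_b):
    # Full linear cross-correlation of two equal-length records via the FFT.
    n = len(trace_a)
    nfft = 2 * n                                    # zero-pad to avoid wrap-around
    fa = np.fft.rfft(trace_a, nfft)
    fb = np.fft.rfft(trace_b, nfft)
    cc = np.fft.irfft(np.conj(fa) * fb, nfft)       # corr[k] = sum_t a[t] * b[t + k]
    return np.concatenate((cc[-(n - 1):], cc[:n]))  # lags -(n-1) .. (n-1)

rng = np.random.default_rng(0)
a = rng.standard_normal(1000)
b = np.roll(a, 50) + 0.1 * rng.standard_normal(1000)  # b is a delayed copy of a
lags = np.arange(-(len(a) - 1), len(a))
print(lags[np.argmax(cross_correlate(a, b))])          # expected peak near +50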
NASA Technical Reports Server (NTRS)
Kutler, Paul; Yee, Helen
1987-01-01
Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics.
Prioritizing Play and Becoming the Best Place on Earth to Be a Kid
ERIC Educational Resources Information Center
Mediate, Heather N.
2011-01-01
The city of Pittsburgh, Pennsylvania, doesn't need national accolades to prove that it is a great place to live, work, and raise families, but it has them. In 2010, "Parents Magazine" ranked Pittsburgh fourth among the top 100 cities for raising babies and Forbes.com ranked the Pittsburgh metropolitan area the number one most livable…
ERIC Educational Resources Information Center
Tharp-Taylor, Shannah; Nelson, Catherine Awsumb; Dembosky, Jacob W.; Gill, Brian
2007-01-01
The Pittsburgh Public School District asked the RAND Corporation to monitor the first year's implementation (2006-2007) of Excellence for All (EFA) and provide feedback to district staff, the board, and other stakeholders. The Pittsburgh Public School District leadership developed EFA with the aim of increasing student achievement by improving…
Knight, William D; Okello, Aren A; Ryan, Natalie S; Turkheimer, Federico E; Rodríguez Martinez de Llano, Sofia; Edison, Paul; Douglas, Jane; Fox, Nick C; Brooks, David J; Rossor, Martin N
2011-01-01
(11)Carbon-Pittsburgh compound B positron emission tomography studies have suggested early and prominent amyloid deposition in the striatum in presenilin 1 mutation carriers. This cross-sectional study examines the (11)Carbon-Pittsburgh compound B positron emission tomography imaging profiles of presymptomatic and mildly affected (mini-mental state examination ≥ 20) carriers of seven presenilin 1 mutations, comparing them with groups of controls and symptomatic sporadic Alzheimer's disease cases. Parametric ratio images representing (11)Carbon-Pittsburgh compound B retention from 60 to 90 min were created using the pons as a reference region and nine regions of interest were studied. We confirmed that increased amyloid load may be detected in presymptomatic presenilin 1 mutation carriers with (11)Carbon-Pittsburgh compound B positron emission tomography and that the pattern of retention is heterogeneous. Comparison of presenilin 1 and sporadic Alzheimer's disease groups revealed significantly greater thalamic retention in the presenilin 1 group and significantly greater frontotemporal retention in the sporadic Alzheimer's disease group. A few individuals with presenilin 1 mutations showed increased cerebellar (11)Carbon-Pittsburgh compound B retention suggesting that this region may not be as suitable a reference region in familial Alzheimer's disease.
An Application of the Perron-Frobenius Theorem to a Damage Model Problem.
1985-04-01
An Application of the Perron-Frobenius Theorem to a Damage Model Problem (U). Pittsburgh Univ PA Center for Multivariate...any copyright notation herein. An Application of the Perron-Frobenius Theorem to a Damage...University of Sheffield, U.K. Summary: Using the Perron-Frobenius theorem, it is established that if (X,Y) is a random vector of non-negative
Simulated Students and Classroom Use of Model-Based Intelligent Tutoring
NASA Technical Reports Server (NTRS)
Koedinger, Kenneth R.
2008-01-01
Two educational uses of models and simulations: 1) students create models and use simulations; and 2) researchers create models of learners to guide development of reliably effective materials. Cognitive tutors simulate and support tutoring - data is crucial to create an effective model. Pittsburgh Science of Learning Center: resources for modeling, authoring, and experimentation; a repository of data and theory. Examples of advanced modeling efforts: SimStudent learns a rule-based model; a help-seeking model tutors metacognition; Scooter uses machine-learning detectors of student engagement.
Work closely with the business office.
2011-05-01
At the University of Pittsburgh Medical Center, members of the case management department work closely with the contracting and business department by pointing out payer issues and keeping the chief financial officer informed about payer requirements that could affect reimbursement. Case managers track payer issues on a day-to-day basis and report trends to the contracting department. Contracting staff obtain input from members of the case management department when negotiating or renegotiating contracts. Changes in payer contracts are communicated to the case management staff.
NASA Astrophysics Data System (ADS)
Cosgrove, B.; Gochis, D.; Clark, E. P.; Cui, Z.; Dugger, A. L.; Fall, G. M.; Feng, X.; Fresch, M. A.; Gourley, J. J.; Khan, S.; Kitzmiller, D.; Lee, H. S.; Liu, Y.; McCreight, J. L.; Newman, A. J.; Oubeidillah, A.; Pan, L.; Pham, C.; Salas, F.; Sampson, K. M.; Smith, M.; Sood, G.; Wood, A.; Yates, D. N.; Yu, W.; Zhang, Y.
2015-12-01
The National Weather Service (NWS) National Water Center (NWC) is collaborating with the NWS National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR) to implement a first-of-its-kind operational instance of the Weather Research and Forecasting (WRF)-Hydro model over the Continental United States (CONUS) and contributing drainage areas on the NWS Weather and Climate Operational Supercomputing System (WCOSS) supercomputer. The system will provide seamless, high-resolution, continuously cycling forecasts of streamflow and other hydrologic outputs of value from both deterministic- and ensemble-type runs. WRF-Hydro will form the core of the NWC national water modeling strategy, supporting NWS hydrologic forecast operations along with emergency response and water management efforts of partner agencies. Input and output from the system will be comprehensively verified via the NWC Water Resource Evaluation Service. Hydrologic events occur on a wide range of temporal scales, from fast-acting flash floods to long-term flow events impacting water supply. In order to capture this range of events, the initial operational WRF-Hydro configuration will feature 1) hourly analysis runs, 2) short- and medium-range deterministic forecasts out to two-day and ten-day horizons, and 3) long-range ensemble forecasts out to 30 days. All three of these configurations are underpinned by a 1km execution of the NoahMP land surface model, with channel routing taking place on 2.67 million NHDPlusV2 catchments covering the CONUS and contributing areas. Additionally, the short- and medium-range forecast runs will feature surface and sub-surface routing on a 250m grid, while the hourly analyses will feature this same 250m routing in addition to nudging-based assimilation of US Geological Survey (USGS) streamflow observations. A limited number of major reservoirs will be configured within the model to begin to represent the first-order impacts of streamflow regulation.
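For illustration of the nudging idea mentioned above (not the operational WRF-Hydro code), a modeled streamflow value can be relaxed toward a gauge observation with a simple weighting term, as in this sketch; the gain and flow values are invented.

def nudge(modeled_flow, observed_flow, gain=0.5):
    # Relax the modeled value toward the observation; gain = 0 keeps the model,
    # gain = 1 replaces it with the observation.
    return modeled_flow + gain * (observed_flow - modeled_flow)

# Invented values in cubic meters per second; the gain is arbitrary.
print(nudge(modeled_flow=120.0, observed_flow=95.0))  # -> 107.5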
Lee, Ching-Yi; Chen, Hsi-Chung; Meg Tseng, Mei-Chih; Lee, Hsin-Chien; Huang, Lian-Hua
2015-09-01
Shift work is a prominent feature of most nursing jobs. Although chronotype, emotional disturbance, and insomnia vulnerability are important factors for patients with insomnia in general, their effects on shift nurses are unknown. This study explores the relationships between the sleep quality of shift nurses and the variables of chronotype, emotional disturbance, and insomnia vulnerability. A survey was conducted with 398 shift nurses in a medical center. Chronotype, emotional disturbance, insomnia vulnerability, and sleep quality were evaluated using the Smith Morningness-Eveningness Questionnaire, the Brief Symptom Rating Scale, the Ford Insomnia Response to Stress Test, and the Pittsburgh Sleep Quality Index, respectively. On the Pittsburgh Sleep Quality Index, 70.1% of the participants scored higher than 5. Multiple regression analysis revealed that, together with night shift work (b [SE] = 1.05 [0.35], p = .003), higher levels of emotional disturbance (b [SE] = 0.30 [0.05], p < .001) and higher insomnia vulnerability (b [SE] = 0.18 [0.03], p < .001) were predictors of poor sleep quality and that chronotype was not a predictor of poor sleep quality. The multiple mediator model indicated that emotional disturbance significantly mediated an indirect effect of evening chronotype preference on poor subjective sleep quality (one subscale of the Pittsburgh Sleep Quality Index). In addition to shift patterns, emotional disturbance and high insomnia vulnerability are factors that may be used to identify shift nurses who face a higher risk of sleep disturbance. Because evening chronotype may indirectly influence subjective sleep quality through the pathway of emotional disturbance, further research into the mechanism that underlies this pathway is warranted.
Stall, Ron; Egan, James E; Kinsky, Suzanne; Coulter, Robert W S; Friedman, M Reuel; Matthews, Derrick D; Klindera, Kent; Cowing, Michael
2016-12-01
Gay men, other men who have sex with men and transgender (GMT) populations suffer a disproportionate burden of HIV disease around the globe, which is directly attributable to the virulently homophobic environments in which many GMT people live. In addition to the direct effects of homophobia on GMT individuals, the ongoing marginalization of GMT people has meant that there is limited social capital on which effective HIV prevention and care programs can be built in many low- and middle-income countries (LMIC). Thus, meaningful responses meant to address the dire situation of GMT populations in LMIC settings must include a combination of bold and innovative approaches if efforts to end the epidemic are to have any chance of making a real difference. The HIV Scholars Program at the University of Pittsburgh's Center for LGBT Health Research is a prime example of a creative and dynamic approach to raising the expertise needed within GMT populations to respond to the global HIV/AIDS pandemic.
Sejdić, Ervin; Rothfuss, Michael; Stachel, Joshua R.; Franconi, Nicholas G.; Bocan, Kara; Lovell, Michael R.; Mickle, Marlin H.
2016-01-01
Translational research has recently been rediscovered as one of the basic tenets of engineering. Although many people have numerous ideas of how to accomplish this successfully, the fundamental method is to provide an innovative and creative environment. The University of Pittsburgh has been accomplishing this goal through a variety of methodologies. The contents of this paper are exemplary of what can be achieved through the interaction of students, staff, faculty and, in one example, high school teachers. While the projects completed within the groups involved in this paper have spanned other areas, the focus of this paper is on biomedical devices, that is, on improving and maintaining health in a variety of areas. The spirit of the translational research is discovery, invention, intellectual property protection, and the creation of value through the spinning off of companies while providing better health care and creating jobs. All but one of these projects involve wireless radio frequency energy for delivery. The remaining device can be wirelessly connected for data collection. PMID:23897048
NASA Technical Reports Server (NTRS)
1991-01-01
Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)
Desktop supercomputer: what can it do?
NASA Astrophysics Data System (ADS)
Bogdanov, A.; Degtyarev, A.; Korkhov, V.
2017-12-01
The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are now available to most researchers. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.
ERIC Educational Resources Information Center
Roth, Ellen A., Ed.; Rubin, Judith A., Ed.
The proceedings of the 2nd annual Pittsburgh Conference on Art Therapy (with handicapped persons) consists of 44 items including full length papers, summaries of previously published papers, descriptions of workshops, and a limited number of abstracts (submitted by those who chose not to present a paper or workshop description). The papers are…
Detection of isolated cerebrovascular beta-amyloid with Pittsburgh compound B.
Greenberg, Steven M; Grabowski, Thomas; Gurol, M Edip; Skehan, Maureen E; Nandigam, R N Kaveer; Becker, John A; Garcia-Alloza, Monica; Prada, Claudia; Frosch, Matthew P; Rosand, Jonathan; Viswanathan, Anand; Smith, Eric E; Johnson, Keith A
2008-11-01
Imaging of cerebrovascular beta-amyloid (cerebral amyloid angiopathy) is complicated by the nearly universal overlap of this pathology with Alzheimer's pathology. We performed positron emission tomographic imaging with Pittsburgh Compound B on a 42-year-old man with early manifestations of Iowa-type hereditary cerebral amyloid angiopathy, a form of the disorder with little or no plaque deposits of fibrillar beta-amyloid. The results demonstrated increased Pittsburgh Compound B retention selectively in occipital cortex, sparing regions typically labeled in Alzheimer's disease. These results offer compelling evidence that Pittsburgh Compound B positron emission tomography can noninvasively detect isolated cerebral amyloid angiopathy before overt signs of tissue damage such as hemorrhage or white matter lesions.
ERIC Educational Resources Information Center
Bozick, Robert; Gonzalez, Gabriella; Engberg, John
2015-01-01
The Pittsburgh Promise is a scholarship program that provides $5,000 per year toward college tuition for public high school graduates in Pittsburgh, Pennsylvania who earned a 2.5 GPA and a 90% attendance record. This study used a difference-in-difference design to assess whether the introduction of the Promise scholarship program directly…
Lifestyle characteristics assessment of Japanese in Pittsburgh, USA.
Hirooka, Nobutaka; Takedai, Teiichi; D'Amico, Frank
2012-04-01
Lifestyle-related chronic diseases such as cancer and cardiovascular disease are the greatest public health concerns. Evidence shows Japanese immigrants to a westernized environment have a higher incidence of lifestyle-related diseases. However, little is known about lifestyle characteristics related to chronic diseases for Japanese in a westernized environment. This study examines the gap in lifestyle by comparing lifestyle prevalence for Japanese in the US with the Japanese National Data (the National Health and Nutrition Survey in Japan, J-NHANS) as well as the Japan National Health Promotion in the twenty-first Century (HJ21) goals. Japanese adults were surveyed in Pittsburgh, USA, regarding their lifestyle (e.g., diet, exercise, smoking, stress, alcohol, and oral hygiene). The prevalence was compared with J-NHANS and HJ21 goals. Ninety-three responded (response rate: 97.9%). Japanese men (n = 38) and women (n = 55) in Pittsburgh smoke less than Japanese in Japan (P < 0.001 for both genders). Japanese in Pittsburgh perform less physical activity in daily life and have lower prevalence of walking more than 1 h per day (P < 0.001 for both genders). Japanese women in Pittsburgh have significantly higher prevalence of stress than in Japan (P = 0.004). Japanese men in Pittsburgh do not reach the HJ21 goal in weight management, BMI, use of medicine or alcohol to sleep, and sleep quality. Japanese women in Pittsburgh do not reach the HJ21 goal in weight management and sleep quality. In conclusion, healthy lifestyle promotion including exercise and physical activity intervention for Japanese living in a westernized environment is warranted.
NASA Advanced Supercomputing (NAS) User Services Group
NASA Technical Reports Server (NTRS)
Pandori, John; Hamilton, Chris; Niggley, C. E.; Parks, John W. (Technical Monitor)
2002-01-01
This viewgraph presentation provides an overview of NAS (NASA Advanced Supercomputing), its goals, and its mainframe computer assets. Also covered are its functions, including systems monitoring and technical support.
NARSTO EPA SS PITTSBURGH GAS PM PROPERTY DATA
Atmospheric Science Data Center
2018-04-09
... Sizer, Nephelometer, Aerosol Collector, SMPS - Scanning Mobility Particle Sizer, Fluorescence Spectroscopy ... Related Data: Environmental Protection Agency Supersites, Pittsburgh, Pennsylvania ...
Mira: Argonne's 10-petaflops supercomputer
Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul
2018-02-13
Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
Adventures in Computational Grids
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
Sometimes one supercomputer is not enough. Or your local supercomputers are busy, or not configured for your job. Or you don't have any supercomputers. You might be trying to simulate worldwide weather changes in real time, requiring more compute power than you could get from any one machine. Or you might be collecting microbiological samples on an island, and need to examine them with a special microscope located on the other side of the continent. These are the times when you need a computational grid.
Mira: Argonne's 10-petaflops supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papka, Michael; Coghlan, Susan; Isaacs, Eric
2013-07-03
Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
A high level language for a high performance computer
NASA Technical Reports Server (NTRS)
Perrott, R. H.
1978-01-01
The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.
Technology advances and market forces: Their impact on high performance architectures
NASA Technical Reports Server (NTRS)
Best, D. R.
1978-01-01
Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture and application with greater attention to testability, maintainability, reliability, and usability than supercomputer development programs of the past.
Floating point arithmetic in future supercomputers
NASA Technical Reports Server (NTRS)
Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.
1989-01-01
Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
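The 64-bit layout described above (1 sign bit, 11 exponent bits, 52 mantissa bits) can be inspected directly, as in this illustrative sketch; the sample value is arbitrary.

import struct

def decompose_double(x):
    # Split an IEEE 754 binary64 value into its sign, exponent, and mantissa fields.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF        # 11 exponent bits, biased by 1023
    mantissa = bits & ((1 << 52) - 1)      # 52 fraction bits
    return sign, exponent, mantissa

s, e, m = decompose_double(-6.25)
print(s, e - 1023, hex(m))  # sign bit, unbiased exponent, fraction bits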
Steady viscous flow past a circular cylinder
NASA Technical Reports Server (NTRS)
Fornberg, B.
1984-01-01
Viscous flow past a circular cylinder becomes unstable around Reynolds number Re = 40. With a numerical technique based on Newton's method and made possible by the use of a supercomputer, steady (but unstable) solutions have been calculated up to Re = 400. It is found that the wake continues to grow in length approximately linearly with Re. However, in conflict with available asymptotic predictions, the width starts to increase very rapidly around Re = 300. All numerical calculations have been performed on the CDC CYBER 205 at the CDC Service Center in Arden Hills, Minnesota.
NASA Astrophysics Data System (ADS)
Su, Yan; Fan, Junyu; Zheng, Zhaoyang; Zhao, Jijun; Song, Huajie
2018-05-01
Project supported by the Science Challenge Project of China (Grant No. TZ2016001), the National Natural Science Foundation of China (Grant Nos. 11674046 and 11372053), the Fundamental Research Funds for the Central Universities of China (Grant No. DUT17GF203), the Opening Project of State Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology, China (Grant No. KFJJ16-01M), and the Supercomputing Center of Dalian University of Technology, China.
Understanding the Cray X1 System
NASA Technical Reports Server (NTRS)
Cheung, Samson
2004-01-01
This paper helps the reader understand the characteristics of the Cray X1 vector supercomputer system, and provides hints and information to enable the reader to port codes to the system. It provides a comparison between the basic performance of the X1 platform and other platforms that are available at NASA Ames Research Center. A set of codes, solving the Laplacian equation with different parallel paradigms, is used to understand some features of the X1 compiler. An example code from the NAS Parallel Benchmarks is used to demonstrate performance optimization on the X1 platform.
NARSTO EPA SS PITTSBURGH MET DATA
Atmospheric Science Data Center
2018-04-06
Surface meteorological data from the Environmental Protection Agency Supersites program site in Pittsburgh, Pennsylvania, including surface pressure, solar irradiance, and ultraviolet radiation; data can be ordered through Earthdata Search.
Targeting Histone Abnormality in Triple-Negative Breast Cancer
2015-08-01
Report documentation for work led by Nancy E. Davidson, M.D. (davidsonne@upmc.edu) at the University of Pittsburgh, presented in part at the Pediatric Hematology/Oncology BMT & CT Conference, Children's Hospital of Pittsburgh, Pittsburgh, PA. Approved for public release; distribution unlimited. The views, opinions and/or findings contained in the report are those of the author(s).
The Effects of the Air Cast Sports Stirrup on Postural Sway in Normal Males
1993-01-01
Thesis from the Rangos School of Health Sciences, Duquesne University, Pittsburgh, PA (Paula Sammarone, MA, ATC, Director of Athletic Training). When an ankle sprain occurs, tearing of the ligaments also occurs, which results in de-afferentization of the articular nerves (20). Articular nerve fibers have lower tensile strength than collagen fibers (21).
Tracing Scientific Facilities through the Research Literature Using Persistent Identifiers
NASA Astrophysics Data System (ADS)
Mayernik, M. S.; Maull, K. E.
2016-12-01
Tracing persistent identifiers to their source publications is an easy task when authors use them, since it is a simple matter of matching the persistent identifier to the specific text string of the identifier. However, trying to understand whether a publication uses the resource behind an identifier when that identifier is not referenced explicitly is a harder task. In this research, we explore the effectiveness of alternative strategies for associating publications with uses of the resource referenced by an identifier when the identifier may not be explicit. This project is explored within the context of the NCAR supercomputer, where we are broadly interested in the science that can be traced to the usage of the NCAR supercomputing facility, by way of the peer-reviewed research publications that utilize and reference it. In this project we explore several ways of drawing linkages between publications and the NCAR supercomputing resources. Peer-reviewed publications related to NCAR supercomputer usage are identified and compiled via three sources: 1) user-supplied publications gathered through a community survey, 2) publications identified via manual searching of the Google Scholar search index, and 3) publications associated with National Science Foundation (NSF) grants extracted from a public NSF database. These three sources represent three styles of collecting information about publications that likely imply usage of the NCAR supercomputing facilities. Each source has strengths and weaknesses, so our discussion will explore how our publication identification and analysis methods vary in terms of accuracy, reliability, and effort. We will also discuss strategies for enabling more efficient tracing of research impacts of supercomputing facilities going forward through the assignment of a persistent web identifier to the NCAR supercomputer. While this solution has the potential to greatly enhance our ability to trace the use of the facility through publications, authors must cite the facility consistently. It is therefore necessary to provide recommendations for citation and attribution behavior, and we will conclude our discussion with how such recommendations have improved tracing of the supercomputer facility, allowing for more consistent and widespread measurement of its impact.
2012-02-10
Then and Now: These images illustrate the dramatic improvement in NASA computing power over the last 23 years, and its effect on the number of grid points used for flow simulations. At left, an image from the first full-body Navier-Stokes simulation (1988) of an F-16 fighter jet showing pressure on the aircraft body, and fore-body streamlines at Mach 0.90. This steady-state solution took 25 hours using a single Cray X-MP processor to solve the 500,000 grid-point problem. Investigator: Neal Chaderjian, NASA Ames Research Center At right, a 2011 snapshot from a Navier-Stokes simulation of a V-22 Osprey rotorcraft in hover. The blade vortices interact with the smaller turbulent structures. This very detailed simulation used 660 million grid points, and ran on 1536 processors on the Pleiades supercomputer for 180 hours. Investigator: Neal Chaderjian, NASA Ames Research Center; Image: Tim Sandstrom, NASA Ames Research Center
DNA Data Bank of Japan: 30th anniversary.
Kodama, Yuichi; Mashima, Jun; Kosuge, Takehide; Kaminuma, Eli; Ogasawara, Osamu; Okubo, Kousaku; Nakamura, Yasukazu; Takagi, Toshihisa
2018-01-04
The DNA Data Bank of Japan (DDBJ) Center (http://www.ddbj.nig.ac.jp) has been providing public data services for 30 years since 1987. We are collecting nucleotide sequence data and associated biological information from researchers as a member of the International Nucleotide Sequence Database Collaboration (INSDC), in collaboration with the US National Center for Biotechnology Information and the European Bioinformatics Institute. The DDBJ Center also services the Japanese Genotype-phenotype Archive (JGA) with the National Bioscience Database Center to collect genotype and phenotype data of human individuals. Here, we outline our database activities for INSDC and JGA over the past year, and introduce submission, retrieval and analysis services running on our supercomputer system and their recent developments. Furthermore, we highlight our responses to the amended Japanese rules for the protection of personal information and the launch of the DDBJ Group Cloud service for sharing pre-publication data among research groups. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Energy Efficient Supercomputing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antypas, Katie
2014-10-17
Katie Antypas, Head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy-efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.
Energy Efficient Supercomputing
Antypas, Katie
2018-05-07
Katie Antypas, Head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy-efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.
Job Management Requirements for NAS Parallel Systems and Clusters
NASA Technical Reports Server (NTRS)
Saphir, William; Tanner, Leigh Ann; Traversat, Bernard
1995-01-01
A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.
Preparing for Local Adaptation: Understanding Flood Risk Perceptions in Pittsburgh
NASA Astrophysics Data System (ADS)
Klima, K.; Wong-Parodi, G.
2015-12-01
The City of Pittsburgh experiences numerous floods every year. Aging and insufficient infrastructure contribute to flash floods and to over 20 billion gallons of combined sewer overflows annually, contaminating Pittsburgh's streets, basements, and waterways. Climate change is expected to further exacerbate this problem by causing more intense and more frequent extreme precipitation events in Western Pennsylvania. For a stormwater adaptation plan to be implemented effectively, the City will need informed public support. One way to achieve public understanding and support is through effective communication of the risks, benefits, and uncertainties of local flooding hazards and adaptation methods. In order to develop these communications effectively, the city and its partners will need to know what knowledge and attitudes the residents of Pittsburgh already hold about flood risks. Here we seek to (1) identify Pittsburgh residents' knowledge level, risk perception, and attitudes towards flooding and stormwater management, and (2) pre-test communications meant to inform and empower Pittsburghers about flood risks and adaptation strategies. We conduct a city-wide survey of 10,000 Pittsburgh renters and homeowners from four life situations: high risk, above poverty; high risk, below poverty; low risk, above poverty; and low risk, below poverty. Mixed-media recruitment strategies (online and paper-based solicitations guided and organized by community organizations) assist in reaching all subpopulations. Preliminary results suggest participants know what stormwater runoff is, but have a weak understanding of how stormwater interacts with natural and built systems. Furthermore, although participants have a good understanding of the difference between green and gray infrastructure, this does not translate into a change in their willingness to pay for green infrastructure adaptation. This suggests that additional communications about flood risks and adaptation strategies are needed.
Hu, Hao; Hong, Xingchen; Terstriep, Jeff; Liu, Yan; Finn, Michael P.; Rush, Johnathan; Wendel, Jeffrey; Wang, Shaowen
2016-01-01
Geospatial data, often embedded with geographic references, are important to many application and science domains, and represent a major type of big data. The increased volume and diversity of geospatial data have caused serious usability issues for researchers in various scientific domains, which call for innovative cyberGIS solutions. To address these issues, this paper describes a cyberGIS community data service framework to facilitate geospatial big data access, processing, and sharing based on a hybrid supercomputer architecture. Through the collaboration between the CyberGIS Center at the University of Illinois at Urbana-Champaign (UIUC) and the U.S. Geological Survey (USGS), a community data service for accessing, customizing, and sharing digital elevation model (DEM) and its derived datasets from the 10-meter national elevation dataset, namely TopoLens, is created to demonstrate the workflow integration of geospatial big data sources, computation, analysis needed for customizing the original dataset for end user needs, and a friendly online user environment. TopoLens provides online access to precomputed and on-demand computed high-resolution elevation data by exploiting the ROGER supercomputer. The usability of this prototype service has been acknowledged in community evaluation.
Simulation and study of stratified flows around finite bodies
NASA Astrophysics Data System (ADS)
Gushchin, V. A.; Matyushin, P. V.
2016-06-01
The flows past a sphere and a square cylinder of diameter d moving horizontally at the velocity U in a linearly density-stratified viscous incompressible fluid are studied. The flows are described by the Navier-Stokes equations in the Boussinesq approximation. Variations in the spatial vortex structure of the flows are analyzed in detail in a wide range of dimensionless parameters (such as the Reynolds number Re = Ud/ν and the internal Froude number Fr = U/(Nd), where ν is the kinematic viscosity and N is the buoyancy frequency) by applying mathematical simulation (on supercomputers of the Joint Supercomputer Center of the Russian Academy of Sciences) and three-dimensional flow visualization. At 0.005 < Fr < 100, the classification of flow regimes for the sphere (for 1 < Re < 500) and for the cylinder (for 1 < Re < 200) is improved. At Fr = 0 (i.e., at U = 0), the problem of diffusion-induced flow past a sphere leading to the formation of horizontal density layers near the sphere's upper and lower poles is considered. At Fr = 0.1 and Re = 50, the formation of a steady flow past a square cylinder with wavy hanging density layers in the wake is studied in detail.
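As a worked illustration of the dimensionless parameters above (with made-up values, not those of the study), the following C snippet evaluates Re = Ud/ν and Fr = U/(Nd):

    #include <stdio.h>

    /* Illustrative evaluation of the Reynolds and internal Froude numbers.
       All numerical values are assumptions for demonstration only. */
    int main(void) {
        double U  = 0.02;    /* body velocity, m/s (assumed) */
        double d  = 0.02;    /* body diameter, m (assumed) */
        double nu = 1.0e-6;  /* kinematic viscosity of water, m^2/s */
        double N  = 1.0;     /* buoyancy frequency, 1/s (assumed) */

        double Re = U * d / nu;
        double Fr = U / (N * d);

        printf("Re = %.1f, Fr = %.2f\n", Re, Fr);
        return 0;
    }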
An efficient framework for Java data processing systems in HPC environments
NASA Astrophysics Data System (ADS)
Fries, Aidan; Castañeda, Javier; Isasi, Yago; Taboada, Guillermo L.; Portell de Mora, Jordi; Sirvent, Raül
2011-11-01
Java is a commonly used programming language, although its use in High Performance Computing (HPC) remains relatively low. One of the reasons is a lack of libraries offering specific HPC functions to Java applications. In this paper we present a Java-based framework, called DpcbTools, designed to provide a set of functions that fill this gap. It includes a set of efficient data communication functions based on message-passing, thus providing, when a low latency network such as Myrinet is available, higher throughputs and lower latencies than standard solutions used by Java. DpcbTools also includes routines for the launching, monitoring and management of Java applications on several computing nodes by making use of JMX to communicate with remote Java VMs. The Gaia Data Processing and Analysis Consortium (DPAC) is a real case where scientific data from the ESA Gaia astrometric satellite will be entirely processed using Java. In this paper we describe the main elements of DPAC and its usage of the DpcbTools framework. We also assess the usefulness and performance of DpcbTools through its performance evaluation and the analysis of its impact on some DPAC systems deployed in the MareNostrum supercomputer (Barcelona Supercomputing Center).
High Specific Heat Dielectrics and Kapitza Resistance at Dielectric Boundaries.
1984-09-12
Westinghouse Research and Development Center, Pittsburgh, PA; P. W. Eckels et al., 12 September 1984. The work included the measurement of the specific heat and thermal conductivity of the CdCr2O4 and ZnCr2O4 spinels and of several CsCl-structure heavy metal halides.
History of the chemical heritage foundation scientific instrumentation museum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferraro, J. R.; Brame, E. G., Jr.; Chemistry
It all began in March 1990 at the 40th Pittsburgh Conference (PittCon) meeting in the Jacob Javits Convention Center in New York, New York. Coauthor John R. Ferraro stopped at the Beckman booth and began discussing with Robert Jarnutowski, at the time an engineer with Beckman Instruments (Fullerton, CA), the impending 50th anniversary, in 1991, of the landmark Beckman DU spectrophotometer. The thought entered Ferraro's mind that landmark instruments such as this one should be preserved in a museum; Germany, England, and Italy already host scientific instrumentation museums.
1984-06-21
H. L. Beach, NASA Langley Research Center. Topic: Ignition/Combustion Enhancement. 10:30 - 11:00, M. Lavid, ML Energia, Inc.; 11:00 - 11:30, W. Braun. (Contract F49620-83-C-0133) Principal Investigator: Moshe Lavid, ML Energia, Inc., P.O. Box 1468, Princeton, NJ 08542. SUMMARY/OVERVIEW: The radiative concept ... an iodine resonance lamp and a solar-blind photomultiplier; under these circumstances the temperature modulation can be regulated.
Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patchett, John M; Ahrens, James P; Lo, Li - Ta
2010-10-15
Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software-based ray tracing, software-based rasterization, and hardware-accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU- and CPU-based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software-based ray tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
Supercomputing Drives Innovation - Continuum Magazine | NREL
For years, NREL scientists have used supercomputers to simulate 3D models of primary enzymes and 3D models of wind plant aerodynamics, showing low-velocity wakes and their impact.
Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarje, Abhinav; Jacobsen, Douglas W.; Williams, Samuel W.
The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.
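The following C/OpenMP sketch illustrates the kind of loop-level thread parallelism the abstract refers to, applied within one rank's subdomain; the array names, sizes, and update are placeholders rather than MPAS-Ocean code:

    #include <stdio.h>
    #include <omp.h>

    #define NCELLS  100000
    #define NLEVELS 60

    /* Placeholder fields: one tracer value and one tendency per cell and level. */
    static double tracer[NCELLS][NLEVELS];
    static double tendency[NCELLS][NLEVELS];

    int main(void) {
        double dt = 30.0;  /* time step in seconds (assumed) */

        /* Threads split the horizontal cell loop; each thread updates all
           vertical levels of its cells, so there are no write conflicts. */
        #pragma omp parallel for schedule(static)
        for (int cell = 0; cell < NCELLS; cell++) {
            for (int k = 0; k < NLEVELS; k++) {
                tracer[cell][k] += dt * tendency[cell][k];
            }
        }

        printf("updated %d cells using up to %d threads\n",
               NCELLS, omp_get_max_threads());
        return 0;
    }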
Supercomputer algorithms for efficient linear octree encoding of three-dimensional brain images.
Berger, S B; Reis, D J
1995-02-01
We designed and implemented algorithms for three-dimensional (3-D) reconstruction of brain images from serial sections using two important supercomputer architectures, vector and parallel. These architectures were represented by the Cray YMP and Connection Machine CM-2, respectively. The programs operated on linear octree representations of the brain data sets, and achieved 500-800 times acceleration when compared with a conventional laboratory workstation. As the need for higher resolution data sets increases, supercomputer algorithms may offer a means of performing 3-D reconstruction well above current experimental limits.
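The abstract does not give the encoding details, but a common way to build a linear octree is to key each occupied leaf by the interleaved bits of its voxel coordinates (a Morton code), as in this illustrative C sketch:

    #include <stdio.h>
    #include <stdint.h>

    /* Linear octrees store only occupied leaf nodes, each identified by a key
       built from the interleaved bits of its (x, y, z) position. This shows
       one common keying scheme; the paper's exact encoding is not specified. */
    static uint64_t spread_bits(uint32_t v) {
        uint64_t x = v & 0x1FFFFF;                     /* 21 bits per axis */
        x = (x | x << 32) & 0x1F00000000FFFFULL;
        x = (x | x << 16) & 0x1F0000FF0000FFULL;
        x = (x | x << 8)  & 0x100F00F00F00F00FULL;
        x = (x | x << 4)  & 0x10C30C30C30C30C3ULL;
        x = (x | x << 2)  & 0x1249249249249249ULL;
        return x;
    }

    static uint64_t morton3d(uint32_t x, uint32_t y, uint32_t z) {
        return spread_bits(x) | (spread_bits(y) << 1) | (spread_bits(z) << 2);
    }

    int main(void) {
        /* Key for the voxel at (3, 5, 1) in a hypothetical brain volume. */
        printf("key = %llu\n", (unsigned long long)morton3d(3, 5, 1));
        return 0;
    }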
Intelligent supercomputers: the Japanese computer sputnik
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, G.
1983-11-01
Japan's government-supported fifth-generation computer project has had a pronounced effect on the American computer and information systems industry. The US firms are intensifying their research on and production of intelligent supercomputers, a combination of computer architecture and artificial intelligence software programs. While the present generation of computers is built for the processing of numbers, the new supercomputers will be designed specifically for the solution of symbolic problems and the use of artificial intelligence software. This article discusses new and exciting developments that will increase computer capabilities in the 1990s. 4 references.
Research on Spectroscopy, Opacity, and Atmospheres
NASA Technical Reports Server (NTRS)
Kurucz, Robert L.
1999-01-01
A web site has been set up to make the calculations accessible (i.e., cfakus.harvard.edu); this data can also be accessed by FTP. It has all of the atomic and diatomic molecular data, tables of distribution function opacities, grids of model atmospheres, colors, fluxes, etc., programs that are ready for distribution, and most of the recent papers developed during this grant. Atlases and computed spectra will be added as they are completed. New atomic and molecular calculations will be added as they are completed. The atomic programs that had been running on a Cray at the San Diego Supercomputer Center can now run on the Vaxes and Alpha. The work started with Ni and Co because there were new laboratory analyses that included isotopic and hyperfine splitting. Those calculations are described in the appended abstract for the 6th Atomic Spectroscopy and Oscillator Strengths meeting in Victoria last summer. A surprising finding is that quadrupole transitions have been grossly in error because mixing with higher levels has not been included. All levels up through n=9 for Fe I and II, the spectra for which the most information is available, are now included. After Fe I and Fe II, all other spectra are "easy". ATLAS12, the opacity sampling program for computing models with arbitrary abundances, has been put on the web server. A new distribution function opacity program for workstations that replaces the one used on the Cray at the San Diego Supercomputer Center has been written. Each set of abundances would take 100 Cray hours, costing $100,000.
Towards the Interoperability of Web, Database, and Mass Storage Technologies for Petabyte Archives
NASA Technical Reports Server (NTRS)
Moore, Reagan; Marciano, Richard; Wan, Michael; Sherwin, Tom; Frost, Richard
1996-01-01
At the San Diego Supercomputer Center, a massive data analysis system (MDAS) is being developed to support data-intensive applications that manipulate terabyte-sized data sets. The objective is to support scientific application access to data whether it is located at a Web site, stored as an object in a database, and/or stored in an archival storage system. We are developing a suite of demonstration programs which illustrate how Web, database (DBMS), and archival storage (mass storage) technologies can be integrated. An application presentation interface is being designed that integrates data access to all of these sources. We have developed a data movement interface between the Illustra object-relational database and the NSL UniTree archival storage system running in a production mode at the San Diego Supercomputer Center. With this interface, an Illustra client can transparently access data on UniTree under the control of the Illustra DBMS server. The current implementation is based on the creation of a new DBMS storage manager class, and a set of library functions that allow the manipulation and migration of data stored as Illustra 'large objects'. We have extended this interface to allow a Web client application to control data movement between its local disk, the Web server, the DBMS Illustra server, and the UniTree mass storage environment. This paper describes some of the current approaches successfully integrating these technologies. This framework is measured against a representative sample of environmental data extracted from the San Diego Bay Environmental Data Repository. Practical lessons are drawn and critical research areas are highlighted.
Introducing Mira, Argonne's Next-Generation Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-03-19
Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.
Green Supercomputing at Argonne
Pete Beckman
2017-12-09
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), talks about Argonne National Laboratory's green supercomputing: everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently.
Virtual Environments in Scientific Visualization
NASA Technical Reports Server (NTRS)
Bryson, Steve; Lisinski, T. A. (Technical Monitor)
1994-01-01
Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conform to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer-generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.
Relativistic Collisions of Highly-Charged Ions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ionescu, Dorin; Belkacem, Ali
1998-11-19
The physics of elementary atomic processes in relativistic collisions between highly-charged ions and atoms or other ions is briefly discussed, and some recent theoretical and experimental results in this field are summarized. They include excitation, capture, ionization, and electron-positron pair creation. The numerical solution of the two-center Dirac equation in momentum space is shown to be a powerful nonperturbative method for describing atomic processes in relativistic collisions involving heavy and highly-charged ions. By propagating negative-energy wave packets in time, the evolution of the QED vacuum around heavy ions in relativistic motion is investigated. Recent results obtained from numerical calculations using massively parallel processing on the Cray-T3E supercomputer of the National Energy Research Scientific Computing Center (NERSC) at Berkeley National Laboratory are presented.
James Williamson d/b/a Golden Triangle Builders Information Sheet
James Williamson d/b/a Golden Triangle Builders (the Company) is located in Pittsburgh, Pennsylvania. The settlement involves renovation activities conducted at property constructed prior to 1978, located in Pittsburgh, Pennsylvania.
Advanced Computing for Manufacturing.
ERIC Educational Resources Information Center
Erisman, Albert M.; Neves, Kenneth W.
1987-01-01
Discusses ways that supercomputers are being used in the manufacturing industry, including the design and production of airplanes and automobiles. Describes problems that need to be solved in the next few years for supercomputers to assume a major role in industry. (TW)
Martinelli; Townsend; Meltzer; Villemagne
2000-07-01
Purpose: At the University of Pittsburgh Medical Center, over 100 oncology studies have been performed using a combined PET/CT scanner. The scanner is a prototype, which combines clinical PET and clinical CT imaging in a single unit. The sensitivity achieved using three-dimensional PET imaging, as well as the use of the CT for attenuation correction and image fusion, make the device ideal for clinical oncology. Clinical indications imaged on the PET/CT scanner include, but are not limited to, tumor staging, solitary pulmonary nodule evaluation, and evaluation of tumor recurrence in melanoma, lymphoma, colorectal cancer, lung cancer, pancreatic cancer, head and neck cancer, and renal cancer. Methods: For all studies, seven millicuries of F(18)-fluorodeoxyglucose is injected and a forty-five minute uptake period is allowed prior to positioning the patient in the scanner. A helical CT scan is acquired over the region, or regions, of interest, followed by a multi-bed whole-body PET scan for the same axial extent. The CT scan is used to correct the PET data for attenuation. The entire imaging session lasts 1-1.5 hours depending on the number of beds acquired, and is generally well tolerated by the patient. Results and Conclusion: Based on our experience in over 100 studies, combined PET/CT imaging offers significant advantages, including more accurate localization of focal uptake, distinction of pathology from normal physiological uptake, and improvements in evaluating therapy. These benefits will be illustrated with a number of representative, fully documented studies.
PoPLAR: Portal for Petascale Lifescience Applications and Research
2013-01-01
Background We are focusing specifically on fast data analysis and retrieval in bioinformatics that will have a direct impact on the quality of human health and the environment. The exponential growth of data generated in biology research, from small atoms to big ecosystems, necessitates an increasingly large computational component to perform analyses. Novel DNA sequencing technologies and complementary high-throughput approaches--such as proteomics, genomics, metabolomics, and meta-genomics--drive data-intensive bioinformatics. While individual research centers or universities could once provide for these applications, this is no longer the case. Today, only specialized national centers can deliver the level of computing resources required to meet the challenges posed by rapid data growth and the resulting computational demand. Consequently, we are developing massively parallel applications to analyze the growing flood of biological data and contribute to the rapid discovery of novel knowledge. Methods The efforts of previous National Science Foundation (NSF) projects provided for the generation of parallel modules for widely used bioinformatics applications on the Kraken supercomputer. We have profiled and optimized the code of some of the scientific community's most widely used desktop and small-cluster-based applications, including BLAST from the National Center for Biotechnology Information (NCBI), HMMER, and MUSCLE; scaled them to tens of thousands of cores on high-performance computing (HPC) architectures; made them robust and portable to next-generation architectures; and incorporated these parallel applications in science gateways with a web-based portal. Results This paper will discuss the various developmental stages, challenges, and solutions involved in taking bioinformatics applications from the desktop to petascale with a front-end portal for very-large-scale data analysis in the life sciences. Conclusions This research will help to bridge the gap between the rate of data generation and the speed at which scientists can study this data. The ability to rapidly analyze data at such a large scale is having a significant, direct impact on science achieved by collaborators who are currently using these tools on supercomputers. PMID:23902523
Supercomputers Join the Fight against Cancer – U.S. Department of Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The Department of Energy has some of the best supercomputers in the world. Now, they’re joining the fight against cancer. Learn about our new partnership with the National Cancer Institute and GlaxoSmithKline Pharmaceuticals.
City of Pittsburgh 2012 Technical Assistance Project Documents
Each report describes a design for a section of the City of Pittsburgh that was chosen in partnership with 3 Rivers Wet Weather to address stormwater management and provide other green infrastructure benefits.
NAS-current status and future plans
NASA Technical Reports Server (NTRS)
Bailey, F. R.
1987-01-01
The Numerical Aerodynamic Simulation (NAS) Program has met its first major milestone, the NAS Processing System Network (NPSN) Initial Operating Configuration (IOC). The program has met its goal of providing a national supercomputer facility capable of greatly enhancing the Nation's research and development efforts. Furthermore, the program is fulfilling its pathfinder role by defining and implementing a paradigm for supercomputing system environments. The IOC is only the beginning, and the NAS Program will aggressively continue to develop and implement emerging supercomputer, communications, storage, and software technologies to strengthen computations as a critical element in supporting the Nation's leadership role in aeronautics.
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.
1993-01-01
This document briefly describes the use of the CRAY supercomputers that are an integral part of the Supercomputing Network Subsystem of the Central Scientific Computing Complex at LaRC. Features of the CRAY supercomputers are covered, including: FORTRAN, C, PASCAL, architectures of the CRAY-2 and CRAY Y-MP, the CRAY UNICOS environment, batch job submittal, debugging, performance analysis, parallel processing, utilities unique to CRAY, and documentation. The document is intended for all CRAY users as a ready reference to frequently asked questions and to more detailed information contained in the vendor manuals. It is appropriate for both the novice and the experienced user.
Scaling of data communications for an advanced supercomputer network
NASA Technical Reports Server (NTRS)
Levin, E.; Eaton, C. K.; Young, Bruce
1986-01-01
The goal of NASA's Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations and by remote communication to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. The implications of a projected 20-fold increase in processing power on the data communications requirements are described.
Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond
2015-01-01
The use of cloud computing resources continues to grow within the public and private sector components of the weather enterprise as users become more familiar with cloud-computing concepts, and competition among service providers continues to reduce costs and other barriers to entry. Cloud resources can also provide capabilities similar to high-performance computing environments, supporting multi-node systems required for near real-time, regional weather predictions. Referred to as "Infrastructure as a Service", or IaaS, the use of cloud-based computing hardware in an on-demand payment system allows for rapid deployment of a modeling system in environments lacking access to a large, supercomputing infrastructure. Use of IaaS capabilities to support regional weather prediction may be of particular interest to developing countries that have not yet established large supercomputing resources, but would otherwise benefit from a regional weather forecasting capability. Recently, collaborators from NASA Marshall Space Flight Center and Ames Research Center have developed a scripted, on-demand capability for launching the NOAA/NWS Science and Training Resource Center (STRC) Environmental Modeling System (EMS), which includes pre-compiled binaries of the latest version of the Weather Research and Forecasting (WRF) model. The WRF-EMS provides scripting for downloading appropriate initial and boundary conditions from global models, along with higher-resolution vegetation, land surface, and sea surface temperature data sets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center. This presentation will provide an overview of the modeling system capabilities and benchmarks performed on the Amazon Elastic Compute Cloud (EC2) environment. In addition, the presentation will discuss future opportunities to deploy the system in support of weather prediction in developing countries supported by NASA's SERVIR Project, which provides capacity building activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time applications to benefit public health, safety, and economic interests, but may have a greater impact during disaster events by providing a source for local predictions of weather-related hazards, or impacts that local weather events may have during the recovery phase.
Parallel computing of a climate model on the dawn 1000 by domain decomposition method
NASA Astrophysics Data System (ADS)
Bi, Xunqiang
1997-12-01
In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. The potential ways to increase the speed-up ratio and exploit more resources of future massively parallel supercomputers are also discussed.
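A minimal C/MPI sketch of a two-dimensional domain decomposition of a latitude-longitude grid is shown below; the grid dimensions and block partitioning are assumptions for illustration, not taken from the IAP model:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        const int NLON = 144, NLAT = 91;   /* assumed global grid size */

        MPI_Init(&argc, &argv);
        int nprocs;
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Arrange the ranks in a 2-D process grid (periodic in longitude only). */
        int dims[2] = {0, 0}, periods[2] = {1, 0};
        MPI_Dims_create(nprocs, 2, dims);

        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

        int rank, coords[2];
        MPI_Comm_rank(cart, &rank);
        MPI_Cart_coords(cart, rank, 2, coords);

        /* Block-partition the grid; early ranks absorb the remainder points. */
        int nlon_local = NLON / dims[0] + (coords[0] < NLON % dims[0] ? 1 : 0);
        int nlat_local = NLAT / dims[1] + (coords[1] < NLAT % dims[1] ? 1 : 0);

        printf("rank %d at (%d,%d) owns a %d x %d tile\n",
               rank, coords[0], coords[1], nlon_local, nlat_local);

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }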
Modeling Subsurface Reactive Flows Using Leadership-Class Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Richard T; Hammond, Glenn; Lichtner, Peter
2009-01-01
We describe our experiences running PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.
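To illustrate what fully implicit time-stepping means in this setting, the toy C example below advances a single nonlinear equation with backward Euler, solving each step with Newton's method; it is a sketch of the idea, not PFLOTRAN or PETSc code:

    #include <stdio.h>
    #include <math.h>

    /* Backward Euler for the toy ODE du/dt = -u^3: at each step solve
       F(u_new) = u_new - u_old + dt*u_new^3 = 0 by Newton iteration. */
    static double residual(double u_new, double u_old, double dt) {
        return u_new - u_old + dt * u_new * u_new * u_new;
    }

    static double jacobian(double u_new, double dt) {
        return 1.0 + 3.0 * dt * u_new * u_new;   /* dF/du_new */
    }

    int main(void) {
        double u = 1.0, dt = 0.5;
        for (int step = 0; step < 10; step++) {
            double u_new = u;                    /* initial Newton guess */
            for (int it = 0; it < 20; it++) {
                double F = residual(u_new, u, dt);
                if (fabs(F) < 1e-12) break;
                u_new -= F / jacobian(u_new, dt);
            }
            u = u_new;
            printf("t = %4.1f  u = %.6f\n", (step + 1) * dt, u);
        }
        return 0;
    }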
Airport Simulations Using Distributed Computational Resources
NASA Technical Reports Server (NTRS)
McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)
2002-01-01
The Virtual National Airspace Simulation (VNAS) will improve the safety of air transportation. In 2001, using simulation and information management software running over a distributed network of supercomputers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation, which will support the development of strategies for improving aviation safety and identifying precursors to component failure.
Roadrunner Supercomputer Breaks the Petaflop Barrier
Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin
2017-12-09
At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. The sustained performance makes Roadrunner more than twice as fast as the then-current number 1 system.
QCD on the BlueGene/L Supercomputer
NASA Astrophysics Data System (ADS)
Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.
2005-03-01
In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.
Supercomputer Issues from a University Perspective.
ERIC Educational Resources Information Center
Beering, Steven C.
1984-01-01
Discusses issues related to the access of and training of university researchers in using supercomputers, considering National Science Foundation's (NSF) role in this area, microcomputers on campuses, and the limited use of existing telecommunication networks. Includes examples of potential scientific projects (by subject area) utilizing…
The Pulse of Allegheny County and Pittsburgh.
DOT National Transportation Integrated Search
2016-01-01
Cities are increasingly equipped with low-resolution cameras. They are cheap to buy, install, and maintain, and thus are usually the choice of departments of transportation and their contractors. Pittsburgh or New York City have networks of hun...
RadNet Air Data From Pittsburgh, PA
This page presents radiation air monitoring and air filter analysis data for Pittsburgh, PA from EPA's RadNet system. RadNet is a nationwide network of monitoring stations that measure radiation in air, drinking water and precipitation.
NASA Astrophysics Data System (ADS)
Gandhi, Pooja
Recent studies have shown that the number of women faculty in academic medicine is much lower than the number of women graduating from medical schools. Many academic institutions face the challenge of retaining talented faculty, and this attrition from academic medicine prevents career advancement for women faculty. This case study attempts to identify some of the reasons for dissatisfaction that may be related to the attrition of women medical faculty at the University of Pittsburgh School of Medicine. Data was collected using a job satisfaction survey, which consisted of various constructs that are part of a faculty member's job, and proxy measures to gather the faculty's intent to leave their current position at the University of Pittsburgh or academic medicine in general. The survey results showed that although women faculty were satisfied with their job at the University of Pittsburgh, there are some important factors that influenced their decision to potentially drop out. The main reasons cited by the women faculty were related to funding pressures, work-life balance, mentoring of junior faculty, and the amount of time spent on clinical responsibilities. The analysis of proxy measures showed that if women faculty decided to leave the University of Pittsburgh, it would most probably be due to a better opportunity elsewhere, followed by pressure to get funding. The results of this study aim to provide the School of Medicine at the University of Pittsburgh with information related to attrition of its women faculty and provide suggestions for policy to retain women faculty.
12. Photocopy of lithograph (source unknown) The Armor Lithograph Company, ...
12. Photocopy of lithograph (source unknown) The Armor Lithograph Company, Ltd., Pittsburgh, Pennsylvania, ca. 1888 COURTHOUSE AND JAIL, FROM THE WEST - Allegheny County Courthouse & Jail, 436 Grant Street (Courthouse), 420 Ross Street (Jail), Pittsburgh, Allegheny County, PA
76 FR 57796 - Petition for Waiver of Compliance
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-16
... CP Solomon at MP-PT 352.5 near Pittsburgh, PA, with an absolute block to be established in advance of... Solomon. 3. Operations on the Fort Wayne Line, Pittsburgh Division, from CP Rochester at MP-PC 29.5, near...
NASA Astrophysics Data System (ADS)
Fukazawa, K.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.
2016-12-01
Planetary magnetospheres are very large, while phenomena within them occur on meso- and micro-scales. These scales range from tens of planetary radii to kilometers. To understand dynamics in these multi-scale systems, numerical simulations have been performed using supercomputer systems. We have studied the magnetospheres of Earth, Jupiter, and Saturn using 3-dimensional magnetohydrodynamic (MHD) simulations for a long time; however, we have not captured the phenomena near the limits of the MHD approximation. In particular, we have not studied meso-scale phenomena that can be addressed by using MHD. Recently we performed our MHD simulation of Earth's magnetosphere by using the K computer, the first 10-PFlops supercomputer, and obtained multi-scale flow vorticity for both northward and southward IMF. Furthermore, we have access to supercomputer systems which have Xeon, SPARC64, and vector-type CPUs and can compare simulation results between the different systems. Finally, we have compared the results of our parameter survey of the magnetosphere with observations from the HISAKI spacecraft. We have encountered a number of difficulties in effectively using the latest supercomputer systems. First, the size of simulation output increases greatly. Now a simulation group produces over 1 PB of output. Storage and analysis of this much data is difficult. The traditional way to analyze simulation results is to move the results to the investigator's home computer. This takes over three months using an end-to-end 10 Gbps network. In reality, there are problems at some nodes, such as firewalls, that can increase the transfer time to over one year. Another issue is post-processing. It is hard to handle a few TB of simulation output due to the memory limitations of a post-processing computer. To overcome these issues, we have developed and introduced parallel network storage, a highly efficient network protocol, and CUI-based visualization tools. In this study, we will show the latest simulation results using the petascale supercomputer and discuss problems arising from the use of these supercomputer systems.
1978-02-01
Trans. ASME, Vol. 81, 1959, pp. 259-264. [Flowchart and plot residue; recoverable captions: "Compute determinant elements for b_n", "Compute and write backscatter cross-section" (Figure 2.2-1), "Backscatter cross-section for a ..."] "Overrelaxation Iteration Methods," Report WAPD-TM-1038, Bettis Atomic Power Laboratory, Westinghouse Electric Corp., Pittsburgh, Pennsylvania.
16. OPERATOR STAND. OPERATOR STOOD BETWEEN RAILINGS AND CONTROLLED DREDGING ...
16. OPERATOR STAND. OPERATOR STOOD BETWEEN RAILINGS AND CONTROLLED DREDGING OPERATIONS USING TWO LEVERS FROM CEILING, THREE LEVELS ON THE FLOOR, AND TWO FLOOR PEDDLES. RIGHT HAND CONTROLLED SHOT GUN SWINGER (BOOM MOVE TO RIGHT WHEN PUSHED FORWARD, LEFT WHEN PULLED BACK, AND, IF LUCKY, STOPPED WHEN IN CENTER POSITION). LEFT HAND CONTROLLED THROTTLE. FLOOR LEVER AND FLOOR PEDDLE ON LEFT CONTROLLED THE BACKING LINE FRICTION. MIDDLE LEVER AND PEDDLE, STUCK IN FLOOR CONTROLLED THE MAIN HOIST FRICTION. LEVER ON RIGHT CONTROLLED THE CYLINDER DRAIN VALVE. - Dredge CINCINNATI, Docked on Ohio River at foot of Lighthill Street, Pittsburgh, Allegheny County, PA
Finite element methods on supercomputers - The scatter-problem
NASA Technical Reports Server (NTRS)
Loehner, R.; Morgan, K.
1985-01-01
Certain problems arise in connection with the use of supercomputers for the implementation of finite-element methods. These problems are related to the desirability of utilizing the power of the supercomputer as fully as possible for the rapid execution of the required computations, taking into account the gain in speed possible with the aid of pipelining operations. For the finite-element method, the time-consuming operations may be divided into three categories. The first two present no problems, while the third type of operation can be a reason for the inefficient performance of finite-element programs. Two possibilities for overcoming certain difficulties are proposed, giving attention to a scatter-process.
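The scatter operation at issue is the indirect, index-driven accumulation of element contributions into global arrays during assembly; because elements share nodes, the indexed stores can conflict and defeat straightforward pipelining. A minimal C sketch of such a scatter-add, with a made-up one-dimensional mesh, follows:

    #include <stdio.h>

    #define NNODES 5
    #define NELEMS 4
    #define NODES_PER_ELEM 2

    /* Element contributions are added into the global vector through an
       indirect connectivity array. Neighboring elements share node indices,
       so the indexed accumulation cannot be vectorized naively. */
    int main(void) {
        int conn[NELEMS][NODES_PER_ELEM] = { {0,1}, {1,2}, {2,3}, {3,4} };
        double elem_contrib[NELEMS][NODES_PER_ELEM] = {
            {1.0, 1.0}, {2.0, 2.0}, {3.0, 3.0}, {4.0, 4.0}
        };
        double global[NNODES] = {0.0};

        for (int e = 0; e < NELEMS; e++)              /* loop over elements   */
            for (int a = 0; a < NODES_PER_ELEM; a++)  /* loop over local dofs */
                global[conn[e][a]] += elem_contrib[e][a];   /* scatter-add */

        for (int n = 0; n < NNODES; n++)
            printf("node %d: %g\n", n, global[n]);
        return 0;
    }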
Code IN Exhibits - Supercomputing 2000
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)
2000-01-01
The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.
Probing the cosmic causes of errors in supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Cosmic rays from outer space are causing errors in supercomputers. The neutrons that pass through the CPU may be causing binary data to flip leading to incorrect calculations. Los Alamos National Laboratory has developed detectors to determine how much data is being corrupted by these cosmic particles.
The center for causal discovery of biomedical knowledge from big data.
Cooper, Gregory F; Bahar, Ivet; Becich, Michael J; Benos, Panayiotis V; Berg, Jeremy; Espino, Jeremy U; Glymour, Clark; Jacobson, Rebecca Crowley; Kienholz, Michelle; Lee, Adrian V; Lu, Xinghua; Scheines, Richard
2015-11-01
The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Lowe, Val J.; Graff-Radford, Neill R.; Liesinger, Amanda M.; Cannon, Ashley; Przybelski, Scott A.; Rawal, Bhupendra; Parisi, Joseph E.; Petersen, Ronald C.; Kantarci, Kejal; Ross, Owen A.; Duara, Ranjan; Knopman, David S.; Jack, Clifford R.; Dickson, Dennis W.
2015-01-01
Thal amyloid phase, which describes the pattern of progressive amyloid-β plaque deposition in Alzheimer’s disease, was incorporated into the latest National Institute of Ageing – Alzheimer’s Association neuropathologic assessment guidelines. Amyloid biomarkers (positron emission tomography and cerebrospinal fluid) were included in clinical diagnostic guidelines for Alzheimer’s disease dementia published by the National Institute of Ageing – Alzheimer’s Association and the International Work group. Our first goal was to evaluate the correspondence of Thal amyloid phase to Braak tangle stage and ante-mortem clinical characteristics in a large autopsy cohort. Second, we examined the relevance of Thal amyloid phase in a prospectively-followed autopsied cohort who underwent ante-mortem 11C-Pittsburgh compound B imaging; using the large autopsy cohort to broaden our perspective of 11C-Pittsburgh compound B results. The Mayo Clinic Jacksonville Brain Bank case series (n = 3618) was selected regardless of ante-mortem clinical diagnosis and neuropathologic co-morbidities, and all assigned Thal amyloid phase and Braak tangle stage using thioflavin-S fluorescent microscopy. 11C-Pittsburgh compound B studies from Mayo Clinic Rochester were available for 35 participants scanned within 2 years of death. Cortical 11C-Pittsburgh compound B values were calculated as a standard uptake value ratio normalized to cerebellum grey/white matter. In the high likelihood Alzheimer’s disease brain bank cohort (n = 1375), cases with lower Thal amyloid phases were older at death, had a lower Braak tangle stage, and were less frequently APOE-ε4 positive. Regression modelling in these Alzheimer’s disease cases, showed that Braak tangle stage, but not Thal amyloid phase predicted age at onset, disease duration, and final Mini-Mental State Examination score. In contrast, Thal amyloid phase, but not Braak tangle stage or cerebral amyloid angiopathy predicted 11C-Pittsburgh compound B standard uptake value ratio. In the 35 cases with ante-mortem amyloid imaging, a transition between Thal amyloid phases 1 to 2 seemed to correspond to 11C-Pittsburgh compound B standard uptake value ratio of 1.4, which when using our pipeline is the cut-off point for detection of clear amyloid-positivity regardless of clinical diagnosis. Alzheimer’s disease cases who were older and were APOE-ε4 negative tended to have lower amyloid phases. Although Thal amyloid phase predicted clinical characteristics of Alzheimer’s disease patients, the pre-mortem clinical status was driven by Braak tangle stage. Thal amyloid phase correlated best with 11C-Pittsburgh compound B values, but not Braak tangle stage or cerebral amyloid angiopathy. The 11C-Pittsburgh compound B cut-off point value of 1.4 was approximately equivalent to a Thal amyloid phase of 1–2. PMID:25805643
NASA Astrophysics Data System (ADS)
Detrick, R. S.; Clark, D.; Gaylord, A.; Goldsmith, R.; Helly, J.; Lemmond, P.; Lerner, S.; Maffei, A.; Miller, S. P.; Norton, C.; Walden, B.
2005-12-01
The Scripps Institution of Oceanography (SIO) and the Woods Hole Oceanographic Institution (WHOI) have joined forces with the San Diego Supercomputer Center to build a testbed for multi-institutional archiving of shipboard and deep submergence vehicle data. Support has been provided by the Digital Archiving and Preservation program funded by NSF/CISE and the Library of Congress. In addition to the more than 92,000 objects stored in the SIOExplorer Digital Library, the testbed will provide access to data, photographs, video images and documents from WHOI ships, Alvin submersible and Jason ROV dives, and deep-towed vehicle surveys. An interactive digital library interface will allow combinations of distributed collections to be browsed, metadata inspected, and objects displayed or selected for download. The digital library architecture, and the search and display tools of the SIOExplorer project, are being combined with WHOI tools, such as the Alvin Framegrabber and the Jason Virtual Control Van, that have been designed using WHOI's GeoBrowser to handle the vast volumes of digital video and camera data generated by Alvin, Jason and other deep submergence vehicles. Notions of scalability will be tested, as data volumes range from 3 CDs per cruise to 200 DVDs per cruise. Much of the scalability of this proposal comes from an ability to attach digital library data and metadata acquisition processes to diverse sensor systems. We are able to run an entire digital library from a laptop computer as well as from supercomputer-center-size resources. It can be used, in the field, laboratory or classroom, covering data from acquisition-to-archive using a single coherent methodology. The design is an open architecture, supporting applications through well-defined external interfaces maintained as an open-source effort for community inclusion and enhancement.
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in the development of a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate the supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
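The kind of unified resource description AGIS provides can be pictured with a toy registry that exposes grid sites, supercomputers, and clouds through one consistent schema; the field names and entries below are invented for illustration and do not reflect the real AGIS schema or API.

```python
# Toy model of a grid information system: heterogeneous resources described
# in one consistent form that a workload manager could query.
RESOURCES = [
    {"name": "SITE_A", "kind": "grid", "cores": 12000, "state": "online"},
    {"name": "HPC_B", "kind": "supercomputer", "cores": 250000, "state": "online"},
    {"name": "CLOUD_C", "kind": "cloud", "cores": 8000, "state": "offline"},
]

def usable(kind=None):
    """Return online resources, optionally filtered by kind."""
    return [r for r in RESOURCES
            if r["state"] == "online" and (kind is None or r["kind"] == kind)]

print([r["name"] for r in usable()])                 # all online resources
print([r["name"] for r in usable("supercomputer")])  # only HPC resources
```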
Site in a box: Improving the Tier 3 experience
NASA Astrophysics Data System (ADS)
Dost, J. M.; Fajardo, E. M.; Jones, T. R.; Martin, T.; Tadel, A.; Tadel, M.; Würthwein, F.
2017-10-01
The Pacific Research Platform is an initiative to interconnect Science DMZs between campuses across the West Coast of the United States over a 100 gbps network. The LHC @ UC is a proof of concept pilot project that focuses on interconnecting 6 University of California campuses. It is spearheaded by computing specialists from the UCSD Tier 2 Center in collaboration with the San Diego Supercomputer Center. A machine has been shipped to each campus extending the concept of the Data Transfer Node to a cluster in a box that is fully integrated into the local compute, storage, and networking infrastructure. The node contains a full HTCondor batch system, and also an XRootD proxy cache. User jobs routed to the DTN can run on 40 additional slots provided by the machine, and can also flock to a common GlideinWMS pilot pool, which sends jobs out to any of the participating UCs, as well as to Comet, the new supercomputer at SDSC. In addition, a common XRootD federation has been created to interconnect the UCs and give the ability to arbitrarily export data from the home university, to make it available wherever the jobs run. The UC level federation also statically redirects to either the ATLAS FAX or CMS AAA federation respectively to make globally published datasets available, depending on end user VO membership credentials. XRootD read operations from the federation transfer through the nearest DTN proxy cache located at the site where the jobs run. This reduces wide area network overhead for subsequent accesses, and improves overall read performance. Details on the technical implementation, challenges faced and overcome in setting up the infrastructure, and an analysis of usage patterns and system scalability will be presented.
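The DTN's XRootD proxy cache behaves as a read-through cache: the first access pays the wide-area cost of fetching from the federation, and later accesses are served locally. A simplified sketch of that logic follows; the cache path, placeholder URL, and fetch helper are assumptions, not the actual XRootD implementation.

```python
import os
import shutil
import urllib.request

CACHE_DIR = "/tmp/xrootd_cache"  # hypothetical local cache location

def fetch_from_federation(logical_name, dest):
    """Stand-in for a wide-area read from the data federation."""
    url = "https://example.org/federation/" + logical_name  # placeholder URL
    with urllib.request.urlopen(url) as src, open(dest, "wb") as out:
        shutil.copyfileobj(src, out)

def read_through_cache(logical_name):
    """Serve a file from the local cache, populating it on first access."""
    local = os.path.join(CACHE_DIR, logical_name.replace("/", "_"))
    os.makedirs(CACHE_DIR, exist_ok=True)
    if not os.path.exists(local):       # cache miss: pay the WAN cost once
        fetch_from_federation(logical_name, local)
    return local                        # later reads are served locally

# path = read_through_cache("atlas/dataset/file.root")
```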
Real science at the petascale.
Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V
2009-06-28
We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32 768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65 636 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.
Multi-threaded ATLAS simulation on Intel Knights Landing processors
NASA Astrophysics Data System (ADS)
Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration
2017-10-01
The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.
Performance Analysis, Modeling and Scaling of HPC Applications and Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatele, Abhinav
2016-01-13
Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.
The Sky's the Limit When Super Students Meet Supercomputers.
ERIC Educational Resources Information Center
Trotter, Andrew
1991-01-01
In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…
Access to Supercomputers. Higher Education Panel Report 69.
ERIC Educational Resources Information Center
Holmstrom, Engin Inel
This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…
NOAA announces significant investment in next generation of supercomputers
Today, NOAA announced the next phase in the agency's efforts to increase supercomputing capacity to provide more timely, accurate weather forecasts, which in turn will lead to more timely, accurate, and reliable forecasts. Ahead of this upgrade, each of…
Developments in the simulation of compressible inviscid and viscous flow on supercomputers
NASA Technical Reports Server (NTRS)
Steger, J. L.; Buning, P. G.
1985-01-01
In anticipation of future supercomputers, finite difference codes are rapidly being extended to simulate three-dimensional compressible flow about complex configurations. Some of these developments are reviewed. The importance of computational flow visualization and diagnostic methods to three-dimensional flow simulation is also briefly discussed.
NASA Technical Reports Server (NTRS)
Smarr, Larry; Press, William; Arnett, David W.; Cameron, Alastair G. W.; Crutcher, Richard M.; Helfand, David J.; Horowitz, Paul; Kleinmann, Susan G.; Linsky, Jeffrey L.; Madore, Barry F.
1991-01-01
The applications of computers and data processing to astronomy are discussed. Among the topics covered are the emerging national information infrastructure, workstations and supercomputers, supertelescopes, digital astronomy, astrophysics in a numerical laboratory, community software, archiving of ground-based observations, dynamical simulations of complex systems, plasma astrophysics, and the remote control of fourth dimension supercomputers.
Impact of Peer Mentoring on Freshmen Engineering Students
ERIC Educational Resources Information Center
Budny, Dan; Paul, Cheryl; Newborg, Beth Bateman
2010-01-01
The transition from high school to college can be very difficult for many students. At the University of Pittsburgh School of Engineering, first-year students are required to register for and attend a large group lecture course,…
Martin Luther King, Jr. East Busway in Pittsburgh, PA
DOT National Transportation Integrated Search
1987-10-01
The Port Authority of Allegheny County (PAT), the primary public transit operator in Pittsburgh, PA, built an exclusive roadway for buses which opened for service in February 1983. The two-lane, 6.8-mile facility serves the eastern suburbs via a righ...
ERIC Educational Resources Information Center
Adams, Caralee
2011-01-01
This article features five schools (John P. Oldham Elementary, Norwood, Massachusetts; R. J. Richey Elementary, Burnet, Texas; Pittsburgh Carmalt Science and Technology Academy, Pittsburgh, Pennsylvania; John D. Shaw Elementary, Wasilla, Alaska; and Springville K-8, Portland, Oregon) that offer five promising practices. From fourth graders learning…
The Pollution Prevention Opportunity Assessments (PPOA) summarized here were conducted at the following representative Army Corps of Engineers (USACE) Civil Works facilities: Pittsburgh Engineering Warehouse and Repair Station (PEWARS) and Emsworth Locks and Dams in Pittsburgh, P...
ERIC Educational Resources Information Center
Library Journal, 1973
1973-01-01
Everything, for the second year in a row, seemed to be looking up for the Special Libraries Association as it gathered in Pittsburgh in June. This article is a brief report on the conference. (Author/SJ)
Pulmonary function test results on 224 parochial schoolchildren collected during and after the Pittsburgh air pollution episode of November 1975 were reanalyzed to determine whether a small subgroup of susceptible children could be defined. Individual regressions of three-quarter...
THE PITTSBURGH AIR POLLUTION EPISODE OF NOVEMBER 17-21 1975: AIR QUALITY
In November 1975 a serious air stagnation problem developed over Western Pennsylvania, with extremely heavy air pollution in the Pittsburgh area. The U.S. Environmental Protection Agency's Health Effects Research Laboratory (HERL) mobilized a team of air monitoring and epidemiolo...
Supercomputer use in orthopaedic biomechanics research: focus on functional adaptation of bone.
Hart, R T; Thongpreda, N; Van Buskirk, W C
1988-01-01
The authors describe two biomechanical analyses carried out using numerical methods. One is an analysis of the stress and strain in a human mandible, and the other analysis involves modeling the adaptive response of a sheep bone to mechanical loading. The computing environment required for the two types of analyses is discussed. It is shown that a simple stress analysis of a geometrically complex mandible can be accomplished using a minicomputer. However, more sophisticated analyses of the same model with dynamic loading or nonlinear materials would require supercomputer capabilities. A supercomputer is also required for modeling the adaptive response of living bone, even when simple geometric and material models are used.
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
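The curse of dimensionality mentioned above comes from the dynamic programming table growing exponentially with state dimension. A minimal, generic value-iteration sketch (not the authors' stochastic multibody formulation; the cost, transition, and grid size are invented) illustrates why vector and parallel hardware helps.

```python
import numpy as np

# Discretize a d-dimensional state space with n points per axis:
# the value table has n**d entries, which is the curse of dimensionality.
n, d = 21, 3
shape = (n,) * d
value = np.zeros(shape)

# Toy stage cost and a single toy "control" that shifts the state grid;
# a real solver would minimize over many controls and include noise terms.
grid = np.indices(shape).astype(float) / (n - 1)
stage_cost = np.sum((grid - 0.5) ** 2, axis=0)
discount = 0.95

for _ in range(50):
    shifted = np.roll(value, 1, axis=0)        # toy state transition
    value = stage_cost + discount * shifted    # Bellman backup over the grid

print("table entries:", value.size)            # 21**3 = 9261 even for d = 3
```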
Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer
NASA Technical Reports Server (NTRS)
Hornfeck, William A.
1988-01-01
A considerable volume of large computational codes was developed for NASA over the past twenty-five years. This code base represents algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software. This opportunity arises primarily from architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for conversion to the Cray X-MP vector supercomputer is also described.
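One conversion strategy alluded to above is restructuring scalar loops into operations a vector machine (or, today, a vectorizing library) can stream. A small before/after sketch with invented array names, not taken from the NASA package itself:

```python
import numpy as np

a = np.random.rand(10000)
b = np.random.rand(10000)

# Scalar-style loop: one element per iteration, a poor fit for vector hardware.
c_loop = np.empty_like(a)
for i in range(a.size):
    c_loop[i] = 2.0 * a[i] + b[i]

# Vectorized form: the whole operation is expressed on arrays at once,
# the kind of restructuring vectorizing compilers aim to recover.
c_vec = 2.0 * a + b

assert np.allclose(c_loop, c_vec)
```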
Analysis of fine coal pneumatic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathur, M.P.; Rohatgi, N.D.; Klinzing, G.E.
1987-01-01
Many fossil fuel energy processes depend on the movement of solids by pneumatic transport. Despite the considerable amount of work reported in the literature on pneumatic transport, the design of new industrial systems for new products continues to rely to a great extent on empiricism. A pilot-scale test facility has been constructed at Pittsburgh Energy Technology Center (PETC) and is equipped with modern sophisticated measuring techniques (such as Pressure Transducers, Auburn Monitors, Micro Motion Mass flowmeters) and an automatic computer-controlled data acquisition system to study the effects of particle pneumatic transport. Pittsburgh Seam and Montana rosebud coals of varying size consist and moisture content were tested in the atmospheric and pressurized coal flow test loops (AP/CFTL and HP/CFTL) at PETC. The system parameters included conveying gas velocity, injector tank pressure, screw conveyor speed, pipe radius, and pipe bends. In the following report, results from the coal flow tests were presented and analyzed. Existing theories and correlations on two-phase flows were reviewed. Experimental data were compared with values calculated from empirically or theoretically derived equations available in the literature, and new correlations were proposed, when applicable, to give a better interpretation of the data and a better understanding of the various flow regimes involved in pneumatic transport. 55 refs., 56 figs., 6 tabs.
Gupta, Dilip; Saul, Melissa; Gilbertson, John
2004-02-01
We evaluated a comprehensive deidentification engine at the University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA, that uses a complex set of rules, dictionaries, pattern-matching algorithms, and the Unified Medical Language System to identify and replace identifying text in clinical reports while preserving medical information for sharing in research. In our initial data set of 967 surgical pathology reports, the software did not suppress outside (103), UPMC (47), and non-UPMC (56) accession numbers; dates (7); names (9) or initials (25) of case pathologists; or hospital or laboratory names (46). In 150 reports, some clinical information was suppressed inadvertently (overmarking). The engine retained eponymic patient names, eg, Barrett and Gleason. In the second evaluation (1,000 reports), the software did not suppress outside (90) or UPMC (6) accession numbers or names (4) or initials (2) of case pathologists. In the third evaluation, the software removed names of patients, hospitals (297/300), pathologists (297/300), transcriptionists, residents and physicians, dates of procedures, and accession numbers (298/300). By the end of the evaluation, the system was reliably and specifically removing safe-harbor identifiers and producing highly readable deidentified text without removing important clinical information. Collaboration between pathology domain experts and system developers and continuous quality assurance are needed to optimize ongoing deidentification processes.
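A heavily simplified flavor of the rule- and pattern-based deidentification described above, showing how accession numbers, dates, and physician names might be replaced with placeholders; the patterns and report text are invented and are far cruder than the UPMC engine, which also layers dictionaries and UMLS-based term lookup.

```python
import re

# Invented patterns: a surgical-pathology-style accession number, common
# date formats, and a simple physician-name pattern.
PATTERNS = [
    (re.compile(r"\b[A-Z]{1,3}\d{2}-\d{4,6}\b"), "[ACCESSION]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\bDr\.\s+[A-Z][a-z]+\b"), "[PHYSICIAN]"),
]

def deidentify(text):
    """Apply each rule in turn, replacing matches with safe placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

report = "Specimen S04-12345 received 11/03/2004, reviewed by Dr. Smith."
print(deidentify(report))
# Specimen [ACCESSION] received [DATE], reviewed by [PHYSICIAN].
```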
Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.
ERIC Educational Resources Information Center
Kiernan, Vincent
1999-01-01
Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…
The Age of the Supercomputer Gives Way to the Age of the Super Infrastructure.
ERIC Educational Resources Information Center
Young, Jeffrey R.
1997-01-01
In October 1997, the National Science Foundation will discontinue financial support for two university-based supercomputer facilities to concentrate resources on partnerships led by facilities at the University of California, San Diego and the University of Illinois, Urbana-Champaign. The reconfigured program will develop more user-friendly and…
Extracting the Textual and Temporal Structure of Supercomputing Logs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, S; Singh, I; Chandra, A
2009-05-26
Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
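A toy version of the syntactic-template idea: masking variable fields (numbers, hex IDs) so that messages with the same fixed text collapse into one group. The masking rules and log lines are invented; the paper's online clustering is considerably more sophisticated.

```python
import re
from collections import defaultdict

def template(message):
    """Mask variable tokens so syntactically similar messages collapse."""
    msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)  # mask hex IDs first
    msg = re.sub(r"\d+", "<NUM>", msg)                  # then plain numbers
    return msg

logs = [
    "node 1024 link error at 0x3fa0",
    "node 77 link error at 0x0b12",
    "job 5531 started on 128 nodes",
    "job 992 started on 64 nodes",
]

groups = defaultdict(list)
for line in logs:
    groups[template(line)].append(line)

for tmpl, members in groups.items():
    print(f"{tmpl}  ({len(members)} messages)")
# node <NUM> link error at <HEX>  (2 messages)
# job <NUM> started on <NUM> nodes  (2 messages)
```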
NASA Astrophysics Data System (ADS)
Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.
2016-06-01
High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3 + 1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
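As a point of reference for the coupled field-ionization problem described above, femtosecond filamentation studies commonly use a nonlinear envelope equation coupled to a rate equation for the free-electron density. The form below is a generic, textbook-style sketch under standard assumptions (paraxial, scalar envelope A, electron density ρ), not necessarily the exact equations solved by the authors:

```latex
\frac{\partial A}{\partial z} =
  \frac{i}{2k_0}\nabla_{\perp}^{2}A
  - \frac{i k''}{2}\frac{\partial^{2}A}{\partial \tau^{2}}
  + i k_0 n_2 |A|^{2}A
  - \frac{\sigma}{2}\bigl(1 + i\omega_0\tau_c\bigr)\rho A
  - \frac{\beta_K}{2}|A|^{2K-2}A,
\qquad
\frac{\partial \rho}{\partial t} =
  W\!\bigl(|A|^{2}\bigr)\bigl(\rho_{\mathrm{nt}} - \rho\bigr)
  + \frac{\sigma}{U_i}\,\rho\,|A|^{2},
```

where k_0 and k'' are the wavenumber and group-velocity dispersion, n_2 the Kerr index, σ the inverse-bremsstrahlung cross-section, β_K the K-photon absorption coefficient, W the ionization rate, ρ_nt the neutral density, and U_i the ionization potential. Discretizing such a model on a fine (x, y, τ) grid over long propagation distances is what drives the problem to supercomputer scale.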
Networking Technologies Enable Advances in Earth Science
NASA Technical Reports Server (NTRS)
Johnson, Marjory; Freeman, Kenneth; Gilstrap, Raymond; Beck, Richard
2004-01-01
This paper describes an experiment to prototype a new way of conducting science by applying networking and distributed computing technologies to an Earth Science application. A combination of satellite, wireless, and terrestrial networking provided geologists at a remote field site with interactive access to supercomputer facilities at two NASA centers, thus enabling them to validate and calibrate remotely sensed geological data in near-real time. This represents a fundamental shift in the way that Earth scientists analyze remotely sensed data. In this paper we describe the experiment and the network infrastructure that enabled it, analyze the data flow during the experiment, and discuss the scientific impact of the results.
Computational fluid dynamics in a marine environment
NASA Technical Reports Server (NTRS)
Carlson, Arthur D.
1987-01-01
The introduction of the supercomputer and recent advances in both Reynolds-averaged and large-eddy simulation approximations to the Navier-Stokes equations have created a robust environment for the exploration of problems of interest to the Navy in general, and the Naval Underwater Systems Center in particular. The nature of problems that are of interest, and the type of resources needed for their solution, are addressed. The goal is to achieve a good engineering solution to the fluid-structure interaction problem. It is appropriate to indicate that a paper by D. Chapman played a major role in developing the interest in the approach discussed.
PATHFINDER: Probing Atmospheric Flows in an Integrated and Distributed Environment
NASA Technical Reports Server (NTRS)
Wilhelmson, R. B.; Wojtowicz, D. P.; Shaw, C.; Hagedorn, J.; Koch, S.
1995-01-01
PATHFINDER is a software effort to create a flexible, modular, collaborative, and distributed environment for studying atmospheric, astrophysical, and other fluid flows in the evolving networked metacomputer environment of the 1990s. It uses existing software, such as HDF (Hierarchical Data Format), DTM (Data Transfer Mechanism), GEMPAK (General Meteorological Package), AVS, SGI Explorer, and Inventor to provide the researcher with the ability to harness the latest in desktop to teraflop computing. Software modules developed during the project are available in the public domain via anonymous FTP from the National Center for Supercomputing Applications (NCSA). The address is ftp.ncsa.uiuc.edu, and the directory is /SGI/PATHFINDER.
2014-05-01
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Faculdade de Ciências e Tecnologia … additional examples that are not in the ECOOP paper. This work was partially supported by Fundação para a Ciência e Tecnologia (Portuguese Foundation…
A Small, For-Profit College Unnerves Pittsburgh Academics.
ERIC Educational Resources Information Center
Borrego, Anne Marie
2001-01-01
Explores how Pittsburgh's traditional colleges did not fight the for-profit University of Phoenix when it arrived, but are now going after a tiny for-profit institution, Potomac College, that wants to expand into the area. The colleges claim the school unnecessarily duplicates offerings they provide. (EV)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-17
..., Pennsylvania, Authorization of Export Production Activity, Tsudis Chocolate Company (Chocolate Confectionery Bars), Pittsburgh, Pennsylvania On December 4, 2012, Tsudis Chocolate Company, submitted a notification... restriction requiring that all foreign-status liquid chocolate admitted to FTZ 33 must be re-exported. Dated...
31 CFR 351.86 - What is the role of Federal Reserve Banks and Branches?
Code of Federal Regulations, 2011 CFR
2011-07-01
..., Maine, Massachusetts, New Hampshire, New Jersey (northern half), New York, Rhode Island, Vermont, Puerto Rico, Virgin Islands. Federal Reserve Bank, Pittsburgh Branch, 717 Grant Street, Pittsburgh, PA 15219... half), Maryland, Mississippi (southern half), North Carolina, South Carolina, Tennessee (eastern half...
31 CFR 359.71 - What is the role of Federal Reserve Banks and Branches?
Code of Federal Regulations, 2011 CFR
2011-07-01
..., Maine, Massachusetts, New Hampshire, New Jersey (Northern half), New York, Rhode Island, Vermont, Puerto Rico, Virgin Islands. Federal Reserve Bank, Pittsburgh Branch, 717 Grant Street, Pittsburgh, PA 15219...), Maryland, Mississippi (southern half), North Carolina, South Carolina, Tennessee (eastern half), Virginia...
Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers
NASA Astrophysics Data System (ADS)
Dreher, Patrick; Scullin, William; Vouk, Mladen
2015-09-01
Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.
Dutta-Moscato, Joyeeta; Gopalakrishnan, Vanathi; Lotze, Michael T.; Becich, Michael J.
2014-01-01
This editorial provides insights into how informatics can attract highly trained students by involving them in science, technology, engineering, and math (STEM) training at the high school level and continuing to provide mentorship and research opportunities through the formative years of their education. Our central premise is that the trajectory necessary to be expert in the emergent fields in front of them requires acceleration at an early time point. Both pathology (and biomedical) informatics are new disciplines which would benefit from involvement by students at an early stage of their education. In 2009, Michael T Lotze MD, Kirsten Livesey (then a medical student, now a medical resident at University of Pittsburgh Medical Center (UPMC)), Richard Hersheberger, PhD (Currently, Dean at Roswell Park), and Megan Seippel, MS (the administrator) launched the University of Pittsburgh Cancer Institute (UPCI) Summer Academy to bring high school students for an 8 week summer academy focused on Cancer Biology. Initially, pathology and biomedical informatics were involved only in the classroom component of the UPCI Summer Academy. In 2011, due to popular interest, an informatics track called Computer Science, Biology and Biomedical Informatics (CoSBBI) was launched. CoSBBI currently acts as a feeder program for the undergraduate degree program in bioinformatics at the University of Pittsburgh, which is a joint degree offered by the Departments of Biology and Computer Science. We believe training in bioinformatics is the best foundation for students interested in future careers in pathology informatics or biomedical informatics. We describe our approach to the recruitment, training and research mentoring of high school students to create a pipeline of exceptionally well-trained applicants for both the disciplines of pathology informatics and biomedical informatics. We emphasize here how mentoring of high school students in pathology informatics and biomedical informatics will be critical to assuring their success as leaders in the era of big data and personalized medicine. PMID:24860688
78 FR 22285 - Notice of Inventory Completion: Carnegie Museum of Natural History, Pittsburgh, PA
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-15
....R50000] Notice of Inventory Completion: Carnegie Museum of Natural History, Pittsburgh, PA AGENCY: National Park Service, Interior. ACTION: Notice. SUMMARY: The Carnegie Museum of Natural History has... associated funerary objects should submit a written request to the Carnegie Museum of Natural History. If no...
Building a Digital Library: A Technology Manager's Point of View.
ERIC Educational Resources Information Center
Shaw, Elizabeth J.
2000-01-01
Describes the Historic Pittsburgh project at the University of Pittsburgh, a joint project with the Historical Society of Western Pennsylvania to produce a digital collection of historical materials available on the Internet. Discusses costs; metadata; digitization and preservation of originals; full-text capabilities; scanning; quality review;…
A Precarious Ecstasy: Beyond Temporality in Self and Other
ERIC Educational Resources Information Center
Nolan, Greg
2016-01-01
This article explores Levinas's [1961/1969. "Totality and infinity." (A. Lingis, Trans.). Pittsburgh, PA: Duquesne University Press; 1981/1997. "Otherwise than being or beyond essence." (A. Lingis, Trans.). Pittsburgh, PA: Duquesne University Press] ideas on relational proximity in the face of the Other and, through meeting,…
1981-07-01
Sheffield, AL; Arkansas River: Fort Chaffee, Fort Smith, AR; Pine Bluff Arsenal, Pine Bluff, AR; Gulf Coast East: Fort Benning, Columbus, GA; Middle Atlantic… Pittsburgh District, United States Army Corps of Engineers. Pittsburgh, Pennsylvania: United States Army Corps of Engineers [no date]. MacLeay, Lachlan
76 FR 47993 - Safety Zone; Allegheny River; Pittsburgh, PA
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-08
... hazards associated with the Guyasuta Days Festival fireworks display. Entry into, movement within, and... possible hazards associated with the Guyasuta Days Festival fireworks display that will occur in the city... during the Guyasuta Days Festival fireworks display that will occur in the city of Pittsburgh, PA on...
Code of Federal Regulations, 2011 CFR
2011-07-01
... for practical purposes Pittsburgh natural gas (containing a high percentage of methane) is a...) Unless special features of the lamp prevent ignition of explosive mixtures of methane and air by the... surrounded with explosive mixtures of Pittsburgh natural gas 1 and air. A sufficient number of tests of each...
Code of Federal Regulations, 2014 CFR
2014-07-01
... for practical purposes Pittsburgh natural gas (containing a high percentage of methane) is a...) Unless special features of the lamp prevent ignition of explosive mixtures of methane and air by the... surrounded with explosive mixtures of Pittsburgh natural gas 1 and air. A sufficient number of tests of each...
Code of Federal Regulations, 2013 CFR
2013-07-01
... for practical purposes Pittsburgh natural gas (containing a high percentage of methane) is a...) Unless special features of the lamp prevent ignition of explosive mixtures of methane and air by the... surrounded with explosive mixtures of Pittsburgh natural gas 1 and air. A sufficient number of tests of each...
Mental Health Services in the Pittsburgh Public Schools; 1967-1968.
ERIC Educational Resources Information Center
Richman, Vivien
The 1967-68 mental health services (MHS) program in the Pittsburgh public school system, number of children served, studies undertaken, and other staff activities are considered. A research study of perceptual-motor dysfunction among emotionally disturbed, educable mentally handicapped, and normal children, and two perceptual surveys developed for…
The Use of Research Libraries: A Comment about the Pittsburgh Study and Its Critics.
ERIC Educational Resources Information Center
Peat, W. Leslie
1981-01-01
Reviews the controversy surrounding the Pittsburgh study of library circulation and collection usage and proposes the use of citation analysis techniques as an acceptable method for measuring research use of a research library which will complement circulation studies. Five references are listed. (RAA)
Template Interfaces for Agile Parallel Data-Intensive Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilberto Z.
Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.
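The "template" idea, common computation patterns such as sequences and parallel maps from which a workflow is composed, can be sketched generically; the functions below are illustrative assumptions and do not use the actual Tigres API.

```python
from concurrent.futures import ProcessPoolExecutor

def clean(x):        # illustrative analysis steps
    return x * 2

def summarize(x):
    return x + 1

def sequence(tasks, data):
    """Sequence template: each task consumes the previous task's output."""
    for task in tasks:
        data = task(data)
    return data

def pipeline(value):
    """A small pipeline built from the sequence template."""
    return sequence([clean, summarize], value)

def parallel_map(task, inputs, workers=4):
    """Parallel-map template: apply one task independently to many inputs."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(task, inputs))

if __name__ == "__main__":
    # Compose the two templates: map the pipeline over a toy "data set".
    print(parallel_map(pipeline, range(8)))   # [1, 3, 5, ..., 15]
```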
The impact of the U.S. supercomputing initiative will be global
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Dona
2016-01-15
Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).
Parallel-vector solution of large-scale structural analysis problems on supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1989-01-01
A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
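For reference, the Choleski (Cholesky) factorization at the heart of the solver writes a symmetric positive-definite matrix as A = L Lᵀ. A compact, unoptimized sketch follows; it is nothing like the vectorized, parallel Force implementation described above, and the test matrix is invented.

```python
import numpy as np

def cholesky(A):
    """Return lower-triangular L with A = L @ L.T (A symmetric positive definite)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Diagonal entry uses the already-computed part of row j.
        L[j, j] = np.sqrt(A[j, j] - np.dot(L[j, :j], L[j, :j]))
        # Entries below the diagonal in column j; these independent updates
        # are what a parallel-vector solver vectorizes and distributes.
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
L = cholesky(A)
print(np.allclose(L @ L.T, A))   # True
```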
Predicting Hurricanes with Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-01-01
Hurricane Emily, formed in the Atlantic Ocean on July 10, 2005, was the strongest hurricane ever to form before August. By checking computer models against the actual path of the storm, researchers can improve hurricane prediction. In 2010, NOAA researchers were awarded 25 million processor-hours on Argonne's BlueGene/P supercomputer for the project. Read more at http://go.usa.gov/OLh
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron
1992-01-01
Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.
Advances in petascale kinetic plasma simulation with VPIC and Roadrunner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Kevin J; Albright, Brian J; Yin, Lin
2009-01-01
VPIC, a first-principles 3d electromagnetic charge-conserving relativistic kinetic particle-in-cell (PIC) code, was recently adapted to run on Los Alamos's Roadrunner, the first supercomputer to break a petaflop (10^15 floating point operations per second) in the TOP500 supercomputer performance rankings. They give a brief overview of the modeling capabilities and optimization techniques used in VPIC and the computational characteristics of petascale supercomputers like Roadrunner. They then discuss three applications enabled by VPIC's unprecedented performance on Roadrunner: modeling laser plasma interaction in upcoming inertial confinement fusion experiments at the National Ignition Facility (NIF), modeling short pulse laser GeV ion acceleration and modeling reconnection in magnetic confinement fusion experiments.
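To give a flavor of the kinetic particle-in-cell method VPIC implements, the snippet below shows the classic non-relativistic Boris rotation used to advance a particle's velocity in electric and magnetic fields. It is a generic textbook step with invented sample values, not VPIC's relativistic, charge-conserving implementation.

```python
import numpy as np

def boris_push(v, E, B, q, m, dt):
    """Advance velocity v by one time step in fields E, B using the Boris scheme."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                  # first half electric kick
    t = qmdt2 * B                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
    return v_plus + qmdt2 * E                # second half electric kick

# Illustrative electron in a weak magnetic field (SI units).
v = np.array([1.0e5, 0.0, 0.0])
E = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0e-2])
q, m, dt = -1.602e-19, 9.109e-31, 1.0e-12
print(boris_push(v, E, B, q, m, dt))         # velocity rotated about B
```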
Supercomputing Sheds Light on the Dark Universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Heitmann, Katrin
2012-11-15
At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named a finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curran, L.
1988-03-03
Interest has been building in recent months over the imminent arrival of a new class of supercomputer, called the "supercomputer on a desk" or the single-user model. Most observers expected the first such product to come from either of two startups, Ardent Computer Corp. or Stellar Computer Inc. But a surprise entry has shown up. Apollo Computer Inc. is launching a new workstation this week that racks up an impressive list of industry firsts as it puts supercomputer power at the disposal of a single user. The new Series 10000 from the Chelmsford, Mass., company is built around a reduced-instruction-set architecture that the company calls Prism, for parallel reduced-instruction-set multiprocessor. This article describes the 10000 and Prism.
NASA Technical Reports Server (NTRS)
Murman, E. M. (Editor); Abarbanel, S. S. (Editor)
1985-01-01
Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.
Debate: Open radical prostatectomy vs. laparoscopic vs. robotic.
Nelson, Joel B
2007-01-01
Surgical removal of clinically localized prostate cancer remains the most definitive treatment for the disease. The emergence of laparoscopic and robotic radical prostatectomy (RP) as alternatives to open RP has generated considerable discussion about the real and relative merits of each approach. Such was the topic of a debate that took place during the 2006 Society of Urologic Oncology meeting. The participants were Dr. William Catalona, Northwestern University, advocating for open RP, Dr. Bertrand Guillonneau, Memorial Sloan Kettering Cancer Center, advocating for laparoscopic RP, and Dr. Mani Menon, Henry Ford Hospital, advocating for robotic RP. The debate was moderated by Dr. Joel Nelson, University of Pittsburgh. This paper summarizes that debate.
Optically powered remote gas monitor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubaniewicz, T.H. Jr.; Chilton, J.E.
1995-12-31
Many mines rely on toxic gas sensors to help maintain a safe and healthy work environment. This report describes a prototype monitoring system developed by the US Bureau of Mines (USBM) that uses light to power and communicate with several remote toxic gas sensors. The design is based on state-of-the-art optical-to-electrical power converters, solid-state diode lasers, and fiber optics. This design overcomes several problems associated with conventional wire-based systems by providing complete electrical isolation between the remote sensors and the central monitor. The prototype performed well during a 2-week field trial in the USBM Pittsburgh Research Center Safety Research Coal Mine.
Russell, Thomas P; Lahti, Paul M. (PHaSE - Polymer-Based Materials for Harvesting Solar Energy); PHaSE Staff
2017-12-09
'Solar Cells from Plastics? Mission Possible at the PHaSE Energy Research Center, UMass Amherst' was submitted by the Polymer-Based Materials for Harvesting Solar Energy (PHaSE) EFRC to the 'Life at the Frontiers of Energy Research' video contest at the 2011 Science for Our Nation's Energy Future: Energy Frontier Research Centers (EFRCs) Summit and Forum. Twenty-six EFRCs created short videos to highlight their mission and their work. PHaSE, an EFRC co-directed by Thomas P. Russell and Paul M. Lahti at the University of Massachusetts, Amherst, is a partnership of scientists from six institutions: UMass (lead), Oak Ridge National Laboratory, Pennsylvania State University, Rensselaer Polytechnic Institute, and the University of Pittsburgh. The Office of Basic Energy Sciences in the U.S. Department of Energy's Office of Science established the 46 Energy Frontier Research Centers (EFRCs) in 2009. These collaboratively-organized centers conduct fundamental research focused on 'grand challenges' and use-inspired 'basic research needs' recently identified in major strategic planning efforts by the scientific community. The overall purpose is to accelerate scientific progress toward meeting the nation's critical energy challenges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pentzer, Emily
"Solar Cells from Plastics? Mission Possible at the PHaSE Energy Research Center, UMass Amherst" was submitted by the Polymer-Based Materials for Harvesting Solar Energy (PHaSE) EFRC to the "Life at the Frontiers of Energy Research" video contest at the 2011 Science for Our Nation's Energy Future: Energy Frontier Research Centers (EFRCs) Summit and Forum. Twenty-six EFRCs created short videos to highlight their mission and their work. PHaSE, an EFRC co-directed by Thomas P. Russell and Paul M. Lahti at the University of Massachusetts, Amherst, is a partnership of scientists from six institutions: UMass (lead), Oak Ridge National Laboratory, Pennsylvania Statemore » University, Rensselaer Polytechnic Institute, and the University of Pittsburgh. The Office of Basic Energy Sciences in the U.S. Department of Energy's Office of Science established the 46 Energy Frontier Research Centers (EFRCs) in 2009. These collaboratively-organized centers conduct fundamental research focused on 'grand challenges' and use-inspired 'basic research needs' recently identified in major strategic planning efforts by the scientific community. The overall purpose is to accelerate scientific progress toward meeting the nation's critical energy challenges.« less
A Discussion of the Pittsburgh Reading Conference Papers.
ERIC Educational Resources Information Center
Samuels, S. J.
Reviews and evaluative comments concerning the 11 papers read during the April 1976 portion of the Pittsburgh conference on the theory and practice of beginning reading are included in this document. Before the papers are reviewed, information is presented on some questions posed at the conference within the context of two issues that were raised…
An Evaluation of the Pittsburgh Reading is FUNdamental Program.
ERIC Educational Resources Information Center
Boldovici, John A.; And Others
A study of one of the model "Reading is FUNdamental" (RIF) programs located in Pittsburgh, Pennsylvania, was made to determine the success of the program and to formulate suggestions for changes. RIF is a program in which free or inexpensive books are made available in a community through schools, libraries, and other local organizations…
University Urban Interface Study. The Pittsburgh Goals Study: A Summary.
ERIC Educational Resources Information Center
Nehnevajsa, Jiri; Coleman, Alan N.
The main purpose of this study was to determine the extent to which community consensus existed regarding a variety of major changes in Pittsburgh and the extent to which widely differing perspectives of community leaders might contribute to conflict, or at least significant difficulties, on these issues. A pragmatic secondary objective was to…
Proposed Missions and Organization of the U.S. Army Research, Development and Engineering Command
2005-01-01
successfully employed in many industries and by many companies, including Pittsburgh Steel, IBM, Unilever, and Ford. Each of these organizations fine-tuned…employed by many companies, such as Pittsburgh Steel, IBM, Unilever, and Ford, which have fine-tuned the matrix to suit their particular goals and
1992-09-24
California, and MagLev, Inc., in Pittsburgh, Pennsylvania. Also, panel recommendations from the New York Defense Spending and Impact Report…enhancement and job development. Examples of these consortiums include Calstart in Los Angeles, Ca. and Maglev, Inc. in Pittsburgh, Pa. - Panel
Higher Education Sustainability in Pittsburgh: Highlights from the AASHE 2011 Campus Tours
ERIC Educational Resources Information Center
Srinivasamohan, Ashwini; Walton, Judy; Wagner, Margo
2012-01-01
This quote by ecologist, "Silent Spring" author and Chatham University alum Rachel Carson reminds us of the everyday tenacity needed in working to advance a sustainable and just world. This publication celebrates that tenacity in the higher education sector, specifically among institutions in the Pittsburgh area. Historically known for…
An Experimental System for Research on Dynamic Skills Training.
1981-09-01
Bogey to be intercepted. The student enters B1. The system then displays a recommended intercept heading, say 270 degrees. The student must now send this…
Sex and Age Differences in the Risk Threshold for Delinquency
ERIC Educational Resources Information Center
Wong, Thessa M. L.; Loeber, Rolf; Slotboom, Anne-Marie; Bijleveld, Catrien C. J. H.; Hipwell, Alison E.; Stepp, Stephanie D.; Koot, Hans M.
2013-01-01
This study examines sex differences in the risk threshold for adolescent delinquency. Analyses were based on longitudinal data from the Pittsburgh Youth Study (n = 503) and the Pittsburgh Girls Study (n = 856). The study identified risk factors, promotive factors, and accumulated levels of risks as predictors of delinquency and nondelinquency,…
A Bold Move: Reframing Composition through Assessment Design
ERIC Educational Resources Information Center
Condran, Jeffrey
2010-01-01
This article discusses the decision of the Art Institute of Pittsburgh (AiP) in Pittsburgh, Pennsylvania to implement a rigorous writing program assessment in order to obtain the Middle States accreditation, and it describes the process of determining which assessment model would be the most appropriate for AiP's needs. The use of a quantitative…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-31
... Confectionery Bars) Pittsburgh, PA Tsudis Chocolate Company (Tsudis), an operator of FTZ 33, submitted a... the facility would involve the production of chocolate confectionery bars for export (no shipments for... markets, FTZ procedures could exempt Tsudis from customs duty payments on the foreign status material used...
ERIC Educational Resources Information Center
Kearns, Kevin P.
2014-01-01
The Nonprofit Clinic at the University of Pittsburgh gives graduate students the opportunity to serve as management consultants to nonprofit organizations. This article describes the learning objectives, logistics, and outcomes of the Nonprofit Clinic. Bloom's 1956 taxonomy of learning objectives is employed to assess learning outcomes.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-07
... Pittsburgh & West Virginia Railroad (PWV) and Wheeling & Lake Erie Railway Company (WLE) (collectively... Abandonments and Discontinuances of Service for PWV to abandon, and for WLE to discontinue its sublease rights... Service Zip Code 15220. Currently, PWV leases the line to Norfolk Southern Railway Company (NSR), which in...
Truth, Love and Campus Expansion. The University of Pittsburgh Experience.
ERIC Educational Resources Information Center
Shaw, Paul C.
This document provides a descriptive analysis of the University of Pittsburgh's experience with campus expansion during a 2 1/2 year period from fall 1970 to spring 1973. Part one describes a background and overview of campus expansion, a description of selective Oakland demographic characteristics, a discussion of the first major conflict with…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-20
..., Amdt 2 Oklahoma City, OK, Will Rogers World, Takeoff Minimums and Obstacle DP, Amdt 1 Portland, OR... CLOSE PARALLEL), Amdt 4, CANCELED Philadelphia, PA, Philadelphia Intl, ILS PRM RWY 27L (SIMULTANEOUS CLOSE PARALLEL), Amdt 3, CANCELED Pittsburgh, PA, Allegheny County, ILS OR LOC RWY 10, Amdt 6 Pittsburgh...
Pittsburgh Area Preschool Association Publication: Selected Articles (Volume 8, No. 1-4).
ERIC Educational Resources Information Center
Frank, Mary, Ed.
This compilation of short reports distributed to preschool teachers in the Pittsburgh area covers four main topics: (1) Adoption (2) Expressive Art Therapy, (3) The Infant, and (4) Learning Disorders in Young Children. The adoption section includes reports pertaining to the adoption process in Pennsylvania, adoptive parents' legal rights, medical…
None
2018-05-01
A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed "Ice Storm," this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen. For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.
Porting Ordinary Applications to Blue Gene/Q Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy
2015-08-31
Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
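As a rough, hypothetical illustration of the many-task pattern this abstract describes (not the authors' Swift code; the executable name and case labels are placeholders), a single large allocation can fan an ensemble of small independent runs out across MPI ranks:

    # Hypothetical sketch of many-task execution inside one large allocation.
    from mpi4py import MPI
    import subprocess

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Illustrative ensemble: 1000 small parameter cases handled by one big job.
    cases = [f"case_{i:04d}" for i in range(1000)]

    # Static round-robin assignment of cases to ranks; each rank runs an
    # ordinary serial executable on its share of the work.
    for case in cases[rank::size]:
        subprocess.run(["./app", "--input", case], check=False)  # placeholder app

    comm.Barrier()
    if rank == 0:
        print("ensemble complete")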
The Tera Multithreaded Architecture and Unstructured Meshes
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Mavriplis, Dimitri J.
1998-01-01
The Tera Multithreaded Architecture (MTA) is a new parallel supercomputer currently being installed at the San Diego Supercomputing Center (SDSC). This machine has an architecture quite different from contemporary parallel machines. The computational processor is a custom design, and the machine uses hardware to support very fine-grained multithreading. The main memory is shared, hardware randomized, and flat. These features make the machine highly suited to the execution of unstructured mesh problems, which are difficult to parallelize on other architectures. We report the results of a study carried out during July-August 1998 to evaluate the execution of EUL3D, a code that solves the Euler equations on an unstructured mesh, on the 2-processor Tera MTA at SDSC. Our investigation shows that parallelization of an unstructured code is extremely easy on the Tera. We were able to get an existing parallel code (designed for a shared memory machine) running on the Tera by changing only the compiler directives. Furthermore, a serial version of this code was compiled to run in parallel on the Tera by judicious use of directives to invoke the "full/empty" tag bits of the machine to obtain synchronization. This version achieves 212 and 406 Mflop/s on one and two processors respectively, and requires no attention to the data partitioning or placement issues that would be of paramount importance in other parallel architectures.
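The "full/empty" tag-bit synchronization mentioned above can be pictured with the following Python analogy; it is only an illustrative sketch of the semantics (a read blocks until a cell is full and empties it, a write blocks until it is empty and fills it), not Tera MTA directives or syntax:

    # Illustrative analogy only: a memory cell with a full/empty tag.
    import threading

    class FullEmptyCell:
        def __init__(self):
            self._cond = threading.Condition()
            self._full = False
            self._value = None

        def write(self, value):
            with self._cond:
                while self._full:          # wait until the cell is empty
                    self._cond.wait()
                self._value, self._full = value, True
                self._cond.notify_all()

        def read(self):
            with self._cond:
                while not self._full:      # wait until the cell is full
                    self._cond.wait()
                self._full = False
                self._cond.notify_all()
                return self._value

    # Usage: one thread produces values, the main thread consumes them in order.
    cell = FullEmptyCell()
    t = threading.Thread(target=lambda: [cell.write(i * i) for i in range(3)])
    t.start()
    print([cell.read() for _ in range(3)])   # [0, 1, 4]
    t.join()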
Amyloid tracers detect multiple binding sites in Alzheimer's disease brain tissue.
Ni, Ruiqing; Gillberg, Per-Göran; Bergfors, Assar; Marutle, Amelia; Nordberg, Agneta
2013-07-01
Imaging fibrillar amyloid-β deposition in the human brain in vivo by positron emission tomography has improved our understanding of the time course of amyloid-β pathology in Alzheimer's disease. The most widely used amyloid-β imaging tracer so far is (11)C-Pittsburgh compound B, a thioflavin derivative, but other (11)C- and (18)F-labelled amyloid-β tracers have been studied in patients with Alzheimer's disease and cognitively normal control subjects. However, it has not yet been established whether different amyloid tracers bind to identical sites on amyloid-β fibrils, offering the same ability to detect the regional amyloid-β burden in the brain. In this study, we characterized (3)H-Pittsburgh compound B binding in autopsied brain regions from 23 patients with Alzheimer's disease and 20 control subjects (aged 50 to 88 years). The binding properties of the amyloid tracers FDDNP, AV-45, AV-1 and BF-227 were also compared with those of (3)H-Pittsburgh compound B in the frontal cortices of patients with Alzheimer's disease. Saturation binding studies revealed the presence of high- and low-affinity (3)H-Pittsburgh compound B binding sites in the frontal cortex (K(d1): 3.5 ± 1.6 nM; K(d2): 133 ± 30 nM) and hippocampus (K(d1): 5.6 ± 2.2 nM; K(d2): 181 ± 132 nM) of Alzheimer's disease brains. The relative proportion of high-affinity to low-affinity sites was 6:1 in the frontal cortex and 3:1 in the hippocampus. One control showed both high- and low-affinity (3)H-Pittsburgh compound B binding sites (K(d1): 1.6 nM; K(d2): 330 nM) in the cortex, while the others had only a low-affinity site (K(d2): 191 ± 70 nM). (3)H-Pittsburgh compound B binding in Alzheimer's disease brains was higher in the frontal and parietal cortices than in the caudate nucleus and hippocampus, and negligible in the cerebellum. Competitive binding studies with (3)H-Pittsburgh compound B in the frontal cortices of Alzheimer's disease brains revealed high- and low-affinity binding sites for BTA-1 (Ki: 0.2 nM, 70 nM), florbetapir (1.8 nM, 53 nM) and florbetaben (1.0 nM, 65 nM). BF-227 displaced 83% of (3)H-Pittsburgh compound B binding, mainly at a low-affinity site (311 nM), whereas FDDNP displaced it only partly (40%). We propose a multiple binding site model for the amyloid tracers (binding sites 1, 2 and 3), in which AV-45 (florbetapir), AV-1 (florbetaben) and Pittsburgh compound B all show nanomolar affinity for the high-affinity site (binding site 1), as visualized by positron emission tomography. BF-227 binds mainly to site 3, and FDDNP shows only some binding to site 2. Different amyloid tracers may provide new insight into the pathophysiological mechanisms in the progression of Alzheimer's disease.
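For readers unfamiliar with two-site saturation binding, the sketch below evaluates the standard two-site model using the frontal-cortex Kd values quoted above; the Bmax values are only the reported 6:1 high:low proportion expressed in arbitrary units, not measured data:

    # Minimal sketch of the two-site saturation binding model.
    def two_site_binding(L, bmax1, kd1, bmax2, kd2):
        """Specific binding at free ligand concentration L (same units as Kd)."""
        return bmax1 * L / (kd1 + L) + bmax2 * L / (kd2 + L)

    kd1, kd2 = 3.5, 133.0        # nM, frontal cortex values from the abstract
    bmax1, bmax2 = 6.0, 1.0      # arbitrary units in the reported 6:1 ratio

    for L in (1.0, 10.0, 100.0):  # nM
        total = two_site_binding(L, bmax1, kd1, bmax2, kd2)
        high = bmax1 * L / (kd1 + L)
        print(f"L = {L:6.1f} nM: total {total:.2f}, high-affinity fraction {high / total:.2f}")

At tracer-level (nanomolar) concentrations nearly all of the modeled binding falls on the high-affinity site, which is consistent with the abstract's point about what positron emission tomography actually visualizes.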
STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.
Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X
2009-08-01
This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.
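Since STAMPS is described as PBS-based, a per-subject fan-out could look roughly like the following sketch; the script path, queue resource strings, and directory layout are assumptions for illustration, not the actual STAMPS interface:

    # Hedged sketch of PBS-style fan-out: one qsub submission per subject,
    # each running the same post-processing pipeline.
    import subprocess
    from pathlib import Path

    subjects = sorted(Path("subjects").glob("sub-*"))   # hypothetical layout

    pbs_template = """#!/bin/bash
    #PBS -N stamp_{name}
    #PBS -l nodes=1:ppn=4,walltime=04:00:00
    cd $PBS_O_WORKDIR
    ./run_stamp_pipeline.sh {subject_dir}
    """

    for subj in subjects:
        script = pbs_template.format(name=subj.name, subject_dir=subj)
        # qsub reads the job script from stdin when no file argument is given.
        subprocess.run(["qsub"], input=script, text=True, check=True)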
Japanese project aims at supercomputer that executes 10 gflops
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burskey, D.
1984-05-03
Dubbed Supercom by its multicompany design team, the decade-long project's goal is an engineering supercomputer that can execute 10 billion floating-point operations/s, about 20 times faster than today's supercomputers. The project, guided by Japan's Ministry of International Trade and Industry (MITI) and the Agency of Industrial Science and Technology, encompasses three parallel research programs, all aimed at some aspect of the supercomputer. One program should lead to superfast logic and memory circuits, another to a system architecture that will afford the best performance, and the last to the software that will ultimately control the computer. The work on logic and memory chips is based on GaAs circuits, Josephson junction devices, and high-electron-mobility transistor structures. The architecture will involve parallel processing.
Attending to the Noise: Applying Chaos Theory to School Reform.
ERIC Educational Resources Information Center
Wertheimer, Richard; Zinga, Mario
The Common Knowledge: Pittsburgh (CK:P) project, a technology-based initiative, introduced the Internet into all levels of the Pittsburgh Public Schools during 1993-97. This case study of the ideology, strategies, and process of the CK:P project describes the project's activities, examines the project in light of school-reform literature, and uses its…
2004-06-01
Department of EE and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA (pollackm@eecs.umich.edu); Sujata Banerjee, Info. Sci. & Telecom. Dept., University of Pittsburgh, Pittsburgh, PA 15260, USA (sujata@tele.pitt.edu). Abstract: An important aspect of Business to Business E-Commerce is the agile
ERIC Educational Resources Information Center
Genest, Maria T.
2014-01-01
Public libraries have long supported the literacy goals of public schools in their communities by providing access to printed and electronic resources that enhance learning and teaching. This article describes an ongoing collaboration between the Carnegie Library of Pittsburgh's BLAST outreach program and the Pittsburgh Public Schools that has…
The Pittsburgh Girls Study: Overview and Initial Findings
ERIC Educational Resources Information Center
Keenan, Kate; Hipwell, Alison; Chung, Tammy; Stepp, Stephanie; Stouthamer-Loeber, Magda; Loeber, Rolf; McTigue, Kathleen
2010-01-01
The Pittsburgh Girls Study is a longitudinal, community-based study of 2,451 girls who were initially recruited when they were between the ages of 5 and 8 years. The primary aim of the study was testing developmental models of conduct disorder, major depressive disorder, and their co-occurrence in girls. In the current article, we summarize the…
THE QUEST FOR RACIAL EQUALITY IN THE PITTSBURGH PUBLIC SCHOOLS. ANNUAL REPORT, 1965.
ERIC Educational Resources Information Center
Pittsburgh Board of Public Education, PA.
THIS REPORT FURNISHES AN ACCOUNT OF THE POLICIES OF THE PITTSBURGH PUBLIC SCHOOLS ON RACIAL INTEGRATION AND EQUALITY OF EDUCATIONAL OPPORTUNITY. PRINCIPLES, PRACTICES, AND PLANS FOR THE FUTURE ARE DETAILED, AND SPECIAL PROBLEM AREAS ARE IDENTIFIED. COMPENSATORY EDUCATION, EVEN IF IT IMPLIES DELAYED INTEGRATION IN SOME INSTANCES, IS SEEN AS THE…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-06
..., Inc.--Corporate Family Merger Exemption-- Buffalo, Rochester and Pittsburgh Company CSX Transportation... jointly filed a verified notice of exemption under 49 CFR 1180.2(d)(3) for a corporate family transaction... intends to merge BR&P into CSXT on or after that date. This is a transaction within a corporate family of...
This research investigated different strategies for source apportionment of airborne fine particulate matter (PM2.5) collected as part of the Pittsburgh Air Quality Study. Two source receptor models were used, the EPA Chemical Mass Balance 8.2 (CMB) and EPA Positive Matrix Facto...
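The receptor-model idea behind CMB (ambient species concentrations explained as a non-negative combination of source profiles) can be sketched as a small non-negative least-squares problem; the profile matrix and ambient concentrations below are invented for illustration, not Pittsburgh Air Quality Study data:

    # Hedged illustration of a chemical-mass-balance style receptor model.
    import numpy as np
    from scipy.optimize import nnls

    # Rows: chemical species; columns: candidate sources (fractions of PM mass).
    profiles = np.array([
        [0.05, 0.30, 0.01],   # elemental carbon
        [0.10, 0.40, 0.02],   # organic carbon
        [0.60, 0.02, 0.05],   # sulfate
        [0.02, 0.03, 0.70],   # crustal elements
    ])
    ambient = np.array([3.0, 5.0, 8.0, 1.5])   # ug/m3, hypothetical sample

    contributions, residual = nnls(profiles, ambient)
    print("estimated source contributions (ug/m3):", contributions.round(2))
    print("residual norm:", round(residual, 3))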
ERIC Educational Resources Information Center
University of Pittsburgh Office of Child Development, 2011
2011-01-01
This Special Report discusses how the Allegheny County Department of Human Services and Pittsburgh Public Schools took a major step toward closing a knowledge gap that prevents schools and human service agencies around the country from developing a deeper understanding of the children in their systems and collaborating on more effective, better…
ERIC Educational Resources Information Center
Camaioni, Nicole
2013-01-01
The overall purpose of this study was to capture the relationships made during the Campus Canines Program, an animal-assisted activity program, at the University of Pittsburgh. Meaningful social relationships create greater educational satisfaction. These social relationships are an important piece in creating and sustaining student involvement,…
ERIC Educational Resources Information Center
McClure, Kevin R.
1996-01-01
Analyzes a case where a group of local steelworkers in the Pittsburgh, Pennsylvania area, in conjunction with a small group of Protestant ministers, sought to gain public support for the redress of grievances, and how their subordination by the United Steel Workers' union and the Lutheran Church influenced their rhetorical activities. (SR)
ASTEC: Controls analysis for personal computers
NASA Technical Reports Server (NTRS)
Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.
1989-01-01
The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.
Visualizing the Big (and Large) Data from an HPC Resource
NASA Astrophysics Data System (ADS)
Sisneros, R.
2015-10-01
Supercomputers are built to endure painfully large simulations and contend with resulting outputs. These are characteristics that scientists are all too willing to test the limits of in their quest for science at scale. The data generated during a scientist's workflow through an HPC center (large data) is the primary target for analysis and visualization. However, the hardware itself is also capable of generating volumes of diagnostic data (big data); this presents compelling opportunities to deploy analogous analytic techniques. In this paper we will provide a survey of some of the many ways in which visualization and analysis may be crammed into the scientific workflow as well as utilized on machine-specific data.
Japanese supercomputer technology.
Buzbee, B L; Ewald, R H; Worlton, W J
1982-12-17
Under the auspices of the Ministry for International Trade and Industry the Japanese have launched a National Superspeed Computer Project intended to produce high-performance computers for scientific computation and a Fifth-Generation Computer Project intended to incorporate and exploit concepts of artificial intelligence. If these projects are successful, which appears likely, advanced economic and military research in the United States may become dependent on access to supercomputers of foreign manufacture.
Supercomputer Simulations Help Develop New Approach to Fight Antibiotic Resistance
Zgurskaya, Helen; Smith, Jeremy
2018-06-13
ORNL leveraged powerful supercomputing to support research led by University of Oklahoma scientists to identify chemicals that seek out and disrupt bacterial proteins called efflux pumps, known to be a major cause of antibiotic resistance. By running simulations on Titan, the team selected molecules most likely to target and potentially disable the assembly of efflux pumps found in E. coli bacteria cells.
Coal-fluid properties with an emphasis on dense phase. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klinzing, G.E.
1985-04-01
Many fossil fuel energy processes depend on the movement of solids by pneumatic transport. Despite the considerable amount of work reported in the literature on pneumatic transport, the design of new industrial systems for new products continues to rely to a great extent on empiricism. A pilot-scale test facility has been constructed at Pittsburgh Energy Technology Center (PETC), equipped with modern sophisticated measuring techniques (such as Pressure Transducers, Auburn Monitors and Micro Motion Mass Flow Meters) and an automatic computer-controlled data acquisition system to study the effects of particle pneumatic transport. Pittsburgh Seam and Montana Rosebud coals of varying size consist and moisture content were tested in the atmospheric and pressurized coal flow test loops (AP/CFTL and HP/CFTL) at PETC. The system parameters included conveying gas velocity, injector tank pressure, screw conveyor speed, pipe radius and pipe bends. In this report, results from the coal flow tests were presented and analyzed. Existing theories and correlations on two phase flows were reviewed. Experimental data were compared with values calculated from empirically or theoretically derived equations available in the literature and new correlations were proposed, when applicable, to give a better interpretation of the data and a better understanding of the various flow regimes involved in pneumatic transport. 55 refs., 56 figs., 6 tabs.
Adamo, D; Ruoppo, E; Leuci, S; Aria, M; Amato, M; Mignogna, M D
2015-02-01
Psychological factors and their association with chronic inflammatory disease are not well understood, and their importance in oral lichen planus is still debated. The aim of this study was to investigate the prevalence of sleep disturbances, anxiety, and depression, and their association, in patients with oral lichen planus. Fifty patients with oral lichen planus and an equal number of age- and sex-matched healthy controls were enrolled. Questionnaires examining insomnia symptoms and excessive daytime sleepiness (Pittsburgh Sleep Quality Index and Epworth Sleepiness Scale) and depression and anxiety (the Hamilton rating scales for depression and anxiety) were used. The patients with oral lichen planus had significantly higher scores on all items of the Pittsburgh Sleep Quality Index, the Hamilton rating scales for depression and anxiety, and the Epworth Sleepiness Scale than the healthy controls. The median and interquartile range of the Pittsburgh Sleep Quality Index were 5-2 for the oral lichen planus patients and 4-2 for the healthy controls (P < 0.011). In the study group, depressed mood and anxiety correlated positively with sleep disturbances. The Pearson correlations were 0.76 for the Pittsburgh Sleep Quality Index vs. the Hamilton rating scale for depression (P < 0.001) and 0.77 for the Pittsburgh Sleep Quality Index vs. the Hamilton rating scale for anxiety (P < 0.001). Oral lichen planus patients report a greater degree of sleep problems, depressed mood and anxiety than controls. We suggest screening for sleep disturbances in patients with oral lichen planus because they could be a prodromal symptom of mood disorders. © 2014 European Academy of Dermatology and Venereology.
Aviation Research and the Internet
NASA Technical Reports Server (NTRS)
Scott, Antoinette M.
1995-01-01
The Internet is a network of networks. It was originally funded by the Defense Advanced Research Projects Agency or DOD/DARPA and evolved in part from the connection of supercomputer sites across the United States. The National Science Foundation (NSF) made the most of their supercomputers by connecting the sites to each other. This made the supercomputers more efficient and now allows scientists, engineers and researchers to access the supercomputers from their own labs and offices. The high speed networks that connect the NSF supercomputers form the backbone of the Internet. The World Wide Web (WWW) is a menu system. It gathers Internet resources from all over the world into a series of screens that appear on your computer. The WWW is also a distributed system, which stores information on many computers (servers). These servers can go out and get data when you ask for it. Hypermedia is the base of the WWW. One can 'click' on a section and visit other hypermedia (pages). Our approach to demonstrating the importance of aviation research through the Internet began with learning how to put pages on the Internet (on-line) ourselves. We were assigned two aviation companies: Vision Micro Systems Inc. and Innovative Aerodynamic Technologies (IAT). We developed home pages for these SBIR companies. The equipment used to create the pages was UNIX and Macintosh machines. HTML Supertext software was used to write the pages and the Sharp JX600S scanner to scan the images. As a result, with the use of the UNIX, Macintosh, Sun, PC, and AXIL machines, we were able to present our home pages to over 800,000 visitors.
Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan
While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
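A minimal sketch of the libvirt/QEMU provisioning step described above might look as follows; the domain XML (guest name, image path, bridge name, memory, vCPUs) is an illustrative assumption rather than the authors' configuration:

    # Hedged sketch: start a transient KVM guest on a compute node via libvirt.
    import libvirt

    domain_xml = """
    <domain type='kvm'>
      <name>vcluster-node0</name>
      <memory unit='GiB'>8</memory>
      <vcpu>4</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/scratch/images/vcluster-node0.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='bridge'>
          <source bridge='br0'/>   <!-- e.g. an Ethernet-over-Aries bridge -->
        </interface>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    dom = conn.createXML(domain_xml, 0)     # define and start a transient guest
    print("started guest:", dom.name())
    conn.close()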
Outcome of 1000 liver cancer patients evaluated at the UPMC Liver Cancer Center.
Geller, David A; Tsung, Allan; Marsh, J Wallis; Dvorchik, Igor; Gamblin, T Clark; Carr, Brian I
2006-01-01
We evaluated 1000 consecutive patients with liver tumors at the University of Pittsburgh Medical Center (UPMC) Liver Cancer Center over the 4-year period from August 2000 to August 2004. Of the 1000 patients seen, 573 had primary liver cancer and 427 had metastatic cancer to the liver. The mean age of the patients evaluated was 62.2 years, and 61% were male. Treatment consisted of a liver surgical procedure (resection or radiofrequency ablation) in 369 cases (36.9%), hepatic intra-arterial regional therapy (transarterial chemoembolization or (90)yttrium microspheres) in 524 cases (52.4%), systemic chemotherapy in 35 cases (3.5%), and palliative care in 72 patients (7.2%). For treated patients, median survival was 884 days for those undergoing resection/radiofrequency ablation, compared to 295 days with regional therapy. These data indicate that over 90% of patients with liver cancer evaluated at a tertiary referral center can be offered some form of therapy. Survival rates are superior with a liver resection or ablation procedure, which is likely consistent with selection bias. Hepatocellular carcinoma was the most common tumor seen due to referral pattern and screening of hepatitis patients at a major liver transplant center. The most common reason for offering palliative care was hepatic insufficiency usually associated with cirrhosis.
Sexism in Textbooks in Pittsburgh Public Schools, Grades K-5.
ERIC Educational Resources Information Center
Scardina, Florence
Thirty-six textbooks used by the Pittsburgh public schools at grade levels K-5 were reviewed to see how they treat girls vs. boys and men vs. women. Language, reading, science, social studies, and mathematics texts were evaluated. Blatant sexism is found in all areas. Different ideas of behavior and mores are propagated for boys than for girls;…
Teacher Quality Roadmap: Improving Policies and Practices in Pittsburgh Public Schools
ERIC Educational Resources Information Center
Gonzalez, Angel; Kumar, Sudipti; Waymack, Nancy
2014-01-01
The Pittsburgh Public Schools study is the 12th district study since the National Council on Teacher Quality (NCTQ) began studying districts in-depth in 2009. The intent of these studies is to give select communities a comprehensive look at what is happening in their local school districts that may be either helping or hurting teacher quality, and…
ERIC Educational Resources Information Center
Ruetrakul, Pimon
The perceptions of Southeast Asian graduate students at the University of Pittsburgh were explored. The students were asked to elucidate the problems which arose as they and their families adapted to the American experience. Findings are reported in seven categories. They are the following: (1) academic expenses did not cause financial problems…
Personal Monitoring for Ambulatory Post-Traumatic Stress Disorder Assessment
2009-10-01
… Profile of Mood States (mini-POMS), Impact of Event Scale - Revised (IES-R), Beck Anxiety Inventory (BAI), Perceived Stress Scale (PSS), Pittsburgh Sleep Quality Index (PSQI), Hospital Anxiety and Depression Scale (HADS), Sheehan Disability Scale (SDS) … psychological and social measures are available for investigator selection, such as the PTSD Checklist - Military, Pittsburgh Sleep Quality Index, Beck Anxiety Inventory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padgett, P.L.; Hower, J.C.
1996-12-31
Five coals representing four distinct coal sources blended at a midwestern power station were subjected to detailed analysis of their Hardgrove grindability. The coals are: a low-sulfur, high volatile A bituminous Upper Elkhorn No. 3 coal (Pike County, KY); a medium-sulfur, high volatile A bituminous Pittsburgh coal (southwestern PA); a low-sulfur, subbituminous Wyodak coal from two mines in the eastern Powder River Basin (Campbell County, WY). The feed and all samples processed in the Hardgrove grindability test procedure were analyzed for their maceral and microlithotype content. The high-vitrinite Pittsburgh coal and the relatively more petrographically complex Upper Elkhorn No. 3 coal exhibit differing behavior in grindability. The Pittsburgh raw feed, 16x30 mesh fraction (HGI test fraction), and the -30 mesh fraction (HGI reject) are relatively similar petrographically, suggesting that the HGI test fraction is reasonably representative of the whole feed. The eastern Kentucky coal is not as representative of the whole feed, the HGI test fraction having lower vitrinite than the rejected -30 mesh fraction. The Powder River Basin coals are high vitrinite and show behavior similar to the Pittsburgh coal.
Factors associated with poor sleep quality in women with cancer.
Mansano-Schlosser, Thalyta Cristina; Ceolim, Maria Filomena
2017-03-02
The aim was to analyze the factors associated with poor sleep quality, its characteristics, and its components in women with breast cancer prior to surgery for removal of the tumor and throughout follow-up. This was a longitudinal study in a teaching hospital with a sample of 102 women. The instruments used were a questionnaire for sociodemographic and clinical characterization, the Pittsburgh Sleep Quality Index, the Beck Depression Inventory, and the Herth Hope Scale. Data collection ran from before the surgery for removal of the tumor (T0) to T1 (on average 3.2 months later), T2 (on average 6.1 months), and T3 (on average 12.4 months). Descriptive statistics and a Generalized Estimating Equations model were used. Depression and pain contributed to an increase in the Pittsburgh Sleep Quality Index score, and hope to a reduction of the score, each independently, throughout follow-up. Sleep disturbances were the component with the highest score throughout follow-up. The presence of depression and pain prior to surgery contributed to an increase in the global Pittsburgh Sleep Quality Index score, indicating worse sleep quality throughout follow-up; greater hope, in turn, contributed to a reduction of the score.
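A Generalized Estimating Equations analysis of the kind described (repeated Pittsburgh Sleep Quality Index scores regressed on depression, pain, and hope) could be set up roughly as below; the variable names and the tiny example data frame are assumptions, not study data:

    # Hedged sketch of a GEE model for repeated PSQI scores.
    import pandas as pd
    import statsmodels.api as sm

    # Long-format data: one row per woman per assessment; all values are made up.
    df = pd.DataFrame({
        "subject":    [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "months":     [0, 3, 6, 0, 3, 6, 0, 3, 6, 0, 3, 6],
        "psqi":       [9, 8, 7, 5, 6, 5, 11, 10, 9, 6, 6, 5],
        "depression": [14, 12, 10, 4, 5, 4, 20, 18, 15, 6, 5, 5],
        "pain":       [6, 5, 4, 2, 2, 1, 7, 6, 5, 3, 2, 2],
        "hope":       [30, 32, 34, 40, 39, 41, 25, 27, 30, 38, 38, 40],
    })

    # GEE with an exchangeable working correlation over the repeated measures.
    model = sm.GEE.from_formula(
        "psqi ~ depression + pain + hope + months",
        groups="subject",
        data=df,
        family=sm.families.Gaussian(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    print(model.fit().summary())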
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Under contract with the US Department of Energy (DE-AC22-92PCO0367), Pittsburgh Energy Technology Center, Radian Corporation has conducted a test program to collect and analyze size-fractionated stack gas particulate samples for selected inorganic hazardous air pollutants (HAPs). Specific goals of the program are (1) the collection of one-gram quantities of size-fractionated stack gas particulate matter for bulk (total) and surface chemical characterization, and (2) the determination of the relationship between particle size, bulk and surface (leachable) composition, and unit load. The information obtained from this program identifies the effects of unit load, particle size, and wet FGD system operation on the relative toxicological effects of exposure to particulate emissions.
CFD applications: The Lockheed perspective
NASA Technical Reports Server (NTRS)
Miranda, Luis R.
1987-01-01
The Numerical Aerodynamic Simulator (NAS) epitomizes the coming of age of supercomputing and opens exciting horizons in the world of numerical simulation. An overview of supercomputing at Lockheed Corporation in the area of Computational Fluid Dynamics (CFD) is presented. This overview will focus on developments and applications of CFD as an aircraft design tool and will attempt to present an assessment, within this context, of the state-of-the-art in CFD methodology.
Computational mechanics analysis tools for parallel-vector supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning
1993-01-01
Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.
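As a toy illustration of the element-level parallelism that structural-equation assembly algorithms of this kind exploit (not the authors' code), the following assembles a global stiffness matrix for a 1D bar from independent element contributions:

    # Toy sketch: element-by-element assembly of a 1D bar stiffness matrix.
    import numpy as np

    n_elem, EA, L = 4, 1.0, 1.0                              # hypothetical bar
    h = L / n_elem
    k_e = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])    # element stiffness

    K = np.zeros((n_elem + 1, n_elem + 1))
    for e in range(n_elem):              # each element's contribution is
        dofs = [e, e + 1]                # independent, so a parallel code can
        K[np.ix_(dofs, dofs)] += k_e     # assemble elements concurrently

    print(K)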
A Layered Solution for Supercomputing Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grider, Gary
To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.