Supercomputing Drives Innovation - Continuum Magazine | NREL
NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2014-09-01
NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools for that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.
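The data-center efficiency claim can be framed with the standard metric, power usage effectiveness (PUE): total facility power divided by IT equipment power. A minimal sketch, with illustrative numbers rather than NREL's measured values:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power; 1.0 is the ideal."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_kw

# Illustrative only: warm-water liquid cooling needs little chiller power,
# so nearly all facility power goes to the IT load itself.
example = pue(1060.0, 1000.0)  # 1,060 kW facility draw for a 1,000 kW IT load
```

Reusing the waste heat for building heating does not change the PUE figure itself; it improves the overall site energy balance instead.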
Richard P. Feynman Center for Innovation
A Layered Solution for Supercomputing Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grider, Gary
2018-06-13
To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.
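The tiering described above can be sketched as a toy checkpoint path: the simulation dumps state to a fast burst-buffer tier and resumes, and checkpoints later drain down to cheaper campaign storage. The function names and file layout here are illustrative, not the burst-buffer or MarFS APIs:

```python
import shutil
from pathlib import Path

def checkpoint(state: bytes, burst: Path, step: int) -> Path:
    """Write a checkpoint to the fast flash tier so the simulation can resume quickly."""
    burst.mkdir(parents=True, exist_ok=True)
    path = burst / f"ckpt_{step:06d}.bin"
    path.write_bytes(state)
    return path

def drain(burst: Path, campaign: Path) -> list[Path]:
    """Later, migrate finished checkpoints down to the cheap, capacious campaign tier."""
    campaign.mkdir(parents=True, exist_ok=True)
    moved = []
    for ckpt in sorted(burst.glob("ckpt_*.bin")):
        dest = campaign / ckpt.name
        shutil.move(str(ckpt), str(dest))  # inexpensive disks hold the bulk copy
        moved.append(dest)
    return moved
```

The point of the split is that the simulation only ever waits on the fast tier; the slow migration happens off the critical path.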
NASA Astrophysics Data System (ADS)
Hecht, K. T.
2012-12-01
This volume contains the contributions of the speakers at an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at the departmental, university, national and international levels. Signed: Ted Hecht, University of Michigan. Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame); Baha Balantekin (University of Wisconsin); Bruce Barrett (University of Arizona); Umit Catalyurek (Ohio State University); David Dean (Oak Ridge National Laboratory); Jutta Escher, Chair (Lawrence Livermore National Laboratory); Jorge Hirsch (UNAM, Mexico); David Rowe (University of Toronto); Brad Sherill (Michigan State University); Joel Tohline (Louisiana State University); Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University); Mark Caprio (University of Notre Dame); Tomas Dytrych (Louisiana State University); Ana Georgieva (INRNE, Bulgaria); Kristina Launey, Co-chair (Louisiana State University); Gabriella Popa (Ohio University Zanesville); James Vary, Co-chair (Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University); Charlie Rasco (Louisiana State University); Karen Richard, Coordinator (Louisiana State University).
US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-01-01
The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.
Green Supercomputing at Argonne
Beckman, Pete
2018-02-07
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), talks about Argonne National Laboratory's green supercomputing: everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers' Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/
Supercomputing Sheds Light on the Dark Universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Heitmann, Katrin
2012-11-15
At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named a finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.
1993 Gordon Bell Prize Winners
NASA Technical Reports Server (NTRS)
Karp, Alan H.; Simon, Horst; Heller, Don; Cooper, D. M. (Technical Monitor)
1994-01-01
The Gordon Bell Prize recognizes significant achievements in the application of supercomputers to scientific and engineering problems. In 1993, finalists were named for work in three categories: (1) Performance, which recognizes those who solved a real problem in the quickest elapsed time. (2) Price/performance, which encourages the development of cost-effective supercomputing. (3) Compiler-generated speedup, which measures how well compiler writers are facilitating the programming of parallel processors. The winners were announced November 17 at the Supercomputing 93 conference in Portland, Oregon. Gordon Bell, an independent consultant in Los Altos, California, is sponsoring $2,000 in prizes each year for 10 years to promote practical parallel processing research. This is the sixth year of the prize, which Computer administers. Something unprecedented in Gordon Bell Prize competition occurred this year: A computer manufacturer was singled out for recognition. Nine entries reporting results obtained on the Cray C90 were received, seven of the submissions orchestrated by Cray Research. Although none of these entries showed sufficiently high performance to win outright, the judges were impressed by the breadth of applications that ran well on this machine, all nine running at more than a third of the peak performance of the machine.
Impacts | Computational Science | NREL
Read about the impacts of NREL's innovations in computational science, including the 2014 R&D 100 Award and R&D Magazine Editors' Choice honors for the Peregrine supercomputer.
NASA Advanced Supercomputing Facility Expansion
NASA Technical Reports Server (NTRS)
Thigpen, William W.
2017-01-01
The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.
Adventures in supercomputing: An innovative program for high school teachers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, C.E.; Hicks, H.R.; Summers, B.G.
1994-12-31
Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode." Not only is the process of teaching changed, but the cross-curricular integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper describes the AiS program and its effects on teachers and students, primarily at Wartburg Central High School in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).
Science & Technology Review November 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radousky, H
This month's issue has the following articles: (1) Expanded Supercomputing Maximizes Scientific Discovery--Commentary by Dona Crawford; (2) Thunder's Power Delivers Breakthrough Science--Livermore's Thunder supercomputer allows researchers to model systems at scales never before possible. (3) Extracting Key Content from Images--A new system called the Image Content Engine is helping analysts find significant but hard-to-recognize details in overhead images. (4) Got Oxygen?--Oxygen, especially oxygen metabolism, was key to evolution, and a Livermore project helps find out why. (5) A Shocking New Form of Laserlike Light--According to research at Livermore, smashing a crystal with a shock wave can result in coherent light.
Argonne wins four R&D 100 Awards | Argonne National Laboratory
Winning technologies included a high-energy concentration-gradient cathode material for plug-in hybrid and all-electric vehicles, and Globus services linking scientific facilities (such as supercomputing centers and high energy physics experiments) with cloud storage. The awards recognize success in "converting discovery science into innovative, high-impact products, processes and systems."
Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide
Tang, William; Wang, Bei; Ethier, Stephane; ...
2016-11-01
The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code including (i) implementation of one-sided communication from the MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.
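The two comparison metrics named in the abstract reduce to a simple calculation: time-to-solution is wall-clock time, and energy-to-solution is average power draw multiplied by that time. A sketch with made-up platform figures, not the paper's measurements:

```python
def time_to_solution(wall_seconds: float) -> float:
    """Wall-clock time to complete the simulation, in seconds."""
    return wall_seconds

def energy_to_solution(wall_seconds: float, avg_power_watts: float) -> float:
    """Energy in joules = average power (W) x wall-clock time (s)."""
    return avg_power_watts * wall_seconds

# Hypothetical comparison of two platforms running the same problem:
a = energy_to_solution(3600.0, 2.0e6)  # 1 h at 2 MW
b = energy_to_solution(5400.0, 1.0e6)  # 1.5 h at 1 MW
# Platform A wins on time-to-solution; platform B wins on energy-to-solution.
```

The example shows why both metrics are needed: the faster machine is not automatically the more energy-efficient one.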
NREL Receives Editors' Choice Awards for Supercomputer Research | News |
The awards recognize outstanding research in computational molecular science and engineering; the honored NREL work, presented as "Mechanisms of Cellulose-Active Enzymes Using Molecular Simulation" at the AIChE 2014 Annual Meeting, followed up molecular simulations with experimental work, Beckham said.
Impacting Innovation and Commercialization: NREL's Partnering Facilities
HP's Apollo 8000 System is based on the ESIF's Peregrine supercomputer and uses component-level warm-water cooling. "NREL is the partner we needed and wanted for the first-born in the Apollo family," said Nic Dubé, Peregrine's system architect and HP's technical lead for Apollo.
NASA Astrophysics Data System (ADS)
Tripathi, Vijay S.; Yeh, G. T.
1993-06-01
Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.
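The survey's performance-to-price comparison amounts to a simple ratio of a performance proxy (the reciprocal of run time) to hardware cost. A sketch with invented run times and prices, not the paper's measured data:

```python
def perf_per_dollar(runtime_s: float, price_usd: float) -> float:
    """Performance proxy = 1/runtime; higher is better per dollar spent."""
    return (1.0 / runtime_s) / price_usd

# Hypothetical systems: (run time in seconds, purchase price in USD).
systems = {
    "vector supercomputer": (100.0, 10_000_000.0),  # fastest, but costly and shared
    "RISC workstation":     (450.0, 10_000.0),      # ~4.5x slower, far cheaper
}
best = max(systems, key=lambda s: perf_per_dollar(*systems[s]))
```

With numbers of this shape, the workstation wins the ratio by orders of magnitude even while losing on raw speed, which is the survey's central observation.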
A Look at the Impact of High-End Computing Technologies on NASA Missions
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart
2012-01-01
From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state of the art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how: from early supercomputing environment design and software development, to long-term simulation and analyses critical to designing safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission's discovery of new planets now capturing the world's imagination.
Aviation Research and the Internet
NASA Technical Reports Server (NTRS)
Scott, Antoinette M.
1995-01-01
The Internet is a network of networks. It was originally funded by the Defense Advanced Research Projects Agency (DOD/DARPA) and evolved in part from the connection of supercomputer sites across the United States. The National Science Foundation (NSF) made the most of its supercomputers by connecting the sites to each other. This made the supercomputers more efficient and now allows scientists, engineers and researchers to access the supercomputers from their own labs and offices. The high-speed networks that connect the NSF supercomputers form the backbone of the Internet. The World Wide Web (WWW) is a menu system. It gathers Internet resources from all over the world into a series of screens that appear on your computer. The WWW is also a distributed system: it stores data on many computers (servers), which retrieve that data when you ask for it. Hypermedia is the basis of the WWW. One can 'click' on a section and visit other hypermedia (pages). Our approach to demonstrating the importance of aviation research through the Internet began with learning how to put pages on the Internet (on-line) ourselves. We were assigned two aviation companies: Vision Micro Systems Inc. and Innovative Aerodynamic Technologies (IAT). We developed home pages for these SBIR companies. The equipment used to create the pages included UNIX and Macintosh machines. HTML Supertext software was used to write the pages, and a Sharp JX600S scanner was used to scan the images. As a result, with the use of UNIX, Macintosh, Sun, PC, and AXIL machines, we were able to present our home pages to over 800,000 visitors.
ERIC Educational Resources Information Center
Arfstrom, Kari M.
2009-01-01
This dissertation describes how entrepreneurial superintendents of educational service agencies (ESAs) recognize, determine and address common and distinct innovative characteristics within emerging or established regional educational environments. Because internal and external factors assist in recognizing innovative practices, this study…
Science & Technology Review June 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poyneer, L A
2012-04-20
This month's issue has the following articles: (1) A New Era in Climate System Analysis - Commentary by William H. Goldstein; (2) Seeking Clues to Climate Change - By comparing past climate records with results from computer simulations, Livermore scientists can better understand why Earth's climate has changed and how it might change in the future; (3) Finding and Fixing a Supercomputer's Faults - Livermore experts have developed innovative methods to detect hardware faults in supercomputers and help applications recover from errors that do occur; (4) Targeting Ignition - Enhancements to the cryogenic targets for National Ignition Facility experiments are furthering work to achieve fusion ignition with energy gain; (5) Neural Implants Come of Age - A new generation of fully implantable, biocompatible neural prosthetics offers hope to patients with neurological impairment; and (6) Incubator Busy Growing Energy Technologies - Six collaborations with industrial partners are using the Laboratory's high-performance computing resources to find solutions to urgent energy-related problems.
P2P Technology for High-Performance Computing: An Overview
NASA Technical Reports Server (NTRS)
Follen, Gregory J. (Technical Monitor); Berry, Jason
2003-01-01
The transition from cluster computing to peer-to-peer (P2P) high-performance computing has recently attracted the attention of the computer science community. It has been recognized that existing local networks and dedicated clusters of headless workstations can serve as inexpensive yet powerful virtual supercomputers. It has also been recognized that the vast number of lower-end computers connected to the Internet stay idle for as much as 90% of the time. The growing speed of Internet connections and the high availability of free CPU time encourage exploration of the possibility of using the whole Internet, rather than local clusters, as a massively parallel yet almost freely available P2P supercomputer. As part of a larger project on P2P high-performance computing, it has been my goal to compile an overview of the P2P paradigm. I have studied various P2P platforms and compiled systematic brief descriptions of their most important characteristics. I have also experimented and obtained hands-on experience with selected P2P platforms, focusing on those that seem promising with respect to P2P high-performance computing. I have also compiled relevant literature and web references. I have prepared a draft technical report and summarized my findings in a poster paper.
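The idle-cycle scavenging idea behind P2P computing can be sketched as a toy scheduler that hands work units only to peers reporting themselves idle; real platforms add data transfer, result validation, and fault tolerance. All names here are illustrative:

```python
from collections import deque

class Peer:
    """A volunteer machine that may or may not have spare CPU time."""
    def __init__(self, name: str, idle: bool):
        self.name, self.idle, self.done = name, idle, []

    def run(self, task: int) -> int:
        self.done.append(task)
        return task * task  # stand-in for a real unit of scientific work

def schedule(tasks: list[int], peers: list["Peer"]) -> dict[int, int]:
    """Round-robin tasks over idle peers only; busy peers are skipped entirely."""
    idle = [p for p in peers if p.idle]
    if not idle:
        raise RuntimeError("no idle peers available")
    queue, results, i = deque(tasks), {}, 0
    while queue:
        t = queue.popleft()
        results[t] = idle[i % len(idle)].run(t)
        i += 1
    return results
```

The scheduler never touches a busy peer, which is the essence of harvesting only otherwise-wasted cycles.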
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-28
... Aerospace Innovation in Science and Engineering (RAISE) Award AGENCY: Office of the Secretary, U.S... demonstrate unique, innovative thinking in aerospace science and engineering. With this award, the Secretary... Science and Engineering) Award will recognize innovative scientific and engineering achievements that will...
Energy Innovation Hubs: A Home for Scientific Collaboration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, Steven
2017-12-11
Secretary Chu will host a live, streaming Q&A session with the directors of the Energy Innovation Hubs on Tuesday, March 6, at 2:15 p.m. EST. The directors will be available for questions regarding their teams' work and the future of American energy. Ask your questions in the comments below, or submit them on Facebook, Twitter (@energy), or send an e-mail to newmedia@hq.doe.gov, prior to or during the live event. Dr. Hank Foley is the director of the Greater Philadelphia Innovation Cluster for Energy-Efficient Buildings, which is pioneering new data-intensive techniques for designing and operating energy-efficient buildings, including advanced computer modeling. Dr. Douglas Kothe is the director of the Consortium for Advanced Simulation of Light Water Reactors, which uses powerful supercomputers to create "virtual" reactors that will help improve the safety and performance of both existing and new nuclear reactors. Dr. Nathan Lewis is the director of the Joint Center for Artificial Photosynthesis, which focuses on how to produce fuels from sunlight, water, and carbon dioxide. The Energy Innovation Hubs are major integrated research centers, with researchers from many different institutions and technical backgrounds. Each Hub is focused on a specific high-priority goal, rapidly accelerating scientific discoveries and shortening the path from laboratory innovation to technological development and commercial deployment of critical energy technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, P.; Martin, D.; Drugan, C.
2010-11-23
This year the Argonne Leadership Computing Facility (ALCF) delivered nearly 900 million core hours of science. The research conducted at their leadership-class facility touched our lives in both minute and massive ways - whether it was studying the catalytic properties of gold nanoparticles, predicting protein structures, or unearthing the secrets of exploding stars. The authors remained true to their vision of acting as a forefront computational center, extending science frontiers by solving pressing problems for our nation. Our success in this endeavor was due mainly to the Department of Energy's (DOE) INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program. The program awards significant amounts of computing time to computationally intensive, unclassified research projects that can make high-impact scientific advances. This year, DOE allocated 400 million hours of time to 28 research projects at the ALCF. Scientists from around the world conducted the research, representing such esteemed institutions as the Princeton Plasma Physics Laboratory, National Institute of Standards and Technology, and European Center for Research and Advanced Training in Scientific Computation. Argonne also provided Director's Discretionary allocations for research challenges, addressing such issues as reducing aerodynamic noise, critical for next-generation 'green' energy systems. Intrepid - the ALCF's 557-teraflop IBM Blue Gene/P supercomputer - enabled astounding scientific solutions and discoveries. Intrepid went into full production five months ahead of schedule. As a result, the ALCF nearly doubled the days of production computing available to the DOE Office of Science, INCITE awardees, and Argonne projects. One of the fastest supercomputers in the world for open science, the energy-efficient system uses about one-third as much electricity as a machine of comparable size built with more conventional parts.
In October 2009, President Barack Obama recognized the excellence of the entire Blue Gene series by awarding it the National Medal of Technology and Innovation. Other noteworthy achievements included the ALCF's collaboration with the National Energy Research Scientific Computing Center (NERSC) to examine cloud computing as a potential new computing paradigm for scientists. Named Magellan, the DOE-funded initiative will explore which science application programming models work well within the cloud, as well as evaluate the challenges that come with this new paradigm. The ALCF obtained approval for its next-generation machine, a 10-petaflop system to be delivered in 2012. This system will allow us to solve ever more pressing problems even more expeditiously through breakthrough science in the years to come.
EPA Recognizes Excellence and Innovation in Clean Water Infrastructure
Today, the U.S. Environmental Protection Agency recognized 28 clean water infrastructure projects for excellence & innovation within the Clean Water State Revolving Fund (CWSRF) program. Honored projects include large wastewater infrastructure projects.
Hot Technology, Cool Science (LBNL Science at the Theater)
Fowler, John
2018-06-08
Great innovations start with bold ideas. Learn how Berkeley Lab scientists are devising practical solutions to everything from global warming to how you get to work. On May 11, 2009, five Berkeley Lab scientists participated in a roundtable discussion of their leading-edge research, moderated by KTVU's John Fowler. This "Science at the Theater" event, held at the Berkeley Repertory Theatre, featured technologies such as cool roofs, battery-driven transportation, a pocket-sized DNA probe, green supercomputing, and a noncontact method for restoring damaged and fragile mechanical recordings.
Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data
NASA Astrophysics Data System (ADS)
Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.
2018-03-01
One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, to a decrease in the speed of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which makes it possible to identify different typical classes of programs, to explore the structure of the supercomputer job flow, and to track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, the results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
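The third approach described above — flagging jobs whose behavior deviates sharply from the overall job flow — can be sketched with a simple statistical outlier test. This is an illustrative approximation, not the method used at the Moscow State University center; the metric name `cpu_util` and the z-score threshold are assumptions.

```python
# Hypothetical sketch: flag jobs whose CPU utilization deviates from the
# job flow's overall distribution by more than `threshold` standard
# deviations. Metric names and thresholds are illustrative only.
from statistics import mean, stdev

def find_abnormal_jobs(jobs, threshold=3.0):
    """Return jobs whose cpu_util is a statistical outlier."""
    utils = [job["cpu_util"] for job in jobs]
    mu, sigma = mean(utils), stdev(utils)
    if sigma == 0:
        return []
    return [job for job in jobs
            if abs(job["cpu_util"] - mu) / sigma > threshold]

jobs = [{"id": i, "cpu_util": 0.8} for i in range(20)]
jobs.append({"id": 99, "cpu_util": 0.01})   # a nearly idle, likely abnormal job
abnormal = find_abnormal_jobs(jobs)
```

A production monitoring system would of course use many metrics at once (memory bandwidth, network traffic, load average), but the same compare-against-the-flow idea applies.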
Optimization of Supercomputer Use on EADS II System
NASA Technical Reports Server (NTRS)
Ahmed, Ardsher
1998-01-01
The main objective of this research was to optimize supercomputer use to achieve better throughput and utilization of supercomputers and to help facilitate the movement of non-supercomputing (inappropriate for supercomputer) codes to mid-range systems for better use of Government resources at Marshall Space Flight Center (MSFC). This work involved the survey of architectures available on EADS II and monitoring customer (user) applications running on a CRAY T90 system.
Supercomputer applications in molecular modeling.
Gund, T M
1988-01-01
An overview of the functions performed by molecular modeling is given. Molecular modeling techniques benefiting from supercomputing are described, namely, conformational search, deriving bioactive conformations, pharmacophoric pattern searching, receptor mapping, and electrostatic properties. The use of supercomputers for problems that are computationally intensive, such as protein structure prediction, protein dynamics and reactivity, protein conformations, and energetics of binding, is also examined. The current status of supercomputing and supercomputer resources is discussed.
The role of graphics super-workstations in a supercomputing environment
NASA Technical Reports Server (NTRS)
Levin, E.
1989-01-01
A new class of very powerful workstations has recently become available which integrates near-supercomputer computational performance with very powerful, high-quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding and communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).
48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...
48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...
48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...
48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...
48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...
Data-intensive computing on numerically-insensitive supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, James P; Fasel, Patricia K; Habib, Salman
2010-12-03
With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.
Computational Electromagnetics and Supercomputer Architecture
NASA Technical Reports Server (NTRS)
Cwik, Tom
1993-01-01
The dramatic increase in performance over the last decade for microprocessor computations is compared with that for supercomputer computations. This performance, the projected performance, and a number of other issues such as cost and the inherent physical limitations in current supercomputer technology have naturally led to parallel supercomputers and ensembles of interconnected microprocessors.
NREL Technologies Win National Awards
Research & Development Magazine. The annual awards recognize the year's 100 most important, unique and useful innovations. The magazine recognized PV Optics as one of the most important technological advances of 1997. These innovations reflect the breadth of resources that the labs are using to solve practical problems.
Edison - A New Cray Supercomputer Advances Discovery at NERSC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy
2014-02-06
When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.
Edison - A New Cray Supercomputer Advances Discovery at NERSC
Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie
2018-01-16
When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.
Progress in a novel architecture for high performance processing
NASA Astrophysics Data System (ADS)
Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin
2018-04-01
The high performance processing (HPP) is an innovative architecture which targets high-performance computing with excellent power efficiency and computing performance. It is suitable for data-intensive applications like supercomputing, machine learning and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, the first generation of HPP cores, has been taped out successfully under the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low power process. The innovative architecture shows great energy efficiency over the traditional central processing unit (CPU) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP has made great improvements in architecture. A chip with 32 HPP cores is being developed under the TSMC 16 nm FFC technology process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).
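The two quoted figures imply a power budget for the 32-core chip: peak performance divided by power efficiency gives the power draw at peak. A quick back-of-the-envelope check (the ~48 W result is our inference, not a figure from the paper):

```python
# Implied power draw of the 32-core HPP chip from the quoted figures:
# 4.3 TFLOPS peak at 89.5 GFLOPS/W.
peak_tflops = 4.3
efficiency_gflops_per_w = 89.5
power_w = peak_tflops * 1000 / efficiency_gflops_per_w  # roughly 48 W
```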
Create full-scale predictive economic models on ROI and innovation with performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph, Earl C.; Conway, Steve
The U.S. Department of Energy (DOE), the world's largest buyer and user of supercomputers, awarded IDC Research, Inc. a grant to create two macroeconomic models capable of quantifying, respectively, financial and non-financial (innovation) returns on investments in HPC resources. Following a 2013 pilot study in which we created the models and tested them on about 200 real-world HPC cases, DOE authorized us to conduct a full-out, three-year grant study to collect and measure many more examples, a process that would also subject the methodology to further testing and validation. A secondary, "stretch" goal of the full-out study was to advance the methodology from association toward (but not all the way to) causation, by eliminating the effects of some of the other factors that might be contributing, along with HPC investments, to the returns produced in the investigated projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boris, J.P.; Picone, J.M.; Lambrakos, S.G.
The Surveillance, Correlation, and Tracking (SCAT) problem is the computation-limited kernel of future battle-management systems currently being developed, for example, under the Strategic Defense Initiative (SDI). This report shows how high-performance SCAT can be performed in this decade. Estimates suggest that an increase by a factor of at least one thousand in computational capacity will be necessary to track 10^5 SDI objects in real time. This large improvement is needed because standard algorithms for data organization in important segments of the SCAT problem scale as N^2 and N^3, where N is the number of perceived objects. It is shown that the required speed-up factor can now be achieved because of two new developments: 1) a heterogeneous-element supercomputer system based on available parallel-processing technology can account for over one order of magnitude performance improvement today over existing supercomputers; and 2) algorithmic innovations developed recently by the NRL Laboratory for Computational Physics will account for another two orders of magnitude improvement. Based on these advances, a comprehensive, high-performance kernel for a simulator/system to perform the SCAT portion of SDI battle management is described.
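The N^2 bottleneck mentioned above comes from testing every pair of perceived objects against every other. A standard way to break that scaling (shown here as an illustrative sketch, not the NRL algorithms) is spatial binning: hash objects into cells the size of the correlation radius and test only pairs in the same or adjacent cells, which approaches O(N) for roughly uniform object densities.

```python
# Illustrative comparison: brute-force O(N^2) pair correlation versus
# spatial binning, which checks only same-cell and neighboring-cell pairs.
from collections import defaultdict

def pairs_brute_force(objects, radius):
    """O(N^2): test every pair of 2-D points for proximity."""
    close = []
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            (x1, y1), (x2, y2) = objects[i], objects[j]
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= radius ** 2:
                close.append((i, j))
    return close

def pairs_binned(objects, radius):
    """Near O(N): hash points into cells of size `radius`, then test only
    pairs within the same or adjacent cells."""
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(objects):
        cells[(int(x // radius), int(y // radius))].append(idx)
    close = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in cells.get((cx + dx, cy + dy), []):
                        if i < j:
                            (x1, y1), (x2, y2) = objects[i], objects[j]
                            if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= radius ** 2:
                                close.add((i, j))
    return sorted(close)

objects = [(0.0, 0.0), (0.5, 0.0), (10.0, 10.0), (10.2, 10.1)]
assert pairs_binned(objects, 1.0) == pairs_brute_force(objects, 1.0)
```

Both functions agree on the result; only the binned version avoids touching distant pairs, which is where the claimed orders-of-magnitude algorithmic gains typically come from.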
Hu, Hao; Hong, Xingchen; Terstriep, Jeff; Liu, Yan; Finn, Michael P.; Rush, Johnathan; Wendel, Jeffrey; Wang, Shaowen
2016-01-01
Geospatial data, often embedded with geographic references, are important to many application and science domains, and represent a major type of big data. The increased volume and diversity of geospatial data have caused serious usability issues for researchers in various scientific domains, which call for innovative cyberGIS solutions. To address these issues, this paper describes a cyberGIS community data service framework to facilitate geospatial big data access, processing, and sharing based on a hybrid supercomputer architecture. Through the collaboration between the CyberGIS Center at the University of Illinois at Urbana-Champaign (UIUC) and the U.S. Geological Survey (USGS), a community data service for accessing, customizing, and sharing digital elevation model (DEM) and its derived datasets from the 10-meter national elevation dataset, namely TopoLens, is created to demonstrate the workflow integration of geospatial big data sources, computation, analysis needed for customizing the original dataset for end user needs, and a friendly online user environment. TopoLens provides online access to precomputed and on-demand computed high-resolution elevation data by exploiting the ROGER supercomputer. The usability of this prototype service has been acknowledged in community evaluation.
A review of predictive nonlinear theories for multiscale modeling of heterogeneous materials
NASA Astrophysics Data System (ADS)
Matouš, Karel; Geers, Marc G. D.; Kouznetsova, Varvara G.; Gillman, Andrew
2017-02-01
Since the beginning of the industrial age, material performance and design have been in the midst of innovation of many disruptive technologies. Today's electronics, space, medical, transportation, and other industries are enriched by development, design and deployment of composite, heterogeneous and multifunctional materials. As a result, materials innovation is now considerably outpaced by other aspects from component design to product cycle. In this article, we review predictive nonlinear theories for multiscale modeling of heterogeneous materials. Deeper attention is given to multiscale modeling in space and to computational homogenization in addressing challenging materials science questions. Moreover, we discuss a state-of-the-art platform in predictive image-based, multiscale modeling with co-designed simulations and experiments that executes on the world's largest supercomputers. Such a modeling framework consists of experimental tools, computational methods, and digital data strategies. Once fully completed, this collaborative and interdisciplinary framework can be the basis of Virtual Materials Testing standards and aids in the development of new material formulations. Moreover, it will decrease the time to market of innovative products.
A review of predictive nonlinear theories for multiscale modeling of heterogeneous materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matouš, Karel, E-mail: kmatous@nd.edu; Geers, Marc G.D.; Kouznetsova, Varvara G.
2017-02-01
Since the beginning of the industrial age, material performance and design have been in the midst of innovation of many disruptive technologies. Today's electronics, space, medical, transportation, and other industries are enriched by development, design and deployment of composite, heterogeneous and multifunctional materials. As a result, materials innovation is now considerably outpaced by other aspects from component design to product cycle. In this article, we review predictive nonlinear theories for multiscale modeling of heterogeneous materials. Deeper attention is given to multiscale modeling in space and to computational homogenization in addressing challenging materials science questions. Moreover, we discuss a state-of-the-art platformmore » in predictive image-based, multiscale modeling with co-designed simulations and experiments that executes on the world's largest supercomputers. Such a modeling framework consists of experimental tools, computational methods, and digital data strategies. Once fully completed, this collaborative and interdisciplinary framework can be the basis of Virtual Materials Testing standards and aids in the development of new material formulations. Moreover, it will decrease the time to market of innovative products.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazi, A U
2007-02-06
Setting performance goals is part of the business plan for almost every company. The same is true in the world of supercomputers. Ten years ago, the Department of Energy (DOE) launched the Accelerated Strategic Computing Initiative (ASCI) to help ensure the safety and reliability of the nation's nuclear weapons stockpile without nuclear testing. ASCI, which is now called the Advanced Simulation and Computing (ASC) Program and is managed by DOE's National Nuclear Security Administration (NNSA), set an initial 10-year goal to obtain computers that could process up to 100 trillion floating-point operations per second (teraflops). Many computer experts thought the goal was overly ambitious, but the program's results have proved them wrong. Last November, a Livermore-IBM team received the 2005 Gordon Bell Prize for achieving more than 100 teraflops while modeling the pressure-induced solidification of molten metal. The prestigious prize, which is named for a founding father of supercomputing, is awarded each year at the Supercomputing Conference to innovators who advance high-performance computing. Recipients of the 2005 prize included six Livermore scientists - physicists Fred Streitz, James Glosli, and Mehul Patel and computer scientists Bor Chan, Robert Yates, and Bronis de Supinski - as well as IBM researchers James Sexton and John Gunnels. This team produced the first atomic-scale model of metal solidification from the liquid phase with results that were independent of system size. The record-setting calculation used Livermore's domain decomposition molecular-dynamics (ddcMD) code running on BlueGene/L, a supercomputer developed by IBM in partnership with the ASC Program. BlueGene/L reached 280.6 teraflops on the Linpack benchmark, the industry standard used to measure computing speed. As a result, it ranks first on the list of Top500 Supercomputer Sites released in November 2005.
To evaluate the performance of nuclear weapons systems, scientists must understand how materials behave under extreme conditions. Because experiments at high pressures and temperatures are often difficult or impossible to conduct, scientists rely on computer models that have been validated with obtainable data. Of particular interest to weapons scientists is the solidification of metals. "To predict the performance of aging nuclear weapons, we need detailed information on a material's phase transitions," says Streitz, who leads the Livermore-IBM team. For example, scientists want to know what happens to a metal as it changes from molten liquid to a solid and how that transition affects the material's characteristics, such as its strength.
48 CFR 225.7012 - Restriction on supercomputers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...
48 CFR 225.7012 - Restriction on supercomputers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...
48 CFR 225.7012 - Restriction on supercomputers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...
48 CFR 225.7012 - Restriction on supercomputers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...
48 CFR 225.7012 - Restriction on supercomputers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...
The Presidential Innovation Award for Environmental ...
The Presidential Innovation Award for Environmental Educators recognizes outstanding kindergarten through grade 12 teachers who employ innovative approaches to environmental education and use the environment as a context for learning for their students.
Automatic discovery of the communication network topology for building a supercomputer model
NASA Astrophysics Data System (ADS)
Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim
2016-10-01
The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
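The abstract's central idea is that the supercomputer model is a graph: components are vertices, discovered interconnections are edges. The following minimal sketch (illustrative class and method names, not the actual Octotron API) shows how automatically discovered nodes, switches, and Ethernet links could populate such a model.

```python
# Minimal sketch of a graph-based topology model in the spirit of
# Octotron. All names here are illustrative assumptions.
class TopologyModel:
    def __init__(self):
        self.components = {}   # name -> kind ("node" or "switch")
        self.links = set()     # undirected edges as frozensets

    def add(self, name, kind):
        self.components[name] = kind

    def connect(self, a, b):
        """Record a discovered link between two components."""
        self.links.add(frozenset((a, b)))

    def neighbors(self, name):
        """All components directly linked to `name`."""
        return sorted(x for edge in self.links if name in edge
                      for x in edge if x != name)

# Populate the model as an automatic discovery pass might:
model = TopologyModel()
model.add("switch-1", "switch")
for n in ("node-1", "node-2"):
    model.add(n, "node")
    model.connect(n, "switch-1")
```

With the topology in graph form, monitoring rules can walk edges, e.g., to correlate a failing switch with the compute nodes behind it.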
TOP500 Supercomputers for June 2004
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack
2004-06-23
23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.
Automotive applications of supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginsberg, M.
1987-01-01
These proceedings compile papers on supercomputers in the automobile industry. Titles include: An automotive engineer's guide to the effective use of scalar, vector, and parallel computers; fluid mechanics, finite elements, and supercomputers; and Automotive crashworthiness performance on a supercomputer.
Improved Access to Supercomputers Boosts Chemical Applications.
ERIC Educational Resources Information Center
Borman, Stu
1989-01-01
Supercomputing is described in terms of computing power and abilities. The increase in availability of supercomputers for use in chemical calculations and modeling are reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)
Scientific Visualization in High Speed Network Environments
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kutler, Paul (Technical Monitor)
1997-01-01
In several cases, new visualization techniques have vastly increased the researcher's ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical and continues to grow at a faster rate than the increase in the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks in providing an important link to support efficient supercomputing is identified. The two technologies are driven by the increasing complexity and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is described. Current capabilities supported by high-speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation Facility (NAS) at NASA Ames Research Center are described. Applied research in providing a supercomputer visualization environment to support future computational requirements is summarized.
Towards Efficient Supercomputing: Searching for the Right Efficiency Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Chung-Hsing; Kuehn, Jeffery A; Poole, Stephen W
2012-01-01
The efficiency of supercomputing has traditionally been measured by execution time. In the early 2000s, the concept of total cost of ownership was re-introduced, with efficiency measures extended to include aspects such as energy and space. Yet the supercomputing community has never agreed upon a metric that can cover these aspects altogether and also provide a fair basis for comparison. This paper examines the metrics that have been proposed in the past decade, and proposes a vector-valued metric for efficient supercomputing. Using this metric, the paper presents a study of where the supercomputing industry has been and how it stands today with respect to efficient supercomputing.
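A vector-valued metric avoids collapsing incommensurable quantities into a single score. One natural way to compare such vectors is Pareto dominance, sketched below; the specific components (time, energy, floor space) and the dominance comparison are our illustration of the idea, not the paper's exact definition.

```python
# Hedged sketch of a vector-valued efficiency metric: keep time, energy,
# and space as separate components and compare systems by Pareto
# dominance. Component choices are illustrative.
from collections import namedtuple

Efficiency = namedtuple("Efficiency", ["time_s", "energy_kwh", "space_m2"])

def dominates(a, b):
    """True if `a` is no worse than `b` in every component and strictly
    better in at least one (all components are lower-is-better)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

sys_a = Efficiency(time_s=120, energy_kwh=40, space_m2=25)
sys_b = Efficiency(time_s=150, energy_kwh=55, space_m2=25)
```

Here `sys_a` dominates `sys_b`; when neither system dominates the other, the metric deliberately declares no winner, which is exactly the fairness property a scalar metric cannot offer.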
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, William; Weber, Marta S.; Farber, Robert M.
Social media provide an exciting and novel view into social phenomena. The vast amounts of data that can be gathered from the Internet, coupled with massively parallel supercomputers such as the Cray XMT, open new vistas for research. Conclusions drawn from such analysis must recognize that social media are distinct from the underlying social reality. Rigorous validation is essential. This paper briefly presents results obtained from computational analysis of social media, utilizing both blog and Twitter data. Validation of these results is discussed in the context of a framework of established methodologies from the social sciences. Finally, an outline for a set of supporting studies is proposed.
NASA's supercomputing experience
NASA Technical Reports Server (NTRS)
Bailey, F. Ron
1990-01-01
A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamical Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.
OpenMP Performance on the Columbia Supercomputer
NASA Technical Reports Server (NTRS)
Haoqiang, Jin; Hood, Robert
2005-01-01
This presentation discusses Columbia, a world-class supercomputer that is one of the fastest in the world, providing 61 TFLOPS (10/20/04). Conceived, designed, built, and deployed in just 120 days, it is a 20-node supercomputer built on proven 512-processor nodes. The largest SGI system in the world, with over 10,000 Intel Itanium 2 processors, it provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2,048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.
Climate@Home: Crowdsourcing Climate Change Research
NASA Astrophysics Data System (ADS)
Xu, C.; Yang, C.; Li, J.; Sun, M.; Bambacus, M.
2011-12-01
Climate change deeply impacts human wellbeing. Significant amounts of resources have been invested in building super-computers that are capable of running advanced climate models, which help scientists understand climate change mechanisms, and predict its trend. Although climate change influences all human beings, the general public is largely excluded from the research. On the other hand, scientists are eagerly seeking communication mediums for effectively enlightening the public on climate change and its consequences. The Climate@Home project is devoted to connect the two ends with an innovative solution: crowdsourcing climate computing to the general public by harvesting volunteered computing resources from the participants. A distributed web-based computing platform will be built to support climate computing, and the general public can 'plug-in' their personal computers to participate in the research. People contribute the spare computing power of their computers to run a computer model, which is used by scientists to predict climate change. Traditionally, only super-computers could handle such a large computing processing load. By orchestrating massive amounts of personal computers to perform atomized data processing tasks, investments on new super-computers, energy consumed by super-computers, and carbon release from super-computers are reduced. Meanwhile, the platform forms a social network of climate researchers and the general public, which may be leveraged to raise climate awareness among the participants. A portal is to be built as the gateway to the climate@home project. Three types of roles and the corresponding functionalities are designed and supported. The end users include the citizen participants, climate scientists, and project managers. Citizen participants connect their computing resources to the platform by downloading and installing a computing engine on their personal computers. Computer climate models are defined at the server side. 
Climate scientists configure computer model parameters through the portal user interface. After model configuration, scientists then launch the computing task. Next, data is atomized and distributed to computing engines that are running on citizen participants' computers. Scientists receive notifications on the completion of computing tasks and examine modeling results via visualization modules of the portal. Computing tasks, computing resources, and participants are managed by project managers via portal tools. A portal prototype has been built for proof of concept. Three forums have been set up for different groups of users to share information on the science, technology, and educational outreach aspects. A Facebook account has been set up to distribute messages via the most popular social networking platform. New threads are synchronized from the forums to Facebook. A mapping tool displays geographic locations of the participants and the status of tasks on each client node. A group of users has been invited to test functions such as forums, blogs, and computing resource monitoring.
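The "atomize and distribute" workflow described above follows a standard volunteer-computing pattern: split a model run into independent work units, have participants' computing engines process them, and aggregate results on the server. A minimal sketch (all function names are illustrative, not the Climate@Home implementation):

```python
# Illustrative sketch of the atomize-distribute-aggregate pattern used by
# volunteer-computing platforms. Names and the toy computation are
# assumptions, not the actual Climate@Home code.
def atomize(grid_cells, chunk_size):
    """Split a list of model grid cells into independent work units."""
    return [grid_cells[i:i + chunk_size]
            for i in range(0, len(grid_cells), chunk_size)]

def run_work_unit(cells):
    # Stand-in for the real climate computation on one chunk.
    return sum(cells)

units = atomize(list(range(10)), chunk_size=4)   # 3 independent work units
results = [run_work_unit(u) for u in units]      # done on volunteers' PCs
total = sum(results)                             # aggregated on the server
```

Because the work units are independent, lost or slow volunteers can simply have their units reassigned, which is what makes unreliable home PCs usable in place of a dedicated supercomputer.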
Innovation Management and Performance Framework for Research University in Malaysia
ERIC Educational Resources Information Center
Kowang, Tan Owee; Long, Choi Sang; Rasli, Amran
2015-01-01
Institutions of Higher Learning (IHL) in Malaysia are recognized as the core of new innovation development. This paper empirically studies one of IHLs in Malaysia with the objectives to gauge the perceived important level of success factors for innovation management, and to examine the relationship between innovation management success factors…
Supercomputer networking for space science applications
NASA Technical Reports Server (NTRS)
Edelson, B. I.
1992-01-01
The initial design of a supercomputer network topology including the design of the communications nodes along with the communications interface hardware and software is covered. Several space science applications that are proposed experiments by GSFC and JPL for a supercomputer network using the NASA ACTS satellite are also reported.
SUPERFUND INNOVATIVE TECHNOLOGY EVALUATION PROGRAM: PROGRESS AND ACCOMPLISHMENTS - FISCAL YEAR 1991
The Superfund Innovative Technology Evaluation (SITE) program was the first major program for demonstrating and evaluating full-scale innovative treatment technologies at hazardous waste sites. Having concluded its fifth year, the SITE program is recognized as a leading advocate ...
Most Social Scientists Shun Free Use of Supercomputers.
ERIC Educational Resources Information Center
Kiernan, Vincent
1998-01-01
Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…
A fault tolerant spacecraft supercomputer to enable a new class of scientific discovery
NASA Technical Reports Server (NTRS)
Katz, D. S.; McVittie, T. I.; Silliman, A. G., Jr.
2000-01-01
The goal of the Remote Exploration and Experimentation (REE) Project is to move supercomputing into space in a cost-effective manner and to allow the use of inexpensive, state-of-the-art, commercial-off-the-shelf components and subsystems in these space-based supercomputers.
TOP500 Supercomputers for November 2003
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack
2003-11-16
22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.
EPA promotes environmental stewardship by recognizing innovators in schools, communities and businesses in categories such as environmental education, green chemistry, smart growth, green power, and reducing air pollution and climate change impacts.
Innovation in clinical pharmacy practice and opportunities for academic--practice partnership.
Gubbins, Paul O; Micek, Scott T; Badowski, Melissa; Cheng, Judy; Gallagher, Jason; Johnson, Samuel G; Karnes, Jason H; Lyons, Kayley; Moore, Katherine G; Strnad, Kyle
2014-05-01
Clinical pharmacy has a rich history of advancing practice through innovation. These innovations helped to mold clinical pharmacy into a patient-centered discipline recognized for its contributions to improving medication therapy outcomes. However, innovations in clinical pharmacy practice have now waned. In our view, the growth of academic–practice partnerships could reverse this trend and stimulate innovation among the next generation of pioneering clinical pharmacists. Although collaboration facilitates innovation, academic institutions and health care systems/organizations are not taking full advantage of this opportunity. The academic–practice partnership can be optimized by making both partners accountable for the desired outcomes of their collaboration, fostering symbiotic relationships that promote value-added clinical pharmacy services and emphasizing continuous quality improvement in the delivery of these services. Optimizing academic–practice collaboration on a broader scale requires both partners to adopt a culture that provides for dedicated time to pursue innovation, establishes mechanisms to incubate ideas, recognizes where motivation and vision align, and supports the purpose of the partnership. With appropriate leadership and support, a shift in current professional education and training practices, and a commitment to cultivate future innovators, the academic–practice partnership can develop new and innovative practice advancements that will improve patient outcomes.
Illinois Innovation Talent Project: Implications for Two-Year Institutions
ERIC Educational Resources Information Center
Tyszko, Jason A.; Sheets, Robert G.
2012-01-01
There is a growing consensus that the United States and its regions, including the Midwest region, will increasingly compete on innovation. This also is widely recognized in the business world. There is also growing consensus that innovation talent--the human talent to drive and support innovation--will be a major key. Despite this consensus,…
Technology and Innovation in Adult Learning
ERIC Educational Resources Information Center
King, Kathy P.
2017-01-01
"Technology and Innovation in Adult Learning" introduces educators and students to the intersection of adult learning and the growing technological revolution. Written by an internationally recognized expert in the field, this book explores the theory, research, and practice driving innovation in both adult learning and learning…
ERIC Educational Resources Information Center
Lloyd, Meg; Raths, David; Namahoe, Kanoe
2012-01-01
In this article, the authors present the 2012 Campus Technology Innovators. These IT leaders have deployed extraordinary technology solutions to meet campus challenges. The authors also recognize the vendors and products involved in making these innovative projects a success. The 10 winners are: (1) University of Arizona (Student Systems and…
Distributed user services for supercomputers
NASA Technical Reports Server (NTRS)
Sowizral, Henry A.
1989-01-01
User-service operations at supercomputer facilities are examined. The question is whether a single, possibly distributed, user-services organization could be shared by NASA's supercomputer sites in support of a diverse, geographically dispersed, user community. A possible structure for such an organization is identified as well as some of the technologies needed in operating such an organization.
NASA Technical Reports Server (NTRS)
Babrauckas, Theresa
2000-01-01
The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, A.
1986-03-10
Supercomputing software is moving into high gear, spurred by the rapid spread of supercomputers into new applications. The critical challenge is how to develop tools that will make it easier for programmers to write applications that take advantage of vectorizing in the classical supercomputer and the parallelism that is emerging in supercomputers and minisupercomputers. Writing parallel software is a challenge that every programmer must face because parallel architectures are springing up across the range of computing. Cray is developing a host of tools for programmers. Tools to support multitasking (in supercomputer parlance, multitasking means dividing up a single program to run on multiple processors) are high on Cray's agenda. On tap for multitasking is Premult, dubbed a microtasking tool. As a preprocessor for Cray's CFT77 FORTRAN compiler, Premult will provide fine-grain multitasking.
Will Moore's law be sufficient?
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeBenedictis, Erik P.
2004-07-01
It seems well understood that supercomputer simulation is an enabler for scientific discoveries, weapons, and other activities of value to society. It also seems widely believed that Moore's Law will yield progressively more powerful supercomputers over time and thus enable more of these contributions. This paper seeks to add detail to these arguments, revealing them to be generally correct but not a smooth and effortless progression. This paper will review some key problems that can be solved with supercomputer simulation, showing that more powerful supercomputers will be useful up to a very high yet finite limit of around 10^21 FLOPS (1 zettaflops). The review will also show the basic nature of these extreme problems. This paper will review work by others showing that the theoretical maximum supercomputer power is very high indeed, but will explain how a straightforward extrapolation of Moore's Law will lead to technological maturity in a few decades. The power of a supercomputer at the maturity of Moore's Law will be very high by today's standards at 10^16-10^19 FLOPS (100 petaflops to 10 exaflops), depending on architecture, but distinctly below the level required for the most ambitious applications. Having established that Moore's Law will not be the last word in supercomputing, this paper will explore the nearer-term issue of what a supercomputer will look like at the maturity of Moore's Law. Our approach will quantify the maximum performance permitted by the laws of physics for extensions of current technology and then find a design that approaches this limit closely. We study a 'multi-architecture' for supercomputers that combines a microprocessor with other 'advanced' concepts and find it can reach the limits as well. This approach should be quite viable in the future because the microprocessor would provide compatibility with existing codes and programming styles while the 'advanced' features would provide a boost to the limits of performance.
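The "few decades" framing can be sanity-checked with back-of-the-envelope arithmetic (the 10^14 FLOPS starting point for top systems circa 2004 and the 1.5-year doubling period are assumptions for illustration, not figures from the paper):

```python
import math

# How long from ~10^14 FLOPS to the ~10^21 FLOPS (1 zettaflops) application
# limit, if performance doubles every ~1.5 years?
start_flops = 1e14
target_flops = 1e21
doubling_period_years = 1.5

doublings = math.log2(target_flops / start_flops)   # ~23.3 doublings
years = doublings * doubling_period_years           # ~34.9 years

print(round(doublings, 1), round(years, 1))
```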
Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilk, Todd
2018-02-17
The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL, the Green500 benchmark, and our experience meeting the Green500's reporting requirements.
Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubois, David H; Dubois, Andrew J; Boorman, Thomas M
2009-01-01
This work presents a detailed implementation of a double-precision, non-preconditioned Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm on a variety of systems to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, the SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
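For reference, the numerical kernel being benchmarked is the textbook non-preconditioned Conjugate Gradient method. A minimal NumPy sketch (ignoring the Cell/FPGA offload details, which the paper does not spell out at this level) looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive-definite A, no preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small symmetric positive-definite test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```

On the hybrid nodes, the expensive pieces (the matrix-vector product `A @ p` and the dot products) are what get offloaded to the Cell or FPGA accelerators, with transfer overhead counted in the wall-clock timings.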
Can We Recognize an Innovation? Perspective from an Evolving Network Model
NASA Astrophysics Data System (ADS)
Jain, Sanjay; Krishna, Sandeep
"Innovations" are central to the evolution of societies and the evolution of life. But what constitutes an innovation? We can often agree after the event, when its consequences and impact over a long term are known, whether something was an innovation, and whether it was a "big" innovation or a "minor" one. But can we recognize an innovation "on the fly" as it appears? Successful entrepreneurs often can. Is it possible to formalize that intuition? We discuss this question in the setting of a mathematical model of evolving networks. The model exhibits self-organization , growth, stasis, and collapse of a complex system with many interacting components, reminiscent of real-world phenomena. A notion of "innovation" is formulated in terms of graph-theoretic constructs and other dynamical variables of the model. A new node in the graph gives rise to an innovation, provided it links up "appropriately" with existing nodes; in this view innovation necessarily depends upon the existing context. We show that innovations, as defined by us, play a major role in the birth, growth, and destruction of organizational structures. Furthermore, innovations can be categorized in terms of their graph-theoretic structure as they appear. Different structural classes of innovation have potentially different qualitative consequences for the future evolution of the system, some minor and some major. Possible general lessons from this specific model are briefly discussed.
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Information Management and Technology Div.
This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…
Supercomputer Provides Molecular Insight into Cellulose (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2011-02-01
Groundbreaking research at the National Renewable Energy Laboratory (NREL) has used supercomputing simulations to calculate the work that enzymes must do to deconstruct cellulose, which is a fundamental step in biomass conversion technologies for biofuels production. NREL used the new high-performance supercomputer Red Mesa to conduct several million central processing unit (CPU) hours of simulation.
Compilation of Abstracts for SC12 Conference Proceedings
NASA Technical Reports Server (NTRS)
Morello, Gina Francine (Compiler)
2012-01-01
1 A Breakthrough in Rotorcraft Prediction Accuracy Using Detached Eddy Simulation; 2 Adjoint-Based Design for Complex Aerospace Configurations; 3 Simulating Hypersonic Turbulent Combustion for Future Aircraft; 4 From a Roar to a Whisper: Making Modern Aircraft Quieter; 5 Modeling of Extended Formation Flight on High-Performance Computers; 6 Supersonic Retropropulsion for Mars Entry; 7 Validating Water Spray Simulation Models for the SLS Launch Environment; 8 Simulating Moving Valves for Space Launch System Liquid Engines; 9 Innovative Simulations for Modeling the SLS Solid Rocket Booster Ignition; 10 Solid Rocket Booster Ignition Overpressure Simulations for the Space Launch System; 11 CFD Simulations to Support the Next Generation of Launch Pads; 12 Modeling and Simulation Support for NASA's Next-Generation Space Launch System; 13 Simulating Planetary Entry Environments for Space Exploration Vehicles; 14 NASA Center for Climate Simulation Highlights; 15 Ultrascale Climate Data Visualization and Analysis; 16 NASA Climate Simulations and Observations for the IPCC and Beyond; 17 Next-Generation Climate Data Services: MERRA Analytics; 18 Recent Advances in High-Resolution Global Atmospheric Modeling; 19 Causes and Consequences of Turbulence in the Earth's Protective Shield; 20 NASA Earth Exchange (NEX): A Collaborative Supercomputing Platform; 21 Powering Deep Space Missions: Thermoelectric Properties of Complex Materials; 22 Meeting NASA's High-End Computing Goals Through Innovation; 23 Continuous Enhancements to the Pleiades Supercomputer for Maximum Uptime; 24 Live Demonstrations of 100-Gbps File Transfers Across LANs and WANs; 25 Untangling the Computing Landscape for Climate Simulations; 26 Simulating Galaxies and the Universe; 27 The Mysterious Origin of Stellar Masses; 28 Hot-Plasma Geysers on the Sun; 29 Turbulent Life of Kepler Stars; 30 Modeling Weather on the Sun; 31 Weather on Mars: The Meteorology of Gale Crater; 32 Enhancing Performance of NASA's High-End Computing Applications; 33 Designing Curiosity's Perfect Landing on Mars; 34 The Search Continues: Kepler's Quest for Habitable Earth-Sized Planets.
GREEN SUPERCOMPUTING IN A DESKTOP BOX
DOE Office of Scientific and Technical Information (OSTI.GOV)
HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY
2007-01-17
The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
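The headline efficiency figure is easy to verify from the numbers in the abstract (the factor-of-four reading of "over 300% better" is my assumption about how the comparison is phrased):

```python
# 14 Gflops on Linpack at 185 W of load power for the 12-node desktop machine.
gflops = 14.0
watts = 185.0
mflops_per_watt = gflops * 1e3 / watts
print(round(mflops_per_watt, 1))  # ~75.7 Mflops/W

# If "over 300% better" means more than 4x the reference SMP platform,
# the reference delivered under ~19 Mflops/W.
print(round(mflops_per_watt / 4, 1))
```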
A Study of Educational Knowledge Diffusion and Utilization.
ERIC Educational Resources Information Center
Wolf, W. C., Jr.; Fiorino, A. John
Some six hundred educators were studied in depth to determine their experiences with innovation, the influences of recognized diffusion agents upon their adoption of innovations, the characteristics of selected target audiences in relation to the adoption of innovations to personal practice, and relationships between five distinguishable stages of…
The SITE Program was the first major program for demonstrating and evaluating fullscale innovative treatment technologies at hazardous waste sites. Having concluded its fourth year, the SITE Program is recognized as a leading advocate of innovative technology development and comm...
Effective New Product Ideation: IDEATRIZ Methodology
NASA Astrophysics Data System (ADS)
de Carvalho, Marco A.
It is widely recognized that innovation is an activity of strategic importance. However, organizations seeking to be innovative face many dilemmas. Perhaps the main one is that, though it is necessary to innovate, innovation is a highly risky activity. In this paper, we explore the origin of product innovation, which is new product ideation. We discuss new product ideation approaches and their effectiveness and provide a description of an effective new product ideation methodology.
Input/output behavior of supercomputing applications
NASA Technical Reports Server (NTRS)
Miller, Ethan L.
1991-01-01
The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
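The effect of write-behind buffering on a bursty workload can be illustrated with a toy simulation (my simplification, not the paper's trace-driven model): writes land in a fast staging buffer and drain to slow storage in the background, so the CPU only stalls when the buffer fills.

```python
# Toy model: count CPU stalls for a bursty write trace, given a staging
# buffer of a certain size that drains to disk between compute steps.
def simulate(bursts, buffer_mb, drain_mb_per_step):
    buffered, stalls = 0, 0
    for write_mb in bursts:          # one burst per compute step
        buffered = max(0, buffered - drain_mb_per_step)  # background drain
        if buffered + write_mb > buffer_mb:
            stalls += 1              # CPU waits for the buffer to drain
            buffered = buffer_mb
        else:
            buffered += write_mb
    return stalls

bursty_trace = [100, 0, 0, 0, 100, 0, 0, 0]   # MB written per step
print(simulate(bursty_trace, buffer_mb=50, drain_mb_per_step=25))   # 2 stalls
print(simulate(bursty_trace, buffer_mb=200, drain_mb_per_step=25))  # 0 stalls
```

A buffer large enough to absorb a whole burst eliminates stalls even though the backing store is no faster, which is the intuition behind staging bursty checkpoint I/O in a large solid-state disk.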
A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs
NASA Astrophysics Data System (ADS)
Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.
2016-12-01
Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that combine conventional multi-core CPUs with accelerators such as graphics processing units (GPUs). One of the first atmospheric models fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the simulation domain to areas spanning continents and the time period up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536x1536x60 gridpoints. The simulation is driven by the ERA-Interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions against various high-resolution data sets.
ERIC Educational Resources Information Center
Crosling, Glenda; Nair, Mahendhiran; Vaithilingam, Santha
2015-01-01
Globally, governments recognize the importance of creativity and innovation for sustainable socioeconomic development, and many invest resources to develop learning environments that foster these capacities. This paper provides a systematic framework based on Nair's "Innovation Helix" model for studying the factors of a country's…
Prospects for Boiling of Subcooled Dielectric Liquids for Supercomputer Cooling
NASA Astrophysics Data System (ADS)
Zeigarnik, Yu. A.; Vasil'ev, N. V.; Druzhinin, E. A.; Kalmykov, I. V.; Kosoi, A. S.; Khodakov, K. A.
2018-02-01
It is shown experimentally that forced-convection boiling of the dielectric coolant Novec 649 refrigerant, subcooled relative to the saturation temperature, makes it possible to remove heat fluxes of up to 100 W/cm² from the chip interfaces of modern supercomputers. This fact creates prerequisites for the application of dielectric liquids in the cooling systems of modern supercomputers with increased operating-reliability requirements.
Importance of databases of nucleic acids for bioinformatic analysis focused to genomics
NASA Astrophysics Data System (ADS)
Jimenez-Gutierrez, L. R.; Barrios-Hernández, C. J.; Pedraza-Ferreira, G. R.; Vera-Cala, L.; Martinez-Perez, F.
2016-08-01
Recently, bioinformatics has become an indispensable field of science for the analysis of millions of nucleic acid sequences, which are currently deposited in international databases (public or private); these databases contain information on genes, RNA, ORFs, proteins, and intergenic regions, including the entire genomes of some species. The analysis of this information requires computer programs, which have been renewed through new mathematical methods and the introduction of artificial intelligence, in addition to the constant creation of supercomputing units built to withstand the heavy workload of sequence analysis. However, innovation is still needed in platforms that enable faster and more effective genomic analyses, grounded in a technological understanding of the underlying biological processes.
Information technologies for astrophysics circa 2001
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1990-01-01
It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is also easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large datasets. Three limiting paradigms are: saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage, and retrieval off the shelf; and the linear mode of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.
National Test Facility civilian agency use of supercomputers not feasible
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1994-12-01
Based on interviews with civilian agencies cited in the House report (DOE, DoEd, HHS, FEMA, NOAA), none would be able to make effective use of NTF's excess supercomputing capabilities. These agencies stated they could not use the resources primarily because (1) NTF's supercomputers are older machines whose performance and costs cannot match those of more advanced computers available from other sources and (2) some agencies have not yet developed applications requiring supercomputer capabilities or do not have funding to support such activities. In addition, future support for the hardware and software at NTF is uncertain, making any investment by an outside user risky.
Kriging for Spatial-Temporal Data on the Bridges Supercomputer
NASA Astrophysics Data System (ADS)
Hodgess, E. M.
2017-12-01
Currently, kriging of spatial-temporal data is slow and limited to relatively small vector sizes. We have developed a method on the Bridges supercomputer at the Pittsburgh Supercomputing Center that uses a combination of R, Fortran, the Message Passing Interface (MPI), OpenACC, and special R packages for big data. This combination of tools now permits us to complete tasks that previously could not be completed at all or took literally hours. We ran simulation studies from a laptop against the supercomputer, and we also examined "real world" data sets, such as the Irish wind data and some weather data, and compared the timings. The timings are surprisingly good.
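The core computation being parallelized can be sketched in plain NumPy as ordinary kriging in one dimension (a stand-in for the R/Fortran/MPI pipeline; the Gaussian covariance and its parameters are illustrative choices, not the abstract's):

```python
import numpy as np

def gaussian_cov(h, sill=1.0, rng=2.0):
    """Illustrative Gaussian covariance as a function of separation h."""
    return sill * np.exp(-(h / rng) ** 2)

def ordinary_krige(locs, vals, target):
    """Predict the value at `target` as a covariance-weighted combination."""
    n = len(locs)
    # Kriging system: covariances plus the unbiasedness (weights sum to 1)
    K = np.ones((n + 1, n + 1))
    K[-1, -1] = 0.0
    d = np.abs(locs[:, None] - locs[None, :])
    K[:n, :n] = gaussian_cov(d)
    k = np.ones(n + 1)
    k[:n] = gaussian_cov(np.abs(locs - target))
    w = np.linalg.solve(K, k)
    return w[:n] @ vals

locs = np.array([0.0, 1.0, 3.0])
vals = np.array([10.0, 12.0, 11.0])
print(round(ordinary_krige(locs, vals, 1.0), 2))  # 12.0: exact at a data point
```

The cost that motivates the supercomputer is the dense solve, which grows cubically with the number of observations; the spatial-temporal case enlarges the system further, which is why the authors combine MPI, OpenACC, and big-data R packages.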
Multiple DNA and protein sequence alignment on a workstation and a supercomputer.
Tajima, K
1988-11-01
This paper describes a multiple alignment method using a workstation and a supercomputer. The method is based on aligning a set of already-aligned sequences with a new sequence and uses a recursive procedure of such alignment. The alignment executes in reasonable computation time on platforms ranging from a workstation to a supercomputer, in terms of both alignment results and computational speed through parallel processing. The application of the algorithm is illustrated by several examples of multiple alignment of 12 amino acid and DNA sequences of HIV (human immunodeficiency virus) env genes. Colour graphic programs on a workstation and parallel processing on a supercomputer are discussed.
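The pairwise global alignment underlying this kind of recursive procedure is classically computed with Needleman-Wunsch dynamic programming. A minimal score-only sketch with simplified scoring (the paper's actual scoring scheme and parallel decomposition are not reproduced here):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Best global alignment score for sequences a and b."""
    n, m = len(a), len(b)
    # DP table: score[i][j] is the best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```

Each row of the table depends only on the previous row, which is what makes the anti-diagonal and row-blocked parallelizations natural on vector and parallel machines.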
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Gunasekaran, Raghul; Ma, Xiaosong
2016-01-01
Inter-application I/O contention and performance interference have been recognized as severe problems. In this work, we demonstrate, through measurement from Titan (the world's No. 3 supercomputer), that high I/O variance co-exists with the fact that individual storage units remain under-utilized for the majority of the time. This motivates us to propose AID, a system that performs automatic application I/O characterization and I/O-aware job scheduling. AID analyzes existing I/O traffic and batch job history logs, without any prior knowledge of applications or user/developer involvement. It identifies the small set of I/O-intensive candidates among all applications running on a supercomputer and subsequently mines their I/O patterns, using more detailed per-I/O-node traffic logs. Based on such auto-extracted information, AID provides online I/O-aware scheduling recommendations to steer I/O-intensive applications away from heavy ongoing I/O activities. We evaluate AID on Titan, using both real applications (with extracted I/O patterns validated by contacting users) and our own pseudo-applications. Our results confirm that AID is able to (1) identify I/O-intensive applications and their detailed I/O characteristics, and (2) significantly reduce these applications' I/O performance degradation/variance by jointly evaluating outstanding applications' I/O patterns and the real-time system I/O load.
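The scheduling idea can be caricatured in a few lines (the application names and the threshold are invented; AID's actual classification mines detailed per-I/O-node traffic logs rather than a per-run total):

```python
# Sketch: classify applications as I/O-intensive from past traffic, then
# recommend delaying an I/O-heavy launch while another one is active.
io_log_gb = {                      # total I/O per past run, per application
    "climate_sim": 850.0,
    "cfd_solver": 12.0,
    "checkpointer": 1200.0,
}
IO_INTENSIVE_GB = 100.0

io_intensive = {app for app, gb in io_log_gb.items() if gb >= IO_INTENSIVE_GB}

def schedule(app, running):
    """Recommend 'start' or 'delay' for a job given currently running jobs."""
    if app in io_intensive and io_intensive & set(running):
        return "delay"             # steer away from ongoing heavy I/O
    return "start"

print(schedule("climate_sim", ["checkpointer"]))  # delay
print(schedule("cfd_solver", ["checkpointer"]))   # start
```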
The Spark of Disruptive Innovation for Space Physics and Aeronomy
NASA Astrophysics Data System (ADS)
MacDonald, E.
2017-12-01
What is disruptive innovation and why does it matter for Space Physics and Aeronomy (SPA)? This presentation will define disruptive innovation and present several examples relevant to SPA. These examples range from Cubesats to Citizen Science. Disruptive innovation requires not just an idea but also execution. Why do we need disruptive innovation? Simply put, we need to break out of our comfortable rut to solve bigger problems and evolve as a field for the future. These opportunities are exciting and they are difficult. SPA is well-suited to these types of interdisciplinary applications, due to its dual fundamental and applied nature that dovetails with many other fields. Challenges are that we do not incentivize disruptive innovation, we do not recognize it, and we typically do not fund it. As a result we are risk averse and we suffer from the "Matthew effect" of accumulated advantage. We do not allow ourselves to learn from new and uncomfortable angles and recognize the innovation that comes from there. The strength of having a more diverse and inclusive field is that a range of more diverse ideas and perspectives will be promoted. The next big innovations for SPA may come from the outside, and the best way to capture such ideas may be to promote diversity and inclusion at all levels.
NASA Technical Reports Server (NTRS)
Kutler, Paul; Yee, Helen
1987-01-01
Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics.
NASA Technical Reports Server (NTRS)
Kramer, Williams T. C.; Simon, Horst D.
1994-01-01
This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.
NAS technical summaries: Numerical aerodynamic simulation program, March 1991 - February 1992
NASA Technical Reports Server (NTRS)
1992-01-01
NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefiting other supercomputer centers in Government and industry. This report contains selected scientific results from the 1991-92 NAS Operational Year, March 4, 1991 to March 3, 1992, the fifth year of operation. During this year, the scientific community was given access to a Cray-2 and a Cray Y-MP. The Cray-2, NAS's first-generation supercomputer, has four processors, 256 megawords of central memory, and a total sustained speed of 250 million floating point operations per second. The Cray Y-MP, the second-generation supercomputer, has eight processors and a total sustained speed of one billion floating point operations per second. Additional memory was installed this year, doubling capacity from 128 to 256 megawords of solid-state storage-device memory. Because of its higher performance, the Cray Y-MP delivered approximately 77 percent of the total supercomputer hours used during the year.
NASA Technical Reports Server (NTRS)
1991-01-01
Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)
Desktop supercomputer: what can it do?
NASA Astrophysics Data System (ADS)
Bogdanov, A.; Degtyarev, A.; Korkhov, V.
2017-12-01
The paper addresses the issues of solving complex problems that require supercomputers or the multiprocessor clusters now available to most researchers. Efficient distribution of high-performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely adopted; at the same time, comfortable and transparent access to these resources is a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities for creating the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.
Effective and Innovative Practices for Stronger Facilities Management.
ERIC Educational Resources Information Center
Banick, Sarah
2002-01-01
Describes the five winners of the APPA's Effective & Innovative Practices Award. These facilities management programs and processes were recognized for enhancing service delivery, lowering costs, increasing productivity, improving customer service, generating revenue, or otherwise benefiting the educational institution. (EV)
NASA Astrophysics Data System (ADS)
Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.
2010-12-01
In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations keeps increasing thanks to tremendous advances in supercomputers. A further advance is Grid computing, which integrates distributed computational resources into scalable computing services. Simulation research works best when a researcher can design the physical model, run the calculations on a supercomputer, and then analyze and visualize the results with familiar tools. A supercomputer, however, is usually far removed from the analysis and visualization environment: researchers typically analyze and visualize on a local workstation (WS), where installing and operating software is easy, so data must be copied from the supercomputer to the WS manually. The time needed to transfer data over a long-delay network is a real obstacle to high-accuracy simulations. For usability, it is therefore important to integrate a supercomputer and an analysis and visualization environment seamlessly, using methods familiar to the researcher. NICT has been developing a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs used for data analysis and visualization. They are connected to JGN2plus, a high-speed network for research and development. Distributed virtual high-capacity storage is also constructed with Grid Datafarm (Gfarm v2), and the huge data sets output by the supercomputer are transferred to the virtual storage over JGN2plus. A researcher can thus concentrate on the research, using familiar methods and without regard to the distance between the supercomputer and the analysis and visualization environment. At present, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University.
They are connected over JGN2plus and constitute 1 PB (physical size) of virtual storage under Gfarm v2. The disk servers are connected to the supercomputers of NICT and Osaka University, and a system has been built that automatically transfers supercomputer output to the virtual storage. The measured transfer rate is about 50 GB/hour, a performance estimated to be adequate for a representative simulation and analysis task, the reconstruction of the coronal magnetic field. This work serves as an experimental deployment of the system, and verification of its practicality is proceeding in parallel. Herein we introduce an overview of the space weather cloud system we have developed so far and demonstrate several scientific results obtained with it. We also introduce several web applications offered as services of the cloud, collectively named "e-SpaceWeather" (e-SW); e-SW provides a variety of space weather online services.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Klimentov, A
2016-01-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System to manage the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integrating PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes.
This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
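The "light-weight MPI wrapper" pattern described above amounts to mapping many single-threaded payloads onto the ranks of one batch job. A minimal sketch of the rank-to-task mapping (the function name and static round-robin policy are illustrative; PanDA's actual pilot and wrapper interfaces are not reproduced here):

```python
def tasks_for_rank(rank, world_size, tasks):
    """Static round-robin assignment of single-threaded payloads to
    MPI ranks. In a real wrapper, `rank` and `world_size` would come
    from MPI (e.g. mpi4py's MPI.COMM_WORLD), and each rank would then
    launch its payloads as subprocesses on its worker-node cores.
    """
    return [t for i, t in enumerate(tasks) if i % world_size == rank]
```

The payoff of the pattern is that the supercomputer's scheduler sees one large MPI job, while each rank independently runs ordinary serial workloads.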
Color graphics, interactive processing, and the supercomputer
NASA Technical Reports Server (NTRS)
Smith-Taylor, Rudeen
1987-01-01
The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.
Automated Help System For A Supercomputer
NASA Technical Reports Server (NTRS)
Callas, George P.; Schulbach, Catherine H.; Younkin, Michael
1994-01-01
Expert-system software was developed to provide an automated system of help displays in the supercomputer system at the Ames Research Center Advanced Computer Facility. Users located at remote computer terminals are connected to the supercomputer and to each other via gateway computers, local-area networks, telephone lines, and satellite links. The automated help system answers routine user inquiries about how to use the services of the computer system. It is available 24 hours per day and reduces the burden on human experts, freeing them to concentrate on helping users with complicated problems.
ERIC Educational Resources Information Center
Spyrtou, Anna; Lavonen, Jari; Zoupidis, Anastasios; Loukomies, Anni; Pnevmatikos, Dimitris; Juuti, Kalle; Kariotoglou, Petros
2018-01-01
In the present paper, we report on the idea of exchanging educational innovations across European countries aiming to shed light on the following question: how feasible and useful is it to transfer an innovation across different national educational settings? The innovation, in this case, Inquiry-Based Teaching Learning Sequences, is recognized as…
A Reasoning And Hypothesis-Generation Framework Based On Scalable Graph Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas Rangan
Finding actionable insights from data has always been difficult. As the scale and forms of data increase tremendously, the task of finding value becomes even more challenging. Data scientists at Oak Ridge National Laboratory are leveraging unique leadership infrastructure (e.g. the Urika-XA and Urika-GD appliances) to develop scalable algorithms for semantic, logical, and statistical reasoning with unstructured Big Data. We present the deployment of such a framework, called ORiGAMI (Oak Ridge Graph Analytics for Medical Innovations), on the National Library of Medicine's SEMANTIC Medline (an archive of medical knowledge since 1994). Medline contains over 70 million knowledge nuggets published in 23.5 million papers in the medical literature, with thousands more added daily. ORiGAMI is available as an open-science medical hypothesis-generation tool, both as a web service and as an application programming interface (API), at http://hypothesis.ornl.gov. Since becoming an online service, ORiGAMI has enabled clinical subject-matter experts to: (i) discover the relationship between beta-blocker treatment and diabetic retinopathy; (ii) hypothesize that xylene is an environmental carcinogen; and (iii) aid doctors with the diagnosis of challenging cases in which rare diseases manifest with common symptoms. In 2015, ORiGAMI was featured at the Historical Clinical Pathological Conference in Baltimore as a demonstration of artificial intelligence in medicine and at IEEE/ACM Supercomputing, and it was recognized as a Centennial Showcase Exhibit at the Radiological Society of North America (RSNA) Conference in Chicago. The final paper will describe the workflow built for the Cray Urika-XA and Urika-GD appliances, which is able to reason with the knowledge of every published medical paper every time a clinical researcher uses the tool.
NASA Astrophysics Data System (ADS)
Day, B. H.; Bland, P.
2016-12-01
Fireballs in the Sky is an innovative Australian citizen science program that connects the public with the research of the Desert Fireball Network (DFN). This research aims to understand the early workings of the solar system, and Fireballs in the Sky invites people around the world to learn about this science, contributing fireball sightings via a user-friendly app. To date, more than 23,000 people have downloaded the app worldwide and participated in planetary science. The Fireballs in the Sky app allows users to get involved with the Desert Fireball Network research, supplementing DFN observations and providing enhanced coverage by reporting their own meteor sightings to DFN scientists. Fireballs in the Sky reports are used to track the trajectories of meteors, from their orbits in space to where they might have landed on Earth. Led by Phil Bland at Curtin University in Australia, the DFN uses automated observatories across Australia to triangulate the trajectories of meteorites entering the atmosphere, determine pre-entry orbits, and pinpoint their fall positions. Each observatory is an autonomous intelligent imaging system, taking 1000 36-megapixel all-sky images throughout the night and using neural network algorithms to recognize events. The observatories are capable of operating for 12 months in a harsh environment and store all imagery collected. We developed a completely automated software pipeline for data reduction and built a supercomputer database for storage, allowing us to process our entire archive. The DFN currently stands at 50 stations distributed across the Australian continent, covering an area of 2.5 million km^2. Working with DFN's partners at NASA's Solar System Exploration Research Virtual Institute, the team is expanding the network beyond Australia to locations around the world. Fireballs in the Sky allows a growing public base to learn about and participate in this exciting research.
NASA Advanced Supercomputing (NAS) User Services Group
NASA Technical Reports Server (NTRS)
Pandori, John; Hamilton, Chris; Niggley, C. E.; Parks, John W. (Technical Monitor)
2002-01-01
This viewgraph presentation provides an overview of NAS (NASA Advanced Supercomputing), its goals, and its mainframe computer assets. Also covered are its functions, including systems monitoring and technical support.
Building a culture for innovation: a leadership challenge.
Maher, Lynne
2014-01-01
It is recognized that health services are facing increasing cost pressures amid a climate of increasing demand and rising expectations from patients and families. The ability to innovate is important for the future success of all health care organizations. By making some simple but profound changes in behaviours and processes, as illustrated across seven dimensions, leaders can have a great impact on the culture for innovation. This in turn can support the transformation of health services through increased innovation.
NSF Commits to Supercomputers.
ERIC Educational Resources Information Center
Waldrop, M. Mitchell
1985-01-01
The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)
Scaling up ATLAS Event Service to production levels on opportunistic computing platforms
NASA Astrophysics Data System (ADS)
Benjamin, D.; Caballero, J.; Ernst, M.; Guan, W.; Hover, J.; Lesny, D.; Maeno, T.; Nilsson, P.; Tsulaia, V.; van Gemmeren, P.; Vaniachine, A.; Wang, F.; Wenaus, T.; ATLAS Collaboration
2016-10-01
Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, the Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and the ATLAS High Level Trigger farm between data-taking periods. Because of specific aspects of opportunistic resources, such as preemptive job scheduling and data I/O, their efficient usage requires the workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.
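The "finer granularity" the abstract credits for efficient use of preemptible resources comes from dispatching work in small event ranges rather than whole jobs, so little is lost when a node is reclaimed. A minimal sketch of such range splitting (boundary conventions are illustrative, not the actual Event Service protocol):

```python
def event_ranges(total_events, range_size):
    """Split a job's events into half-open (start, end) ranges.

    Fine-grained ranges let completed work be committed incrementally,
    so preemption only loses the range in flight, not the whole job.
    """
    return [(start, min(start + range_size, total_events))
            for start in range(0, total_events, range_size)]
```

For example, a 10-event job split into ranges of 4 yields three ranges, and only the unfinished range needs to be redispatched after preemption.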
Information technologies for astrophysics circa 2001
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1991-01-01
It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is less easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large data sets. Three limiting paradigms are as follows: saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage, and retrieval off the shelf; and the linear model of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.
Final Report for DOE Award ER25756
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kesselman, Carl
2014-11-17
The SciDAC-funded Center for Enabling Distributed Petascale Science (CEDPS) was established to address technical challenges that arise due to the frequent geographic distribution of data producers (in particular, supercomputers and scientific instruments) and data consumers (people and computers) within the DOE laboratory system. Its goal is to produce technical innovations that meet DOE end-user needs for (a) rapid and dependable placement of large quantities of data within a distributed high-performance environment, and (b) the convenient construction of scalable science services that provide for the reliable and high-performance processing of computation and data analysis requests from many remote clients. The Center is also addressing (c) the important problem of troubleshooting these and other related ultra-high-performance distributed activities from the perspective of both performance and functionality.
77 FR 10725 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-23
... Management and Budget (OMB) for clearance the following proposal for collection of information under the... (USPTO). Title: National Medal of Technology and Innovation Nomination Application. Form Number(s): None... Innovation Nomination Application to recognize through nomination an individual's or company's extraordinary...
Palazzeschi, Letizia; Bucci, Ornella; Di Fabio, Annamaria
2018-01-01
In organizations, innovation is considered a relevant aspect of success and long-term survival. Organizations recognize that innovation contributes to creating competitive advantages in a more competitive, challenging and changing labor market. The present contribution addresses innovation in organizations in the scenario of Industry 4.0, including technological innovation and psychological innovation. Innovation is a core concept in this framework to face the challenge of globalized and fluid labor market in the 21st century. Reviewing the definition of innovation, the article focuses on innovative work behaviors and the relative measures. This perspective article also suggests new directions in a primary prevention perspective for future research and intervention relative to innovation and innovative work behaviors in the organizational context. PMID:29445349
Mira: Argonne's 10-petaflops supercomputer
Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul
2018-02-13
Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
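The "billions of hours of computing time per year" claim above follows from simple arithmetic on the machine's scale. The core count used here is Mira's widely published configuration (49,152 sixteen-core Blue Gene/Q nodes), not a figure stated in this record:

```python
cores = 49_152 * 16            # Mira's published node/core configuration
hours_per_year = 24 * 365      # ignoring downtime, for a rough upper bound
core_hours_per_year = cores * hours_per_year
print(core_hours_per_year)     # 6889144320, i.e. roughly 6.9 billion core-hours
```

Against this total, individual awards "in allocations of millions of core-hours" are small slices, which is why thousands of project-years of computing fit on one machine annually.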
Adventures in Computational Grids
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
Sometimes one supercomputer is not enough. Or your local supercomputers are busy, or not configured for your job. Or you don't have any supercomputers. You might be trying to simulate worldwide weather changes in real time, requiring more compute power than you could get from any one machine. Or you might be collecting microbiological samples on an island, and need to examine them with a special microscope located on the other side of the continent. These are the times when you need a computational grid.
Breakthrough: NETL's Simulation-Based Engineering User Center (SBEUC)
Guenther, Chris
2018-05-23
The National Energy Technology Laboratory relies on supercomputers to develop many novel ideas that become tomorrow's energy solutions. Supercomputers provide a cost-effective, efficient platform for research and usher technologies into widespread use faster to bring benefits to the nation. In 2013, Secretary of Energy Dr. Ernest Moniz dedicated NETL's new supercomputer, the Simulation Based Engineering User Center, or SBEUC. The SBEUC is dedicated to fossil energy research and is a collaborative tool for all of NETL and our regional university partners.
A high level language for a high performance computer
NASA Technical Reports Server (NTRS)
Perrott, R. H.
1978-01-01
The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. To date, the languages used to program these supercomputers have been modifications of programming languages designed many years ago for sequential machines. A new programming language should be developed, based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.
Technology advances and market forces: Their impact on high performance architectures
NASA Technical Reports Server (NTRS)
Best, D. R.
1978-01-01
Reasonable projections of future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture, and application, with greater attention to testability, maintainability, reliability, and usability than in supercomputer development programs of the past.
Floating point arithmetic in future supercomputers
NASA Technical Reports Server (NTRS)
Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.
1989-01-01
Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
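The recommended word layout (one sign bit, 11 exponent bits, 52 mantissa bits) is the format later standardized as IEEE 754 binary64, which Python's `float` already uses. A short sketch of extracting those fields directly from a float's bit pattern:

```python
import struct

def fields(x: float):
    """Decompose an IEEE 754 double into (sign, exponent, mantissa),
    following the 1 + 11 + 52 bit layout the paper recommends."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))  # raw 64-bit pattern
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF       # 11 biased exponent bits (bias 1023)
    mantissa = bits & ((1 << 52) - 1)     # 52 fraction bits (implicit leading 1)
    return sign, exponent, mantissa
```

For instance, 1.0 stores a biased exponent of 1023 and a zero fraction, consistent with the 11-bit exponent and 52-bit mantissa widths described above.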
Integration of Panda Workload Management System with supercomputers
NASA Astrophysics Data System (ADS)
De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.
2016-09-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System to manage the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integrating PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes.
This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
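The light-weight wrapper approach described above, fanning many independent single-threaded payloads across the cores of one worker node, can be sketched as follows. This is an illustrative stand-in that uses Python's multiprocessing in place of the MPI wrappers on Titan, and the task is a toy computation rather than an ATLAS workload.

```python
from multiprocessing import Pool

def run_task(seed):
    """Stand-in for one independent single-threaded job (toy computation)."""
    total = 0
    for i in range(1000):
        total += (seed * 31 + i) % 7
    return seed, total

def run_node_local(seeds, cores):
    """Fan independent single-threaded tasks across one node's cores,
    the role the light-weight wrapper plays on a multi-core worker node."""
    with Pool(processes=cores) as pool:
        return dict(pool.map(run_task, seeds))

if __name__ == "__main__":
    results = run_node_local(range(16), cores=4)
    print(len(results))  # prints 16: one result per independent task
```

The key property, and the reason such wrappers suffice, is that the payloads share no state, so the wrapper only needs to launch them and collect exit results.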
Tracing Scientific Facilities through the Research Literature Using Persistent Identifiers
NASA Astrophysics Data System (ADS)
Mayernik, M. S.; Maull, K. E.
2016-12-01
Tracing persistent identifiers to their source publications is an easy task when authors use them, since it is a simple matter of matching the persistent identifier to the specific text string of the identifier. However, trying to understand whether a publication uses the resource behind an identifier when that identifier is not referenced explicitly is a harder task. In this research, we explore the effectiveness of alternative strategies for associating publications with uses of the resource referenced by an identifier when that use may not be explicit. This project is explored within the context of the NCAR supercomputer, where we are broadly interested in the science that can be traced to the usage of the NCAR supercomputing facility, by way of the peer-reviewed research publications that utilize and reference it. In this project we explore several ways of drawing linkages between publications and the NCAR supercomputing resources. We identify and compile peer-reviewed publications related to NCAR supercomputer usage via three sources: 1) user-supplied publications gathered through a community survey, 2) publications identified via manual searching of the Google Scholar search index, and 3) publications associated with National Science Foundation (NSF) grants extracted from a public NSF database. These three sources represent three styles of collecting information about publications that likely imply usage of the NCAR supercomputing facilities. Each source has strengths and weaknesses, thus our discussion will explore how our publication identification and analysis methods vary in terms of accuracy, reliability, and effort. We will also discuss strategies for enabling more efficient tracing of research impacts of supercomputing facilities going forward through the assignment of a persistent web identifier to the NCAR supercomputer.
While this solution has the potential to greatly enhance our ability to trace the use of the facility through publications, authors must cite the facility consistently. It is therefore necessary to provide recommendations for citation and attribution behavior, and we will conclude our discussion with how such recommendations have improved tracing of the supercomputer facility, allowing for more consistent and widespread measurement of its impact.
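The two matching strategies the abstract contrasts, explicit identifier citation versus inferred usage, can be sketched as below. The DOI string and keyword list are illustrative assumptions, not NCAR's actual identifier or methodology.

```python
import re

# Hypothetical facility identifier and keywords (illustrative assumptions).
FACILITY_DOI = "10.5065/D6RX99HX"
FACILITY_KEYWORDS = ("NCAR", "Yellowstone", "Cheyenne")

def cites_facility(text, doi=FACILITY_DOI):
    """Explicit case: the publication text contains the identifier string itself."""
    return doi.lower() in text.lower()

def likely_uses_facility(text, keywords=FACILITY_KEYWORDS, threshold=2):
    """Implicit case: require several facility-related keywords as weaker
    evidence when no persistent identifier is cited."""
    hits = sum(1 for k in keywords if re.search(re.escape(k), text))
    return hits >= threshold
```

The explicit match is exact and cheap; the implicit match trades precision for recall, which mirrors the accuracy/effort trade-offs the abstract discusses across its three publication sources.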
The Pawsey Supercomputer geothermal cooling project
NASA Astrophysics Data System (ADS)
Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.
2010-12-01
The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. 
These systems are intended to demonstrate the feasibility of powering large-scale air-conditioning systems from the direct use of geothermal power from Hot Sedimentary Aquifer (HSA) systems. HSA systems underlie many of the world's population centers, and thus have the potential to offset a significant fraction of the world's consumption of electrical power for air-conditioning.
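The underlying heat balance for geothermal absorption cooling can be worked through with a short calculation. The flow rate, temperature drop, and chiller COP below are assumed illustrative values, not Pawsey design figures.

```python
def geothermal_heat_power(flow_kg_s, t_in_c, t_out_c, cp_j_kg_k=4186.0):
    """Heat extracted from the geothermal stream in watts: Q = m_dot * cp * dT."""
    return flow_kg_s * cp_j_kg_k * (t_in_c - t_out_c)

def chilling_capacity(heat_w, cop=0.7):
    """Cooling delivered by a single-effect absorption chiller; a COP near 0.7
    is a typical textbook value, not a Pawsey design figure."""
    return heat_w * cop

# Assumed example: 50 kg/s of 90 degC water, returned to the aquifer at 60 degC.
q_heat = geothermal_heat_power(50.0, 90.0, 60.0)   # about 6.3 MW of heat
q_cool = chilling_capacity(q_heat)                 # about 4.4 MW of cooling
print(round(q_cool / 1e6, 1), "MW")                # prints: 4.4 MW
```

The point of the arithmetic is that the chiller is driven by heat alone, so the only electrical input is pumping, which is what makes the carbon-footprint argument work.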
Campus Technology Innovators Awards 2009
ERIC Educational Resources Information Center
Grush, Mary; Villano, Matt
2009-01-01
The annual Campus Technology Innovators awards recognize higher education institutions that take true initiative--even out-and-out risk--to better serve the campus community via technology. These top-notch university administrators, faculty, and staff demonstrate something more than a "job well done"; their vision and leadership have…
48 CFR 2115.404-71 - Profit analysis factors.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... 2115.404-71 Section 2115.404-71 Federal Acquisition Regulations System OFFICE OF PERSONNEL MANAGEMENT... weight. Innovations of benefit to the FEGLI Program will generally receive a plus weight; documented..., etc., having viability to the Program at large. Improvements and innovations recognized and rewarded...
2008 Campus Technology Innovators
ERIC Educational Resources Information Center
Campus Technology, 2008
2008-01-01
This article features the 14 winners of the 2008 Campus Technology Innovators. This article offers an insider's view of the winners' campus technology initiatives, their project leads, and vendor partners jointly recognized for a unique ability to advance teaching, learning, administration, and operation on North American college and university…
2011 Computation Directorate Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L
2012-04-11
From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence.
Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile-far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products. In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. 
industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global marketplace by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.
Energy Efficient Supercomputing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anypas, Katie
2014-10-17
Katie Anypas, Head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy-efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.
Energy Efficient Supercomputing
Anypas, Katie
2018-05-07
Katie Anypas, Head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy-efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.
Job Management Requirements for NAS Parallel Systems and Clusters
NASA Technical Reports Server (NTRS)
Saphir, William; Tanner, Leigh Ann; Traversat, Bernard
1995-01-01
A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.
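One fundamental resource-allocation issue the paper raises, sharing oversubscribed resources fairly, can be illustrated with a deliberately simple fair-share ordering. This is a sketch of the general idea, not the scheduling policy the authors specify.

```python
import heapq

def fair_share_order(jobs, usage):
    """Order queued jobs so users with the least accumulated usage run first,
    breaking ties by submission time. jobs: (name, user, submit_time) tuples;
    usage: accumulated resource consumption per user."""
    heap = [(usage.get(user, 0), submit_time, user, name)
            for name, user, submit_time in jobs]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, _, name = heapq.heappop(heap)
        order.append(name)
    return order
```

Under this policy a heavy user's jobs sink behind a light user's jobs regardless of arrival order, which is the basic fairness behavior a production job manager must provide on oversubscribed machines.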
40 CFR 105.15 - How are award winners recognized?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 23 2012-07-01 2012-07-01 false How are award winners recognized? 105.15 Section 105.15 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... ceremony as recognition for an outstanding technological achievement or an innovative process, method or...
40 CFR 105.15 - How are award winners recognized?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 22 2011-07-01 2011-07-01 false How are award winners recognized? 105.15 Section 105.15 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... ceremony as recognition for an outstanding technological achievement or an innovative process, method or...
40 CFR 105.15 - How are award winners recognized?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 22 2014-07-01 2013-07-01 true How are award winners recognized? 105.15 Section 105.15 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... ceremony as recognition for an outstanding technological achievement or an innovative process, method or...
40 CFR 105.15 - How are award winners recognized?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 23 2013-07-01 2013-07-01 false How are award winners recognized? 105.15 Section 105.15 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... ceremony as recognition for an outstanding technological achievement or an innovative process, method or...
40 CFR 105.15 - How are award winners recognized?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false How are award winners recognized? 105.15 Section 105.15 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... ceremony as recognition for an outstanding technological achievement or an innovative process, method or...
Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patchett, John M; Ahrens, James P; Lo, Li - Ta
2010-10-15
Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software based ray tracing, software based rasterization and hardware accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU and CPU based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software based ray-tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
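The strong and weak scaling comparisons in such a study reduce to two standard efficiency formulas, sketched here with hypothetical timings rather than the paper's measurements.

```python
def strong_scaling_efficiency(t1, tp, p):
    """Strong scaling: total problem size fixed; ideal time on p nodes is t1/p."""
    return t1 / (p * tp)

def weak_scaling_efficiency(t1, tp):
    """Weak scaling: work per node fixed; ideal time stays equal to t1."""
    return t1 / tp

# Hypothetical timings: 100 s on 1 node, 30 s on 4 nodes.
print(round(strong_scaling_efficiency(100.0, 30.0, 4), 2))  # prints 0.83
```

An efficiency of 1.0 is ideal scaling; values well below it at high node counts are exactly the algorithmic differences the rendering comparison is designed to expose.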
Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarje, Abhinav; Jacobsen, Douglas W.; Williams, Samuel W.
The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe our experiences exploiting threading in a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.
A mass storage system for supercomputers based on Unix
NASA Technical Reports Server (NTRS)
Richards, J.; Kummell, T.; Zarlengo, D. G.
1988-01-01
The authors present the design, implementation, and utilization of a large mass storage subsystem (MSS) for the numerical aerodynamics simulation. The MSS supports a large networked, multivendor Unix-based supercomputing facility. The MSS at Ames Research Center provides all processors on the numerical aerodynamics system processing network, from workstations to supercomputers, the ability to store large amounts of data in a highly accessible, long-term repository. The MSS uses Unix System V and is capable of storing hundreds of thousands of files ranging from a few bytes to 2 Gb in size.
Supercomputer algorithms for efficient linear octree encoding of three-dimensional brain images.
Berger, S B; Reis, D J
1995-02-01
We designed and implemented algorithms for three-dimensional (3-D) reconstruction of brain images from serial sections using two important supercomputer architectures, vector and parallel. These architectures were represented by the Cray YMP and Connection Machine CM-2, respectively. The programs operated on linear octree representations of the brain data sets, and achieved 500-800 times acceleration when compared with a conventional laboratory workstation. As the need for higher resolution data sets increases, supercomputer algorithms may offer a means of performing 3-D reconstruction well above current experimental limits.
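A linear octree stores each occupied cell as a single interleaved-bit (Morton) key rather than as a pointer tree, which is what makes the representation amenable to vector and data-parallel hardware. A minimal encoder might look like this; it is a generic sketch, not the authors' Cray YMP or CM-2 implementation.

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of (x, y, z) into one linear-octree (Morton) key;
    each group of 3 key bits names one child octant per octree level."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# Sorting voxels by key groups spatial neighbors together, so vector or
# data-parallel hardware can stream over contiguous runs of cells.
keys = sorted(morton3(x, y, z) for x, y, z in [(1, 0, 0), (0, 0, 1), (0, 1, 0)])
print(keys)  # prints [1, 2, 4]
```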
Intelligent supercomputers: the Japanese computer sputnik
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, G.
1983-11-01
Japan's government-supported fifth-generation computer project has had a pronounced effect on the American computer and information systems industry. The US firms are intensifying their research on and production of intelligent supercomputers, a combination of computer architecture and artificial intelligence software programs. While the present generation of computers is built for the processing of numbers, the new supercomputers will be designed specifically for the solution of symbolic problems and the use of artificial intelligence software. This article discusses new and exciting developments that will increase computer capabilities in the 1990s. 4 references.
Sen. Coons, Christopher A. [D-DE
2013-10-28
Senate - 10/28/2013 Submitted in the Senate, considered, and agreed to without amendment and with a preamble by Unanimous Consent.
Introducing Mira, Argonne's Next-Generation Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-03-19
Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.
Green Supercomputing at Argonne
Pete Beckman
2017-12-09
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), talks about Argonne National Laboratory's green supercomputing: everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently.
TOP500 Supercomputers for June 2003
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack
2003-06-23
21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; & BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.
Characterizing output bottlenecks in a supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Bing; Chase, Jeffrey; Dillow, David A
2012-01-01
Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
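The statistical summarization step, reducing many bandwidth samples across nodes, targets, and intervals to a distribution, can be sketched as follows. The function name, percentile choice, and sample values are illustrative assumptions, not the paper's methodology.

```python
import statistics

def bandwidth_profile(samples_gb_s):
    """Summarize sampled write bandwidths: the gap between the median and the
    95th-percentile tail is one way to expose stragglers in striped output."""
    s = sorted(samples_gb_s)
    p95_index = min(len(s) - 1, int(0.95 * len(s)))
    return {
        "median": statistics.median(s),
        "p95": s[p95_index],
        "mean": statistics.fmean(s),
    }
```

When a few slow samples dominate the tail, the mean drifts far from the median, which is how variance from stragglers shows up in coupled (striped) output even though most writers are fast.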
MOOCs as Change Agents to Boost Innovation in Higher Education Learning Arenas
ERIC Educational Resources Information Center
Ossiannilsson, Ebba; Altinay, Fahriye; Altinay, Zehra
2016-01-01
Massive open online courses (MOOCs) provide opportunities for learners to benefit from initiatives that are promoted by prestigious universities worldwide. The introduction of MOOCs in 2008 has since then transformed education globally. Consequently, MOOCs should be acknowledged as a pedagogical innovation and recognized as change agents and…
3 CFR 8547 - Proclamation 8547 of August 20, 2010. Minority Enterprise Development Week, 2010
Code of Federal Regulations, 2011 CFR
2011-01-01
... capabilities, cultural competencies, and international partnerships needed in a 21st century economy. Minority Enterprise Development Week is anchored by the American legacy of entrepreneurial ambition and innovation. As... also recognize the diversity, determination, insight, and innovation of American businesses, and the...
Innovations in Student-Centered Interdisciplinary Teaching for General Education in Aging
ERIC Educational Resources Information Center
Damron-Rodriguez, JoAnn; Effros, Rita
2008-01-01
The University of California-Los Angeles (UCLA) General Education "Clusters" are innovations in student-centered undergraduate education focused on complex phenomena that require an interdisciplinary perspective. UCLA gerontology and geriatric faculty recognized the opportunity to introduce freshmen to the field of aging through this new…
Advanced Computing for Manufacturing.
ERIC Educational Resources Information Center
Erisman, Albert M.; Neves, Kenneth W.
1987-01-01
Discusses ways that supercomputers are being used in the manufacturing industry, including the design and production of airplanes and automobiles. Describes problems that need to be solved in the next few years for supercomputers to assume a major role in industry. (TW)
INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Maeno, T
Abstract The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes.
This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
Benefits Innovations in Employee Behavioral Health.
Sherman, Bruce; Block, Lori
2017-01-01
More and more employers recognize the business impact of behavioral health concerns in the workplace. This article provides insights into some of the current innovations in behavioral health benefits, along with their rationale for development. Areas of innovation include conceptual and delivery models, technological advance- ments, tools for engaging employees and ways of quantifying the business value of behavioral health benefits. The rapid growth of innovative behavioral health services should provide employers with confidence that they can tailor a program best suited to their priorities, organizational culture and cost limitations.
Supercomputers Join the Fight against Cancer – U.S. Department of Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The Department of Energy has some of the best supercomputers in the world. Now, they’re joining the fight against cancer. Learn about our new partnership with the National Cancer Institute and GlaxoSmithKline Pharmaceuticals.
NAS-current status and future plans
NASA Technical Reports Server (NTRS)
Bailey, F. R.
1987-01-01
The Numerical Aerodynamic Simulation (NAS) program has met its first major milestone, the NAS Processing System Network (NPSN) Initial Operating Configuration (IOC). The program has met its goal of providing a national supercomputer facility capable of greatly enhancing the Nation's research and development efforts. Furthermore, the program is fulfilling its pathfinder role by defining and implementing a paradigm for supercomputing system environments. The IOC is only the beginning, and the NAS Program will aggressively continue to develop and implement emerging supercomputer, communications, storage, and software technologies to strengthen computations as a critical element in supporting the Nation's leadership role in aeronautics.
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.
1993-01-01
This document briefly describes the use of the CRAY supercomputers that are an integral part of the Supercomputing Network Subsystem of the Central Scientific Computing Complex at LaRC. Features of the CRAY supercomputers are covered, including: FORTRAN, C, PASCAL, architectures of the CRAY-2 and CRAY Y-MP, the CRAY UNICOS environment, batch job submittal, debugging, performance analysis, parallel processing, utilities unique to CRAY, and documentation. The document is intended for all CRAY users as a ready reference to frequently asked questions and to more detailed information contained in the vendor manuals. It is appropriate for both the novice and the experienced user.
Scaling of data communications for an advanced supercomputer network
NASA Technical Reports Server (NTRS)
Levin, E.; Eaton, C. K.; Young, Bruce
1986-01-01
The goal of NASA's Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations and by remote communication to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. The implications of a projected 20-fold increase in processing power on the data communications requirements are described.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
..., advisor, faculty member, and others as appropriate. The nomination letter(s) must communicate the... upon teacher (parent or legal guardian in the case of home schooled applicants), advisor, or faculty... innovative concept written by the student(s) being nominated (no page limit). All materials should be...
Training for Creativity and Innovation in Small Enterprises in Ethiopia
ERIC Educational Resources Information Center
Mihret Dessie, Wondifraw; Shumetie Ademe, Arega
2017-01-01
Policy makers recognize the role of small businesses in bringing about economic growth and reducing or eliminating poverty, and training can contribute significantly to this process. The present study adds to the small firm literature by examining whether training encourages small firms to be more creative and innovative. It does so by…
DOT National Transportation Integrated Search
2011-07-01
"Recognizing that no single solution will save the day for transportation in this rapidly urbanizing and increasingly complex world, a groundswell of : transportation innovation is arising worldwide. However, these innovations are rarely linked and o...
ERIC Educational Resources Information Center
Scogin, Stephen C.
2016-01-01
"PlantingScience" is an award-winning program recognized for its innovation and use of computer-supported scientist mentoring. Science learners work on inquiry-based experiments in their classrooms and communicate asynchronously with practicing plant scientist-mentors about the projects. The purpose of this study was to identify specific…
ERIC Educational Resources Information Center
Adams, Carolyn D.; Hinojosa, Sara; Armstrong, Kathleen; Takagishi, Jennifer; Dabrow, Sharon
2016-01-01
This article discusses an innovative example of integrated care in which doctoral level school psychology interns and residents worked alongside pediatric residents and pediatricians in the primary care settings to jointly provide services to patients. School psychologists specializing in pediatric health are uniquely trained to recognize and…
School Nurse Book Clubs: An Innovative Strategy for Lifelong Learning
ERIC Educational Resources Information Center
Greenawald, Deborah A.; Adams, Theresa M.
2008-01-01
Recognizing the ongoing need for continuing education for school nurses, the authors discuss the use of school nurse book clubs as an innovative lifelong-learning strategy. Current research supports the use of literature in nursing education. This article discusses the benefits of book club participation for school nurses and includes suggested…
Roadrunner Supercomputer Breaks the Petaflop Barrier
Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin
2017-12-09
At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. That sustained performance made Roadrunner more than twice as fast as the then-current number 1 system.
QCD on the BlueGene/L Supercomputer
NASA Astrophysics Data System (ADS)
Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.
2005-03-01
In June 2004, QCD was simulated for the first time at a sustained speed exceeding 1 teraflop/s, on the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD on BlueGene/L are presented.
Supercomputer Issues from a University Perspective.
ERIC Educational Resources Information Center
Beering, Steven C.
1984-01-01
Discusses issues related to the access of and training of university researchers in using supercomputers, considering National Science Foundation's (NSF) role in this area, microcomputers on campuses, and the limited use of existing telecommunication networks. Includes examples of potential scientific projects (by subject area) utilizing…
A Decade-long Continental-Scale Convection-Resolving Climate Simulation on GPUs
NASA Astrophysics Data System (ADS)
Leutwyler, David; Fuhrer, Oliver; Lapillonne, Xavier; Lüthi, Daniel; Schär, Christoph
2016-04-01
The representation of moist convection in climate models is a major challenge because of the small scales involved. Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. Using horizontal grid spacings of O(1 km), they allow deep convection to be resolved explicitly, leading to an improved representation of the water cycle. However, because of their extremely demanding computational requirements, such simulations have so far been limited to short periods and/or small computational domains. Innovations in supercomputing have led to new machine designs that combine conventional multicore CPUs with accelerators such as graphics processing units (GPUs). One of the first atmospheric models fully ported to GPUs is COSMO, the Consortium for Small-Scale Modeling weather and climate model. This new version allows us to expand the simulation domain to areas spanning continents and the simulated period up to a decade. We present results from a decade-long, convection-resolving climate simulation using the GPU-enabled COSMO version, driven by the ERA-Interim reanalysis. The results illustrate how the approach captures interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 km down to 10 km. We discuss the performance of the convection-resolving modeling approach on the European scale, focusing on the annual cycle of convection in Europe, the organization of convective clouds, and the verification of hourly rainfall against various high-resolution datasets.
2013 Schroth faces of the future symposium to highlight early career professionals in Mycology
USDA-ARS?s Scientific Manuscript database
The 2013 Schroth Faces of the Future symposium was created to recognize early career professionals (those within 10 years of graduation) who represent the future in their field via innovative research. For this year, future faces in mycology research were recognized. Drs. Jason Slot, Erica Goss, Jam...
Kirschling, Jane Marie; Erickson, Jeanette Ives
2010-09-01
To describe the benefits and barriers associated with practice-academe partnerships and introduce Sigma Theta Tau International's (STTI's) Practice-Academe Innovative Collaboration Award and the 2009 award recipients. In 2008, STTI created the CNO-Dean Advisory Council and charged it with reviewing the state of practice-academe collaborations and developing strategies for optimizing how chief nursing officers (CNOs) and deans work together to advance the profession and discipline of nursing. The Council, in turn, developed the Practice-Academe Innovative Collaboration Award to encourage collaboration across sectors, recognize innovative collaborative efforts, and spotlight best practices. A call for award submissions resulted in 24 applications from around the globe. An award winner and seven initiatives receiving honorable mentions were selected. The winning initiatives reflect innovative academe-service partnerships that advance evidence-based practice, nursing education, nursing research, and patient care. The proposals were distinguished by their collaborators' shared vision and unity of purpose, ability to leverage strengths and resources, and willingness to recognize opportunities and take risks. By partnering with one another, nurses in academe and in service settings can directly impact nursing education and practice, often effecting changes and achieving outcomes that are more extensive and powerful than could be achieved by working alone. The award-winning initiatives represent best practices for bridging the practice-academe divide and can serve as guides for nurse leaders in both settings.
Scientific Computing Strategic Plan for the Idaho National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiting, Eric Todd
Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the door to many opportunities that would not otherwise be possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drugan, C.
2009-12-07
The word 'breakthrough' aptly describes the transformational science and milestones achieved at the Argonne Leadership Computing Facility (ALCF) throughout 2008. The number of research endeavors undertaken at the ALCF through the U.S. Department of Energy's (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program grew from 9 in 2007 to 20 in 2008. The allocation of computer time awarded to researchers on the Blue Gene/P also spiked significantly, from nearly 10 million processor hours in 2007 to 111 million in 2008. To support this research, we expanded the capabilities of Intrepid, an IBM Blue Gene/P system at the ALCF, to 557 teraflops (TF) for production use. Furthermore, we enabled breakthrough levels of productivity and capability in visualization and data analysis with Eureka, a powerful installation of NVIDIA Quadro Plex S4 external graphics processing units. Eureka delivered a quantum leap in visual compute density, providing more than 111 TF and more than 3.2 terabytes of RAM. On April 21, 2008, the dedication of the ALCF realized DOE's vision to bring the power of the Department's high performance computing to open scientific research. In June, the IBM Blue Gene/P supercomputer at the ALCF debuted as the world's fastest for open science and third fastest overall. The science clearly benefited from this growth and system improvement. Four research projects spearheaded by Argonne National Laboratory computer scientists and ALCF users were named to the list of top ten scientific accomplishments supported by DOE's Advanced Scientific Computing Research (ASCR) program. Three of the top ten projects used extensive grants of computing time on the ALCF's Blue Gene/P to model the molecular basis of Parkinson's disease, design proteins at atomic scale, and create enzymes. As the year came to a close, the ALCF was recognized with several prestigious awards at SC08 in November.
We provided resources for Linear Scaling Divide-and-Conquer Electronic Structure Calculations for Thousand Atom Nanostructures, a collaborative effort between Argonne, Lawrence Berkeley National Laboratory, and Oak Ridge National Laboratory that received the ACM Gordon Bell Prize Special Award for Algorithmic Innovation. The ALCF also was named a winner in two of the four categories of the HPC Challenge best performance benchmark competition.
NASA Astrophysics Data System (ADS)
Fukazawa, K.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.
2016-12-01
Planetary magnetospheres are very large, while phenomena within them occur on meso- and micro-scales, ranging from tens of planetary radii down to kilometers. To understand the dynamics of these multi-scale systems, numerical simulations have been performed on supercomputer systems. We have long studied the magnetospheres of Earth, Jupiter, and Saturn using 3-dimensional magnetohydrodynamic (MHD) simulations; however, we had not captured phenomena near the limits of the MHD approximation. In particular, we had not studied meso-scale phenomena that can be addressed with MHD. Recently we performed an MHD simulation of Earth's magnetosphere on the K computer, the first 10-PFlops supercomputer, and obtained multi-scale flow vorticity for both northward and southward IMF. Furthermore, we have access to supercomputer systems with Xeon, SPARC64, and vector-type CPUs, allowing us to compare simulation results across different systems. Finally, we have compared the results of our parameter survey of the magnetosphere with observations from the HISAKI spacecraft. We have encountered a number of difficulties in using the latest supercomputer systems effectively. First, the size of the simulation output has increased greatly: a simulation group now produces over 1 PB of output, which is difficult to store and analyze. The traditional way to analyze simulation results is to move them to the investigator's home institution, which takes over three months on an end-to-end 10-Gbps network; in practice, bottlenecks at some nodes, such as firewalls, can push the transfer time to over one year. Another issue is post-processing: a few TB of simulation output is hard to handle within the memory limits of a post-processing computer.
To overcome these issues, we have developed and introduced parallel network storage, a highly efficient network protocol, and CUI-based visualization tools. In this study, we show the latest simulation results obtained with petascale supercomputers and discuss the problems arising from the use of these systems.
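The "over three months" figure quoted above can be sanity-checked with a quick calculation; the ~10% effective-throughput assumption below is ours, not the authors':

```python
# Quick check of the data-transfer times quoted above. The 10% effective
# throughput is an illustrative assumption, not a figure from the abstract.
def transfer_days(bytes_total: float, effective_bps: float) -> float:
    """Days needed to move bytes_total at a sustained rate of effective_bps."""
    return bytes_total * 8 / effective_bps / 86_400

PETABYTE = 1e15  # decimal petabyte, in bytes

# At the full 10 Gbit/s line rate, 1 PB would move in under ten days:
print(round(transfer_days(PETABYTE, 10e9), 1))  # -> 9.3
# The quoted "over three months" therefore implies an effective end-to-end
# throughput of roughly 1 Gbit/s (~10% of line rate), plausible once protocol
# overhead, firewalls, and shared links are accounted for:
print(round(transfer_days(PETABYTE, 1e9), 1))   # -> 92.6
```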
NASA Astrophysics Data System (ADS)
Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.
2016-10-01
The LHC, operating at CERN, is leading Big Data driven scientific exploration. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System to manage the workflow for all data processing across more than 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the grid can possibly provide. To alleviate these challenges, the LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at integrating the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach uses a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015.
We present our current accomplishments with running PanDA on supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities' infrastructure for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bioinformatics and astro-particle physics.
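The "light-weight MPI wrappers" mentioned above boil down to partitioning a list of independent single-threaded payloads across MPI ranks. A minimal sketch of that partitioning logic, in plain Python rather than real MPI (job names are invented; an actual PanDA pilot submits to the LCF batch system and launches real payload processes):

```python
# Sketch of partitioning independent single-threaded payloads across MPI
# ranks by round-robin on the rank id. Job names are invented; a real PanDA
# pilot would launch actual payload processes under an MPI wrapper.
def jobs_for_rank(jobs: list, rank: int, size: int) -> list:
    """Rank r of `size` ranks takes jobs r, r+size, r+2*size, ..."""
    return jobs[rank::size]

all_jobs = [f"evgen_{i:04d}" for i in range(10)]
# With 4 ranks, rank 1 runs jobs 1, 5 and 9:
print(jobs_for_rank(all_jobs, 1, 4))  # -> ['evgen_0001', 'evgen_0005', 'evgen_0009']
```

Each rank's slice is disjoint, and together the slices cover every job exactly once, so no coordination between ranks is needed while the payloads run.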
Finite element methods on supercomputers - The scatter-problem
NASA Technical Reports Server (NTRS)
Loehner, R.; Morgan, K.
1985-01-01
Certain problems arise in connection with the use of supercomputers for the implementation of finite-element methods. These problems relate to utilizing the power of the supercomputer as fully as possible for rapid execution of the required computations, taking into account the speedup obtainable from pipelined operations. For the finite-element method, the time-consuming operations fall into three categories. The first two present no problems, while the third type of operation can cause finite-element programs to perform inefficiently. Two possibilities for overcoming these difficulties are proposed, with attention to the scatter process.
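The scatter step singled out above is the assembly of element contributions into a shared global vector; a minimal sketch (not from the paper, with an invented data layout) makes the write conflicts visible:

```python
# Minimal sketch of the FEM scatter (assembly) step: each element adds its
# local contributions into a shared global vector. Several elements touch the
# same node, so the updates conflict and cannot be naively vectorized or
# pipelined -- which is the inefficiency discussed above.
def scatter_add(n_nodes, elements, local_vals):
    """elements[e] lists the global node ids of element e; local_vals[e][i]
    is that element's contribution to its i-th node."""
    global_vec = [0.0] * n_nodes
    for nodes, vals in zip(elements, local_vals):
        for node, v in zip(nodes, vals):
            global_vec[node] += v  # conflicting writes when a node is shared
    return global_vec

# Two 1D line elements sharing node 1: contributions to node 1 accumulate.
print(scatter_add(3, [(0, 1), (1, 2)], [(1.0, 1.0), (2.0, 2.0)]))  # -> [1.0, 3.0, 2.0]
```

One standard remedy is element coloring: grouping elements so that no two elements in a group share a node, so each group can be processed in a fully vectorized pass.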
Code IN Exhibits - Supercomputing 2000
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)
2000-01-01
The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.
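At its core, expanding a parameter-study suite means enumerating the cross product of all parameter values and emitting one run specification per combination. A bare-bones sketch of that enumeration step (parameter names are invented; ILab wraps this kind of step in a graphical interface and adds grid submission and process automation):

```python
# Bare-bones parameter-study expansion: enumerate the cross product of all
# parameter values and emit one run specification per combination. Parameter
# names here are invented for illustration.
from itertools import product

def expand_study(params: dict) -> list:
    """Return one {name: value} dict per point in the parameter space."""
    names = list(params)
    return [dict(zip(names, combo)) for combo in product(*params.values())]

study = expand_study({"mach": [0.6, 0.8], "alpha_deg": [0, 2, 4]})
print(len(study))  # -> 6  (2 x 3 combinations)
print(study[0])    # -> {'mach': 0.6, 'alpha_deg': 0}
```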
NSF Establishes First Four National Supercomputer Centers.
ERIC Educational Resources Information Center
Lepkowski, Wil
1985-01-01
The National Science Foundation (NSF) has awarded support for supercomputer centers at Cornell University, Princeton University, University of California (San Diego), and University of Illinois. These centers are to be the nucleus of a national academic network for use by scientists and engineers throughout the United States. (DH)
Library Services in a Supercomputer Center.
ERIC Educational Resources Information Center
Layman, Mary
1991-01-01
Describes library services that are offered at the San Diego Supercomputer Center (SDSC), which is located at the University of California at San Diego. Topics discussed include the user population; online searching; microcomputer use; electronic networks; current awareness programs; library catalogs; and the slide collection. A sidebar outlines…
Probing the cosmic causes of errors in supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Cosmic rays from outer space cause errors in supercomputers: neutrons that pass through a CPU can flip bits of binary data, leading to incorrect calculations. Los Alamos National Laboratory has developed detectors to determine how much data is being corrupted by these cosmic particles.
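A toy illustration of such a single-event upset, assuming nothing beyond IEEE-754 doubles: flipping one stored bit silently changes a value, and even a single parity bit suffices to detect (though not correct) the flip; production machines use ECC memory, which also corrects it:

```python
# Toy model of a single-event upset: flip one stored bit of an IEEE-754
# double and show that a single parity bit detects (but cannot correct) the
# corruption. Real machines use ECC memory, which also corrects such flips.
import struct

def _bits(x: float) -> int:
    """The 64-bit integer representation of a double."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    return as_int

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its 64-bit representation inverted."""
    return struct.unpack("<d", struct.pack("<Q", _bits(x) ^ (1 << bit)))[0]

def parity(x: float) -> int:
    return bin(_bits(x)).count("1") % 2

original = 1.0
corrupted = flip_bit(original, 52)  # flip the lowest exponent bit
print(corrupted)                    # -> 0.5: a silently wrong value
print(parity(original) != parity(corrupted))  # -> True: parity flags the flip
```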
Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer
NASA Astrophysics Data System (ADS)
Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division
2016-06-01
Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
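The throughput figures above can be combined into a rough core-hour budget; note that the ~200,000 Kepler target-star count used below is our assumption for illustration, not a number stated in the abstract:

```python
# Combining the abstract's throughput figures into a core-hour budget. The
# ~200,000 Kepler target-star count is an assumption for illustration.
INJ_PER_CORE_HOUR = 16       # injections generated by one Pleiades core per hour
INJ_PER_STAR = 2000          # injections required per star ("shallow" experiment)
KEPLER_TARGETS = 200_000     # assumed total target-star count
FRACTION = 0.16              # fraction of targets covered
WALL_HOURS = 200             # quoted wall-clock budget

core_hours_per_star = INJ_PER_STAR / INJ_PER_CORE_HOUR  # -> 125.0
stars = int(KEPLER_TARGETS * FRACTION)                  # -> 32000
total_core_hours = core_hours_per_star * stars          # -> 4,000,000
cores_needed = total_core_hours / WALL_HOURS            # -> 20000.0 in parallel
print(core_hours_per_star, stars, cores_needed)
```

Twenty thousand cores for 200 hours is a fraction of Pleiades' capacity, which is what makes the stripped-down "shallow" experiment affordable.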
ERIC Educational Resources Information Center
Pellerin, Robert; Hadaya, Pierre
2008-01-01
Recognizing the need to teach ERP implementation and business process reengineering (BPR) concepts simultaneously, as well as the pedagogical limitations of the case teaching method and simulation tools, the objective of this study is to propose a new framework and an innovative teaching approach to improve the ERP training experience for IS…
ERIC Educational Resources Information Center
Simha, Rahul; Teodorescu, Raluca
2017-01-01
In an academic world driven by student ratings and publication counts, faculty members are discouraged from exploring new pedagogical ideas because exploration takes time and often goes unrecognized. The contrast with research is striking: everyone is expected to explore and innovate in research, whereas very few make exploration in teaching their…
Technology's Past: America's Industrial Revolution and the People Who Delivered the Goods.
ERIC Educational Resources Information Center
Karwatka, Dennis
This book presents illustrated profiles of 76 individuals and 2 notable vehicles (the winner of the first around-the-world car race in 1908 and the first steam locomotive in the United States). It includes such recognized innovators as Alexander Graham Bell (inventor of the telephone), George Washington Carver (agricultural products innovator),…
Military Innovation in the New Normal
2015-04-13
fundamentally flawed. Having Soldiers with regional affiliation, cultural appreciation, and language proficiency makes sense. The concept simply fails in...36 Recognizing the urgency of this training deficiency, both First and Second Marine Divisions published their respective...Witch Hunt,” http://www.huffingtonpost.com/sen-barbara-boxer/the-gops-benghazi-witch-h_b_5315857.html (accessed 28 Jan 2015). Military Innovation in
ERIC Educational Resources Information Center
Dailey, Debbie; Cotabish, Alicia; Jackson, Nykela
2018-01-01
Present and future challenges in our society demand a solid science, technology, engineering, and mathematics (STEM) knowledge base, innovative thinking, and the ability to ask the right questions to generate multiple solutions. To prepare innovators to meet these challenges, we must recognize and develop their talents. This advancement and growth…
Dillon, Judy; Norris Himes, Judy; Reynolds, Kristine; Schirm, Victoria
2018-03-01
This community nursing partnership for student health is a well-recognized innovation, regionally and statewide. The initiative exemplifies 1 department of nursing's commitment to community involvement that originated from the forward thinking of nurse leaders. The journey to engaging intraprofessional partners and firmly establishing the partnership within the community is described.
Igniting Innovation: Colleges Get Creative to Meet Persistent Challenges in Race to the Top
ERIC Educational Resources Information Center
Violino, Bob
2012-01-01
When the going gets tough, the tough get...innovative. That's the approach some forward-thinking community college leaders are taking as their institutions set about the task of restocking the nation's workforce in the face of historic enrollments and severe, if seemingly unending, budget cuts. Recognized by the Aspen Institute as the most…
The Sky's the Limit When Super Students Meet Supercomputers.
ERIC Educational Resources Information Center
Trotter, Andrew
1991-01-01
In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…
NSF Says It Will Support Supercomputer Centers in California and Illinois.
ERIC Educational Resources Information Center
Strosnider, Kim; Young, Jeffrey R.
1997-01-01
The National Science Foundation will increase support for supercomputer centers at the University of California, San Diego and the University of Illinois, Urbana-Champaign, while leaving unclear the status of the program at Cornell University (New York) and a cooperative Carnegie-Mellon University (Pennsylvania) and University of Pittsburgh…
Access to Supercomputers. Higher Education Panel Report 69.
ERIC Educational Resources Information Center
Holmstrom, Engin Inel
This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…
NOAA announces significant investment in next generation of supercomputers
Today, NOAA announced the next phase in the agency's efforts to increase supercomputing capacity, which in turn will lead to more timely, accurate, and reliable weather forecasts.
Developments in the simulation of compressible inviscid and viscous flow on supercomputers
NASA Technical Reports Server (NTRS)
Steger, J. L.; Buning, P. G.
1985-01-01
In anticipation of future supercomputers, finite difference codes are rapidly being extended to simulate three-dimensional compressible flow about complex configurations. Some of these developments are reviewed. The importance of computational flow visualization and diagnostic methods to three-dimensional flow simulation is also briefly discussed.
NASA Technical Reports Server (NTRS)
Smarr, Larry; Press, William; Arnett, David W.; Cameron, Alastair G. W.; Crutcher, Richard M.; Helfand, David J.; Horowitz, Paul; Kleinmann, Susan G.; Linsky, Jeffrey L.; Madore, Barry F.
1991-01-01
The applications of computers and data processing to astronomy are discussed. Among the topics covered are the emerging national information infrastructure, workstations and supercomputers, supertelescopes, digital astronomy, astrophysics in a numerical laboratory, community software, archiving of ground-based observations, dynamical simulations of complex systems, plasma astrophysics, and the remote control of fourth dimension supercomputers.
Bringing Breast Cancer Technologies to Market | Poster
CCR research is recognized in novel competition to encourage the commercialization of breast cancer inventions. Editor’s note: This article was originally published in CCR Connections (Volume 8, No. 1). The Breast Cancer Startup Challenge was named one of six finalists in the HHS Innovates Award Competition, and was one of three finalists recognized by HHS Secretary Sylvia
High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations
NASA Technical Reports Server (NTRS)
Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.
2003-01-01
Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.
NASA Astrophysics Data System (ADS)
Noumaru, Junichi; Kawai, Jun A.; Schubert, Kiaina; Yagi, Masafumi; Takata, Tadafumi; Winegar, Tom; Scanlon, Tim; Nishida, Takuhiro; Fox, Camron; Hayasaka, James; Forester, Jason; Uchida, Kenji; Nakamura, Isamu; Tom, Richard; Koura, Norikazu; Yamamoto, Tadahiro; Tanoue, Toshiya; Yamada, Toru
2008-07-01
Subaru Telescope has recently replaced most of the equipment of Subaru Telescope Network II with new equipment, including a 124-TB RAID system for the data archive. Switching the data storage from tape to RAID lets users access the data faster. STN-III dropped some important components of STN-II, such as the supercomputers, the development and testing subsystem for the Subaru Observation Control System, and the data-processing subsystem. On the other hand, we invested in more computers for the remote operation system. Thanks to IT innovations, our LAN, as well as the network between Hilo and the summit, was upgraded to gigabit networking at similar or even reduced cost compared with the previous system. As a result of redesigning the computer system with a sharper focus on observatory operations, we greatly reduced the total cost of computer rental, purchase, and maintenance.
An immersed boundary method for modeling a dirty geometry data
NASA Astrophysics Data System (ADS)
Onishi, Keiji; Tsubokura, Makoto
2017-11-01
We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method is achieved by dispersing the momentum via an axial linear projection and by an approximate-domain assumption that satisfies mass conservation around the wall-containing cells. The methodology has been verified against analytical theory and wind tunnel experiment data. Next, we simulate flow around a rotating object and demonstrate the method's ability to handle moving-geometry problems. This methodology offers a route to obtaining quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as "Priority Issue on Post-K computer" (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.
NASA Astrophysics Data System (ADS)
Leutwyler, David; Fuhrer, Oliver; Cumming, Benjamin; Lapillonne, Xavier; Gysi, Tobias; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph
2014-05-01
The representation of moist convection is a major shortcoming of current global and regional climate models. State-of-the-art global models usually operate at grid spacings of 10-300 km and therefore cannot fully resolve the relevant upscale and downscale energy cascades, so parametrization of the relevant sub-grid-scale processes is required. Several studies have shown that this approach entails major uncertainties for precipitation processes, which raises concerns about these models' ability to represent precipitation statistics and associated feedback processes, as well as their sensitivities to large-scale conditions. Refining the model resolution to the kilometer scale allows these processes to be represented much closer to first principles and thus should yield an improved representation of the water cycle, including the drivers of extreme events. Although cloud-resolving simulations are very useful tools for climate simulation and numerical weather prediction, their high horizontal resolution, and consequently the small time steps required, make it challenging for current supercomputers to cover large domains and long time scales. Recent innovations in hybrid supercomputers have led to mixed node designs that pair a conventional CPU with an accelerator such as a graphics processing unit (GPU). GPUs relax the necessity for cache coherency and complex memory hierarchies but have a larger system memory bandwidth, which is highly beneficial for low-compute-intensity codes such as stencil-based atmospheric models. However, to efficiently exploit these hybrid architectures, climate models need to be ported and/or redesigned. Within the framework of the Swiss High Performance High Productivity Computing initiative (HP2C), a project to port the COSMO model to hybrid architectures has recently come to an end. The product of these efforts is a version of COSMO with improved performance on traditional x86-based clusters as well as on hybrid architectures with GPUs.
We present our redesign and porting approach as well as our experience and lessons learned. Furthermore, we discuss relevant performance benchmarks obtained on the new hybrid Cray XC30 system "Piz Daint" installed at the Swiss National Supercomputing Centre (CSCS), both in terms of time-to-solution as well as energy consumption. We will demonstrate a first set of short cloud-resolving climate simulations at the European-scale using the GPU-enabled COSMO prototype and elaborate our future plans on how to exploit this new model capability.
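As a minimal illustration of why stencil-based atmospheric codes are memory-bandwidth bound (a generic Python/NumPy sketch, not COSMO code): a 5-point Laplacian update reads five grid values but performs only about five floating-point operations per point, so memory traffic, not arithmetic throughput, sets the pace.

```python
import numpy as np

def laplacian_2d(u):
    # 5-point stencil: each output point reads 5 inputs and does
    # ~5 flops -- the low arithmetic intensity typical of the
    # dynamical cores of atmospheric models.
    out = np.zeros_like(u)
    out[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                       u[1:-1, :-2] + u[1:-1, 2:] -
                       4.0 * u[1:-1, 1:-1])
    return out

u = np.zeros((5, 5))
u[2, 2] = 1.0
lap = laplacian_2d(u)   # a unit spike spreads to its four neighbours
```

On a GPU this kernel runs close to the memory-bandwidth roofline, which is why the higher bandwidth of accelerators benefits such codes.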
Supercomputer use in orthopaedic biomechanics research: focus on functional adaptation of bone.
Hart, R T; Thongpreda, N; Van Buskirk, W C
1988-01-01
The authors describe two biomechanical analyses carried out using numerical methods: an analysis of the stress and strain in a human mandible, and a model of the adaptive response of a sheep bone to mechanical loading. The computing environment required for the two types of analyses is discussed. It is shown that a simple stress analysis of a geometrically complex mandible can be accomplished using a minicomputer. However, more sophisticated analyses of the same model with dynamic loading or nonlinear materials would require supercomputer capabilities. A supercomputer is also required for modeling the adaptive response of living bone, even when simple geometric and material models are used.
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large-scale and background random aerospace fluctuations.
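Loop collapsing, one of the techniques named above, can be sketched generically (a hypothetical Bellman-style value update in Python/NumPy, not the authors' code): merging a nest of short loops over the state grid into one long flat pass gives vector hardware a long trip count instead of many short ones.

```python
import numpy as np

def nested_update(v, cost):
    # Original form: the short inner loop over each state dimension
    # gives vector hardware little to work with.
    n0, n1 = v.shape
    for i in range(n0):
        for j in range(n1):
            v[i, j] = min(v[i, j], cost[i, j])
    return v

def collapsed_update(v, cost):
    # Collapsed form: treat the whole state grid as one long vector,
    # so a single element-wise pass replaces the loop nest.
    v_flat = v.reshape(-1)                     # view, no copy
    np.minimum(v_flat, cost.reshape(-1), out=v_flat)
    return v

rng = np.random.default_rng(0)
v1 = rng.random((4, 6))
v2 = v1.copy()
c = rng.random((4, 6))
out1 = nested_update(v1, c)
out2 = collapsed_update(v2, c)                 # same result, one pass
```

Both forms compute the same element-wise minimum; the collapsed form is what a vectorizing compiler aims to produce from the nest.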
Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer
NASA Technical Reports Server (NTRS)
Hornfeck, William A.
1988-01-01
A considerable volume of large computational codes was developed for NASA over the past twenty-five years. These codes represent algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software, primarily because of architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for conversion to the Cray X-MP vector supercomputer is also described.
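A classic example of the restructuring such a conversion involves (a generic sketch; the NASA package itself is not shown) is rewriting a matrix-vector product in column-sweep (SAXPY) form: the inner operation becomes one long unit-stride vector update per column, matching Fortran's column-major storage on vector machines like the X-MP.

```python
import numpy as np

def matvec_saxpy(A, x):
    # Column-sweep form of y = A @ x: the inner operation is
    # y += x[j] * A[:, j], a single long unit-stride vector update
    # per column -- the classic restructuring for column-major
    # vector machines.
    m, n = A.shape
    y = np.zeros(m)
    for j in range(n):
        y += x[j] * A[:, j]
    return y

rng = np.random.default_rng(0)
A = rng.random((5, 3))
x = rng.random(3)
y = matvec_saxpy(A, x)
```

The row-oriented (dot-product) form strides across memory on column-major arrays; the column sweep keeps every vector operation contiguous.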
NAS Technical Summaries, March 1993 - February 1994
NASA Technical Reports Server (NTRS)
1995-01-01
NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1993-94 operational year concluded with 448 high-speed processor projects and 95 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.
NAS technical summaries. Numerical aerodynamic simulation program, March 1992 - February 1993
NASA Technical Reports Server (NTRS)
1994-01-01
NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1992-93 operational year concluded with 399 high-speed processor projects and 91 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.
Building a Culture of Health Informatics Innovation and Entrepreneurship: A New Frontier.
Househ, Mowafa; Alshammari, Riyad; Almutairi, Mariam; Jamal, Amr; Alshoaib, Saleh
2015-01-01
Entrepreneurship and innovation within the health informatics (HI) scientific community are relatively sluggish when compared to other disciplines such as computer science and engineering. Healthcare in general, and specifically the health informatics scientific community, needs to embrace more innovative and entrepreneurial practices. In this paper, we explore the concepts of innovation and entrepreneurship as they apply to the health informatics scientific community. We also outline several strategies to improve the culture of innovation and entrepreneurship within the health informatics scientific community, such as: (I) incorporating innovation and entrepreneurship in health informatics education; (II) creating strong linkages with industry and healthcare organizations; (III) supporting national health innovation and entrepreneurship competitions; (IV) creating a culture of innovation and entrepreneurship within healthcare organizations; (V) developing health informatics policies that support innovation and entrepreneurship based on internationally recognized standards; and (VI) developing a health informatics entrepreneurship ecosystem. With these changes, we conclude that health innovation and entrepreneurship may become more readily accepted over the long term within the health informatics scientific community.
ERIC Educational Resources Information Center
Morales-Avalos, José Ramón; Heredia-Escorza, Yolanda
2018-01-01
Learning and innovation skills are increasingly recognized as the key factors separating students who are prepared for the more complex environments of life and work in the twenty-first century from those who are not. The relationship between industry and academia is, in Mexico and several other countries nowadays, undoubtedly a very important social…
Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.
ERIC Educational Resources Information Center
Kiernan, Vincent
1999-01-01
Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…
The Age of the Supercomputer Gives Way to the Age of the Super Infrastructure.
ERIC Educational Resources Information Center
Young, Jeffrey R.
1997-01-01
In October 1997, the National Science Foundation will discontinue financial support for two university-based supercomputer facilities to concentrate resources on partnerships led by facilities at the University of California, San Diego and the University of Illinois, Urbana-Champaign. The reconfigured program will develop more user-friendly and…
The ChemViz Project: Using a Supercomputer To Illustrate Abstract Concepts in Chemistry.
ERIC Educational Resources Information Center
Beckwith, E. Kenneth; Nelson, Christopher
1998-01-01
Describes the Chemistry Visualization (ChemViz) Project, a Web venture maintained by the University of Illinois National Center for Supercomputing Applications (NCSA) that enables high school students to use computational chemistry as a technique for understanding abstract concepts. Discusses the evolution of computational chemistry and provides a…
NASA Astrophysics Data System (ADS)
Schulthess, Thomas C.
2013-03-01
The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.
Extracting the Textual and Temporal Structure of Supercomputing Logs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, S; Singh, I; Chandra, A
2009-05-26
Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
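The syntactic-template idea can be sketched in a few lines (a simplified illustration, not the authors' clustering algorithm): masking variable fields such as numbers and hexadecimal identifiers makes log messages with the same structure map to the same cluster key.

```python
import re
from collections import defaultdict

def template(msg):
    # Mask the variable fields so that messages sharing a syntactic
    # structure produce the same template string.
    msg = re.sub(r'0x[0-9a-fA-F]+', '<HEX>', msg)
    msg = re.sub(r'\d+', '<NUM>', msg)
    return msg

logs = [
    "node 17 temperature 81C",     # hypothetical log lines for
    "node 248 temperature 79C",    # illustration only
    "link error on port 3",
]
clusters = defaultdict(list)
for line in logs:
    clusters[template(line)].append(line)
# the two temperature messages collapse into one semantic group
```

Real log-template miners refine this with token-position statistics, but the masking step captures the core intuition.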
NASA Astrophysics Data System (ADS)
Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.
2016-06-01
High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3 + 1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
Grand challenges in mass storage: A systems integrators perspective
NASA Technical Reports Server (NTRS)
Lee, Richard R.; Mintz, Daniel G.
1993-01-01
Within today's much-ballyhooed supercomputing environment, with its CFLOPS of CPU power and gigabit networks, there exists a major roadblock to computing success: that of mass storage. The solution to this mass storage problem is considered one of the 'Grand Challenges' facing the computer industry today, as well as long into the future. It has become obvious to us, as well as to many others in the industry, that no clear single solution is in sight. The systems integrator today is faced with a myriad of quandaries in approaching this challenge. He must first be innovative in approach; second, choose hardware solutions that are volumetrically efficient, high in signal bandwidth, available from multiple sources, competitively priced, and extendible for future growth. In addition, he must comply with a variety of mandated, and often conflicting, software standards (GOSIP, POSIX, IEEE, MSRM 4.0, and others), and finally he must deliver a systems solution with the 'most bang for the buck' in terms of cost versus performance. These quandaries challenge the systems integrator to 'push the envelope' of his or her ingenuity and innovation on an almost daily basis. This dynamic is explored further, and an attempt is made to acquaint the audience with rational approaches to this 'Grand Challenge'.
Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers
NASA Astrophysics Data System (ADS)
Dreher, Patrick; Scullin, William; Vouk, Mladen
2015-09-01
Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.
Grachev, S V; Gorodnova, E A
2008-01-01
The authors present original material on a first experience of teaching the theoretical bases of venture financing of scientific-innovative projects in a medical school. The results and conclusions are based on questionnaire data collected by the authors. More than 90% of young physician-scientists recognized the relevance of this problem for translating their research results into practice. This experience thus supports the further development and inclusion of the module "Venture financing of scientific-innovative projects in biomedicine" in the training plan.
'Innovation' in health care coverage decisions: all talk and no substance?
Bryan, Stirling; Lee, Helen; Mitton, Craig
2013-01-01
There has been much discussion recently about 'innovation', or more precisely the lack of it, in pharmaceuticals and devices in health care. The concern has been expressed by national guideline bodies, such as the Common Drugs Review in Canada and the National Institute for Health & Clinical Excellence in the UK, applying strict cost-effectiveness criteria in their decision-making and, therefore, failing adequately to recognize the full benefits that come from innovation. In order to explore the legitimacy of such claims, we first define innovation, and second, explore the basis for assuming an independent and separable social value associated with innovation. We conclude that demands relating to innovation, such as relaxation of thresholds and premium prices for innovatory products, remain hollow until we have a compelling case on the demand side for a separable social value on 'innovation'. We see no such case currently.
NASA Astrophysics Data System (ADS)
Herrera, I.; Herrera, G. S.
2015-12-01
Most geophysical systems are macroscopic physical systems. The behavior of such systems is predicted by means of computational models whose basis is partial differential equations (PDEs) [1]. Due to the enormous size of the discretized versions of such PDEs, it is necessary to apply highly parallelized supercomputers, for which the most efficient software at present is based on non-overlapping domain decomposition methods (DDM). However, a limiting feature of the present state-of-the-art techniques is the kind of discretizations used in them. Recently, I. Herrera and co-workers, using 'non-overlapping discretizations', have produced the DVS software, which overcomes this limitation [2]. The DVS software can be applied to a great variety of geophysical problems and achieves very high parallel efficiencies (90% or so [3]). It is therefore very suitable for effectively applying the most advanced parallel supercomputers available at present. In a parallel talk at this AGU Fall Meeting, Graciela Herrera Z. will present how this software is being applied to advance MODFLOW. Key words: parallel software for geophysics, high-performance computing (HPC), parallel computing, domain decomposition methods (DDM). REFERENCES: [1] Herrera, Ismael, and George F. Pinder, "Mathematical Modelling in Science and Engineering: An Axiomatic Approach", John Wiley, 243 pp., 2012. [2] Herrera, I., de la Cruz, L.M., and Rosas-Medina, A., "Non-Overlapping Discretization Methods for Partial Differential Equations", Numerical Methods for Partial Differential Equations, 30: 1427-1454, 2014, DOI 10.1002/num.21852. (Open source) [3] Herrera, I., and Contreras, Iván, "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity", Geofísica Internacional, 2015 (in press).
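The subdomain-solve-and-exchange pattern that DDM software parallelizes can be shown on a toy problem (a textbook overlapping alternating-Schwarz sketch for 1D Poisson in Python/NumPy; the DVS software itself uses non-overlapping discretizations and is not reproduced here):

```python
import numpy as np

def solve_subdomain(f, h, left_bc, right_bc):
    # Direct solve of -u'' = f on one subdomain with Dirichlet data
    # taken from the neighbouring subdomain's current iterate.
    m = len(f)
    A = (np.diag(np.full(m, 2.0))
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f.copy()
    rhs[0] += left_bc / h**2
    rhs[-1] += right_bc / h**2
    return np.linalg.solve(A, rhs)

n = 41                              # global grid, x in [0, 1]
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.full(n, 2.0)                 # -u'' = 2  ->  u = x(1 - x)
u = np.zeros(n)                     # u[0] = u[-1] = 0 fixed

# Two overlapping subdomains exchange interface values each sweep.
lo1, hi1 = 1, 25
lo2, hi2 = 17, n - 1
for _ in range(60):
    u[lo1:hi1] = solve_subdomain(f[lo1:hi1], h, u[lo1 - 1], u[hi1])
    u[lo2:hi2] = solve_subdomain(f[lo2:hi2], h, u[lo2 - 1], u[hi2])

err = np.max(np.abs(u - x * (1.0 - x)))   # converges to the exact u
```

In production DDM codes each subdomain solve runs on its own processor and only the interface data is communicated, which is what makes the approach scale on supercomputers.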
The impact of the U.S. supercomputing initiative will be global
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Dona
2016-01-15
Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment: the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).
Parallel-vector solution of large-scale structural analysis problems on supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1989-01-01
A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
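The serial kernel underlying such a solver can be sketched as follows (a plain column-oriented Choleski factorization with triangular solves, in Python/NumPy; the Force-based parallel FORTRAN implementation is not shown). The column update `L[j+1:, j] = …` is the long vector operation that parallel-vector solvers vectorize and distribute.

```python
import numpy as np

def cholesky_solve(A, b):
    # Solve A x = b for symmetric positive-definite A via
    # A = L L^T, then forward and back substitution.
    n = len(b)
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        # One long column update per step: the vectorizable kernel.
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    y = np.zeros(n)                     # forward solve  L y = b
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)                     # back solve  L^T x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - L[i+1:, i] @ x[i+1:]) / L[i, i]
    return x

rng = np.random.default_rng(1)
M = rng.random((6, 6))
A = M @ M.T + 6.0 * np.eye(6)           # symmetric positive definite
b = rng.random(6)
x = cholesky_solve(A, b)
```

In structural analysis the stiffness matrix is symmetric positive definite, which is why Choleski factorization is the natural direct method there.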
Predicting Hurricanes with Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-01-01
Hurricane Emily, formed in the Atlantic Ocean on July 10, 2005, was the strongest hurricane ever to form before August. By checking computer models against the actual path of the storm, researchers can improve hurricane prediction. In 2010, NOAA researchers were awarded 25 million processor-hours on Argonne's BlueGene/P supercomputer for the project. Read more at http://go.usa.gov/OLh
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron
1992-01-01
Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speeds greater than 10^18 floating-point operations per second (FLOPS) and memory capacities greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.
Advances in petascale kinetic plasma simulation with VPIC and Roadrunner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Kevin J; Albright, Brian J; Yin, Lin
2009-01-01
VPIC, a first-principles 3D electromagnetic charge-conserving relativistic kinetic particle-in-cell (PIC) code, was recently adapted to run on Los Alamos's Roadrunner, the first supercomputer to break a petaflop (10^15 floating-point operations per second) in the TOP500 supercomputer performance rankings. The authors give a brief overview of the modeling capabilities and optimization techniques used in VPIC and the computational characteristics of petascale supercomputers like Roadrunner. They then discuss three applications enabled by VPIC's unprecedented performance on Roadrunner: modeling laser-plasma interaction in upcoming inertial confinement fusion experiments at the National Ignition Facility (NIF), modeling short-pulse-laser GeV ion acceleration, and modeling reconnection in magnetic confinement fusion experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curran, L.
1988-03-03
Interest has been building in recent months over the imminent arrival of a new class of supercomputer, called the ''supercomputer on a desk'' or the single-user model. Most observers expected the first such product to come from one of two startups, Ardent Computer Corp. or Stellar Computer Inc. But a surprise entry has shown up: Apollo Computer Inc. is launching a new workstation this week that racks up an impressive list of industry firsts as it puts supercomputer power at the disposal of a single user. The new Series 10000 from the Chelmsford, Mass., company is built around a reduced-instruction-set architecture that the company calls Prism, for parallel reduced-instruction-set multiprocessor. This article describes the 10000 and Prism.
NASA Technical Reports Server (NTRS)
Murman, E. M. (Editor); Abarbanel, S. S. (Editor)
1985-01-01
Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.
Agile Methods in Air Force Sustainment: Status and Outlook
2014-10-01
…manage for it through iterations, anticipation and adaptation; unleash creativity and innovation by recognizing that individuals are the ultimate… Referenced works include: [title truncated], ISBN 073561993X; Agile Project Management: Creating Innovative Products (2nd ed.), Jim Highsmith, ISBN 0321658396; Agile Retrospectives…; Leading Change, John Kotter, ISBN 0875847471; Leading Geeks: How to Manage and Lead the People Who Deliver Technology, Paul Glen, ISBN…
2010-06-09
A group of Jet Propulsion Laboratory (JPL) engineers are recognized during the kick off of NASA's Summer of Innovation program at JPL in Pasadena, Calif., Thursday, June 10, 2010. Through the program, NASA will engage thousands of middle school students and teachers in stimulating math and science-based education programs with the goal of increasing the number of future scientists, mathematicians, and engineers. Photo Credit: (NASA/Bill Ingalls)
Seven Defense Priorities for the New Administration
2016-12-16
…impacted by the inherent difference between very large companies, which the DoD relies on to produce systems at scale, and small companies, which… inhibit innovation. Many companies recognize this problem and attempt to isolate small development or prototyping organizations from the rest of the… between small and large companies can disincentivize both to effectively work with each other. Lastly, a disconnect can also occur when an innovative…
Fireballs in the Sky: an Augmented Reality Citizen Science Program
NASA Astrophysics Data System (ADS)
Day, B. H.; Bland, P.; Sayers, R.
2017-12-01
Fireballs in the Sky is an innovative Australian citizen science program that connects the public with the research of the Desert Fireball Network (DFN). This research aims to understand the early workings of the solar system, and Fireballs in the Sky invites people around the world to learn about this science, contributing fireball sightings via a user-friendly augmented reality mobile app. Tens of thousands of people have downloaded the app world-wide and participated in the science of meteoritics. The Fireballs in the Sky app allows users to get involved with the Desert Fireball Network research, supplementing DFN observations and providing enhanced coverage by reporting their own meteor sightings to DFN scientists. Fireballs in the Sky reports are used to track the trajectories of meteors - from their orbit in space to where they might have landed on Earth. Led by Phil Bland at Curtin University in Australia, the Desert Fireball Network (DFN) uses automated observatories across Australia to triangulate trajectories of meteorites entering the atmosphere, determine pre-entry orbits, and pinpoint their fall positions. Each observatory is an autonomous intelligent imaging system, taking 1000 36-megapixel all-sky images throughout the night, using neural network algorithms to recognize events. They are capable of operating for 12 months in a harsh environment, and store all imagery collected. We developed a completely automated software pipeline for data reduction, and built a supercomputer database for storage, allowing us to process our entire archive. The DFN currently stands at 50 stations distributed across the Australian continent, covering an area of 2.5 million km^2. Working with DFN's partners at NASA's Solar System Exploration Research Virtual Institute, the team is expanding the network beyond Australia to locations around the world. Fireballs in the Sky allows a growing public base to learn about and participate in this exciting research.
Fireballs in the Sky: An Augmented Reality Citizen Science Program
NASA Technical Reports Server (NTRS)
Day, Brian
2017-01-01
Fireballs in the Sky is an innovative Australian citizen science program that connects the public with the research of the Desert Fireball Network (DFN). This research aims to understand the early workings of the solar system, and Fireballs in the Sky invites people around the world to learn about this science, contributing fireball sightings via a user-friendly augmented reality mobile app. Tens of thousands of people have downloaded the app world-wide and participated in the science of meteoritics. The Fireballs in the Sky app allows users to get involved with the Desert Fireball Network research, supplementing DFN observations and providing enhanced coverage by reporting their own meteor sightings to DFN scientists. Fireballs in the Sky reports are used to track the trajectories of meteors - from their orbit in space to where they might have landed on Earth. Led by Phil Bland at Curtin University in Australia, the Desert Fireball Network (DFN) uses automated observatories across Australia to triangulate trajectories of meteorites entering the atmosphere, determine pre-entry orbits, and pinpoint their fall positions. Each observatory is an autonomous intelligent imaging system, taking 1000 36-megapixel all-sky images throughout the night, using neural network algorithms to recognize events. They are capable of operating for 12 months in a harsh environment, and store all imagery collected. We developed a completely automated software pipeline for data reduction, and built a supercomputer database for storage, allowing us to process our entire archive. The DFN currently stands at 50 stations distributed across the Australian continent, covering an area of 2.5 million square kilometers. Working with DFN's partners at NASA's Solar System Exploration Research Virtual Institute, the team is expanding the network beyond Australia to locations around the world. Fireballs in the Sky allows a growing public base to learn about and participate in this exciting research.
Turbulent flows over superhydrophobic surfaces with shear-dependent slip length
NASA Astrophysics Data System (ADS)
Khosh Aghdam, Sohrab; Seddighi, Mehdi; Ricco, Pierre
2015-11-01
Motivated by recent experimental evidence, shear-dependent slip length superhydrophobic surfaces are studied. Lyapunov stability analysis is applied in a 3D turbulent channel flow and extended to the shear-dependent slip-length case. The feedback law extracted is recognized for the first time to coincide with the constant-slip-length model widely used in simulations of hydrophobic surfaces. The condition for the slip parameters is found to be consistent with the experimental data and with values from DNS. The theoretical approach by Fukagata (PoF 18.5: 051703) is employed to model the drag-reduction effect engendered by the shear-dependent slip-length surfaces. The estimated drag-reduction values are in very good agreement with our DNS data. For slip parameters and flow conditions which are potentially realizable in the lab, the maximum computed drag reduction reaches 50%. The power spent by the turbulent flow on the walls is computed, thereby recognizing the hydrophobic surfaces as a passive-absorbing drag-reduction method, as opposed to geometrically-modifying techniques that do not consume energy, e.g. riblets, hence named passive-neutral. The flow is investigated by visualizations, statistical analysis of vorticity and strain rates, and quadrants of the Reynolds stresses. Part of this work was funded by Airbus Group. Simulations were performed on the ARCHER Supercomputer (UKTC Grant).
48 CFR 2115.404-71 - Profit analysis factors.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., enrollees, beneficiaries, and Congress as measures of economical and efficient contract performance. This..., etc., having viability to the Program at large. Improvements and innovations recognized and rewarded...
The magnitude of innovation and its evolution in social animals.
Arbilly, Michal; Laland, Kevin N
2017-02-08
Innovative behaviour in animals, ranging from invertebrates to humans, is increasingly recognized as an important topic for investigation by behavioural researchers. However, what constitutes an innovation remains controversial, and difficult to quantify. Drawing on a broad definition whereby any behaviour with a new component to it is an innovation, we propose a quantitative measure, which we call the magnitude of innovation, to describe the extent to which an innovative behaviour is novel. This allows us to distinguish between innovations that are a slight change to existing behaviours (low magnitude), and innovations that are substantially different (high magnitude). Using mathematical modelling and evolutionary computer simulations, we explored how aspects of social interaction, cognition and natural selection affect the frequency and magnitude of innovation. We show that high-magnitude innovations are likely to arise regularly even if the frequency of innovation is low, as long as this frequency is relatively constant, and that the selectivity of social learning and the existence of social rewards, such as prestige and royalties, are crucial for innovative behaviour to evolve. We suggest that consideration of the magnitude of innovation may prove a useful tool in the study of the evolution of cognition and of culture. © 2017 The Author(s).
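The evolutionary computer simulations described in this abstract can be caricatured with a toy model (a hypothetical sketch for illustration only, not the authors' actual model): behaviours are points on a line, innovations arise at a low but constant rate, and an innovation's magnitude is its distance to the nearest behaviour already in the repertoire.

```python
import random

def simulate_innovation(n_steps=1000, innovation_rate=0.05, step_scale=1.0, seed=42):
    """Toy model: a repertoire of behaviours as real numbers. Each innovation
    perturbs a randomly chosen existing behaviour; its magnitude is the
    distance to the nearest behaviour already present."""
    rng = random.Random(seed)
    repertoire = [0.0]          # start with a single established behaviour
    magnitudes = []
    for _ in range(n_steps):
        if rng.random() < innovation_rate:   # innovation is rare but steady
            base = rng.choice(repertoire)
            candidate = base + rng.gauss(0, step_scale)
            magnitude = min(abs(candidate - b) for b in repertoire)
            magnitudes.append(magnitude)
            repertoire.append(candidate)
    return repertoire, magnitudes

repertoire, magnitudes = simulate_innovation()
high = [m for m in magnitudes if m > 1.5]    # arbitrary "high magnitude" cut-off
print(f"{len(magnitudes)} innovations, {len(high)} of high magnitude")
```

Even with a low, constant innovation rate, repeated runs of a model like this occasionally produce large-magnitude departures, which is the qualitative pattern the paper reports.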
None
2018-05-01
A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed "Ice Storm," this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen. For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.
Open Skies Project Computational Fluid Dynamic Analysis
1994-03-01
[Front-matter extraction residue: table-of-contents and list-of-figures entries, including "VSAERO Results on the Alternate Fairing," "Centerline Cp Comparisons," and "VSAERO Wing Effects Study."] ...The assistance Mrs. Mary Ann Mages at Kirtland Supercomputer Center (PL/SCPR) gave by setting a precedent for supercomputer account...
Porting Ordinary Applications to Blue Gene/Q Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy
2015-08-31
Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
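The sub-jobs idea, packing many small independent tasks into a few large resource blocks so the scheduler sees only big jobs, can be sketched generically (a hypothetical Python illustration; the actual implementation uses Swift and Cobalt's sub-block jobs, not this code):

```python
def pack_subjobs(tasks, nodes_per_task, block_size):
    """Greedy sketch of the 'sub-jobs' idea: pack many small, independent
    tasks into large resource blocks. Returns a list of blocks, each a
    list of tasks that fit side by side within block_size nodes."""
    blocks, current, used = [], [], 0
    for task in tasks:
        if used + nodes_per_task > block_size:   # block is full, start a new one
            blocks.append(current)
            current, used = [], 0
        current.append(task)
        used += nodes_per_task
    if current:
        blocks.append(current)
    return blocks

# 100 single-node tasks fit inside one hypothetical 512-node block
blocks = pack_subjobs([f"task{i}" for i in range(100)], nodes_per_task=1, block_size=512)
print(len(blocks), len(blocks[0]))   # prints: 1 100
```

The point of the packing is that the machine's scheduler handles one large allocation instead of hundreds of tiny jobs, which is what many-task workloads conflict with on leadership-class systems.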
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack
20th Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer, installed earlier this year at the Earth Simulator Center in Yokohama, Japan, retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second). The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems are built by Hewlett-Packard and based on the AlphaServer SC computer system.
STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.
Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X
2009-08-01
This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.
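The pipeline-fashion chaining of post-processing tools that STAMP performs can be sketched abstractly (a hypothetical illustration; the stage names below merely stand in for SPM/FSL/HAMMER invocations and are not STAMP's API):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    run: Callable[[str], str]   # takes an input path, returns an output path

def run_pipeline(image: str, stages: List[Stage]) -> str:
    """Run post-processing stages in sequence, pipeline-style:
    each stage consumes the previous stage's output."""
    current = image
    for stage in stages:
        current = stage.run(current)
    return current

# Hypothetical stages; a real pipeline would shell out to SPM, FSL, etc.
stages = [
    Stage("skull_strip", lambda p: p + ".stripped"),
    Stage("nonlinear_register", lambda p: p + ".registered"),
    Stage("segment", lambda p: p + ".segmented"),
]
print(run_pipeline("subject01.nii", stages))
# prints: subject01.nii.stripped.registered.segmented
```

On a PBS-based cluster, one such pipeline instance per subject can be submitted as an independent job, which is how a large image set parallelizes naturally.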
Japanese project aims at supercomputer that executes 10 gflops
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burskey, D.
1984-05-03
Dubbed Supercom by its multicompany design team, the decade-long project's goal is an engineering supercomputer that can execute 10 billion floating-point operations/s, about 20 times faster than today's supercomputers. The project, guided by Japan's Ministry of International Trade and Industry (MITI) and the Agency of Industrial Science and Technology, encompasses three parallel research programs, all aimed at some aspect of the supercomputer. One program should lead to superfast logic and memory circuits, another to a system architecture that will afford the best performance, and the last to the software that will ultimately control the computer. The work on logic and memory chips is based on GaAs circuits, Josephson junction devices, and high-electron-mobility transistor structures. The architecture will involve parallel processing.
The role of innovative global institutions in linking knowledge and action.
van Kerkhoff, Lorrae; Szlezák, Nicole A
2016-04-26
It is becoming increasingly recognized that our collective ability to tackle complex problems will require the development of new, adaptive, and innovative institutional arrangements that can deal with rapidly changing knowledge and have effective learning capabilities. In this paper, we applied a knowledge-systems perspective to examine how institutional innovations can affect the generation, sharing, and application of scientific and technical knowledge. We report on a case study that examined the effects that one large innovative organization, The Global Fund to Fight AIDS, Tuberculosis, and Malaria, is having on the knowledge dimensions of decision-making in global health. The case study shows that the organization created demand for new knowledge from a range of actors, but it did not incorporate strategies for meeting this demand into their own rules, incentives, or procedures. This made it difficult for some applicants to meet the organization's dual aims of scientific soundness and national ownership of projects. It also highlighted that scientific knowledge needed to be integrated with managerial and situational knowledge for success. More generally, the study illustrates that institutional change targeting implementation can also significantly affect the dynamics of knowledge creation (learning), access, distribution, and use. Recognizing how action-oriented institutions can affect these dynamics across their knowledge system can help institutional designers build more efficient and effective institutions for sustainable development.
Albin, Ramona C
2010-12-01
The framers of the U.S. Constitution believed that intellectual property rights were crucial to scientific advancement. Yet, the framers also recognized the need to balance innovation, privatization, and public use. The courts' expansion of patent protection for biotechnology innovations in the last 30 years raises the question whether the patent system effectively balances these concerns. While the question is not new, only through a thorough and thoughtful examination of these issues can the current system be evaluated. It is then a policy decision for Congress if any change is necessary.
Japanese supercomputer technology.
Buzbee, B L; Ewald, R H; Worlton, W J
1982-12-17
Under the auspices of the Ministry for International Trade and Industry the Japanese have launched a National Superspeed Computer Project intended to produce high-performance computers for scientific computation and a Fifth-Generation Computer Project intended to incorporate and exploit concepts of artificial intelligence. If these projects are successful, which appears likely, advanced economic and military research in the United States may become dependent on access to supercomputers of foreign manufacture.
Supercomputer Simulations Help Develop New Approach to Fight Antibiotic Resistance
Zgurskaya, Helen; Smith, Jeremy
2018-06-13
ORNL leveraged powerful supercomputing to support research led by University of Oklahoma scientists to identify chemicals that seek out and disrupt bacterial proteins called efflux pumps, known to be a major cause of antibiotic resistance. By running simulations on Titan, the team selected molecules most likely to target and potentially disable the assembly of efflux pumps found in E. coli bacteria cells.
Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan
While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. On the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
Nwaka, Solomon; Ochem, Alexander; Besson, Dominique; Ramirez, Bernadette; Fakorede, Foluke; Botros, Sanaa; Inyang, Uford; Mgone, Charles; Adae-Mensah, Ivan; Konde, Victor; Nyasse, Barthelemy; Okole, Blessed; Guantai, Anastasia; Loots, Glaudina; Atadja, Peter; Ndumbe, Peter; Sanou, Issa; Olesen, Ole; Ridley, Robert; Ilunga, Tshinko
2012-07-27
A pool of 38 pan-African Centres of Excellence (CoEs) in health innovation has been selected and recognized by the African Network for Drugs and Diagnostics Innovation (ANDI) through a competitive, criteria-based process. The process identified a number of opportunities and challenges for health R&D and innovation in the continent: i) it provides direct evidence for the existence of innovation capability that can be leveraged to fill specific gaps in the continent; ii) it reveals a research and financing pattern that is largely fragmented and uncoordinated; and iii) it highlights the most frequent funders of health research in the continent. The CoEs are envisioned as an innovative network of public and private institutions with a critical mass of expertise and resources to support projects and a variety of activities for capacity building and scientific exchange, including hosting fellows, trainees, and scientists on sabbaticals, and exchange with other African and non-African institutions.
Impact on Learning Awards, 2001.
ERIC Educational Resources Information Center
School Planning & Management, 2001
2001-01-01
Recognizes 14 architectural firms for their innovative designs, which helped solve real-world problems in K-12 school facilities. Designs for retrofits, safety and security, and specialized learning environments are profiled and critiqued. (GR)
Next Generation Security for the 10,240 Processor Columbia System
NASA Technical Reports Server (NTRS)
Hinke, Thomas; Kolano, Paul; Shaw, Derek; Keller, Chris; Tweton, Dave; Welch, Todd; Liu, Wen (Betty)
2005-01-01
This presentation includes a discussion of the Columbia 10,240-processor system located at the NASA Advanced Supercomputing (NAS) division at the NASA Ames Research Center, which supports each of NASA's four missions: science, exploration systems, aeronautics, and space operations. It comprises 20 Silicon Graphics nodes, each consisting of 512 Itanium II processors. A 64-processor Columbia front-end system supports users as they prepare their jobs and then submit them to the PBS system. Columbia nodes and front-end systems use the Linux OS. Prior to SC04, the Columbia system was used to attain a processing speed of 51.87 TeraFlops, which made it number two on the Top 500 list of the world's supercomputers and the world's fastest "operational" supercomputer, since it was fully engaged in supporting NASA users.
2012-09-01
...and reduce program lifecycle costs by expanding the pool of vendors and incorporating small innovative high-tech businesses in defense IT...acquisition. Particularly within the high-tech IT sector, small businesses have been consistently recognized as exceptional resources for the research and...
Performance measures for public transit mobility management.
DOT National Transportation Integrated Search
2011-12-01
"Mobility management is an innovative approach for managing and delivering coordinated public transportation services that embraces the full family of public transit options. At a national level, there are currently no industry recognized perform...
CFD applications: The Lockheed perspective
NASA Technical Reports Server (NTRS)
Miranda, Luis R.
1987-01-01
The Numerical Aerodynamic Simulator (NAS) epitomizes the coming of age of supercomputing and opens exciting horizons in the world of numerical simulation. An overview of supercomputing at Lockheed Corporation in the area of Computational Fluid Dynamics (CFD) is presented. This overview focuses on developments and applications of CFD as an aircraft design tool and attempts to present an assessment, within this context, of the state of the art in CFD methodology.
Computational mechanics analysis tools for parallel-vector supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning
1993-01-01
Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, U.A.; Baumle, B.; Kohler, P.
1992-10-01
Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.
A Heterogeneous High-Performance System for Computational and Computer Science
2016-11-15
...team of research faculty from the departments of computer science and natural science at Bowie State University. The supercomputer is not only to...accelerated HPC systems. The supercomputer is also ideal for the research conducted in the Department of Natural Science, as research faculty work on...
LLMapReduce: Multi-Lingual Map-Reduce for Supercomputing Environments
2015-11-20
...1990s. Popularized by Google [36] and Apache Hadoop [37], map-reduce has become a staple technology of the ever-growing big data community...The map-reduce parallel programming model has become extremely popular in the big data community. Many big data...to big data users running on a supercomputer. LLMapReduce dramatically simplifies map-reduce programming by providing simple parallel programming...
Advanced Numerical Techniques of Performance Evaluation. Volume 1
1990-06-01
...system scheduling thread. The scheduling thread then runs any other ready thread that can be found. A thread can only sleep or switch out on itself...C.D. Polychronopoulos and D.J. Kuck. Guided Self-Scheduling: A Practical Scheduling Scheme for Parallel Supercomputers. IEEE Transactions on Computers...
2014-03-01
...wind turbines from General Electric. China recognizes the issues with IPR but it is something that will take time to fix. It will be a significant...Large aircraft; large-scale oil and gas exploration; manned space, including lunar exploration; next-generation broadband wireless...circuits, and building an innovation system for China’s integrated circuit (IC) manufacturing industry. 3. New generation broadband wireless mobile...
2010-03-01
experience in the book Administration Industrielle et Générale, where he developed his fourteen principles of administration. Fayol claimed that...is at the heart of succession planning. The LAFD should recognize the innovation and new ideas of our young generation, and incorporate them into...created, and shared in an organizational context; to foster creativity and innovation for competitive advantage. According to Nonaka, knowledge is
ERIC Educational Resources Information Center
Carroll, Heather; Chandrashekhar, Shwetha; Huang, Danny; Kim, David; Liu, Peter
2015-01-01
In light of the enormous changes unfolding in the higher education landscape, we don't have to look too far to recognize evidence of the transformation and redefinition of the construct of both teaching and learning in the information age. With a growing focus on teaching and learning at all levels of post-secondary institutions, innovation is…
Computational mechanics analysis tools for parallel-vector supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.
1993-01-01
Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.
NASA's Pleiades Supercomputer Crunches Data For Groundbreaking Analysis and Visualizations
2016-11-23
The Pleiades supercomputer at NASA's Ames Research Center, recently named the 13th fastest computer in the world, provides scientists and researchers with high-fidelity numerical modeling of complex systems and processes. By enabling detailed analyses and visualizations of large-scale data, Pleiades is helping to advance human knowledge and technology, from designing the next generation of aircraft and spacecraft to understanding the Earth's climate and the mysteries of our galaxy.
A Long History of Supercomputing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grider, Gary
As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.
Introducing Argonne’s Theta Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Theta, the Argonne Leadership Computing Facility’s (ALCF) new Intel-Cray supercomputer, is officially open to the research community. Theta’s massively parallel, many-core architecture puts the ALCF on the path to Aurora, the facility’s future Intel-Cray system. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.
ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.
Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping
2018-04-27
A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
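The "low-cost yet effective load balancing strategy" is not specified in this abstract, but the general problem it solves can be illustrated with a standard greedy scheme, longest-processing-time-first (a hypothetical stand-in, not paraBTM's actual algorithm): assign the largest documents first, always to the currently least-loaded worker.

```python
import heapq

def balance(doc_sizes, n_workers):
    """Greedy LPT load balancing: sort documents by size (descending) and
    repeatedly give the next one to the least-loaded worker, tracked in a heap."""
    heap = [(0, w) for w in range(n_workers)]   # (current load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for size in sorted(doc_sizes, reverse=True):
        load, w = heapq.heappop(heap)           # least-loaded worker
        assignment[w].append(size)
        heapq.heappush(heap, (load + size, w))
    return assignment

docs = [90, 80, 40, 30, 20, 10]                 # document sizes, arbitrary units
plan = balance(docs, 3)
print({w: sum(s) for w, s in plan.items()})     # prints: {0: 90, 1: 90, 2: 90}
```

Balancing by document size matters for text mining because per-document NER cost roughly tracks text length; a naive round-robin split can leave one Tianhe-2 node grinding through the longest documents while the rest sit idle.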
Graphics supercomputer for computational fluid dynamics research
NASA Astrophysics Data System (ADS)
Liaw, Goang S.
1994-11-01
The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI) was purchased and installed. Other equipment, including a desktop personal computer (a PC-486 DX2 with a built-in 10-BaseT Ethernet card), a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.
Modelling sodium cobaltate by mapping onto magnetic Ising model
NASA Astrophysics Data System (ADS)
Gemperline, Patrick; Morris, David Jonathan Pryce
Fast ion conductors are a class of crystals that are frequently used as battery materials, especially in smart phones, laptops, and other portable devices. Sodium cobalt oxide, NaxCoO2, falls into this class of crystals, but is unique because it can act as a thermoelectric material and a superconductor at different concentrations of Na+. The crystal lattice is mapped onto an Ising magnetic spin model, and a Monte Carlo simulation is used to find the most energetically favorable configuration of spins. This spin configuration is mapped back to the crystal lattice, resulting in the most stable crystal structure of sodium cobalt oxide at various concentrations. Knowing the atomic structures of the crystals will aid research into the material's capabilities and its possible commercial uses. Acknowledgments: the Ohio Supercomputer Center (Columbus, OH) and the John Hauck Foundation.
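The mapping described here, occupancy of a site encoded as an Ising spin and sampled with Metropolis Monte Carlo, can be sketched on a toy square lattice (illustrative parameters and geometry only; the actual study uses the NaxCoO2 lattice and its fitted interactions):

```python
import math, random

def metropolis_ising(n=8, beta=1.0, J=1.0, steps=20000, seed=1):
    """Metropolis Monte Carlo on an n x n Ising lattice with periodic
    boundaries. Spin +1 ~ occupied Na site, -1 ~ vacancy; J is an
    illustrative effective interaction, not fitted to NaxCoO2."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        nbrs = spins[(i - 1) % n][j] + spins[(i + 1) % n][j] \
             + spins[i][(j - 1) % n] + spins[i][(j + 1) % n]
        dE = 2 * J * spins[i][j] * nbrs          # energy cost of flipping this spin
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1                    # accept the flip
    return spins

spins = metropolis_ising()
sites = [s for row in spins for s in row]
print(f"|magnetisation| per site: {abs(sum(sites)) / len(sites):.2f}")
```

Low-energy spin configurations found this way correspond, under the mapping, to energetically favorable Na+/vacancy orderings, which is what the back-mapping to the crystal lattice recovers.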
Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karbach, Carsten; Frings, Wolfgang
2013-02-22
This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of this project is the extension of the Parallel Tools Platform (PTP) for applying it to peta-scale systems. PTP is an integrated development environment for parallel applications. It comprises code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to work over high-latency connections. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer. For example, applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status.
A set of statistics, a list of running and queued jobs, and a node display mapping running jobs to their compute resources form the user display of LLview. These monitoring features have to be integrated into the development environment. Besides showing the current status, PTP's monitoring also needs to allow for submitting and canceling user jobs. Monitoring peta-scale systems especially means presenting the large amount of status data in a useful manner. Users need to be able to select arbitrary levels of detail. The monitoring views have to provide a quick overview of the system state, but also need to allow for zooming into the specific parts of the system in which the user is interested. At present, the major batch systems running on supercomputers are PBS, TORQUE, ALPS and LoadLeveler, which have to be supported by both the monitoring and the job-controlling component. Finally, PTP needs to be designed to be as generic as possible, so that it can be extended for future batch systems.
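The monitoring workflow the abstract describes (collect status data by evaluating scheduler output, then display statistics and a job list) can be sketched as follows. This is a minimal illustration only: the job-line format, field names, and summary keys are hypothetical, not PTP's or LLview's actual protocol.

```python
# Minimal sketch: a client parses a (simulated) remote scheduler's plain-text
# job listing into structured records, then aggregates the statistics a
# monitoring view would display. Format and names are hypothetical.

def parse_scheduler_output(text):
    """Parse one job per line: '<id> <user> <state> <nodes>'."""
    jobs = []
    for line in text.strip().splitlines():
        job_id, user, state, nodes = line.split()
        jobs.append({"id": job_id, "user": user,
                     "state": state, "nodes": int(nodes)})
    return jobs

def summarize(jobs):
    """Aggregate counts of running/queued jobs and busy nodes."""
    running = [j for j in jobs if j["state"] == "R"]
    queued = [j for j in jobs if j["state"] == "Q"]
    return {"running": len(running), "queued": len(queued),
            "busy_nodes": sum(j["nodes"] for j in running)}

raw = """\
1001 alice R 512
1002 bob   Q 2048
1003 carol R 128
"""
print(summarize(parse_scheduler_output(raw)))
# {'running': 2, 'queued': 1, 'busy_nodes': 640}
```

In a real client-server design, only the aggregated summary (plus the selected level of detail) would cross the high-latency link, which is one way to keep the monitoring scalable.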
TU-C-BRF-01: Innovation in Medical Physics and Engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, R; Pelc, N; Jaffray, D
We seek to heighten awareness of the role of research and innovation that leads to clinical advances in the field of medical physics and engineering. Marie Curie (discovery and use of radium) and Harold Johns (Co-60 tele-therapy) in radiotherapy, and pioneers in imaging (Allan Cormack and Godfrey Hounsfield for CT, and Paul Lauterbur and Peter Mansfield for MRI, among others) were scientists often struggling against great odds. Examples of more recent innovations that are clearly benefiting our patients include IMRT, image-guided radiation therapy and surgery, particle therapy, and quantitative imaging, among others. We would also like to highlight the fact that not all of the discovery and engineering we benefit from in today's world was performed at research institutions alone. Rather, companies often tread new ground at financial and reputational risk. Indeed, the strength of the private sector is needed to bring new advances to our practice. The keys to long-term success in research and development may very well include more public and private research spending. But when more investigators are funded, we also need to recognize that funding institutions, academic centers, and investigators must be willing to risk failure for the greater potential achievements in innovation and research. The speakers will provide examples and insight into the fields of innovation and research in medical physics from their own firsthand experiences. Learning Objectives: To obtain an understanding of the importance of research and development toward advances in physics in medicine. To raise awareness of the role of interdisciplinary collaborations in translational research and innovation. To highlight the importance of entrepreneurship and industrial-institutional research partnerships in fostering new ideas and their commercial success.
To recognize and account for the risk of failure for the greater potential achievements in innovation and research.
10 rules for managing global innovation.
Wilson, Keeley; Doz, Yves L
2012-10-01
More and more companies recognize that their dispersed, global operations are a treasure trove of ideas and capabilities for innovation. But it's proving harder than expected to unearth those ideas or exploit those capabilities. Part of the problem is that companies manage global innovation the same way they manage traditional, single-location projects. Single-location projects draw on a large reservoir of tacit knowledge, shared context, and trust that global projects lack. The management challenge, therefore, is to replicate the positive aspects of colocation while harnessing the opportunities of dispersion. In this article, Insead's Wilson and Doz draw on research into global strategy and innovation to present a set of guidelines for setting up and managing global innovation. They explore in detail the challenges that make global projects inherently different and show how these can be overcome by applying superior project management skills across teams, fostering a strong collaborative culture, and using a robust array of communications tools.
NASA Technical Reports Server (NTRS)
Baker, John
2010-01-01
Among the fascinating phenomena predicted by General Relativity, Einstein's theory of gravity, black holes and gravitational waves are particularly important in astronomy. Though once viewed as a mathematical oddity, black holes are now recognized as the central engines of many of astronomy's most energetic cataclysms. Gravitational waves, though weakly interacting with ordinary matter, may be observed with new gravitational wave telescopes, opening a new window to the universe. These observations promise a direct view of the strong gravitational dynamics involving dense, often dark objects, such as black holes. The most powerful of these events may be the merger of two colliding black holes. Though dark, these mergers may briefly release more energy in gravitational waves than all the stars in the visible universe. General relativity makes precise predictions for the gravitational-wave signatures of these events, predictions which we can now calculate with the aid of supercomputer simulations. These results provide a foundation for interpreting expected observations in the emerging field of gravitational wave astronomy.
Neuropeptide Signaling Networks and Brain Circuit Plasticity.
McClard, Cynthia K; Arenkiel, Benjamin R
2018-01-01
The brain is a remarkable network of circuits dedicated to sensory integration, perception, and response. The computational power of the brain is estimated to dwarf that of most modern supercomputers, but perhaps its most fascinating capability is to structurally refine itself in response to experience. In the language of computers, the brain is loaded with programs that encode when and how to alter its own hardware. This programmed "plasticity" is a critical mechanism by which the brain shapes behavior to adapt to changing environments. The expansive array of molecular commands that help execute this programming is beginning to emerge. Notably, several neuropeptide transmitters, previously best characterized for their roles in hypothalamic endocrine regulation, have increasingly been recognized for mediating activity-dependent refinement of local brain circuits. Here, we discuss recent discoveries that reveal how local signaling by corticotropin-releasing hormone reshapes mouse olfactory bulb circuits in response to activity and further explore how other local neuropeptide networks may function toward similar ends.
Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozacik, Stephen
Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.
[Innovative medicinal products: the new criteria of the Italian Medicines Agency].
Mammarella, Federica; Tafuri, Giovanni
2018-05-01
The Italian Medicines Agency (AIFA), which has the dual function of a regulatory and a reimbursement authority, has recently established new criteria to define innovative medicinal products. Indeed, the decision making process to grant the innovative status is based on the evaluation of the unmet medical need, the added therapeutic value compared to existing therapeutic options and the overall quality of clinical evidence, which is assessed based on the GRADE system. Following this evaluation, if a medicinal product is granted the status of "full innovativeness" for a specific therapeutic indication, its manufacturer can access dedicated yearly funds amounting to 500 million Euros each, depending on the type of medicine (one fund for oncology, the other for all other innovative medicinal products). Alternatively, the product can be granted the status of "conditional innovativeness" which allows immediate access to all Regional formularies, with no additional re-assessments at the local level. The third possible outcome is that no innovativeness is recognized. Starting from January 2018, a full report explaining the rationale for the Agency Committee's decision is made publicly available on the AIFA's website.
Our science matters - and is recognized
USDA-ARS?s Scientific Manuscript database
The Presidential Task Force on Agriculture and Rural Prosperity listed five key indicators of rural prosperity: e-Connectivity for Rural America, Improving Quality of Life, Supporting a Rural Workforce, Harnessing Technological Innovation, and Economic Development (https://www.usda.gov/sites/default...
Project Spectrum: An Innovative Assessment Alternative.
ERIC Educational Resources Information Center
Krechevsky, Mara
1991-01-01
Project Spectrum attempts to reconceptualize the traditional linguistic and logical/mathematical bases of intelligence. Spectrum blurs the line between curriculum and assessment, embeds assessment in meaningful, real-world activities, uses "intelligence-fair" measures, emphasizes children's strengths, and recognizes the stylistic…
Development of seismic tomography software for hybrid supercomputers
NASA Astrophysics Data System (ADS)
Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton
2015-04-01
Seismic tomography is a technique for computing a velocity model of a geologic structure from first-arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications, with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architectures that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed based on an assumed velocity model of the geologic structure being analyzed. To solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel-time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on the target architectures is considered.
During the first stage of this work, algorithms were developed for execution on supercomputers using multicore CPUs only, with preliminary performance tests showing good parallel efficiency on large numerical grids. Porting of the algorithms to hybrid supercomputers is currently ongoing.
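The linearized inversion step in the scheme above (a tomographic matrix connecting model adjustments to travel-time residuals, regularized and solved) can be illustrated on a toy problem. The matrix, regularization choice (Tikhonov), and dimensions here are illustrative assumptions, not the paper's actual algorithm.

```python
# Toy sketch of the regularized linearized update: given a tomographic
# matrix G relating slowness adjustments dm to travel-time residuals r,
# solve the Tikhonov-regularized normal equations (G^T G + lam*I) dm = G^T r.
import numpy as np

def tomographic_update(G, r, lam=1e-2):
    """Return the regularized least-squares model adjustment."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ r)

# Tiny synthetic example: 4 rays through 3 cells, known perturbation.
rng = np.random.default_rng(0)
G = rng.random((4, 3))                 # ray-path lengths through each cell
dm_true = np.array([0.1, -0.05, 0.02])
r = G @ dm_true                        # synthetic travel-time residuals
dm = tomographic_update(G, r, lam=1e-8)
print(np.allclose(dm, dm_true, atol=1e-4))  # True
```

In practice this solve is iterated, since the travel times (and hence G) depend non-linearly on the velocity model through the eikonal equation.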
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding when solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolutions within a reasonable time. However, to make them a useful tool, several practical obstacles must be tackled so that large numbers of processors can be used effectively by general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2-family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million-cell grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water to larger ones at reservoir scale.
The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. Generally, this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).
NASA Technical Reports Server (NTRS)
1986-01-01
Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.
A Long History of Supercomputing
Grider, Gary
2018-06-13
As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation's first computer to building the first machine to break the petaflop barrier, Los Alamos holds many "firsts" in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.
2014-09-01
simulation time frame from 30 days to one year. This was enabled by porting the simulation to the Pleiades supercomputer at NASA Ames Research Center, a...including the motivation for changes to our past approach. We then present the software implementation (3) on the NASA Ames Pleiades supercomputer...significantly updated since last year’s paper [25]. The main incentive for that was the shift to a highly parallel approach in order to utilize the Pleiades
Parallel-Vector Algorithm For Rapid Structural Anlysis
NASA Technical Reports Server (NTRS)
Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.
1993-01-01
New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.
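The variable-band (skyline) idea behind the storage schemes discussed above can be sketched briefly: for a symmetric matrix, store each column only from its first nonzero entry down to the diagonal, so storage follows the matrix's actual profile rather than a fixed bandwidth. The sketch below is illustrative only; the paper's actual scheme targets vector and parallel hardware and differs in detail.

```python
# Sketch of variable-band (skyline) storage for a symmetric matrix:
# pack each upper-triangular column from its first nonzero down to the
# diagonal into one flat array, recording each column's start.

def to_skyline(A):
    """Pack upper-triangular columns of symmetric A into a flat array."""
    n = len(A)
    values, col_start = [], []
    for j in range(n):
        # first stored row of column j (diagonal is always stored)
        i0 = next(i for i in range(j + 1) if A[i][j] != 0 or i == j)
        col_start.append((len(values), i0))
        values.extend(A[i][j] for i in range(i0, j + 1))
    return values, col_start

def get(values, col_start, i, j):
    """Read A[i][j] back out (symmetric: swap so i <= j)."""
    if i > j:
        i, j = j, i
    base, i0 = col_start[j]
    return values[base + i - i0] if i >= i0 else 0

A = [[4, 1, 0],
     [1, 5, 2],
     [0, 2, 6]]
vals, starts = to_skyline(A)
print(len(vals), get(vals, starts, 2, 0))  # 5 0
```

Here the 3x3 matrix needs only 5 stored entries instead of 9, and the savings grow with the profile sparsity typical of finite-element stiffness matrices.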
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)
2002-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The scalability and portability of the approach are demonstrated on several parallel computers.
Science and Technology Review June 2000
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Pruneda, J.H.
2000-06-01
This issue contains the following articles: (1) ''Accelerating on the ASCI Challenge''. (2) ''New Day Dawns in Supercomputing'': When the ASCI White supercomputer comes online this summer, DOE's Stockpile Stewardship Program will make another significant advance toward helping to ensure the safety, reliability, and performance of the nation's nuclear weapons. (3) ''Uncovering the Secrets of Actinides'': Researchers are obtaining fundamental information about the actinides, a group of elements with a key role in nuclear weapons and fuels. (4) ''A Predictable Structure for Aerogels''. (5) ''Tibet--Where Continents Collide''.
Role of HPC in Advancing Computational Aeroelasticity
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2004-01-01
On behalf of the High Performance Computing and Modernization Program (HPCMP) and NASA Advanced Supercomputing Division (NAS) a study is conducted to assess the role of supercomputers on computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.
PerSEUS: Ultra-Low-Power High Performance Computing for Plasma Simulations
NASA Astrophysics Data System (ADS)
Doxas, I.; Andreou, A.; Lyon, J.; Angelopoulos, V.; Lu, S.; Pritchett, P. L.
2017-12-01
Peta-op SupErcomputing Unconventional System (PerSEUS) aims to explore the use for High Performance Scientific Computing (HPC) of ultra-low-power mixed signal unconventional computational elements developed by Johns Hopkins University (JHU), and demonstrate that capability on both fluid and particle Plasma codes. We will describe the JHU Mixed-signal Unconventional Supercomputing Elements (MUSE), and report initial results for the Lyon-Fedder-Mobarry (LFM) global magnetospheric MHD code, and a UCLA general purpose relativistic Particle-In-Cell (PIC) code.
Heart Fibrillation and Parallel Supercomputers
NASA Technical Reports Server (NTRS)
Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.
1997-01-01
The Luo-Rudy cardiac cell mathematical model is implemented on the parallel supercomputer Cray T3D. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.
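The variable-time-step strategy mentioned above, paired with explicit integration, can be sketched on a toy equation: take explicit steps, shrink the step when the state changes rapidly, and cautiously grow it back. All numerical choices below (test equation, tolerance, growth factor) are illustrative, not those of the paper.

```python
# Toy sketch of explicit integration with a variable time step, applied to
# the single test ODE dv/dt = -50*(v - 1). Illustrative parameters only.

def integrate(f, v0, t_end, dt0=1e-3, tol=1e-3):
    v, t, dt = v0, 0.0, dt0
    while t < t_end:
        dv = f(v) * dt
        if abs(dv) > tol:          # state changing fast: halve the step
            dt *= 0.5
            continue
        v += dv                    # accept the explicit Euler step
        t += dt
        dt = min(dt * 1.1, dt0)    # cautiously grow the step back
    return v

v_end = integrate(lambda v: -50.0 * (v - 1.0), v0=0.0, t_end=0.5)
print(round(v_end, 3))  # 1.0, the steady state
```

For stiff excitable-media models like cardiac cells, this kind of adaptivity concentrates small steps in the fast upstroke of the action potential while allowing large steps during slow recovery.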
NASA Technical Reports Server (NTRS)
Guruswamy, Guru
2004-01-01
A procedure to accurately generate aerodynamic influence coefficients (AIC) using a Navier-Stokes solver, including grid deformation, is presented. Preliminary results show good comparisons between experimental and computed flutter boundaries for a rectangular wing. A full wing-body configuration of an orbital space plane is selected for demonstration on a large number of processors. In the final paper, the AIC of the full wing-body configuration will be computed. The scalability of the procedure on a supercomputer will be demonstrated.
2017-12-08
Two rows of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS) contain more than 4,000 computer processors. Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo. To learn more about NCCS, go to www.nasa.gov/topics/earth/features/climate-sim-center.html. NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.
2017-12-08
This close-up view highlights one row—approximately 2,000 computer processors—of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS). Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo. To learn more about NCCS, go to www.nasa.gov/topics/earth/features/climate-sim-center.html. NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.
NASA Astrophysics Data System (ADS)
Wolff, J.; Jankov, I.; Beck, J.; Carson, L.; Frimel, J.; Harrold, M.; Jiang, H.
2016-12-01
It is well known that global and regional numerical weather prediction ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system for addressing the deficiencies in ensemble modeling is the use of stochastic physics to represent model-related uncertainty. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), Stochastic Perturbation of Physics Tendencies (SPPT), or some combination of all three. The focus of this study is to assess the model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) when using stochastic approaches. For this purpose, the test utilized a single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model, with ensemble members produced by employing stochastic methods. Parameter perturbations were employed in the Rapid Update Cycle (RUC) land surface model and Mellor-Yamada-Nakanishi-Niino (MYNN) planetary boundary layer scheme. Results will be presented in terms of bias, error, spread, skill, accuracy, reliability, and sharpness using the Model Evaluation Tools (MET) verification package. Due to the high level of complexity of running a frequently updating (hourly), high spatial resolution (3 km), large domain (CONUS) ensemble system, extensive high performance computing (HPC) resources were needed to meet this objective. 
Supercomputing resources were provided through the National Center for Atmospheric Research (NCAR) Strategic Capability (NSC) project support, allowing for a more extensive set of tests over multiple seasons, consequently leading to more robust results. Through the use of these stochastic innovations and powerful supercomputing at NCAR, further insights and advancements in ensemble forecasting at convection-permitting scales will be possible.
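The Stochastic Parameter Perturbation (SPP) approach described above can be sketched schematically: each ensemble member runs with physics parameters drawn by perturbing default values within fixed bounds. The parameter names, ranges, and sampling scheme below are made-up illustrations, not the HRRR ensemble's actual configuration.

```python
# Schematic of SPP-style ensemble generation: each member gets its own
# multiplicatively perturbed copy of the default physics parameters.
# Parameter names and the +/-20% spread are hypothetical.
import random

def perturb_parameters(defaults, spread=0.2, n_members=5, seed=42):
    """Return n_members parameter dicts, each perturbed within bounds."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        members.append({k: v * rng.uniform(1 - spread, 1 + spread)
                        for k, v in defaults.items()})
    return members

defaults = {"roughness_length": 0.1, "mixing_length": 25.0}
for i, m in enumerate(perturb_parameters(defaults)):
    print(i, {k: round(v, 3) for k, v in m.items()})
```

Because every member shares one physics suite and differs only in sampled parameters, the ensemble stays easier to maintain than a multi-physics or multi-core configuration, which is the practical argument the abstract makes for stochastic approaches.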
Scheduling Operations for Massive Heterogeneous Clusters
NASA Technical Reports Server (NTRS)
Humphrey, John; Spagnoli, Kyle
2013-01-01
High-performance computing (HPC) programming has become increasingly difficult with the advent of hybrid supercomputers consisting of multicore CPUs and accelerator boards such as the GPU. Manual tuning of software to achieve high performance on this type of machine has been performed by programmers. This is needlessly difficult and prone to being invalidated by new hardware, new software, or changes in the underlying code. A system was developed for task-based representation of programs, which when coupled with a scheduler and runtime system, allows for many benefits, including higher performance and utilization of computational resources, easier programming and porting, and adaptations of code during runtime. The system consists of a method of representing computer algorithms as a series of data-dependent tasks. The series forms a graph, which can be scheduled for execution on many nodes of a supercomputer efficiently by a computer algorithm. The schedule is executed by a dispatch component, which is tailored to understand all of the hardware types that may be available within the system. The scheduler is informed by a cluster mapping tool, which generates a topology of available resources and their strengths and communication costs. Software is decoupled from its hardware, which aids in porting to future architectures. A computer algorithm schedules all operations, which for systems of high complexity (i.e., most NASA codes), cannot be performed optimally by a human. The system aids in reducing repetitive code, such as communication code, and aids in the reduction of redundant code across projects. It adds new features to code automatically, such as recovering from a lost node or the ability to modify the code while running. In this project, the innovators at the time of this reporting intend to develop two distinct technologies that build upon each other and both of which serve as building blocks for more efficient HPC usage. 
First is the scheduling and dynamic execution framework, and the second is scalable linear algebra libraries that are built directly on the former.
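The task-based representation described above (algorithms expressed as a graph of data-dependent tasks, scheduled for execution by the runtime) can be sketched minimally with a topological ordering. The task names and dispatcher structure here are illustrative, not the project's actual framework, and a real scheduler would also weigh hardware strengths and communication costs.

```python
# Minimal sketch of task-graph execution: tasks declare their dependencies,
# the graph is ordered topologically, and a dispatcher runs each task once
# its inputs are available, passing results along graph edges.
from graphlib import TopologicalSorter

def run_task_graph(tasks):
    """tasks: {name: (deps, fn)}; runs fns in dependency order."""
    ts = TopologicalSorter({name: deps for name, (deps, _) in tasks.items()})
    results = {}
    for name in ts.static_order():        # dependencies come first
        deps, fn = tasks[name]
        results[name] = fn(*(results[d] for d in deps))
    return results

tasks = {
    "load":   ((), lambda: [3, 1, 2]),
    "sort":   (("load",), sorted),
    "sum":    (("load",), sum),
    "report": (("sort", "sum"), lambda s, total: (s, total)),
}
print(run_task_graph(tasks)["report"])  # ([1, 2, 3], 6)
```

In the full system, independent tasks like "sort" and "sum" could be dispatched concurrently to different devices (CPU cores, GPUs), which is where the performance and portability benefits come from.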
Physicists and Economic Growth: Preparing the Next Generation
NASA Astrophysics Data System (ADS)
Arion, Douglas
2012-02-01
For many years it has been recognized that many physicists are ``hidden'' -- deep in the industrial world or holding positions not named ``physicist.'' In parallel with this phenomenon is the recognition that many new and innovative product ideas are, in fact, generated by physicists. There are many more ideas that could be brought to market to the benefit of both society and the inventor, but physicists don't often see themselves as the innovators and inventors that they actually are. A number of education programs have arisen to try to address this issue and to engender a greater entrepreneurial spirit in the scientific community. The ScienceWorks program at Carthage College was one of the first to do so, and has for nearly twenty years prepared undergraduate science majors to understand and practice innovation and value creation. Other programs, such as professional masters degrees, also serve to bridge the technical and business universes. As it is no doubt easier to teach a scientist the world of business than it is to teach a businessperson the world of physics, providing educational experiences in innovation and commercialization to physics students can have tremendous economic impact, and will also better prepare them for whatever career direction they may ultimately pursue, even if it is the traditional tenure-track university position. This talk will discuss education programs that have been effective at preparing physics students for the professional work environment, and some of the positive outcomes that have resulted. Also discussed will be the variety of opportunities and resources that exist for faculty and students to develop the skills, knowledge and abilities to recognize and successfully commercialize innovations.
Adaptation and Cultural Diffusion.
ERIC Educational Resources Information Center
Ormrod, Richard K.
1992-01-01
Explores the role of adaptation in cultural diffusion. Explains that adaptation theory recognizes the lack of independence between innovations and their environmental settings. Discusses testing and selection, modification, motivation, and cognition. Suggests that adaptation effects are pervasive in cultural diffusion but require a broader, more…
Code of Federal Regulations, 2011 CFR
2011-10-01
... of Energy policy recognizes that full utilization of the talents and capabilities of a diverse work... and enhance partnerships with small, small disadvantaged, women-owned small businesses, and... disadvantaged, women-owned small business, and educational activity; and to develop innovative strategies to...
Comparative Analysis Of River Conservation In The United States And South Africa
Both the United States and South Africa are recognized for their strong and innovative approaches to the conservation of river ecosystems. These national programs possess similar driving legislation and ecoregional classification schemes supported by comprehensive monitoring prog...
Multi-petascale highly efficient parallel supercomputer
Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng
2015-07-14
A Multi-Petascale Highly Efficient Parallel Supercomputer of 100-petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The supercomputer exploits technological advances in VLSI that enable a computing model in which many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that maximizes the throughput of packet communications between nodes and minimizes latency.
NASA Astrophysics Data System (ADS)
Landgrebe, Anton J.
1987-03-01
An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.
Antenna pattern control using impedance surfaces
NASA Technical Reports Server (NTRS)
Balanis, Constantine A.; Liu, Kefeng
1992-01-01
During this research period, we effectively transferred existing computer codes from the CRAY supercomputer to workstation-based systems. The workstation-based version of our code preserved the accuracy of the numerical computations while giving a much better turnaround time than the CRAY supercomputer. This relieved us of heavy dependence on the supercomputer account budget and made the codes developed in this research project more feasible for applications. The analysis of pyramidal horns with impedance surfaces was our major focus during this research period. Three different modeling algorithms for analyzing lossy impedance surfaces were investigated and compared with measured data. Through this investigation, we discovered that a hybrid Fourier transform technique, which uses the eigenmodes in the stepped waveguide section and the Fourier-transformed field distributions across the stepped discontinuities for lossy impedance coatings, gives better accuracy in analyzing lossy coatings. After further refinement of the present technique, we will perform an accurate radiation pattern synthesis in the coming reporting period.
Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization
NASA Technical Reports Server (NTRS)
Jones, James Patton; Nitzberg, Bill
1999-01-01
The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet, utilization was affected little. In particular these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
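The dynamic backfilling idea the abstract credits with the roughly 15-point utilization gain can be sketched as follows (a simplified EASY-style backfill with a single reservation; the job tuples and signature are illustrative assumptions, not the NAS scheduler itself):

```python
def backfill(queue, free_nodes, now, head_start_time):
    """Pick jobs behind the queue head that can start immediately.

    Under a naive FIFO first-fit policy, nothing runs while the head
    job waits for enough free nodes. Backfilling lets a later job jump
    ahead if it fits in the currently free nodes AND is guaranteed to
    finish before the head job's reserved start time, so the head job
    is never delayed.

    queue: list of (name, nodes_needed, runtime); queue[0] is the head.
    Returns the names of jobs started immediately as backfill.
    """
    started = []
    for name, nodes, runtime in queue[1:]:
        if nodes <= free_nodes and now + runtime <= head_start_time:
            started.append(name)
            free_nodes -= nodes
    return started

# The 100-node head job must wait until t=3; "small" slots in meanwhile,
# while "medium" does not fit in the nodes that remain free.
jobs = [("head", 100, 10), ("small", 4, 2), ("medium", 8, 5)]
print(backfill(jobs, free_nodes=10, now=0, head_start_time=3))  # ['small']
```

Idle node-hours that FIFO would waste in front of a large job are instead filled by short, small jobs, which is exactly where the reported utilization improvement comes from.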
Data communication requirements for the advanced NAS network
NASA Technical Reports Server (NTRS)
Levin, Eugene; Eaton, C. K.; Young, Bruce
1986-01-01
The goal of the Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations, and by remote communications to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. In the 1987/1988 time period it is anticipated that a computer with 4 times the processing speed of a Cray 2 will be obtained and by 1990 an additional supercomputer with 16 times the speed of the Cray 2. The implications of this 20-fold increase in processing power on the data communications requirements are described. The analysis was based on models of the projected workload and system architecture. The results are presented together with the estimates of their sensitivity to assumptions inherent in the models.
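The projected 20-fold growth in processing power can be checked with simple arithmetic (a sketch; the assumption that network traffic scales linearly with aggregate compute is ours for illustration, not the report's workload model):

```python
# Aggregate NAS compute, in Cray-2 equivalents, per the program plan.
cray2 = 1
machine_1988 = 4 * cray2    # anticipated 1987/1988 machine, ~4x a Cray 2
machine_1990 = 16 * cray2   # anticipated 1990 machine, ~16x a Cray 2
added_power = machine_1988 + machine_1990
print(added_power)  # 20 -> the "20-fold increase" cited in the abstract

# If traffic scaled linearly with compute (an illustrative assumption),
# links sized for the Cray-2 workload would need ~20x the bandwidth.
```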
GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts
NASA Astrophysics Data System (ADS)
Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.
2007-12-01
The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because those properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work demonstrates linked particle tracking codes with new multi-scale/multi-fluid global simulations that provide the first means to include small-scale processes within the global magnetospheric context. A large hurdle is having sufficient computer hardware to handle the disparate temporal and spatial scales. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude greater computing speed than CPU-based systems, for little more than the cost of a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should shorten the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented and used to provide new insight into radiation belt dynamics.
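The per-particle tracking kernel that maps so naturally onto GPUs can be illustrated with a minimal Boris pusher (a generic NumPy sketch, not the authors' GPU code; a static magnetic field and zero electric field are simplifying assumptions):

```python
import numpy as np

def boris_push(x, v, q_m, B, dt, steps):
    """Advance one charged particle with the Boris rotation scheme.

    The Boris update is the workhorse of particle-tracking codes: it is
    explicit, conserves kinetic energy exactly in a pure magnetic field,
    and each particle is independent -- which is why pushing hundreds of
    thousands of particles parallelizes so well on a GPU.
    """
    for _ in range(steps):
        t = q_m * B * dt / 2.0              # half-step rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v + np.cross(v, t)        # rotate v about B
        v = v + np.cross(v_prime, s)
        x = x + v * dt                      # drift with the new velocity
    return x, v

# A unit-charge particle gyrating in a uniform B_z field: after many
# steps the speed is unchanged, only the direction rotates.
x, v = boris_push(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                  1.0, np.array([0.0, 0.0, 1.0]), 0.01, 1000)
```

In a production radiation-belt code the same loop body runs once per particle per timestep, with the field evaluated from the global simulation rather than held constant.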
Henry, Heather F; Suk, William A
2017-03-01
Innovative devices and tools for exposure assessment and remediation play an integral role in preventing exposure to hazardous substances. New solutions for detecting and remediating organic, inorganic, and mixtures of contaminants can improve public health as a means of primary prevention. Using a public health prevention model, detection and remediation technologies contribute to primary prevention as tools to identify areas of high risk (e.g., contamination hotspots), to recognize hazards (bioassay tests), and to prevent exposure through contaminant cleanups. Primary prevention success is ultimately governed by the widespread acceptance of the prevention tool. And, in like fashion, detection and remediation technologies must convey technical and sustainability advantages to be adopted for use. Hence, sustainability (economic, environmental, and societal) drives innovation in detection and remediation technology. The National Institutes of Health (NIH) National Institute of Environmental Health Sciences (NIEHS) Superfund Research Program (SRP) is mandated to advance innovative detection, remediation, and toxicity screening technology development through grants to universities and small businesses. SRP recognizes the importance of fast, accurate, robust, and advanced detection technologies that allow for portable real-time, on-site characterization, monitoring, and assessment of contaminant concentration and/or toxicity. Advances in non-targeted screening, biological-based assays, passive sampling devices (PSDs), sophisticated modeling approaches, and precision-based analytical tools are making it easier to quickly identify hazardous "hotspots" and, therefore, prevent exposures.
Innovation in sustainable remediation uses a variety of approaches: in situ remediation; harnessing the natural catalytic properties of biological processes (such as bioremediation and phytotechnologies); and application of novel materials science (such as nanotechnology, advanced membranes, new carbon materials, and materials reuse). Collectively, the investment in new technologies shows promise to reduce the amount and toxicity of hazardous substances in the environment. This manuscript highlights SRP funded innovative devices and tools for exposure assessment and remediation of organic, inorganic, and mixtures of contaminants with a particular focus on sustainable technologies.
A History of High-Performance Computing
NASA Technical Reports Server (NTRS)
2006-01-01
Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, Richard C.
2009-09-01
This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.
Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.
Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias
2011-01-01
The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
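The sorted k-mer lists used as the alignment's core data structure can be sketched in a few lines (an illustrative, in-memory Python version; the BG/P implementation distributes and compacts these lists across compute nodes to shrink the per-node memory footprint):

```python
from collections import defaultdict

def sorted_kmer_list(sequence, k):
    """Build a sorted list of (k-mer, positions) for one genome.

    Keeping the k-mers sorted means seed matches between two genomes
    can be found with a linear merge of their lists rather than a
    random-access hash table, which keeps memory use predictable --
    the property the abstract's footprint reductions rely on.
    """
    index = defaultdict(list)
    for i in range(len(sequence) - k + 1):
        index[sequence[i:i + k]].append(i)
    return sorted(index.items())

print(sorted_kmer_list("GATTACA", 3))
# [('ACA', [4]), ('ATT', [1]), ('GAT', [0]), ('TAC', [3]), ('TTA', [2])]
```

Matching seeds between genomes then reduces to walking two sorted lists in parallel, an operation that scales to hundreds of genomes far more gracefully than all-pairs hashing.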
Team learning and innovation in nursing, a review of the literature.
Timmermans, Olaf; Van Linge, Roland; Van Petegem, Peter; Van Rompaey, Bart; Denekens, Joke
2012-01-01
The capability to learn and innovate has been recognized as a key factor for nursing teams to deliver high-quality performance. Researchers suggest there is a relation between team-learning activities and changes in nursing teams throughout the implementation of novelties. A review of the literature was conducted regarding the relation between team learning and the implementation of innovations in nursing teams, and to explore factors that facilitate or hinder team learning. The search was limited to studies published in English or Dutch between 1998 and 2010. Eight studies were included in the review. The results of this review revealed that research on team learning and innovation in nursing is limited. The included studies showed moderate methodological quality and low levels of evidence. Team learning included processes to gather, process, and store information from different innovations within the nursing team, and the prevalence of team-learning activities was facilitated or hindered by individual and contextual factors. Further research is needed on the relation between team learning and the implementation of innovations in nursing. Copyright © 2011 Elsevier Ltd. All rights reserved.
Emmanouilidou, Maria
2015-01-01
The healthcare sector globally is confronted with increasing internal and external pressures that urge a radical reform of health systems' status quo. The role of technological innovations such as Electronic Health Records (EHR) is recognized as instrumental in this transition process, as it is expected to accelerate organizational innovations. This is why the widespread uptake of EHR systems is a top priority in the global healthcare agenda. The successful co-deployment of EHR systems and organizational innovations within the context of secondary healthcare institutions, though, is a complex and multifaceted issue. Existing research in the field has made little progress, emphasizing the need for further research contributions that incorporate a holistic perspective. This paper presents insights about the EHR-organizational innovation interplay from a public hospital in Greece within a socio-technical analytical framework, providing a multilevel set of action points for the eHealth roadmap with worldwide relevance.
Innovative park-and-ride management for livable communities.
DOT National Transportation Integrated Search
2015-08-31
Park-and-ride (P&R) has been recognized as an effective way to tackle the challenge of the last-mile problem in public transportation, i.e., connecting transit stations to final destinations. Although the design and operations of P&R facilities have ...
Science 101: How Does Speech-Recognition Software Work?
ERIC Educational Resources Information Center
Robertson, Bill
2016-01-01
This column provides background science information for elementary teachers. Many innovations with computer software begin with analysis of how humans do a task. This article takes a look at how humans recognize spoken words and explains the origins of speech-recognition software.
Developing Leadership Capacity through Organizational Learning
ERIC Educational Resources Information Center
Buchanan, Julia
2008-01-01
The relationship of human development to leadership growth and organizational learning is becoming more significant as organizations recognize the value of skilled leadership. In order to foster collective intelligence and innovation in groups, leadership throughout an organization benefits from the understanding of processes involved in…
Rapid Response Skills Training
ERIC Educational Resources Information Center
Kelley-Winders, Anna Faye
2008-01-01
Mississippi Gulf Coast Community College's (MGCCC) long-term commitment to providing workforce training in a post-Katrina environment became a catalyst for designing short-term flexible educational opportunities. Providing nationally recognized skills training for the recovery/rebuilding of communities challenged the college to develop innovative,…
76 FR 56275 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... seeks to implement a Financial Capability Community Challenge to recognize and encourage innovation and effective practices of community-based approaches to enhance the financial capability of un- and underbanked... Number: 1505-NEW. Type of Review: New collection. Title: Financial Capability Community Challenge...
Collecting and Using Original Student Work.
ERIC Educational Resources Information Center
Farmer, Lesley S. J.
2001-01-01
Examines innovative ways for school libraries to collect organize, and make effective use of student work. Highlights include recognizing original work; student writing; student posters of favorite books or characters; databases for organizing information; videotaping of students' activities and presentations; electronic products; events;…
Making technological innovation work for sustainable development.
Anadon, Laura Diaz; Chan, Gabriel; Harley, Alicia G; Matus, Kira; Moon, Suerie; Murthy, Sharmila L; Clark, William C
2016-08-30
This paper presents insights and action proposals to better harness technological innovation for sustainable development. We begin with three key insights from scholarship and practice. First, technological innovation processes do not follow a set sequence but rather emerge from complex adaptive systems involving many actors and institutions operating simultaneously from local to global scales. Barriers arise at all stages of innovation, from the invention of a technology through its selection, production, adaptation, adoption, and retirement. Second, learning from past efforts to mobilize innovation for sustainable development can be greatly improved through structured cross-sectoral comparisons that recognize the socio-technical nature of innovation systems. Third, current institutions (rules, norms, and incentives) shaping technological innovation are often not aligned toward the goals of sustainable development because impoverished, marginalized, and unborn populations too often lack the economic and political power to shape innovation systems to meet their needs. However, these institutions can be reformed, and many actors have the power to do so through research, advocacy, training, convening, policymaking, and financing. We conclude with three practice-oriented recommendations to further realize the potential of innovation for sustainable development: (i) channels for regularized learning across domains of practice should be established; (ii) measures that systematically take into account the interests of underserved populations throughout the innovation process should be developed; and (iii) institutions should be reformed to reorient innovation systems toward sustainable development and ensure that all innovation stages and scales are considered at the outset.
Value innovation: an important aspect of global surgical care
2014-01-01
Introduction: Limited resources in low- and middle-income countries (LMICs) drive tremendous innovation in medicine, as well as in other fields. It is not often recognized that several important surgical tools and methods, widely used in high-income countries, have their origins in LMICs. Surgical care around the world stands much to gain from these innovations. In this paper, we provide a short review of some of these successful innovations and their origins that have had an important impact in healthcare delivery worldwide. Review: Examples of LMIC innovations that have been adapted in high-income countries include the Bogotá bag for temporary abdominal wound closure, the orthopaedic external fixator for complex fractures, a hydrocephalus fluid valve for normal pressure hydrocephalus, and intra-ocular lens and manual small incision cataract surgery. LMIC innovations that have had tremendous potential global impact include mosquito net mesh for inguinal hernia repair, and a flutter valve for intercostal drainage of pneumothorax. Conclusion: Surgical innovations from LMICs have been shown to have comparable outcomes at a fraction of the cost of tools used in high-income countries. These innovations have the potential to revolutionize global surgical care. Advocates should actively seek out these innovations, campaign for the financial gains from these innovations to benefit their originators and their countries, and find ways to develop and distribute them locally as well as globally. PMID:24393237
Value innovation: an important aspect of global surgical care.
Cotton, Michael; Henry, Jaymie Ang; Hasek, Lauren
2014-01-06
Limited resources in low- and middle-income countries (LMICs) drive tremendous innovation in medicine, as well as in other fields. It is not often recognized that several important surgical tools and methods, widely used in high-income countries, have their origins in LMICs. Surgical care around the world stands much to gain from these innovations. In this paper, we provide a short review of some of these successful innovations and their origins that have had an important impact in healthcare delivery worldwide. Examples of LMIC innovations that have been adapted in high-income countries include the Bogotá bag for temporary abdominal wound closure, the orthopaedic external fixator for complex fractures, a hydrocephalus fluid valve for normal pressure hydrocephalus, and intra-ocular lens and manual small incision cataract surgery. LMIC innovations that have had tremendous potential global impact include mosquito net mesh for inguinal hernia repair, and a flutter valve for intercostal drainage of pneumothorax. Surgical innovations from LMICs have been shown to have comparable outcomes at a fraction of the cost of tools used in high-income countries. These innovations have the potential to revolutionize global surgical care. Advocates should actively seek out these innovations, campaign for the financial gains from these innovations to benefit their originators and their countries, and find ways to develop and distribute them locally as well as globally.
Thinking Differently: Catalyzing Innovation in Healthcare and Beyond.
Samet, Kenneth A; Smith, Mark S
2016-01-01
Convenience, value, access, and choice have become the new expectations of consumers seeking care. Incorporating these imperatives and navigating an expanded competitive landscape are necessary for the success of healthcare organizations, today and in the future, and require thinking differently than in the past. Innovation must be a central strategy for clinical and business operations to be successful. However, the currently popular concept of innovation is at risk of losing its power and meaning unless deliberate and focused action is taken to define it, adopt it, embrace it, and embed it in an organization's culture. This article details MedStar Health's blueprint for establishing the MedStar Institute for Innovation (MI2), which involved recognizing the sharpened need for innovation, creating a single specific entity to catalyze innovation across the healthcare organization and community, discovering the untapped innovation energy already residing in its employee base, and moving nimbly into the white space of possibility. Drawing on MedStar's experience with MI2, we offer suggestions for implementing an innovation institute in a large healthcare system. We offer healthcare and business leaders a playbook for identifying and unleashing innovation in their organizations, at a time when innovation is at an increased risk of being misunderstood or misdirected but remains absolutely necessary for healthcare systems and organizations to flourish in the future.
Tucker, Joseph D; Pan, Stephen W; Mathews, Allison; Stein, Gabriella; Bayus, Barry; Rennie, Stuart
2018-03-09
Crowdsourcing contests (also called innovation challenges, innovation contests, and inducement prize contests) can be used to solicit multisectoral feedback on health programs and design public health campaigns. They consist of organizing a steering committee, soliciting contributions, engaging the community, judging contributions, recognizing a subset of contributors, and sharing with the community. This scoping review describes crowdsourcing contests by stage, examines ethical problems at each stage, and proposes potential ways of mitigating risk. Our analysis was anchored in the specific example of a crowdsourcing contest that our team organized to solicit videos promoting condom use in China. The purpose of this contest was to create compelling 1-min videos to promote condom use. We used a scoping review to examine the existing ethical literature on crowdsourcing to help identify and frame ethical concerns at each stage. Crowdsourcing has a group of individuals solve a problem and then share the solution with the public. Crowdsourcing contests provide an opportunity for community engagement at each stage: organizing, soliciting, promoting, judging, recognizing, and sharing. Crowdsourcing poses several ethical concerns: organizing-potential for excluding community voices; soliciting-potential for overly narrow participation; promoting-potential for divulging confidential information; judging-potential for biased evaluation; recognizing-potential for insufficient recognition of the finalist; and sharing-potential for the solution to not be implemented or widely disseminated. Crowdsourcing contests can be effective and engaging public health tools but also introduce potential ethical problems. We present methods for the responsible conduct of crowdsourcing contests. ©Joseph D Tucker, Stephen W Pan, Allison Mathews, Gabriella Stein, Barry Bayus, Stuart Rennie. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 09.03.2018.
Interorganizational transfer of technology - A study of adoption of NASA innovations
NASA Technical Reports Server (NTRS)
Chakrabarti, A. K.; Rubenstein, A. H.
1976-01-01
The paper describes a study on the effects of top management support, various techno-economic factors, organizational climate, and decision-making modes on the adoption of NASA innovations. Field research consisted of interviews and questionnaires directed to sixty-five organizations. Forty-five test cases, in which different decisions for adoption of ideas for new products or processes were made on NASA Tech Briefs, were studied in relation to the effects of various factors on the degree of success of adoption, including: (1) the degree of general connection of the technology to the firm's existing operation, (2) the specificity of the relationship between the technology and some existing and recognized problem, (3) the degree of urgency of the problem to which the technology was related, (4) the maturity of technology available to implement the technology, (5) the availability of personnel and financial resources to implement the technology, (6) the degree of top management interest, (7) the use of confrontation in joint decision-making, (8) the use of smoothing in decision-making, and (9) the use of forcing in decision-making. It was found that top management's interest was important in the product cases only, and that the success of process innovations depended on the quality of information and the specificity of the relationship between the technology and some recognized existing problem.
Centre of Excellence For Simulation Education and Innovation (CESEI).
Qayumi, A Karim
2010-01-01
Simulation is becoming an integral part of medical education. The American College of Surgeons (ACS) was the first organization to recognize the value of simulation-based learning, and to award accreditation to educational institutions that aim to provide simulation as part of the experiential learning opportunity. The Centre of Excellence for Simulation Education and Innovation (CESEI) is a multidisciplinary and interprofessional educational facility based at the University of British Columbia (UBC) and the Vancouver Coastal Health Authority (VCH). The Centre's goal is to provide excellence in education, research, and healthcare delivery by providing a technologically advanced environment and learning opportunities using simulation for various groups of learners, including undergraduate, postgraduate, nursing, and allied health professionals. This article is an attempt to describe the infrastructure, services, and uniqueness of the Centre of Excellence for Simulation Education and Innovation. Copyright 2010 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Human Behavior and Cognition in Evolutionary Economics.
Nelson, Richard R
2011-12-01
My brand of evolutionary economics recognizes, indeed highlights, that modern economies are always in the process of changing, never fully at rest, with much of the energy coming from innovation. This perspective obviously draws a lot from Schumpeter. Continuing innovation, and the creative destruction that innovation engenders, is driving the system. There are winners and losers in the process, but generally the changes can be regarded as progress. The processes through which economic activity and performance evolve have a lot in common with evolution in biology. In particular, at any time the economy is marked by considerable variety; there are selection forces winnowing on that variety, but also the continuing emergence of new ways of doing things and often of new economic actors. But there also are important differences from biological evolution. In particular, both innovation and selection are to a considerable degree purposive activities, often undertaken on the basis of relatively strong knowledge.
Innovations 'Out of Place': Controversies Over IVF Beginnings in India Between 1978 and 2005.
Bärnreuther, Sandra
2016-01-01
In 1978, the year the first in vitro fertilization (IVF) baby was born in the United Kingdom, a research team in Kolkata reported that it too had successfully produced an IVF baby in India. However, the claim was dismissed at the time, because the experiment was conducted outside authorized institutions and recognized centers of innovation--in short, because it was an innovation 'out of place.' Tracing controversies over the case between 1978 and 2005, I show the importance of space or place in processes of knowledge production and recognition. Further, I explain the initial repudiation and subsequent partial recognition of the claim through shifts in the landscape of legitimate spaces of innovation. By discussing this specific case of the production of science and technology in the Global South, I challenge conventional narratives of diffusion that are prevalent in studies on the worldwide proliferation of reproductive technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Timothy J.
2016-03-01
While benchmarking software is useful for testing the performance limits and stability of Argonne National Laboratory’s new Theta supercomputer, there is no substitute for running real applications to explore the system’s potential. The Argonne Leadership Computing Facility’s Theta Early Science Program, modeled after its highly successful code migration program for the Mira supercomputer, has one primary aim: to deliver science on day one. Here is a closer look at the type of science problems that will be getting early access to Theta, a next-generation machine being rolled out this year.
Supercomputer analysis of sedimentary basins.
Bethke, C M; Altaner, S P; Harrison, W J; Upson, C
1988-01-15
Geological processes of fluid transport and chemical reaction in sedimentary basins have formed many of the earth's energy and mineral resources. These processes can be analyzed on natural time and distance scales with the use of supercomputers. Numerical experiments are presented that give insights to the factors controlling subsurface pressures, temperatures, and reactions; the origin of ores; and the distribution and quality of hydrocarbon reservoirs. The results show that numerical analysis combined with stratigraphic, sea level, and plate tectonic histories provides a powerful tool for studying the evolution of sedimentary basins over geologic time.
2017-12-08
The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.
Development of the general interpolants method for the CYBER 200 series of supercomputers
NASA Technical Reports Server (NTRS)
Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.
1988-01-01
The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential-equation based, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.
NASA Technical Reports Server (NTRS)
Nosenchuck, D. M.; Littman, M. G.
1986-01-01
The Navier-Stokes computer (NSC) has been developed for solving problems in fluid mechanics involving complex flow simulations that require more speed and capacity than provided by current and proposed Class VI supercomputers. The machine is a parallel processing supercomputer with several new architectural elements which can be programmed to address a wide range of problems meeting the following criteria: (1) the problem is numerically intensive, and (2) the code makes use of long vectors. A simulation of two-dimensional nonsteady viscous flows is presented to illustrate the architecture, programming, and some of the capabilities of the NSC.
Merging the Machines of Modern Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Laura; Collins, Jim
Two recent projects have harnessed supercomputing resources at the US Department of Energy’s Argonne National Laboratory in a novel way to support major fusion science and particle collider experiments. Using leadership computing resources, one team ran fine-grid analysis of real-time data to make near-real-time adjustments to an ongoing experiment, while a second team is working to integrate Argonne’s supercomputers into the Large Hadron Collider/ATLAS workflow. Together these efforts represent a new paradigm of the high-performance computing center as a partner in experimental science.
Innovative park-and-ride management for livable communities : final report.
DOT National Transportation Integrated Search
2015-08-31
Park-and-ride (P&R) has been recognized as an effective way to tackle the challenge of the : last-mile problem in public transportation, i.e., connecting transit stations to final destinations. : Although the design and operations of P&R facilities h...
Best Practices & Outstanding Initiatives
ERIC Educational Resources Information Center
Training, 2012
2012-01-01
In this article, "Training" editors recognize innovative and successful learning and development programs and practices submitted in the 2012 Training Top 125 application. Best practices: (1) Edward Jones: Practice Makes Perfect (sales training); (2) Grant Thornton LLP: Senior Manager Development Program (SMDP); (3) MetLife, Inc.: Top Advisor…
Best Practices & Outstanding Initiatives
ERIC Educational Resources Information Center
Training, 2011
2011-01-01
In this article, "Training" editors recognize innovative and successful learning and development programs and practices. They share best practices from Automatic Data Processing, Inc., Farmers Insurance Group, FedEx Express, InterContinental Hotels Group, and Oakwood Temporary Housing. They also present the outstanding initiatives of EMD Serono,…
ETV works in partnership with recognized standards and testing organizations and stakeholder groups consisting of regulators, buyers, and vendor organizations, with the full participation of individual technology developers. The program evaluates the performance of innovative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monti, Henri; Butt, Ali R; Vazhkudai, Sudharshan S
2010-04-01
Innovative scientific applications and emerging dense data sources are creating a data deluge for high-end computing systems. Processing such large input data typically involves copying (or staging) it onto the supercomputer's specialized high-speed storage, scratch space, for sustained high I/O throughput. The current practice of conservatively staging data as early as possible makes the data vulnerable to storage failures, which may entail re-staging and consequently reduced job throughput. To address this, we present a timely staging framework that uses a combination of job startup time predictions, user-specified intermediate nodes, and decentralized data delivery to make input data staging coincide with job start-up. By delaying staging until it is necessary, the exposure to failures and its effects can be reduced. Evaluation using both PlanetLab and simulations based on three years of Jaguar (No. 1 in Top500) job logs shows as much as an 85.9% reduction in staging times compared to direct transfers, a 75.2% reduction in wait time on scratch, and a 2.4% reduction in usage/hour.
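The scheduling idea behind delayed staging can be illustrated with a minimal sketch: work backward from the predicted job start time to pick the latest safe moment to begin the transfer. The function names, the linear transfer-time model, and the safety margin below are illustrative assumptions, not details from the paper.

```python
# Hedged sketch: schedule input staging to finish just before a job's
# predicted start, rather than staging as early as possible.
# All names and the linear bandwidth model are illustrative assumptions.

def transfer_time(data_gb, bandwidth_gbps=1.0):
    """Estimated seconds to stage data_gb gigabytes at bandwidth_gbps GB/s."""
    return data_gb / bandwidth_gbps

def stage_start(predicted_job_start, data_gb, safety_margin=300.0):
    """Latest time to begin staging so data arrives safety_margin seconds early."""
    return predicted_job_start - transfer_time(data_gb) - safety_margin

# A job predicted to start at t = 10000 s with 500 GB of input data:
t = stage_start(10000.0, 500.0)
# Delaying staging until t shrinks the window in which a scratch-storage
# failure can force a costly re-stage of the input.
```

The later the stage begins, the shorter the interval during which staged data sits exposed to scratch failures, which is the core trade-off the framework exploits.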
HACC: Simulating sky surveys on state-of-the-art supercomputing architectures
NASA Astrophysics Data System (ADS)
Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng
2016-01-01
Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
NASA's Participation in the National Computational Grid
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)
1998-01-01
Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases, and online experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several events in recent times are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI), which is built around the idea of distributed high performance computing. The Alliance, led by the National Center for Supercomputing Applications (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputer Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high performance computing research programs to concentrate on distributed high performance computing and has banded together with the PACI centers to address the research agenda in common.
Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît
2012-11-13
An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multiple dimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows, distributed to cover the space of order parameters, with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternately along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of a parallel/parallel multiple copy protocol at the MPI level, and therefore can fully exploit the computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative to the crystal structure, was calculated using the US/H-REMD method. The results are then used to estimate the absolute binding free energy of the calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
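The core exchange move described above, attempted alternately along the two order-parameter axes with a Metropolis criterion, can be sketched as follows. The harmonic umbrella bias, the force constant, and all function names are illustrative assumptions, not the paper's implementation.

```python
import math
import random

# Hedged sketch of 2D umbrella-sampling replica exchange: windows form a
# grid over two order parameters, and swap attempts alternate between the
# two axes. The harmonic bias and all parameters are illustrative.

def bias(x, center, k=10.0):
    """Harmonic umbrella potential (reduced units)."""
    return 0.5 * k * (x - center) ** 2

def attempt_swap(x_i, c_i, x_j, c_j, beta=1.0, rng=random.random):
    """Metropolis acceptance for swapping configurations between windows i, j."""
    delta = (bias(x_i, c_j) + bias(x_j, c_i)) - (bias(x_i, c_i) + bias(x_j, c_j))
    return delta <= 0 or rng() < math.exp(-beta * delta)

def exchange_axis(step):
    """Alternate the exchange direction between the two order parameters."""
    return 0 if step % 2 == 0 else 1
```

Because a swap only improves sampling when each configuration is plausible under its partner's bias, the acceptance test compares the total bias energy before and after the exchange.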
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meneses, Esteban; Ni, Xiang; Jones, Terry R
The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
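The cross-correlation step, matching failure events to the jobs executing when they occurred, reduces to checking time-interval containment. The record format below is invented for illustration and is not Titan's actual log schema.

```python
# Hedged sketch: attribute machine failure events to the jobs that were
# running when they occurred, by checking which job intervals cover the
# failure timestamp. The record layout is an illustrative assumption.

def jobs_hit_by_failure(failure_time, jobs):
    """Return ids of jobs whose [start, end) interval covers the failure."""
    return [j["id"] for j in jobs if j["start"] <= failure_time < j["end"]]

jobs = [
    {"id": "a", "start": 0, "end": 100},
    {"id": "b", "start": 50, "end": 150},
]
affected = jobs_hit_by_failure(75, jobs)  # both jobs were running at t = 75
```

Aggregating such matches over a year of logs yields the failures-per-job statistics from which a picture of user-visible impact can be built.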
Rosie Phillips Bingham: On Becoming
ERIC Educational Resources Information Center
Neville, Helen A.
2012-01-01
Rosie Phillips Bingham has contributed to the field of counseling psychology and the broader discipline of psychology in myriad ways. She is nationally recognized for her innovation, leadership skills, and fundraising capabilities. She is also known for her commitment to student development and her caring mentoring approach. In this life…
It Does Matter How We Teach Math
ERIC Educational Resources Information Center
Rodrigues, Kathleen J.
2012-01-01
This paper describes application of innovative practice and procedures in relationship to recognized principles and theory of adult education used in college math instruction. Adult learning principles provide the theoretical constructs and foundation of the practice supporting a learner-centered approach to learning. The purpose was to explore…
Technology Implementation in Education--Identifying Barriers to Fidelity
ERIC Educational Resources Information Center
Monroe, Arla K.; Dennis, William J.; Johnson, Daniel L.
2012-01-01
This report describes a problem-based learning project focused on determining the barriers to the implementation of technological innovations. Properly executed technology implementation is an instructional variable related to student achievement; yet, school district leaders are faced with the problem of recognizing and identifying the…
Preparing Students for Leadership through Experiential Learning
ERIC Educational Resources Information Center
Bauermeister, Maria C.; Greer, Jon; Kalinovich, Angelina V.; Marrone, Jennifer A.; Pahl, Megan M.; Rochholz, Lauren B.; Wilson, Barry R.
2016-01-01
This Application Brief highlights Seattle University's Red Winged Leadership (RWL) exercise, an innovative curriculum for graduate business leadership education. RWL requires students to apply course materials to a visible and challenging class project, and to critically examine and recognize leadership in the broader community. Both allow for…
Leading Generative Groups: A Conceptual Model
ERIC Educational Resources Information Center
London, Manuel; Sobel-Lojeski, Karen A.; Reilly, Richard R.
2012-01-01
This article presents a conceptual model of leadership in generative groups. Generative groups have diverse team members who are expected to develop innovative solutions to complex, unstructured problems. The challenge for leaders of generative groups is to balance (a) establishing shared goals with recognizing members' vested interests, (b)…
Teambuilding, Innovation and the Engineering Communication Interface
ERIC Educational Resources Information Center
Prescott, David; El-Sakran, Tharwat; Albasha, Lutfi; Aloul, Fadi; Al-Assaf, Yousef
2012-01-01
Recent engineering industry-based research has identified a number of skill deficiencies in graduating engineers. Emphasis on communication and teamwork informed by attributes of self management, problem solving and mutual accountability have been recognized as important needs by The Engineering Accreditation Commission of ABET of the United…
Innovative Employment Practices for Older Americans.
ERIC Educational Resources Information Center
Root, Lawrence S.; Zarrugh, Laura H.
Many companies recognize the importance of including older persons in the labor force, but barriers still exist that limit their productive employment. Negative stereotypes may influence hiring and promotion decisions, and training opportunities may be closed. A study was conducted of private sector employment programs/practices that are intended…
ERIC Educational Resources Information Center
Hill, Chrystle; Farkas, Meredith
2008-01-01
Each year, "Library Journal" recognizes 50 or so emerging leaders in the profession as Movers & Shakers. These library professionals are passionate about the work they do and are moving the profession forward, often in creative and innovative ways. Some of these individuals enjoyed and were encouraged by amazing institutional support and…
INNOVATIVE SCREENING TECHNOLOGIES FOR DIOXINS IN SOIL
Dioxins are recognized as one of the most pervasive and toxic class of chemicals in the environment. They have been the focus of various human exposure studies and have been found at numerous Superfund and other hazardous waste sites. The cost of dioxin analysis represents a s...
Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard
2014-01-01
Building Energy Modeling (BEM) is an approach to modeling the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
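The surrogate idea, replacing expensive simulations with a fast model trained on precomputed runs, can be sketched minimally: here a nearest-neighbor lookup over stored (parameters, simulated energy) samples stands in for the trained agents. The sample data, parameter vectors, and distance metric are illustrative assumptions, not Autotune's actual method.

```python
# Hedged sketch of surrogate-based calibration: precomputed samples of
# (parameter vector -> simulated energy use) stand in for expensive
# EnergyPlus runs, and a nearest-neighbor lookup plays the role of the
# trained agent. All data and names are illustrative assumptions.

def nearest_sample(query, samples):
    """Return the stored sample whose parameter vector is closest to query."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(samples, key=lambda s: dist2(s["params"], query))

samples = [
    {"params": (0.5, 2.0), "energy_kwh": 1200.0},
    {"params": (0.8, 3.0), "energy_kwh": 1500.0},
]
# Calibration query: which stored parameter set best matches (0.6, 2.1)?
best = nearest_sample((0.6, 2.1), samples)
```

Once such a surrogate answers queries in microseconds instead of hours, an optimizer can search the parameter space against measured data at negligible cost, which is what makes calibration affordable for small projects.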
Challenges in scaling NLO generators to leadership computers
NASA Astrophysics Data System (ADS)
Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.
2017-10-01
Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.
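Scaling event generation to thousands of threads starts from a simple decomposition: each worker generates a disjoint share of the total events. The static partitioning below is a generic sketch of that rank decomposition; the function name is hypothetical and no real generator or MPI API is shown.

```python
# Hedged sketch: statically partition n_events across `size` workers, as a
# stand-in for the rank decomposition used when scaling event generators
# to many threads. Names are illustrative assumptions.

def events_for_rank(rank, size, n_events):
    """Contiguous share of n_events assigned to worker `rank` of `size`."""
    base, extra = divmod(n_events, size)
    start = rank * base + min(rank, extra)
    count = base + (1 if rank < extra else 0)
    return range(start, start + count)

# 10 events over 3 workers -> shares of sizes 4, 3, 3 covering all events.
shares = [list(events_for_rank(r, 3, 10)) for r in range(3)]
```

Because every event index lands in exactly one share, the workers need no coordination beyond a final merge of their output, which is what makes the workload embarrassingly parallel.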
Sign: large-scale gene network estimation environment for high performance computing.
Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .
Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito
2016-11-15
A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing petaflop-class many-core supercomputers are presented. Some improvements over the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been made: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.
Optical clock distribution in supercomputers using polyimide-based waveguides
NASA Astrophysics Data System (ADS)
Bihari, Bipin; Gan, Jianhua; Wu, Linghui; Liu, Yujie; Tang, Suning; Chen, Ray T.
1999-04-01
Guided-wave optics is a promising way to deliver high-speed clock signals in supercomputers with minimized clock skew. Si-CMOS-compatible polymer-based waveguides for optoelectronic interconnects and packaging have been fabricated and characterized. A 1-to-48 fanout optoelectronic interconnection layer (OIL) structure based on Ultradel 9120/9020 for high-speed massive clock signal distribution for a Cray T-90 supercomputer board has been constructed. The OIL employs multimode polymeric channel waveguides in conjunction with surface-normal waveguide output couplers and 1-to-2 splitters. Surface-normal couplers can couple the optical clock signals into and out of the H-tree polyimide waveguides surface-normally, which facilitates the integration of photodetectors to convert optical signals to electrical signals. A 45-degree surface-normal coupler has been integrated at each output end. The measured output coupling efficiency is nearly 100 percent. The output profile from the 45-degree surface-normal coupler was calculated using the Fresnel approximation. The theoretical result is in good agreement with the experimental result. A total insertion loss of 7.98 dB at 850 nm was measured experimentally.
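A rough budget for a fanout tree built from 1-to-2 splitters can be sketched arithmetically: a 1-to-48 fanout needs ceil(log2(48)) = 6 splitting stages, and the unavoidable power-division loss of any ideal 1-to-N split is 10*log10(N) dB. This is a generic illustration of splitter-tree accounting, not the paper's measured loss figures (48 is not a power of two, so some branches of a real tree terminate early).

```python
import math

# Hedged sketch: generic loss accounting for a fanout tree of 1-to-2
# splitters. These are textbook relations used for illustration, not the
# measured values reported for the Cray T-90 OIL above.

def splitter_stages(fanout):
    """Number of 1-to-2 splitting stages needed to reach `fanout` outputs."""
    return math.ceil(math.log2(fanout))

def ideal_splitting_loss_db(fanout):
    """Unavoidable power-division loss (dB) of an ideal 1-to-N split."""
    return 10 * math.log10(fanout)

stages = splitter_stages(48)        # 6 stages of 1-to-2 splitters
loss = ideal_splitting_loss_db(48)  # ~16.8 dB ideal power-division loss
```

Excess losses (propagation, bends, coupling) add on top of this ideal figure, which is why waveguide and coupler efficiency dominate the practical design.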
Flow visualization of CFD using graphics workstations
NASA Technical Reports Server (NTRS)
Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon
1987-01-01
High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.
Two-dimensional nonsteady viscous flow simulation on the Navier-Stokes computer miniNode
NASA Technical Reports Server (NTRS)
Nosenchuck, Daniel M.; Littman, Michael G.; Flannery, William
1986-01-01
The needs of large-scale scientific computation are outpacing the growth in performance of mainframe supercomputers. In particular, problems in fluid mechanics involving complex flow simulations require far more speed and capacity than current and proposed Class VI supercomputers provide. To address this concern, the Navier-Stokes Computer (NSC) was developed. The NSC is a parallel-processing machine composed of individual Nodes, each comparable in performance to current supercomputers. The global architecture is that of a hypercube, and a 128-Node NSC has been designed. New architectural features, such as a reconfigurable many-function ALU pipeline and a multifunction memory-ALU switch, provide the capability to implement a wide range of algorithms efficiently. Efficient algorithms typically involve numerically intensive tasks, which often include conditional operations. These operations may be implemented efficiently on the NSC without, in general, sacrificing vector-processing speed. To illustrate the architecture, programming, and several of the capabilities of the NSC, the simulation of two-dimensional, nonsteady viscous flows on a prototype Node, called the miniNode, is presented.
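In a hypercube topology like the 128-Node design, node addresses are 7-bit labels and each node links to the nodes whose labels differ in exactly one bit; a minimal sketch of the neighbor rule (pure Python, illustrative only):

```python
def hypercube_neighbors(node, dim):
    """Neighbors of `node` in a `dim`-dimensional hypercube: flip one address bit."""
    return [node ^ (1 << i) for i in range(dim)]

if __name__ == "__main__":
    DIM = 7                      # 2**7 = 128 nodes, as in the NSC design
    nbrs = hypercube_neighbors(0b0000000, DIM)
    assert len(nbrs) == DIM      # each node has exactly `dim` links
    # Links are symmetric: if b is a neighbor of a, then a is a neighbor of b.
    for b in nbrs:
        assert 0 in hypercube_neighbors(b, DIM)
```

The XOR rule also gives the routing distance for free: the number of hops between two nodes is the number of bits in which their addresses differ.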
Long-Term file activity patterns in a UNIX workstation environment
NASA Technical Reports Server (NTRS)
Gibson, Timothy J.; Miller, Ethan L.
1998-01-01
As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.
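A user-space collector of the kind described, needing no kernel modification, can be sketched with nothing but the standard library; the one-day migration threshold and the particular statistics gathered below are illustrative assumptions, not the paper's package:

```python
import os
import time

DAY = 24 * 3600

def scan(root, now=None):
    """Walk `root` entirely in user space and bucket files by size and age."""
    now = time.time() if now is None else now
    stats = {"files": 0, "bytes": 0, "older_than_a_day": []}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            stats["files"] += 1
            stats["bytes"] += st.st_size
            if now - st.st_mtime > DAY:          # candidate for tertiary storage
                stats["older_than_a_day"].append(path)
    return stats
```

A periodic scan like this is enough to estimate how much newly created data would need to migrate to a mass storage device each day.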
New Center Links Earth, Space, and Information Sciences
NASA Astrophysics Data System (ADS)
Aswathanarayana, U.
2004-05-01
Broad-based geoscience instruction melding the Earth, space, and information technology sciences has been identified as an effective way to take advantage of the new jobs created by technological innovations in natural resources management. Based on this paradigm, the University of Hyderabad in India is developing a Centre of Earth and Space Sciences that will be linked to the university's supercomputing facility. The proposed center will provide the basic science underpinnings for the Earth, space, and information technology sciences; develop new methodologies for the utilization of natural resources such as water, soils, sediments, minerals, and biota; mitigate the adverse consequences of natural hazards; and design innovative ways of incorporating scientific information into the legislative and administrative processes. For these reasons, the ethos and the innovatively designed management structure of the center would be of particular relevance to the developing countries. India holds 17% of the world's human population, and 30% of its farm animals, but only about 2% of the planet's water resources. Water will hence constitute the core concern of the center, because ecologically sustainable, socially equitable, and economically viable management of the country's water resources holds the key to its quality of life (drinking water, sanitation, and health), food security, and industrial development. The center will be focused on interdisciplinary basic and applied research relevant to the practical needs of India as a developing country. These include, for example, climate prediction, since India is heavily dependent on the monsoon system, and satellite remote sensing of soil moisture, since agriculture is still a principal source of livelihood in India.
The center will perform research and development in areas such as data assimilation and validation, and identification of new sensors to be mounted on the Indian meteorological satellites to make measurements in those spectral bands and with those polarizations that are needed to address water resources management issues.
Mutual learning and reverse innovation--where next?
Crisp, Nigel
2014-03-28
There is a clear and evident need for mutual learning in global health systems. It is increasingly recognized that innovation needs to be sourced globally and that we need to think in terms of co-development as ideas are developed and spread from richer to poorer countries and vice versa. The Globalization and Health journal's ongoing thematic series, "Reverse innovation in global health systems: learning from low-income countries" illustrates how mutual learning and ideas about so-called "reverse innovation" or "frugal innovation" are being developed and utilized by researchers and practitioners around the world. The knowledge emerging from the series is already catalyzing change and challenging the status quo in global health. The path to truly "global innovation flow", although not fully established, is now well under way. Mobilization of knowledge and resources through continuous communication and awareness raising can help sustain this movement. Global health learning laboratories, where partners can support each other in generating and sharing lessons, have the potential to construct solutions for the world. At the heart of this dialogue is a focus on creating practical local solutions and, simultaneously, drawing out the lessons for the whole world.
Martin, Graham P; Weaver, Simon; Currie, Graeme; Finn, Rachael; McDonald, Ruth
2012-01-01
The need for organizational innovation as a means of improving health-care quality and containing costs is widely recognized, but while a growing body of research has improved knowledge of implementation, very little has considered the challenges involved in sustaining change – especially organizational change led ‘bottom-up’ by frontline clinicians. This study addresses this lacuna, taking a longitudinal, qualitative case-study approach to understanding the paths to sustainability of four organizational innovations. It highlights the importance of the interaction between organizational context, nature of the innovation and strategies deployed in achieving sustainability. It discusses how positional influence of service leads, complexity of innovation, networks of support, embedding in existing systems, and proactive responses to changing circumstances can interact to sustain change. In the absence of cast-iron evidence of effectiveness, wider notions of value may be successfully invoked to sustain innovation. Sustainability requires continuing effort through time, rather than representing a final state to be achieved. Our study offers new insights into the process of sustainability of organizational change, and elucidates the complement of strategies needed to make bottom-up change last in challenging contexts replete with competing priorities. PMID:23554445
Kiparsky, Michael; Sedlak, David L; Thompson, Barton H; Truffer, Bernhard
2013-08-01
Interaction between institutional change and technological change poses important constraints on transitions of urban water systems to a state that can meet future needs. Research on urban water and other technology-dependent systems provides insights that are valuable to technology researchers interested in assuring that their efforts will have an impact. In the context of research on institutional change, innovation is the development, application, diffusion, and utilization of new knowledge and technology. This definition is intentionally inclusive: technological innovation will play a key role in reinvention of urban water systems, but is only part of what is necessary. Innovation usually depends on context, such that major changes to infrastructure include not only the technological inventions that drive greater efficiencies and physical transformations of water treatment and delivery systems, but also the political, cultural, social, and economic factors that hinder and enable such changes. On the basis of past and present changes in urban water systems, institutional innovation will be of similar importance to technological innovation in urban water reinvention. To solve current urban water infrastructure challenges, technology-focused researchers need to recognize the intertwined nature of technologies and institutions and the social systems that control change.
Opportunities for leveraging OS virtualization in high-end supercomputing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke
2010-11-01
This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.
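The benefit of the large paging mentioned above comes down to TLB-reach arithmetic: the address range a TLB can cover without misses is the entry count times the page size. A sketch (the entry counts are illustrative, not measured on any particular CPU):

```python
def tlb_reach(entries, page_size):
    """Address range covered by a TLB with `entries` slots at a given page size."""
    return entries * page_size

if __name__ == "__main__":
    KB, MB = 1024, 1024 * 1024
    small = tlb_reach(512, 4 * KB)    # 512 entries of 4 KiB pages -> 2 MiB
    large = tlb_reach(512, 2 * MB)    # same TLB with 2 MiB pages  -> 1 GiB
    assert large // small == 512      # large pages multiply reach by 512x
```

With working sets of many gigabytes per node, that factor is why large pages can hide much of the virtual-memory virtualization overhead.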
Designing a connectionist network supercomputer.
Asanović, K; Beck, J; Feldman, J; Morgan, N; Wawrzynek, J
1993-12-01
This paper describes an effort at UC Berkeley and the International Computer Science Institute to develop a supercomputer for artificial neural network applications. Our perspective has been strongly influenced by earlier experiences with the construction and use of a simpler machine. In particular, we have observed Amdahl's Law in action in our designs and those of others. These observations inspire attention to many factors beyond fast multiply-accumulate arithmetic. We describe a number of these factors along with rough expressions for their influence and then give the applications targets, machine goals and the system architecture for the machine we are currently designing.
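Amdahl's Law, which the authors observed in action, bounds overall speedup by the serial fraction of the work no matter how fast the parallel part becomes; a minimal sketch:

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Amdahl's Law: speedup when a fraction p of the work runs on n parallel units."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_units)

if __name__ == "__main__":
    # Even with arbitrarily fast multiply-accumulate hardware (huge n),
    # a 10% serial fraction caps the overall speedup near 10x.
    assert round(amdahl_speedup(0.9, 10**9), 3) == 10.0
    assert amdahl_speedup(0.9, 1) == 1.0
```

This is exactly why the paper argues for attention to factors beyond fast multiply-accumulate arithmetic: accelerating only the arithmetic leaves the serial remainder dominant.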
Building black holes: supercomputer cinema.
Shapiro, S L; Teukolsky, S A
1988-07-22
A new computer code can solve Einstein's equations of general relativity for the dynamical evolution of a relativistic star cluster. The cluster may contain a large number of stars that move in a strong gravitational field at speeds approaching the speed of light. Unstable star clusters undergo catastrophic collapse to black holes. The collapse of an unstable cluster to a supermassive black hole at the center of a galaxy may explain the origin of quasars and active galactic nuclei. By means of a supercomputer simulation and color graphics, the whole process can be viewed in real time on a movie screen.
Supercomputer analysis of purine and pyrimidine metabolism leading to DNA synthesis.
Heinmets, F
1989-06-01
A model system is established to analyze purine and pyrimidine metabolism leading to DNA synthesis. The principal aim is to explore the flow and regulation of terminal deoxynucleoside triphosphates (dNTPs) under various input and parametric conditions. A series of flow equations is established and subsequently converted to differential equations. These are programmed in Fortran and analyzed on a Cray X-MP/48 supercomputer. The pool concentrations are presented as a function of time under conditions in which various pertinent parameters of the system are modified. The system comprises 100 differential equations.
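The flow-equation-to-ODE approach can be illustrated with a toy two-pool chain (precursor pool feeding a dNTP pool) integrated by forward Euler; the rate constants below are arbitrary stand-ins, not values from the model:

```python
def simulate(k_in=1.0, k1=0.5, k2=0.25, dt=0.01, t_end=100.0):
    """Toy pool model: d[A]/dt = k_in - k1*A ; d[B]/dt = k1*A - k2*B."""
    a = b = 0.0
    t = 0.0
    while t < t_end:
        da = k_in - k1 * a        # synthesis in, conversion out
        db = k1 * a - k2 * b      # conversion in, consumption out
        a += dt * da
        b += dt * db
        t += dt
    return a, b

if __name__ == "__main__":
    a, b = simulate()
    # Analytical steady state: A* = k_in/k1 = 2.0, B* = k_in/k2 = 4.0
    assert abs(a - 2.0) < 1e-3 and abs(b - 4.0) < 1e-2
```

Scaling the same pattern to 100 coupled equations is mechanical, which is why such pool models map naturally onto vector hardware.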
Performance of the Widely-Used CFD Code OVERFLOW on the Pleiades Supercomputer
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2017-01-01
Computational performance studies were made for NASA's widely used Computational Fluid Dynamics code OVERFLOW on the Pleiades Supercomputer. Two test cases were considered: a full launch vehicle with a grid of 286 million points and a full rotorcraft model with a grid of 614 million points. Computations using up to 8000 cores were run on Sandy Bridge and Ivy Bridge nodes. Performance was monitored using times reported in the day files from the Portable Batch System utility. Results for two grid topologies are presented and compared in detail. Observations and suggestions for future work are made.
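The day-file timing described above reduces to extracting elapsed times from PBS-style records; the line format below is a hypothetical stand-in for illustration, not the actual OVERFLOW day-file layout:

```python
import re

# Hypothetical PBS-style accounting field; real day files may differ.
WALLTIME = re.compile(r"resources_used\.walltime=(\d+):(\d\d):(\d\d)")

def walltime_seconds(line):
    """Parse an 'HH:MM:SS' walltime from a PBS-style line, or None if absent."""
    m = WALLTIME.search(line)
    if not m:
        return None
    h, mnt, s = (int(g) for g in m.groups())
    return 3600 * h + 60 * mnt + s

if __name__ == "__main__":
    line = "job 1234.pbs exit=0 resources_used.walltime=02:15:30"
    assert walltime_seconds(line) == 8130
```

Collecting these values across runs at different core counts is what makes the Sandy Bridge versus Ivy Bridge comparison possible.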
Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks
NASA Technical Reports Server (NTRS)
Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias;
2006-01-01
The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmarks (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present IMB results to study the performance of 11 MPI communication functions on these systems.
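Ping-pong results like IMB's are commonly summarized with the linear model t(s) = latency + s/bandwidth; given two measured (message size, time) points the parameters follow directly. A generic sketch, not IMB's own reporting:

```python
def fit_latency_bandwidth(s1, t1, s2, t2):
    """Fit t(s) = latency + s / bandwidth from two (size, time) measurements."""
    bandwidth = (s2 - s1) / (t2 - t1)   # bytes per second
    latency = t1 - s1 / bandwidth       # seconds (zero-byte intercept)
    return latency, bandwidth

if __name__ == "__main__":
    # Synthetic points drawn from t(s) = 2e-6 + s/1e9 (2 us latency, 1 GB/s)
    lat, bw = fit_latency_bandwidth(1024, 2e-6 + 1024 / 1e9,
                                    1 << 20, 2e-6 + (1 << 20) / 1e9)
    assert abs(lat - 2e-6) < 1e-12 and abs(bw - 1e9) < 1.0
```

The intercept captures small-message behavior (dominated by the network's latency) and the slope captures large-message behavior (dominated by link bandwidth), which is why both extremes matter when comparing interconnects.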
Emerging Communities of Practice
ERIC Educational Resources Information Center
McAlister, Martha
2016-01-01
Communities of practice are emerging as an innovative approach to faculty development. While collaborative learning is becoming popular in the classroom, autonomy and individualism continue to dominate the culture of higher education for faculty. However, as we begin to recognize that old solutions to new problems are no longer effective, there is…
Recognizing Career Academy Innovation
ERIC Educational Resources Information Center
Carrier, Cheryl
2007-01-01
For America to stay competitive, the public education system must be transformed to better meet the needs of a changing economy. For this to be achieved, schools, universities, businesses and government must work together to ensure success for all students. To make learning relevant, all must create links between schools and business, between the…
Prevention of Dating Violence on College Campuses: An Innovative Program
ERIC Educational Resources Information Center
Schwartz, Jonathan P.; Griffin, Linda D.; Russell, Melani M.; Frontaura-Duck, Sarannette
2006-01-01
Dating violence is a significant problem on college campuses that requires preventive interventions. In addition, sexist and stereotypical attitudes that support abusive dating behavior have been recognized as potential risk factors. Previous research has found that fraternity and sorority membership is related to stereotypical beliefs concerning…
Alternative Fuels Data Center: Federal Laws and Incentives for Natural Gas
emissions. Grants are available to states, non-profits, and academic institutions to demonstrate innovative from diesel engines through the implementation of varied control strategies and the involvement of are school districts, state and local government programs, federally recognized Indian tribes, and non
Sanford Prescribed Fire Review
Scott Conroy; Jim Saveland; Mark Beighley; John Shive; Joni Ward; Marcus Trujillo; Paul Keller
2003-01-01
The Dixie National Forest has a long-standing history of successfully implementing prescribed fire and suppression programs. The Forest's safety record has been exemplary. The Forest is known Region-wide for its aggressive and innovative prescribed fire program. In particular, the Dixie National Forest is recognized for its leadership in introducing landscape-...
Innovative market mechanisms are being increasingly recognized as effective decision-making institutions to incorporate the value of ecosystem services into the economy. We present a field experiment that integrates an economic auction and a biophysical water flux model to develo...
U.S. History Framework for the 2010 National Assessment of Educational Progress
ERIC Educational Resources Information Center
National Assessment Governing Board, 2009
2009-01-01
This framework identifies the main ideas, major events, key individuals, and unifying themes of American history as a basis for preparing the 2010 assessment. The framework recognizes that U.S. history includes powerful ideas, common and diverse traditions, economic developments, technological and scientific innovations, philosophical debates,…
Essential Tension: Specialization with Broad and General Training in Psychology
ERIC Educational Resources Information Center
Roberts, Michael C.
2006-01-01
The practice fields of psychology develop through specialization in training and education. The recognized specialties play a major role in developing new opportunities for professional psychology and providing quality services for the public. The essential tension comes from the balance of innovation and tradition and, in professional psychology,…
Accountability Pillar Results for Annual Education Results Report (AERR)
ERIC Educational Resources Information Center
Alberta Education, 2008
2008-01-01
Alberta has developed an innovative new way of measuring performance to ensure we continue to provide the best possible education opportunities for all of our students. This accountability framework, called the Accountability Pillar, recognizes and respects the outstanding work seen in our school authorities every day. It ensures school…
University-Industry Research Collaboration: A Model to Assess University Capability
ERIC Educational Resources Information Center
Abramo, Giovanni; D'Angelo, Ciriaco Andrea; Di Costa, Flavia
2011-01-01
Scholars and policy makers recognize that collaboration between industry and the public research institutions is a necessity for innovation and national economic development. This work presents an econometric model which expresses the university capability for collaboration with industry as a function of size, location and research quality. The…
Digital Badges for Staff Training: Motivate Employees to Learn with Micro-Credentials
ERIC Educational Resources Information Center
Copenhaver, Kimberly; Pritchard, Liz
2017-01-01
Integrating micro-credentialing into employee training programs offers libraries an innovative and individualized way to recognize and certify learning and achievement. Digital badges provide a low-cost initiative to support learning benefiting both the individual and institution, offering evidence of skill development that transcends the library…
32 CFR 206.1 - Major characteristics of the NSEP institutional grants program.
Code of Federal Regulations, 2012 CFR
2012-07-01
... issues of national capacity; and (3) Defines innovative approaches to issues not addressed by NSEP... base capacity currently exists in some foreign languages and area studies. It also recognizes that... cases where the demand cannot be met and encourages efforts that increase demand. (5) NSEP encourages...
32 CFR 206.1 - Major characteristics of the NSEP institutional grants program.
Code of Federal Regulations, 2014 CFR
2014-07-01
... issues of national capacity; and (3) Defines innovative approaches to issues not addressed by NSEP... base capacity currently exists in some foreign languages and area studies. It also recognizes that... cases where the demand cannot be met and encourages efforts that increase demand. (5) NSEP encourages...
32 CFR 206.1 - Major characteristics of the NSEP institutional grants program.
Code of Federal Regulations, 2013 CFR
2013-07-01
... issues of national capacity; and (3) Defines innovative approaches to issues not addressed by NSEP... base capacity currently exists in some foreign languages and area studies. It also recognizes that... cases where the demand cannot be met and encourages efforts that increase demand. (5) NSEP encourages...
32 CFR 206.1 - Major characteristics of the NSEP institutional grants program.
Code of Federal Regulations, 2011 CFR
2011-07-01
... issues of national capacity; and (3) Defines innovative approaches to issues not addressed by NSEP... base capacity currently exists in some foreign languages and area studies. It also recognizes that... cases where the demand cannot be met and encourages efforts that increase demand. (5) NSEP encourages...
32 CFR 206.1 - Major characteristics of the NSEP institutional grants program.
Code of Federal Regulations, 2010 CFR
2010-07-01
... issues of national capacity; and (3) Defines innovative approaches to issues not addressed by NSEP... base capacity currently exists in some foreign languages and area studies. It also recognizes that... cases where the demand cannot be met and encourages efforts that increase demand. (5) NSEP encourages...
Popular Science Recognizes Innovative Solar Technologies
photovoltaic (solar electric) modules to produce standard household current are listed among the magazine's photovoltaic module that produces standard household, or alternating current (AC). Ascension Technology's SunSineTM 300 AC photovoltaic module has a built-in microinverter that eliminates the need for direct
The School Administrator Payoff from Teacher Pensions
ERIC Educational Resources Information Center
Koedel, Cory; Ni, Shawn; Podgursky, Michael
2013-01-01
It is widely recognized that teacher quality is the central input in school performance. This insight has put human resource and compensation policies, including performance pay, tenure, alternative route recruitment, and mentoring, at center stage in school reform debates. Some school administrators have been innovators and reform leaders in…
ETV works in partnership with recognized standards and testing organizations and stakeholder groups consisting of regulators, buyers, and vendor organizations, with the full participation of individual technology developers. The program evaluates the performance of innovative
Standard Systems Group (SSG) Technology Adoption Planning Workshop
2004-04-01
11 Figure 2: Map of SEI Technologies Against SSG (Cluster Focused on Customer Issues...them could be consolidated. The objectives were grouped into three categories ( customer focused, internal operations, and innovation & learning... customers ! • Streamlined organization with agile processes • Recognized expertise in exploring and exploiting leading IT technologies • Enterprise
Nourishing STEM Student Success via a TEAM-Based Advisement Model
ERIC Educational Resources Information Center
Polnarieve, Barnard A.; Jaafar, Reem; Hendrix, Tonya; Morgan, Holly Porter; Khethavath, Praveen; Idrissi, Abderrazak Belkharraz
2017-01-01
LaGuardia Community College is an international leader recognized for developing and successfully implementing initiatives and educating underserved diverse students. LaGuardia's STEM students are holistically advised by a team of dedicated faculty and staff members from different departments and divisions. As an innovative approach to advisement,…
Robots and service innovation in health care.
Oborn, Eivor; Barrett, Michael; Darzi, Ara
2011-01-01
Robots have long captured our imagination and are being used increasingly in health care. In this paper we summarize, organize and critique the health care robotics literature and highlight how the social and technical elements of robots iteratively influence and redefine each other. We suggest the need for increased emphasis on the sociological dimensions of using robots, recognizing how social and work relations are restructured during changes in practice. Further, we propose the usefulness of a 'service logic' in providing insight into how robots can influence health care innovation. The Royal Society of Medicine Press Ltd 2011.
From novelty to the every-day: the evolution of ureteroscopy.
Ridyard, Douglas; Dagrosa, Lawrence; Pais, Vernon M
2016-12-01
Ureteroscopy revolutionized the surgical approach to the upper urinary tract, and is well recognized as a cornerstone of modern urology. Although now commonplace, ureteroscopic equipment and techniques were truly revolutionary. A review of the innovations and innovators that developed ureteroscopic surgery sets the stage for a more thorough understanding of what can be done ureteroscopically, and may additionally better inform what limitations remain. Given that future advancements in urologic therapy will be dependent upon a similar pursuit of paradigm shifting improvements in disease management, an overview of the development of modern ureteroscopy may inspire such change.
Crossing and creating boundaries in healthcare innovation.
Ingerslev, Karen
2016-06-20
Purpose - This paper reports from a qualitative case study of a change initiative undertaken in a Danish public hospital setting during national healthcare reforms. The purpose of this paper is to challenge understandings of innovations as being value-adding per se. Whether the effects of attempting to innovate are positive or negative is in this paper regarded as a matter of empirical investigation. Design/methodology/approach - Narrative accounts of activities during the change initiative are analysed in order to elucidate the effects that framing the change initiative as innovation has on which boundaries are created and crossed. Findings - Framing change initiatives as innovation leads to intended as well as unanticipated boundary crossings, where healthcare practitioners from different organizations recognize a shared problem and task. It also leads to unintended boundary reinforcements between "us and them" that may exclude the perspectives of patients or stakeholders when confronting complex problems in healthcare. This boundary reinforcement can lead to further fragmentation of healthcare despite the stated intention to create more integrated services. Practical implications - The paper suggests that researchers as well as practitioners should not presume that intentions to innovate will by themselves enhance creativity and innovation. By analysing the intended, unintended and unanticipated consequences of framing change initiatives as innovation, researchers and practitioners gain nuanced knowledge about the effects of intending to innovate in complex settings such as healthcare. Originality/value - This paper suggests the need for an analytical move from studying the effects of innovation to studying the effects of framing complex problems as a call for innovation.
Health Systems Innovation at Academic Health Centers: Leading in a New Era of Health Care Delivery.
Ellner, Andrew L; Stout, Somava; Sullivan, Erin E; Griffiths, Elizabeth P; Mountjoy, Ashlin; Phillips, Russell S
2015-07-01
Challenged by demands to reduce costs and improve service delivery, the U.S. health care system requires transformational change. Health systems innovation is defined broadly as novel ideas, products, services, and processes-including new ways to promote healthy behaviors and better integrate health services with public health and other social services-which achieve better health outcomes and/or patient experience at equal or lower cost. Academic health centers (AHCs) have an opportunity to focus their considerable influence and expertise on health systems innovation to create new approaches to service delivery and to nurture leaders of transformation. AHCs have traditionally used their promotions criteria to signal their values; creating a health systems innovator promotion track could be a critical step towards creating opportunities for innovators in academic medicine. In this Perspective, the authors review publicly available promotions materials at top-ranked medical schools and find that while criteria for advancement increasingly recognize systems innovation, there is a lack of specificity on metrics beyond the traditional yardstick of peer-reviewed publications. In addition to new promotions pathways and alternative evidence for the impact of scholarship, other approaches to fostering health systems innovation at AHCs include more robust funding for career development in health systems innovation, new curricula to enable trainees to develop skills in health systems innovation, and new ways for innovators to disseminate their work. AHCs that foster health systems innovation could meet a critical need to contribute both to the sustainability of our health care system and to AHCs' continued leadership role within it.
[Innovative care and self-care strategies for people with chronic diseases in Latin America].
Sapag, Jaime C; Lange, Ilta; Campos, Solange; Piette, John D
2010-01-01
To identify innovative strategies for improved care and self-care of patients with chronic diseases (CD) in Latin America and to explore interest in creating a Latin American network of professionals in this field. A descriptive study based on a survey of key experts with recognized national or regional leadership in CD patient care. The 25-question questionnaire sought information on their experiences with care and self-care initiatives for CD patients, descriptions of successful initiatives, the perceived ability of countries to innovate in this area, their interest in participating in a network of Latin American professionals in this field, and more. Content analysis was performed to develop recommendations for the Region. Responses were obtained from 17 (37.8%) of the 45 experts approached; 82.4% confirmed their knowledge of or involvement with an innovative initiative related to the subject. Initial development exists in each of the three innovative strategy types: peer care, informal caregivers, and telenursing, the last being the least explored. There is real interest in forming a Latin American network focused on the development of innovative self-care strategies for CD patients. Support for a joint network is promising. Priorities are building skills in this area and developing innovative proposals for improved CD patient care in the Region. Innovative measures should be complementary and adapted to the specific context of each scenario.
Spatiotemporal modeling of node temperatures in supercomputers
Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; ...
2016-06-10
Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat, so cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines housed there. Coupled with this goal was the aim to develop general good practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are what matter most, the marginal distribution is modeled as a Normal distribution with a generalized Pareto distribution for the upper tail, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to capture the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters, and the same approach can easily be applied to monitor and investigate cooling systems at other data centers.
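The bulk-plus-tail marginal model described above can be sketched on synthetic data: fit a Normal to temperatures below a threshold and a generalized Pareto to the exceedances, then read off an extreme quantile. Everything here (temperatures, threshold choice, seed) is illustrative, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic node temperatures: mostly Gaussian, with a heavy upper tail.
temps = np.concatenate([
    rng.normal(60.0, 3.0, 5000),  # bulk behaviour (deg C)
    68.0 + stats.genpareto.rvs(0.2, scale=4.0, size=200, random_state=rng),
])

u = np.quantile(temps, 0.95)                # tail threshold
bulk, tail = temps[temps <= u], temps[temps > u]

mu, sigma = stats.norm.fit(bulk)            # Normal fit for the bulk
# GPD fit for exceedances over the threshold (location fixed at 0)
xi, _, beta = stats.genpareto.fit(tail - u, floc=0.0)

# Estimated 99.9th-percentile temperature via the fitted tail
p_exceed = tail.size / temps.size           # P(T > u)
q = 0.999
x999 = u + stats.genpareto.ppf(1 - (1 - q) / p_exceed, xi, scale=beta)
print(f"bulk ~ N({mu:.1f}, {sigma:.1f}), tail xi={xi:.2f}, 99.9% temp ~ {x999:.1f} C")
```

The paper layers a Gaussian process copula and a GMRF over this marginal to share information across nodes and time; the sketch shows only the marginal-fit step.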
Integration of PanDA workload management system with Titan supercomputer at OLCF
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at integrating the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA the new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
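The backfill idea above, sizing a job to the currently unused nodes and their expected free window, can be sketched as a toy function. The names, thresholds, and numbers are hypothetical, not PanDA's actual interface.

```python
# Toy backfill scheduler: given the number of currently unused worker
# nodes and how long they are expected to stay free, size a pilot job
# to fit. All names and limits here are illustrative.

def size_backfill_job(free_nodes: int, free_minutes: int,
                      min_nodes: int = 16, max_walltime: int = 120):
    """Return (nodes, walltime_minutes) for a job that fits the free
    resources, or None if the gap is too small to be worth using."""
    if free_nodes < min_nodes or free_minutes < 10:
        return None
    walltime = min(free_minutes, max_walltime)
    return free_nodes, walltime

print(size_backfill_job(300, 45))   # -> (300, 45)
print(size_backfill_job(8, 200))    # -> None (too few nodes for the workload)
```

The real system reads the free-node information from Titan's scheduler in real time; the point of the sketch is only that job size and duration are derived from the observed gap rather than fixed in advance.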
Nguyen, A; Yosinski, J; Clune, J
2016-01-01
The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.
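The core move described above, replacing a hand-crafted behavioral distance with distances in a learned feature space, can be sketched with a stand-in embedding. Here a fixed random projection plays the role of the trained DNN; in the Innovation Engine proper the features would come from a deep network that recognizes interesting differences.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a DNN: any mapping from phenotype to a feature vector.
# A fixed random projection is used purely for illustration.
W = rng.normal(size=(64, 8))
def embed(phenotype):                # phenotype: 64-dim "image"
    return np.tanh(phenotype @ W)

def novelty(candidate, archive, k=3):
    """Mean distance to the k nearest archived behaviors in feature space."""
    d = np.sort([np.linalg.norm(embed(candidate) - embed(a)) for a in archive])
    return d[:k].mean()

archive = [rng.normal(size=64) for _ in range(20)]
candidates = [rng.normal(size=64) for _ in range(10)]
best = max(candidates, key=lambda c: novelty(c, archive))
print("most novel candidate score:", round(novelty(best, archive), 3))
```

The selection rule (keep whatever is farthest from the archive in feature space) is standard novelty search; the paper's contribution is making the feature space itself a deep network's, so that "far" means semantically interesting rather than pixel-different.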
Exemplary Teacher Initiated Programs. Lucretia Crocker Fellows 1986-1987.
ERIC Educational Resources Information Center
Massachusetts State Dept. of Education, Boston.
The Lucretia Crocker Program recognizes the importance and potential impact of teacher-initiated innovation in public schools. The program is named for the woman who set up educational programs for emancipated blacks after the Civil War and promoted educational opportunity for blacks and women. Brief descriptions are given of the following…
The U.S. Environmental Protection Agency’s (EPA) Office of Research and Development (ORD) has long recognized the need for research and development in the area of drinking water and wastewater infrastructure. Most recently in support of the Agency’s Sustainable Water Infrastructu...
ERIC Educational Resources Information Center
O'Hara, Lyndsay; Bryce, Elizabeth Ann; Scharf, Sydney; Yassi, Annalee
2012-01-01
A user-friendly, high quality workplace assessment field guide and an accompanying worksheet are invaluable tools for recognizing hazards in the hospital environment. These tools ensure that both front line workers as well as health and safety and infection control professionals can systematically evaluate hazards and formulate recommendations.…
Sprint's Social Media Ninja Program: A Model for Teaching Consumer Relations
ERIC Educational Resources Information Center
Gilliland, Rebecca A.
2017-01-01
This study reviews the application of a new training model, Sprint's Social Media Ninja program, an innovative approach to using new media to initiate change. Sprint recognized change management must occur from employee ambassadors to relevant audiences including consumers and other employees. By teaching volunteer employees the strategic message…
ERIC Educational Resources Information Center
Smith, Abby
2004-01-01
Recent innovations in information technology have met, and in many cases, exceeded all but the most utopian visions for increasing access to research materials. Although the costs of the technology are high and the risks of losing access to digital information in the future because of hardware and software obsolescence are now widely recognized,…
Pivotal Response Treatments for Autism: Communication, Social, and Academic Development
ERIC Educational Resources Information Center
Koegel, Robert L.; Kern Koegel, Lynn
2006-01-01
Recognized as one of the top state-of-the-art treatments for autism in the United States, the innovative Pivotal Response Treatment uses natural learning opportunities to target and modify key behaviors in children with autism, leading to widespread positive effects on communication, behavior, and social skills. The product of 20 years of…
Implementation and Outcome Evaluation of the Intensive Aftercare Program. Final Report
ERIC Educational Resources Information Center
Wiebush, Richard G.; Wagner, Dennis; McNulty, Betsie; Wang, Yanqing; Le, Thao N.
2005-01-01
The Office of Juvenile Justice and Delinquency Prevention's (OJJDP's) intensive community-based aftercare research and demonstration project known as the Intensive Aftercare Program (IAP) has become widely recognized as one of the most promising recent innovations in juvenile justice. The project has called attention to an area that traditionally…
Fueling the Engine: Smarter, Better Ways to Fund Education Innovators
ERIC Educational Resources Information Center
Hess, Frederick M.
2010-01-01
In "Education Unbound: The Promise and Practice of Greenfield Schooling," this author argued for new education service-delivery organizations that, free from the constricting norms and rules of traditional providers, focused single-mindedly on executing their model. The challenge for reformers is to recognize that enabling such providers is not…
ERIC Educational Resources Information Center
Buckner, Elizabeth; Kim, Paul
2012-01-01
Prior research suggests that exposure to conflict can negatively impact the development of executive functioning, which in turn can affect academic performance. Recognizing the need to better understand the potentially widespread executive function deficiencies among Palestinian students and to help develop educational resources targeted to youth…
Technology's Impact on the Creative Potential of Youth
ERIC Educational Resources Information Center
Rubin, Jim
2012-01-01
The importance of educating students to think critically and creatively was recognized over 2,000 years ago by Socrates, reworked in the 1950s by Benjamin Bloom, and reinforced by many modern-day educators. With changes in lifestyle brought on by innovations in digital technology, teachers, administrators, and parents alike are questioning the…
NREL: News - Students Recognized for Creativity during Energy Education
Middle School, third place; Horizon Community Middle School's car, "Bueblur," fourth place; and technology, craftsmanship and innovation were given to teams from Moore Middle School, first place; Manning Army, Xcel Energy and the Kaiser-Hill Company. As part of the K'NEX Design Contest, nine middle-school
Succeeding with Struggling Students: A Planning Resource for Raising Achievement
ERIC Educational Resources Information Center
Richardson, Marti T.
2006-01-01
Schools today are working harder than ever to help ensure that "all" children "can" learn and achieve high standards. Marti Richardson, a recognized leader in professional and curriculum development, delivers an innovative, classroom-tested program with planning tools to customize it for any school or district's data-based needs. Designed around a…
ERIC Educational Resources Information Center
Phillips, Michelle; St. John, Mark
2013-01-01
In 2009, the National Science Foundation funded the "Dynabook: A Digital Resource and Preservice Model for Developing TPCK" project through its Discovery Research K-12 program. Dynabook project leaders and the National Science Foundation (NSF) recognized that digital textbooks would soon be a primary instructional resource, and seized…
Creating Resiliency and Pathways to Opportunity. Strategies for Transformative Change
ERIC Educational Resources Information Center
Powell, M.; Hatch, M. A.; Fians, E.; Shinert, A.; Richie, D.
2016-01-01
Like many colleges funded by the U.S. Department of Labor's TAACCCT program, the goal of the Northeast Resiliency Consortium (NRC) (a Round Three grantee) was to enhance the capacity of colleges to accelerate learning, ensure that students attain industry-recognized credentials, foster innovative employer partnerships, use new technologies, and…
2009 Community College Futures Assembly Focus: Leading Change--Leading in an Uncertain Environment
ERIC Educational Resources Information Center
Campbell, Dale F.; Morris, Phillip A.
2009-01-01
The Community College Futures Assembly has served as a national, independent policy think tank since 1995. Its purpose is to articulate the critical issues facing American community colleges and recognize innovative programs. Convening annually in January in Orlando, Florida, the Assembly offers a learning environment where tough questions are…
Delivering Higher Education to Adults: An Interview with Robert Mendenhall
ERIC Educational Resources Information Center
Finney, Joni E.
2012-01-01
This article presents an interview with Robert Mendenhall, president of Western Governors University, who is the 2012 recipient of the Virginia B. Smith (VBS) Innovative Leadership Award. The annual award recognizes his leadership in redesigning higher education delivery for adult students. In the interview, Robert Mendenhall talks about his work…
Retaining Intellectual Capital in U.S. Organizations: An Exploratory Study
ERIC Educational Resources Information Center
Taylor, Bobby
2017-01-01
Intellectual capital (IC) is vital to the functionality of information technology (IT) businesses. Many companies recognize that enhancing and maintaining IC is critical to sustainability. The problem is that Fortune 500 IT businesses lack the human resources in the United States needed for innovative development, resulting in an overreliance on…
ERIC Educational Resources Information Center
Gray, Patrick
The Dade County (Florida) Public School System is replacing its Teacher Assessment and Development System (TADS), initiated in 1982, with systems that draw on recent research, link teaching skills to school improvement, and recognize and reward advanced pedagogy. The comprehensive approach will integrate subordinate and peer assessment through a…
Primary Geography in the Republic of Ireland: Practices, Issues and Possible Futures
ERIC Educational Resources Information Center
Pike, Susan
2015-01-01
In the Republic of Ireland, geography is recognized as an important subject for children to learn and all pupils take it throughout their primary school years. The current curriculum, the Primary School Curriculum-Geography, follows a tradition of innovative, child-centered geography curricula in Ireland. This article outlines the history of…
ThinkeringSpace: Designing for Collaboration
ERIC Educational Resources Information Center
Moura, Heloisa; Fahnstrom, Dale; Prygrocki, Greg; McLeish, T. J.
2009-01-01
Innovation, collaboration, and systems thinking are increasingly recognized as skills that can be useful to children and that can help ensure their success as citizens and workers in the 21st century. Seeking to improve opportunities for young people to develop abilities and competencies for the future and to narrow the complexity gap left by No…
Monitoring Object Library Usage and Changes
NASA Technical Reports Server (NTRS)
Owen, R. K.; Craw, James M. (Technical Monitor)
1995-01-01
The NASA Ames Numerical Aerodynamic Simulation (NAS) program's Aeronautics Consolidated Supercomputing Facility (ACSF) supercomputing center services over 1,600 users and has numerous analysts with root access. Several tools have been developed to monitor object library usage and changes. Some of the tools do "noninvasive" monitoring, while others implement run-time logging even for object-only libraries. The run-time logging identifies who, when, and what is being used. The benefits are that real usage can be measured, unused libraries can be discontinued, and training and optimization efforts can be focused on the numerical methods that are actually used. An overview of the tools is given and the results are discussed.
Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.
Doyle-Lindrud, Susan
2015-02-01
IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.
Particle simulation on heterogeneous distributed supercomputers
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.; Dagum, Leonardo
1993-01-01
We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.
The transition of a real-time single-rotor helicopter simulation program to a supercomputer
NASA Technical Reports Server (NTRS)
Martinez, Debbie
1995-01-01
This report presents the conversion effort and results of a real-time flight simulation application transition to a CONVEX supercomputer. Enclosed is a detailed description of the conversion process and a brief description of the Langley Research Center's (LaRC) flight simulation application program structure. Currently, this simulation program may be configured to represent the Sikorsky S-61 helicopter (a five-blade, single-rotor, commercial passenger-type helicopter) or an Army Cobra helicopter (either the AH-1G or the AH-1S model). This report refers to the Sikorsky S-61 simulation program since it is the most frequently used configuration.
Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome
2014-04-25
In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
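The simplest task-parallel layout for a screen of this kind is a static round-robin split of the compound library across MPI ranks, after which each rank runs its own docking jobs. The sketch below shows just that partitioning step, with illustrative names; the authors' actual MPI code is linked above.

```python
# Static work partitioning of a one-million-compound library across MPI
# ranks. Each rank would then invoke its own AutoDock runs on its slice.
# Names and sizes are illustrative.

def rank_slice(compounds, rank, size):
    """Round-robin slice of the library for one MPI rank."""
    return compounds[rank::size]

library = [f"ligand_{i:07d}" for i in range(1_000_000)]
size = 4096                        # e.g. one rank per allocated core
work = rank_slice(library, rank=0, size=size)
print(len(work), work[:2])         # rank 0 gets 245 compounds
```

In an MPI program `rank` and `size` would come from the communicator (e.g. `comm.Get_rank()`/`comm.Get_size()` in mpi4py); the striding itself is the whole scheduling policy.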
Sequence search on a supercomputer.
Gotoh, O; Tagashira, Y
1986-01-10
A set of programs was developed for searching nucleic acid and protein sequence data bases for sequences similar to a given sequence. The programs, written in FORTRAN 77, were optimized for vector processing on a Hitachi S810-20 supercomputer. A search of a 500-residue protein sequence against the entire PIR data base Ver. 1.0 (1) (0.5 M residues) is carried out in a CPU time of 45 sec. About 4 min is required for an exhaustive search of a 1500-base nucleotide sequence against all mammalian sequences (1.2M bases) in Genbank Ver. 29.0. The CPU time is reduced to about a quarter with a faster version.
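The vectorizable inner loop of such a search can be illustrated in NumPy: score every ungapped alignment offset of a query against a target by counting matching residues, one whole-vector comparison per offset. This is a sketch of the idea a vector machine pipelines, not the authors' FORTRAN code.

```python
import numpy as np

# Vectorized ungapped similarity scan: compare the query against the
# target at every offset with a single array comparison per offset.

def best_ungapped_offset(query: str, target: str):
    q = np.frombuffer(query.encode(), dtype=np.uint8)
    t = np.frombuffer(target.encode(), dtype=np.uint8)
    n = len(t) - len(q) + 1
    scores = np.array([(q == t[i:i + len(q)]).sum() for i in range(n)])
    return int(scores.argmax()), int(scores.max())

off, score = best_ungapped_offset("GATTACA", "TTGATTACAGG")
print(off, score)   # -> 2 7  (exact match at offset 2)
```

Real database search adds substitution scores and gaps, but the hot path is the same elementwise comparison over long vectors that the S810's pipelines accelerate.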
A high performance linear equation solver on the VPP500 parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi
1994-12-31
This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed-memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500: (1) scalability for an arbitrary number of processors up to 222 processors, (2) flexible data transfer among processors provided by a crossbar interconnection network, (3) vector processing capability on each processor, and (4) overlapped computation and transfer. The general linear equation solver, based on the blocked LU decomposition method, achieves 120.0 GFLOPS with 100 processors in the LINPACK Highly Parallel Computing benchmark.
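A minimal serial version of the blocked LU decomposition named above fits in a few lines of NumPy. This sketch omits pivoting (safe here because the test matrix has a strong diagonal) and, of course, all of the VPP500's distribution and overlap.

```python
import numpy as np

def blocked_lu(A, nb=2):
    """Right-looking blocked LU without pivoting; returns (L, U) with A = L @ U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # 1. unblocked LU of the tall panel A[k:, k:e]
        for j in range(k, e):
            A[j+1:, j] /= A[j, j]
            A[j+1:, j+1:e] -= np.outer(A[j+1:, j], A[j, j+1:e])
        # 2. block row of U: forward-substitute the unit-lower diagonal
        #    block into A[k:e, e:]
        for j in range(k, e):
            A[j+1:e, e:] -= np.outer(A[j+1:e, j], A[j, e:])
        # 3. rank-nb update of the trailing submatrix -- the matrix-multiply
        #    step that dominates and maps well onto vector hardware
        A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return L, U

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6)) + 6 * np.eye(6)   # strong diagonal: no pivoting needed
L, U = blocked_lu(M, nb=2)
print(np.allclose(L @ U, M))   # -> True
```

Blocking matters because step 3 is a dense matrix multiply: on a machine like the VPP500 it runs at near-peak vector rates and is also the natural unit to distribute across processors.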
Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers
NASA Technical Reports Server (NTRS)
Lind, Rick; Balas, Gary J.
1995-01-01
This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems, making the algorithm impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
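The kind of constraint involved can be illustrated with the simplest linear matrix inequality, the Lyapunov inequality (find P > 0 with AᵀP + PA < 0): solving the corresponding Lyapunov equation produces such a certificate P directly. This is a sketch of the constraint type only; the paper's full-information synthesis LMI is larger and is handed to a general convex (semidefinite) solver.

```python
import numpy as np
from scipy import linalg

# For a stable A there exists P > 0 with A^T P + P A < 0. Choosing
# Q > 0 and solving the Lyapunov equation A^T P + P A = -Q yields one.

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                   # stable: eigenvalues -1, -2
Q = np.eye(2)
P = linalg.solve_continuous_lyapunov(A.T, -Q)  # solves A^T P + P A = -Q

print(np.linalg.eigvalsh(P))                   # all positive -> P > 0
```

The cost comment in the abstract is about exactly this object at scale: the decision variable P has O(n²) entries, so the semidefinite program grows quickly with the order n of the flexible-structure model.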
SiGN-SSM: open source parallel software for estimating gene networks with state space models.
Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru
2011-04-15
SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), a statistical dynamic model suitable for analyzing short and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public License (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. Pre-compiled binaries for some architectures are available in addition to the source code, and pre-installed binaries are available on the Human Genome Center supercomputer system. The online manual and the supplementary information of SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.
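A state space model of the kind SiGN-SSM estimates pairs a linear hidden-state evolution with a noisy linear readout; the standard inference step for such a model is the Kalman filter, sketched below on synthetic data. This is a generic illustration: SiGN-SSM additionally estimates the model matrices themselves (with its parameter constraint) rather than taking them as given.

```python
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[0.9, 0.1], [0.0, 0.8]])    # state transition (hypothetical)
H = np.array([[1.0, 0.0]])                # observation: one gene's expression
Qn, Rn = 0.01 * np.eye(2), np.array([[0.1]])

# simulate a short expression time series from the model
x, ys = np.zeros(2), []
for _ in range(50):
    x = F @ x + rng.multivariate_normal([0, 0], Qn)
    ys.append(H @ x + rng.normal(0, np.sqrt(Rn[0, 0]), 1))

# Kalman filter: predict, then update with each observation
m, P = np.zeros(2), np.eye(2)
for y in ys:
    m, P = F @ m, F @ P @ F.T + Qn                        # predict
    S = H @ P @ H.T + Rn
    K = P @ H.T @ np.linalg.inv(S)                        # gain
    m, P = m + K @ (y - H @ m), (np.eye(2) - K @ H) @ P   # update

print("filtered state:", m.round(3))
```

Fitting F (whose entries encode regulatory dependencies between hidden modules) is what turns this machinery into a gene-network estimator, and the permutation test mentioned above is what the supercomputer time is spent on.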
Transferring ecosystem simulation codes to supercomputers
NASA Technical Reports Server (NTRS)
Skiles, J. W.; Schulbach, C. H.
1995-01-01
Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectoring and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.
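The vectorization gains mentioned above come from replacing per-element loops with whole-array expressions; here is a generic illustration of the transformation (not the grassland model's code, and logistic growth is used purely as a plausible ecosystem-style update).

```python
import numpy as np

# Scalar code: one element per loop iteration, hard for vector hardware.
def growth_loop(biomass, rate, capacity):
    out = np.empty_like(biomass)
    for i in range(biomass.size):
        out[i] = biomass[i] + rate * biomass[i] * (1 - biomass[i] / capacity)
    return out

# Vectorized code: the same update expressed over the whole array at once.
def growth_vectorized(biomass, rate, capacity):
    return biomass + rate * biomass * (1 - biomass / capacity)

b = np.linspace(1.0, 100.0, 10_000)
print(np.allclose(growth_loop(b, 0.1, 200.0), growth_vectorized(b, 0.1, 200.0)))
```

On a vector machine like the C90 the compiler performs this transformation when the loop permits it; restructuring code so that more loops are in this form is exactly the remaining six-to-tenfold opportunity the abstract describes.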
Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, E. Wes; Leinweber, David; Ruebel, Oliver
2011-09-16
This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high-frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” used in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
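Of the two indicators, the volume HHI is easy to state concretely: the sum of squared volume shares across trading venues, near 1 when one venue dominates and low when flow is fragmented. A sketch with made-up volumes (the paper computes this on real high-frequency data):

```python
# Volume Herfindahl-Hirschman Index (HHI): sum of squared volume shares.
# 1/n for perfectly even fragmentation across n venues; 1.0 for a monopoly.

def volume_hhi(venue_volumes):
    total = sum(venue_volumes)
    return sum((v / total) ** 2 for v in venue_volumes)

print(volume_hhi([100, 100, 100, 100]))   # fragmented: 0.25
print(volume_hhi([970, 10, 10, 10]))      # concentrated: ~0.94
```

Tracked over rolling windows of trade volume, a rapid shift in this index is the kind of fragmentation signal the abstract reports firing ahead of the Flash Crash.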
NASA Technical Reports Server (NTRS)
Cohen, Jarrett
1999-01-01
Parallel computers built out of mass-market parts are cost-effectively performing data processing and simulation tasks. The Supercomputing (now known as "SC") series of conferences celebrated its 10th anniversary last November. While vendors have come and gone, the dominant paradigm for tackling big problems still is a shared-resource, commercial supercomputer. Growing numbers of users needing a cheaper or dedicated-access alternative are building their own supercomputers out of mass-market parts. Such machines are generally called Beowulf-class systems after the 11th century epic. This modern-day Beowulf story began in 1994 at NASA's Goddard Space Flight Center. A laboratory for the Earth and space sciences, computing managers there threw down a gauntlet to develop a $50,000 gigaFLOPS workstation for processing satellite data sets. Soon, Thomas Sterling and Don Becker were working on the Beowulf concept at the University Space Research Association (USRA)-run Center of Excellence in Space Data and Information Sciences (CESDIS). Beowulf clusters mix three primary ingredients: commodity personal computers or workstations, low-cost Ethernet networks, and the open-source Linux operating system. One of the larger Beowulfs is Goddard's Highly-parallel Integrated Virtual Environment, or HIVE for short.
Compute Server Performance Results
NASA Technical Reports Server (NTRS)
Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)
1994-01-01
Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single-processor rate of any vendor. However, if the price-performance ratio (PPR) is considered most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.
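The PPR arithmetic is simple enough to check directly. The sketch below back-calculates the implied C90 per-processor price from the abstract's own figures (460 MFLOPS sustained and 160 FLOPS per dollar), so the price is derived for illustration, not quoted.

```python
# Price-performance ratio (PPR): sustained FLOPS divided by system price.

def ppr(flops, price_dollars):
    return flops / price_dollars

c90_flops = 460e6                 # sustained single-processor rate (from above)
c90_price = c90_flops / 160       # implied by a PPR of 160 FLOPS/$
print(f"implied C90 price: ${c90_price / 1e6:.1f}M, "
      f"PPR = {ppr(c90_flops, c90_price):.0f} FLOPS/$")
```

By the same arithmetic, a workstation delivering far fewer sustained FLOPS can still win on PPR because its price is orders of magnitude lower, which is the abstract's central point.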
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moniz, Ernest; Carr, Alan; Bethe, Hans
The Trinity Test of July 16, 1945 was the first full-scale, real-world test of a nuclear weapon; with the new Trinity supercomputer, Los Alamos National Laboratory's goal is to do this virtually, in 3D. Trinity was the culmination of a fantastic effort of groundbreaking science and engineering by hundreds of men and women at Los Alamos and other Manhattan Project sites. It took them less than two years to change the world. The Laboratory is marking the 70th anniversary of the Trinity Test because it not only ushered in the Nuclear Age, but with it the origin of today’s advanced supercomputing. We live in the Age of Supercomputers due in large part to nuclear weapons science here at Los Alamos. National security science, and nuclear weapons science in particular, at Los Alamos National Laboratory have provided a key motivation for the evolution of large-scale scientific computing. Beginning with the Manhattan Project there has been a constant stream of increasingly significant, complex problems in nuclear weapons science whose timely solutions demand larger and faster computers. The relationship between national security science at Los Alamos and the evolution of computing is one of interdependence.
Improving Memory Error Handling Using Linux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.
As supercomputers continue to get faster and more powerful, they will also have more nodes. If nothing is done, the amount of memory in supercomputer clusters will soon grow large enough that memory failures become unmanageable if handled by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process-oriented method that addresses this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. Offlining memory pages simplifies error handling and reduces both the hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers: without memory error handling, it will not be feasible to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals that the process of offlining memory pages works and is relatively simple to use. As more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
NASA Astrophysics Data System (ADS)
Schaaf, Kjeld; Overeem, Ruud
2004-06-01
Moore's law is best exploited by using consumer-market hardware. In particular, the gaming industry pushes the limits of processor performance, reducing the cost per raw flop even faster than Moore's law predicts. Next to the cost benefits of Commercial Off-The-Shelf (COTS) processing resources, there is a rapidly growing experience pool in cluster-based processing. The typical Beowulf cluster of PCs is well known, and multiple examples exist of specialised cluster computers based on more advanced server nodes or even gaming stations. All these cluster machines build upon the same body of knowledge about cluster software management, scheduling, middleware libraries, and mathematical libraries. In this study, we have integrated COTS processing resources and cluster nodes into a very high performance processing platform suitable for streaming data applications, in particular a correlator. The processing power required for the correlator in modern radio telescopes is in the range of the larger supercomputers, which motivates the use of supercomputer technology. Raw processing power is provided by graphical processors, combined with an InfiniBand host bus adapter with integrated data-stream-handling logic. With this processing platform a scalable correlator can be built with continuously growing processing power at consumer-market prices.
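The correlator workload described above boils down to an "FX" operation: channelize each antenna's voltage stream with an FFT (F), then cross-multiply and accumulate the spectra (X). The sketch below is a generic single-baseline illustration in NumPy, not the authors' GPU/InfiniBand implementation; the channel count and stream layout are arbitrary assumptions:

```python
import numpy as np

def fx_correlate(x, y, nchan=256):
    """Correlate two antenna voltage streams FX-style: split each
    stream into blocks, FFT each block into `nchan` channels (F),
    then cross-multiply and accumulate the spectra (X)."""
    nspec = min(len(x), len(y)) // nchan
    acc = np.zeros(nchan, dtype=complex)
    for k in range(nspec):
        X = np.fft.fft(x[k * nchan:(k + 1) * nchan])
        Y = np.fft.fft(y[k * nchan:(k + 1) * nchan])
        acc += X * np.conj(Y)       # visibility accumulation
    return acc / nspec

# Correlating a stream with itself yields an autocorrelation
# spectrum: real and non-negative in every channel.
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
spec = fx_correlate(s, s)
```

Scaling this to a telescope array means computing the same cross-multiply-accumulate for every antenna pair in every channel, which is the embarrassingly parallel workload the GPUs in the platform are used for.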
Moniz, Ernest; Carr, Alan; Bethe, Hans; Morrison, Phillip; Ramsay, Norman; Teller, Edward; Brixner, Berlyn; Archer, Bill; Agnew, Harold; Morrison, John
2018-01-16
The Trinity Test of July 16, 1945 was the first full-scale, real-world test of a nuclear weapon; with the new Trinity supercomputer Los Alamos National Laboratory's goal is to do this virtually, in 3D. Trinity was the culmination of a fantastic effort of groundbreaking science and engineering by hundreds of men and women at Los Alamos and other Manhattan Project sites. It took them less than two years to change the world. The Laboratory is marking the 70th anniversary of the Trinity Test because it not only ushered in the Nuclear Age, but with it the origin of today's advanced supercomputing. We live in the Age of Supercomputers due in large part to nuclear weapons science here at Los Alamos. National security science, and nuclear weapons science in particular, at Los Alamos National Laboratory have provided a key motivation for the evolution of large-scale scientific computing. Beginning with the Manhattan Project there has been a constant stream of increasingly significant, complex problems in nuclear weapons science whose timely solutions demand larger and faster computers. The relationship between national security science at Los Alamos and the evolution of computing is one of interdependence.
KNBD: A Remote Kernel Block Server for Linux
NASA Technical Reports Server (NTRS)
Becker, Jeff
1999-01-01
I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower-level component of a parallel file system. Parallel file systems are an important component of high-performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void, and hence a demand, for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system runs entirely in user space. Migrating their I/O services to the kernel could provide a performance boost by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) since kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.
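To illustrate the striping idea in the last sentence, a round-robin mapping from a logical file offset to a (server node, local block-device offset) pair might look like the following. The function and its parameters are hypothetical; they are not part of KNBD, the network block device, or PVFS:

```python
def stripe_map(offset: int, stripe_size: int, n_nodes: int):
    """Map a logical byte offset to (node index, byte offset on that
    node's block device) under simple round-robin striping.
    Hypothetical sketch; not KNBD's actual layout."""
    stripe = offset // stripe_size        # global stripe number
    node = stripe % n_nodes               # server holding this stripe
    local_stripe = stripe // n_nodes      # stripe index on that server
    return node, local_stripe * stripe_size + offset % stripe_size

# With four nodes and 64 KiB stripes, consecutive stripes rotate
# across nodes 0, 1, 2, 3, 0, ... so large sequential reads and
# writes hit all block servers in parallel.
```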
The Q continuum simulation: Harnessing the power of GPU accelerated supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris
2015-08-01
Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the "Q Continuum" cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)³ and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≈ 1.5 × 10⁸ M_⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching in a large, cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan's GPU accelerators.
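The quoted particle mass follows from the box volume, the particle count, and the mean matter density. The sketch below is a back-of-envelope check assuming WMAP-7-like parameters (Ω_m ≈ 0.265, h ≈ 0.71) and an 8192³ particle grid; these numbers are assumptions consistent with, but not stated in, the abstract:

```python
# Back-of-envelope check of the quoted particle mass. The cosmological
# parameters and the 8192^3 particle count are assumptions, not values
# stated in the abstract.
OMEGA_M, H = 0.2648, 0.71          # assumed matter density, Hubble h
RHO_CRIT = 2.775e11 * H**2         # critical density, M_sun / Mpc^3
L_BOX = 1300.0                     # box side length, Mpc
N_PART = 8192**3                   # ~0.55 trillion particles

# particle mass = mean matter density * box volume / particle count
m_p = OMEGA_M * RHO_CRIT * L_BOX**3 / N_PART
print(f"m_p ~ {m_p:.2e} M_sun")    # of order 1.5e8, as quoted
```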
An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.
Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei
2017-12-01
Big data, cloud computing, and high-performance computing (HPC) are at the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion-a big data interface on the Tianhe-2 supercomputer-to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.
The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems.
As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay, 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.
Tseng, Jocelyn; Samagh, Sonia; Fraser, Donna; Landman, Adam B
2018-06-01
Despite considerable investment in digital health (DH) companies and a growing DH ecosystem, there are multiple challenges to testing and implementing innovative solutions. Health systems have recognized the potential of DH and have formed DH innovation centers. However, limited information is available on DH innovation center processes, best practices, or outcomes. This case report describes a DH innovation center process that can be replicated across health systems and defines and benchmarks process indicators to assess DH innovation center performance. The Brigham and Women's Hospital's Digital Health Innovation Group (DHIG) accelerates DH innovations from idea to pilot safely and efficiently using a structured process. Fifty-four DH innovations were accelerated by the DHIG process between July 2014 and December 2016. In order to measure effectiveness of the DHIG process, key process indicators were defined as 1) number of solutions that completed each DHIG phase and 2) length of time to complete each phase. Twenty-three DH innovations progressed to pilot stage and 13 innovations were terminated after barriers to pilot implementation were identified by the DHIG process. For 4 DH solutions that executed a pilot, the average time for innovations to proceed from DHIG intake to pilot initiation was 9 months. Overall, the DHIG is a reproducible process that addresses key roadblocks in DH innovation within health systems. To our knowledge, this is the first report to describe DH innovation process indicators and results within an academic health system. Therefore, there is no published data to compare our results with the results of other DH innovation centers. Standardized data collection and indicator reporting could allow benchmark comparisons across institutions. Additional opportunities exist for the validation of DH solution effectiveness and for translational support from pilot to implementation. 
These are critical steps to advance DH technologies and effectively leverage the DH ecosystem to transform healthcare. Copyright © 2017 Elsevier Inc. All rights reserved.
A Nurse-Led Innovation in Education: Implementing a Collaborative Multidisciplinary Grand Rounds.
Matamoros, Lisa; Cook, Michelle
2017-08-01
Multidisciplinary grand rounds provides an opportunity to promote excellence in patient care through scholarly presentations and interdisciplinary collaboration with an innovative approach. In addition, multidisciplinary grand rounds serves to recognize expertise of staff, mentor and support professional development, and provide a collaborative environment across all clinical disciplines and support services. This article describes a process model developed by nurse educators for implementing a multidisciplinary grand rounds program. The components of the process model include topic submissions, coaching presenters, presentations, evaluations, and spreading the work. This model can be easily implemented at any organization. J Contin Educ Nurs. 2017;48(8):353-357. Copyright 2017, SLACK Incorporated.
Seismic signal processing on heterogeneous supercomputers
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas
2015-04-01
The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational and input/output intensity. Development of efficient approaches to seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in recent years provide numerous computing nodes interconnected via high-throughput networks, each node containing a mix of processing elements of different architectures, such as several sequential processor cores and one or a few graphical processing units (GPUs) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC30 family operated by the Swiss National Supercomputing Centre (CSCS), which we used in this research. Heterogeneous supercomputers offer the opportunity for substantial application performance increases and are more energy-efficient; however, they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is the design of a prototype of such a library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed.
This poses new computational problems that require dedicated HPC solutions. The chosen application uses a wide range of common signal processing methods, including various IIR filter designs, amplitude and phase correlation, computation of the analytic signal, and discrete Fourier transforms. Furthermore, various processing methods specific to seismology, such as rotation of seismic traces, are used. Efficient implementation of all these methods on GPU-accelerated systems presents several challenges. In particular, it requires a careful distribution of work between the sequential processors and the accelerators. Furthermore, since the application is designed to process very large volumes of data, special attention had to be paid to the efficient use of the available memory and networking hardware resources in order to reduce the intensity of data input and output. In our contribution we will explain the software architecture as well as the principal engineering decisions used to address these challenges. We will also describe the programming model based on C++ and CUDA that we used to develop the software. Finally, we will demonstrate performance improvements achieved by using the heterogeneous computing architecture. This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID d26.
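Stripped of the seismology-specific preprocessing, the core of an ambient noise correlation is the cross-correlation of two recorded traces, usually computed in the frequency domain. A minimal NumPy sketch of that kernel (an illustration only, not the authors' C++/CUDA library):

```python
import numpy as np

def xcorr(a, b):
    """Cross-correlate two equal-length traces in the frequency
    domain, zero-padded to avoid circular wrap-around. Returns
    lags -(n-1) .. n-1; a positive peak lag means `b` is a
    delayed copy of `a`."""
    n = len(a)
    nfft = 2 * n  # zero-pad so the correlation is linear, not circular
    C = np.conj(np.fft.rfft(a, nfft)) * np.fft.rfft(b, nfft)
    cc = np.fft.irfft(C, nfft)
    # Reorder from FFT layout to lags -(n-1) .. n-1.
    return np.concatenate((cc[-(n - 1):], cc[:n]))

# A trace correlated with a 5-sample delayed copy of itself
# should peak at lag +5.
rng = np.random.default_rng(1)
sig = rng.standard_normal(1000)
delayed = np.roll(sig, 5)
lags = np.arange(-999, 1000)
peak_lag = lags[np.argmax(xcorr(sig, delayed))]
```

In interferometry this peak lag corresponds to the travel time of energy between two stations; the production challenge the abstract describes is running this kernel, plus filtering and spectral whitening, over very many station pairs and long time series.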
Kiparsky, Michael; Sedlak, David L.; Thompson, Barton H.; Truffer, Bernhard
2013-01-01
Abstract Interaction between institutional change and technological change poses important constraints on transitions of urban water systems to a state that can meet future needs. Research on urban water and other technology-dependent systems provides insights that are valuable to technology researchers interested in assuring that their efforts will have an impact. In the context of research on institutional change, innovation is the development, application, diffusion, and utilization of new knowledge and technology. This definition is intentionally inclusive: technological innovation will play a key role in reinvention of urban water systems, but is only part of what is necessary. Innovation usually depends on context, such that major changes to infrastructure include not only the technological inventions that drive greater efficiencies and physical transformations of water treatment and delivery systems, but also the political, cultural, social, and economic factors that hinder and enable such changes. On the basis of past and present changes in urban water systems, institutional innovation will be of similar importance to technological innovation in urban water reinvention. To solve current urban water infrastructure challenges, technology-focused researchers need to recognize the intertwined nature of technologies and institutions and the social systems that control change. PMID:23983450
Layton, Natasha; Murphy, Caitlin; Bell, Diane
2018-07-01
Assistive technology (AT) is an essential facilitator of independence and participation, both for people living with the effects of disability and/or non-communicable disease, as well as people aging with resultant functional decline. The World Health Organization (WHO) recognizes the substantial gap between the need for and provision of AT and is leading change through the Global Cooperation on Assistive Technology (GATE) initiative. Showcasing innovations gathered from 92 global researchers, innovators, users and educators of AT through the WHO GREAT Summit, this article provides an analysis of ideas and actions on a range of dimensions in order to provide a global overview of AT innovation. The accessible method used to capture and showcase this data is presented and critiqued, concluding that "innovation snapshots" are a rapid and concise strategy to capture and showcase AT innovation and to foster global collaboration. Implications for Rehabilitation Focal tools such as ePosters with uniform data requirements enable the rapid sharing of information. A diversity of innovative practices are occurring globally in the areas of AT Products, Policy, Provision, People and Personnel. The method offered for Innovation Snapshots had substantial uptake and is a feasible means to capture data across a range of stakeholders. Meeting accessibility criteria is an emerging competency in the AT community. Substantial areas of common interest exist across regions and globally in the AT community, demonstrating the effectiveness of information sharing platforms such as GATE and supporting the idea of regional forums and networks.
Absorptive capacity, technological innovation, and product life cycle: a system dynamics model.
Zou, Bo; Guo, Feng; Guo, Jinyu
2016-01-01
While past research has recognized the importance of the dynamic nature of absorptive capacity, there is limited knowledge on how to generate a fair and comprehensive analytical framework. Based on interviews with 24 Chinese firms, this study develops a system-dynamics model that incorporates an important feedback loop among absorptive capacity, technological innovation, and product life cycle (PLC). The simulation results reveal that (1) PLC affects the dynamic process of absorptive capacity; (2) the absorptive capacity of a firm peaks in the growth stage of PLC, and (3) the market demand at different PLC stages is the main driving force in firms' technological innovations. This study also explores a sensitivity simulation using the variables of (1) time spent in founding an external knowledge network, (2) research and development period, and (3) knowledge diversity. The sensitivity simulation results show that the changes of these three variables have a greater impact on absorptive capacity and technological innovation during growth and maturity stages than in the introduction and declining stages of PLC. We provide suggestions on how firms can adjust management policies to improve their absorptive capacity and technological innovation performance during different PLC stages.
Innovation in healthcare. The challenge for laboratory medicine.
Price, Christopher P; St John, Andrew
2014-01-01
The delivery of healthcare is the product of a complex organization and it is not entirely surprising that innovation is not always considered to deliver on the expectations generated by invention. As policymakers and payers seek to improve the quality and value-for-money of healthcare, more attention is being directed at the barriers to innovation, and the challenges of translating inventions into outcomes. Laboratory medicine is one facet of healthcare that has generated considerable levels of invention but, while showing increasing volumes of activity over the past decades, it has not been recognized for generating the benefit in outcomes that might have been expected. One of the major reasons for this position has been the poor quality of evidence available to demonstrate the impact of laboratory investigations on health outcomes. Consequently an absence of evidence stifles the opportunity to develop the business case that demonstrates the link between test result and improved outcome. This has a major influence on the success of innovation in laboratory medicine. This review explores the process of innovation applied to laboratory medicine and offers an insight into how the impact of laboratory medicine on health outcomes can be improved. © 2013.
Entrepreneurship Psychological Characteristics of Nurses.
Dehghanzadeh, Mohammad Reza; Kholasehzadeh, Golrasteh; Birjandi, Masoumeh; Antikchi, Ensieh; Sobhan, Mohamad Reza; Neamatzadeh, Hossein
2016-09-01
Nurses are full partners with other health care professionals, yet until fairly recently the scope of nurses' potential in entrepreneurship has not been widely recognized. The present study evaluates entrepreneurship psychological characteristics among nurses. The survey instrument included scales measuring entrepreneurship psychological characteristics, including locus of control, need for achievement, risk-taking propensity, ambiguity tolerance, and innovation, among nurses in the Shahid Sadoughi Hospital, Yazd, Iran in 2013. In this study, the average entrepreneurship psychological characteristics of nurses were higher than the standard mark. The majority of the nurses (20.4%) showed average entrepreneurship, meaning they have some strong entrepreneurial traits. The results show that the average for need for achievement is 34.5%, locus of control 33.8%, risk-taking propensity 33.2%, ambiguity tolerance 34.2%, and innovation 41.6%. Four dimensions (need for achievement, risk taking, ambiguity tolerance, and innovation) were significant; locus of control was not significant at the 0.05 level. In these terms, entrepreneurial nurses are comparatively more innovative and show greater risk taking, need for achievement, and ambiguity tolerance. The results largely support significant positive relationships between psychological traits and entrepreneurial orientations.
Design Possibilities for the e-Schoolbag: Addressing the 1:1 Challenge within China
ERIC Educational Resources Information Center
Gu, Xiaoqing; Xu, Xiaojuan; Wang, Huawen; Crook, Charles
2017-01-01
There is widespread enthusiasm for 1:1 computing in education. Recognizing that secure innovation of educational practice should be built upon contextual sensitivity, this article reported two case studies anticipating the potential development of 1:1 classes in the particular cultural context of China. The first case described how the new…
ERIC Educational Resources Information Center
van Dam-Mieras, Rietje; Lansu, Angelique; Rieckmann, Marco; Michelsen, Gerd
2008-01-01
The purpose of this article is to describe a joint effort between three European and six Latin American universities to create an international Master's degree program on Sustainable Development and Management. Faculty members from these institutions are working together on this unusual and innovative project, which recognizes the importance of…
Textiles, Tariffs, and Turnarounds: Profits Improved.
ERIC Educational Resources Information Center
Aronoff, Craig
1986-01-01
The U.S. textile industry may serve as a classic study in regeneration through market forces. The industry has recently made a turnaround in profits after having been recognized as an industry that was losing most of its profits to overseas producers. The reason for the emerging strength of the industry is that it began to innovate after a…
Teaching the Short Story: A Guide to Using Stories from around the World.
ERIC Educational Resources Information Center
Neumann, Bonnie H., Ed.; McDonnell, Helen M., Ed.
An innovative and practical resource for teachers looking to move beyond English and American works, this book explores 175 highly teachable short stories from nearly 50 countries, highlighting the work of recognized authors from practically every continent, authors such as Chinua Achebe, Anita Desai, Nadine Gordimer, Milan Kundera, Isak Dinesen,…
ERIC Educational Resources Information Center
Alemneh, Daniel Gelaw
2009-01-01
Digital preservation is a significant challenge for cultural heritage institutions and other repositories of digital information resources. Recognizing the critical role of metadata in any successful digital preservation strategy, the Preservation Metadata Implementation Strategies (PREMIS) has been extremely influential on providing a "core" set…
Introduction: The Growing Importance of Traditional Forest-Related Knowledge
Ronald L. Trosper; John A. Parrotta
2012-01-01
The knowledge, innovations, and practices of local and indigenous communities have supported their forest-based livelihoods for countless generations. The role of traditional knowledge, and the bio-cultural diversity it sustains, is increasingly recognized as important by decision makers, conservation and development organizations, and the scientific community. However...
Starting Young: Massachusetts Birth-3rd Grade Policies That Support Children's Literacy Development
ERIC Educational Resources Information Center
Cook, Shayna; Bornfreund, Laura
2015-01-01
Massachusetts is one of a handful of states that is often recognized as a leader in public education, and for good reason. The Commonwealth consistently outperforms most states on national reading and math tests and often leads the pack in education innovations. "Starting Young: Massachusetts Birth-3rd Grade Policies that Support Children's…
ERIC Educational Resources Information Center
de Koster, S.; Kuiper, E.; Volman, M.
2012-01-01
The fit between existing educational practices and promoted classroom use of information and communication technologies is increasingly recognized as a factor in successful integration of such technologies in classroom practice. Using a descriptive multiple-case study design, we characterize the types of information and communication technology…
Casa De La Solidaridad: A Pedagogy of Solidarity
ERIC Educational Resources Information Center
Yonkers-Talz, Kevin
2013-01-01
Casa de la Solidaridad has been recognized as an innovative and effective educational model within Jesuit higher education yet, until now, there have only been verbal presentations of the unique attributes of the Casa de la Solidaridad model. In addition, there has been a lack of information regarding the influence of the Casa experience on the…
ERIC Educational Resources Information Center
Campbell, Dale F.; Yu, Hongwei
2010-01-01
The Community College Futures Assembly has served as a national independent policy think tank since 1995. Its purpose is to articulate the critical issues facing American community colleges and recognize innovative programs. Convening annually in January in Orlando, Florida, the Assembly provides an interactive learning environment where tough…
ERIC Educational Resources Information Center
Zielinski, K.; Czekierda, L.; Malawski, F.; Stras, R.; Zielinski, S.
2017-01-01
In this paper, we address the problem of an educational gap existing between high schools and universities: many students consider their choice of field of study as inappropriate, mostly due to insufficient information regarding the discipline and the university educational process. To solve this problem, we define an innovative, information and…
ERIC Educational Resources Information Center
Parveen, Shaheen; Pater, Cayley
2012-01-01
Responding to the need for foreign language fluency in ever-globalizing business and cultural spheres, the federal government and foreign language institutions in an eleven-member task force collaboratively published a set of nationally recognized, foundational standards for foreign language teaching. Rather than rely on teacher-centered…
Zolotopia: A New Classic for Design
ERIC Educational Resources Information Center
Payne, Janet
2007-01-01
While working on a graphic design job at FAO Schwartz, entrepreneurs Sandra Higashi and Byron Glaser recognized a need for something new in toys. The result was the birth of Zolo, an innovative, interactive toy, designed and produced by Higashi and Glaser and distributed by the Museum of Modern Art (MoMA) in New York. The initial idea for Zolo…
Strengthening Mental Health Services in Head Start: A Challenge for the 1990s.
ERIC Educational Resources Information Center
Piotrkowski, Chaya S.; And Others
1994-01-01
The Head Start community has recognized that the mental health program has been a weak spot in the program's comprehensive services strategy. A brief telephone survey of 101 programs identified the services most desired and resulted in recommendations for strengthening this aspect and making the program a laboratory for innovative practices. (SLD)
Dual Mission: An Innovative Field Model for Training Social Work Students for Work with Veterans
ERIC Educational Resources Information Center
Selber, Katherine; Chavkin, Nancy Feyl; Biggs, Mary Jo Garcia
2015-01-01
This descriptive article explores a collaborative model that blends the dual missions of training social work students to work with military personnel, veterans, and their families while serving student veterans on campus. The model consists of 2 main components: (1) a nationally recognized service component for providing academic, health and…
ERIC Educational Resources Information Center
Altstadt, David
2012-01-01
States wrestling with the challenge of increasing community college student completion rates recognize that a critical next step is building support among faculty for reform efforts. Faculty can play a crucial role in bridging the historic divide between policy and practice. Empowering faculty to take a substantive role in informing policy…
The Way up, down under: Innovations Shape Learning at Science and Math School
ERIC Educational Resources Information Center
Bissaker, Kerry; Davies, Jim; Heath, Jayne
2011-01-01
Professor John Rice, a pioneer of the Australian Science and Mathematics School (ASMS), recognized that schools' curricula were at odds with the kind of science and mathematics driving the new economy. In addition to curriculum that lacked relevance to contemporary life, negative student attitudes and a shortage of qualified science and…
Developing an Institution-Wide Web-Based Research Request and Preliminary Budget Development System
ERIC Educational Resources Information Center
Glenn, Julia L.; Sampson, Royce R.
2011-01-01
While medical research may often be regarded by academics and the general population in terms of the remarkable science being conducted or the study participants willing to volunteer their time for the advancement of medical innovation, many in the research administration field recognize the tremendous amount of effort that goes on behind the…
NREL Staff Recognized by DOE for Outstanding Achievements at 2017 Annual
…understand failure modes, particularly overheating, in lithium-ion batteries. … the innovative Isothermal … in support of DOE's Hybrid and Electric Vehicle Battery R&D Program. His work on battery thermal … developing a first-of-its-kind Battery Internal Short Circuit Device in collaboration with NASA to better …
NREL Scientists and Engineers Recognized for Top Innovations | NREL | News
…commercially available, large-format isothermal battery calorimeter for lithium-ion battery safety testing … to test the performance and safety of large-format lithium-ion batteries used extensively in electric … develop NREL intellectual property representing an isothermal battery calorimeter. The technical …
Convergent bacterial microbiotas in the fungal agricultural systems of insects
Frank O. Aylward; Garret Suen; Peter H. Biedermann; Aaron S. Adams; Jarrod J. Scott; Stephanie A. Malfatti; Tijana Glavina del Rio; Susannah G. Tringe; Michael Poulsen; Kenneth F. Raffa; Kier D. Klepzig; Cameron R. Currie
2014-01-01
The ability to cultivate food is an innovation that has produced some of the most successful ecological strategies on the planet. Although most well recognized in humans, where agriculture represents a defining feature of civilization, species of ants, beetles, and termites have also independently evolved symbioses with fungi that they cultivate for food. Despite...
ERIC Educational Resources Information Center
Cantor, Jeffrey A.
Business is recognizing that the associate degree is a necessity in high technology fields. Innovative practices link businesses with community colleges, allowing apprentices to gain basic job skills and a higher education. A research study explored three industries and their relationship with community colleges: automotive, construction, and…
Novel Fast Pyrolysis/Catalytic Technology for the Production of Stable Upgraded Liquids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oyama, Ted; Agblevor, Foster; Battaglia, Francine
The objective of the proposed research is the demonstration and development of a novel biomass pyrolysis technology for the production of a stable bio-oil. The approach is to carry out catalytic hydrodeoxygenation (HDO) and upgrading together with pyrolysis in a single fluidized bed reactor with a unique two-level design that permits the physical separation of the two processes. The hydrogen required for the HDO will be generated in the catalytic section by the water-gas shift reaction employing recycled CO produced from the pyrolysis reaction itself. Thus, the use of a reactive recycle stream is another innovation in this technology. The catalysts will be designed in collaboration with BASF Catalysts LLC (formerly Engelhard Corporation), a leader in the manufacture of attrition-resistant cracking catalysts. The proposed work will include reactor modeling with state-of-the-art computational fluid dynamics on a supercomputer, and advanced kinetic analysis for optimization of bio-oil production. The stability of the bio-oil will be determined by viscosity, oxygen content, and acidity measurements under both real and accelerated conditions. A multi-faceted team has been assembled to handle laboratory demonstration studies and computational analysis for optimization and scaleup.
Science & Technology Review October 2005
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aufderheide III, M B
This month's issue has the following articles: (1) Important Missions, Great Science, and Innovative Technology--Commentary by Cherry A. Murray; (2) NanoFoil® Solders with Less Heat--Soldering and brazing to join an array of materials are now possible without furnaces, torches, or lead; (3) Detecting Radiation on the Move--An award-winning technology can detect even small amounts of radioactive material in transit; (4) Identifying Airborne Pathogens in Time to Respond--A mass spectrometer identifies airborne spores in less than a minute with no false positives; (5) Picture Perfect with VisIt--The Livermore-developed software tool VisIt helps scientists visualize and analyze large data sets; (6) Revealing the Mysteries of Water--Scientists are using Livermore's Thunder supercomputer and new algorithms to understand the phases of water; and (7) Lightweight Target Generates Bright, Energetic X Rays--Livermore scientists are producing aerogel targets for use in inertial confinement fusion experiments and radiation-effects testing.
Douglas, Susan; Button, Suzanne; Casey, Susan E
2016-05-01
Measurement feedback systems (MFSs) are increasingly recognized as evidence-based treatments for improving mental health outcomes, in addition to being a useful administrative tool for service planning and reporting. Promising research findings have driven practice administrators and policymakers to emphasize the incorporation of outcomes monitoring into electronic health systems. To promote MFS integrity and protect against potentially negative outcomes, it is vital that adoption and implementation be guided by scientifically rigorous yet practical principles. In this point of view, the authors discuss and provide examples of three user-centered and theory-based principles: emphasizing integration with clinical values and workflow, promoting administrative leadership with the 'golden thread' of data-informed decision-making, and facilitating sustainability by encouraging innovation. In our experience, enacting these principles serves to promote sustainable implementation of MFSs in the community while also allowing innovation to occur, which can inform improvements to guide future MFS research.
A human-centered framework for innovation in conservation incentive programs.
Sorice, Michael G; Donlan, C Josh
2015-12-01
The promise of environmental conservation incentive programs that provide direct payments in exchange for conservation outcomes is that they enhance the value of engaging in stewardship behaviors. An insidious but important concern is that a narrow focus on optimizing payment levels can ultimately suppress program participation and subvert participants' internal motivation to engage in long-term conservation behaviors. Increasing participation and engendering stewardship can be achieved by recognizing that participation is not simply a function of the payment; it is a function of the overall structure and administration of the program. Key to creating innovative and more sustainable programs is fitting them within the existing needs and values of target participants. By focusing on empathy for participants, co-designing program approaches, and learning from the rapid prototyping of program concepts, a human-centered approach to conservation incentive program design enhances the propensity for discovery of novel and innovative solutions to pressing conservation issues.
High performance computing applications in neurobiological research
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.
1994-01-01
The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses, and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation, and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.
Multi-petascale highly efficient parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.
A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop scale includes node architectures based upon System-on-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network also implements a collective network and a global asynchronous network that provides global barrier and notification functions. The node design additionally integrates a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while also supporting DMA functionality for parallel message passing.
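The torus topology described above can be illustrated with a short sketch. This is not the record's actual routing logic, only a minimal model of a k-dimensional torus with assumed dimensions, showing why a 5-D torus gives each node 2×5 = 10 links and a low worst-case hop count thanks to wrap-around connections.

```python
def torus_neighbors(node, dims):
    """Enumerate the nearest neighbors of a node in a k-dimensional torus.

    node: coordinate tuple; dims: torus extent along each dimension.
    In a 5-D torus (all extents > 2) each node has 2 * 5 = 10 links.
    """
    neighbors = set()
    for axis in range(len(dims)):
        for step in (-1, 1):
            coord = list(node)
            coord[axis] = (coord[axis] + step) % dims[axis]  # wrap around
            neighbors.add(tuple(coord))
    return neighbors

def torus_hops(a, b, dims):
    """Minimal hop count between two nodes, using wrap-around links
    whenever they are shorter than the direct path along an axis."""
    return sum(min((x - y) % d, (y - x) % d) for x, y, d in zip(a, b, dims))

dims = (4, 4, 4, 4, 4)  # illustrative 5-D torus shape (assumed, not BG/Q's)
print(len(torus_neighbors((0, 0, 0, 0, 0), dims)))         # → 10
print(torus_hops((0, 0, 0, 0, 0), (2, 3, 1, 0, 2), dims))  # → 6
```

The hop count grows only with the sum of half the per-axis extents, which is the latency advantage a high-dimensional torus buys over lower-dimensional meshes of the same node count.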
The TESS science processing operations center
NASA Astrophysics Data System (ADS)
Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; Smith, Jeffrey C.; Caldwell, Douglas A.; Chacon, A. D.; Henze, Christopher; Heiges, Cory; Latham, David W.; Morgan, Edward; Swade, Daryl; Rinehart, Stephen; Vanderspek, Roland
2016-08-01
The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover 1,000 small planets with Rp < 4 R⊕ and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
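The periodic-transit search mentioned in the abstract can be sketched in miniature. This is not the SPOC pipeline's actual algorithm (which is far more sophisticated), only a toy phase-folding search on synthetic data: fold the light curve at trial periods and pick the period whose folded curve shows the deepest dip. All numbers below are assumed for illustration.

```python
def fold_and_score(times, flux, period, nbins=50):
    """Fold a light curve at a trial period and score the deepest phase bin:
    score = (mean of bin means) - (lowest bin mean). A correct period
    aligns all transits into the same bins, maximizing the score."""
    sums = [0.0] * nbins
    counts = [0] * nbins
    for t, f in zip(times, flux):
        b = int((t % period) / period * nbins) % nbins
        sums[b] += f
        counts[b] += 1
    means = [s / c for s, c in zip(sums, counts) if c > 0]
    return sum(means) / len(means) - min(means)

def best_period(times, flux, trial_periods):
    """Pick the trial period whose folded curve shows the deepest dip."""
    return max(trial_periods, key=lambda p: fold_and_score(times, flux, p))

# Synthetic light curve: flat at 1.0 with 1% dips every 2.5 days.
times = [i * 0.01 for i in range(10000)]  # 100 days of evenly spaced samples
flux = [1.0 - (0.01 if (t % 2.5) < 0.1 else 0.0) for t in times]
trials = [round(1.0 + 0.05 * k, 2) for k in range(60)]  # 1.00 .. 3.95 days
print(best_period(times, flux, trials))  # → 2.5
```

At the wrong trial period the dips smear across many phase bins and the score drops, which is why the true 2.5-day period wins even against its 1.25-day alias.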
CFD code evaluation for internal flow modeling
NASA Technical Reports Server (NTRS)
Chung, T. J.
1990-01-01
Research on computational fluid dynamics (CFD) code evaluation, with emphasis on supercomputing in reacting flows, is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, the researchers include applications of supercomputing to reacting-flow Navier-Stokes equations, including shock waves and turbulence, and to combustion instability problems associated with solid and liquid propellants. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward the ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.
Internal computational fluid mechanics on supercomputers for aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Benson, Thomas J.
1987-01-01
The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.
Supercomputer modeling of hydrogen combustion in rocket engines
NASA Astrophysics Data System (ADS)
Betelin, V. B.; Nikitin, V. F.; Altukhov, D. I.; Dushin, V. R.; Koo, Jaye
2013-08-01
Hydrogen, being an ecological fuel, is now very attractive to rocket engine designers. However, peculiarities of hydrogen combustion kinetics, such as the presence of zones of inverse dependence of reaction rate on pressure, prevent hydrogen engines from being used in all stages without support from other engine types, which often cancels the ecological gains of using hydrogen. Computer-aided design of new, effective, and clean hydrogen engines requires mathematical tools for supercomputer modeling of hydrogen-oxygen mixing and combustion in rocket engines. The paper presents the results of developing, verifying, and validating a mathematical model that makes it possible to simulate unsteady processes of ignition and combustion in rocket engines.
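The kind of kinetics integration such models build on can be sketched at its simplest. This is emphatically not the paper's model: it collapses hydrogen-oxygen chemistry into a single global Arrhenius reaction with assumed parameters, whereas real hydrogen mechanisms track dozens of elementary steps (which is precisely where the inverse pressure dependence noted above arises). It only illustrates thermal runaway during ignition.

```python
import math

# Illustrative (assumed) parameters for one global reaction 2 H2 + O2 -> 2 H2O.
A = 1.8e10    # pre-exponential factor, 1/s          (assumed)
Ea = 1.5e5    # activation energy, J/mol             (assumed)
R = 8.314     # universal gas constant, J/(mol K)
Q = 2.4e5     # heat release per unit fuel, J/mol    (assumed)
cp = 120.0    # effective heat capacity, J/(mol K)   (assumed; Q/cp ~ 2000 K)

def ignite(T0, y0=1.0, dt=1e-6, t_end=2e-3):
    """Explicit-Euler integration of fuel fraction y and temperature T
    for a single Arrhenius reaction. Returns (T, y, t) at burnout or t_end."""
    T, y, t = T0, y0, 0.0
    while t < t_end and y > 1e-6:
        rate = A * math.exp(-Ea / (R * T)) * y   # Arrhenius consumption rate
        consume = min(rate * dt, y)              # cap: fuel cannot go negative
        y -= consume
        T += (Q / cp) * consume                  # heat release raises T
        t += dt
    return T, y, t

T_end, y_end, t_ign = ignite(T0=1200.0)
print(f"T = {T_end:.0f} K, fuel left = {y_end:.2e} after {t_ign * 1e3:.3f} ms")
```

Because heat release accelerates the reaction, the integration exhibits the ignition-delay-then-runaway behavior that makes these systems stiff, one reason production models need implicit solvers and supercomputer-scale resources.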