Sample records for science computing facility

  1. EOS MLS Science Data Processing System: A Description of Architecture and Capabilities

    NASA Technical Reports Server (NTRS)

    Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.

    2006-01-01

    This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.

  2. The grand challenge of managing the petascale facility.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aiken, R. J.; Mathematics and Computer Science

    2007-02-28

    This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.

  3. High-Performance Computing User Facility | Computational Science | NREL

    Science.gov Websites

    The High-Performance Computing (HPC) User Facility provides advanced computing systems, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System. Learn more about these systems and how to access them.

  4. Scientific Computing Strategic Plan for the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiting, Eric Todd

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  5. Sustaining and Extending the Open Science Grid: Science Innovation on a PetaScale Nationwide Facility (DE-FC02-06ER41436) SciDAC-2 Closeout Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron; Shank, James; Ernst, Michael

    Under this SciDAC-2 grant the project’s goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid), the European Grids for ESciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.

  6. Crosscut report: Exascale Requirements Reviews, March 9–10, 2017 – Tysons Corner, Virginia. An Office of Science review sponsored by: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Hack, James; Riley, Katherine

    The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today’s world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR’s mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today’s tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.

  7. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  8. Computational Science at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  9. NASA Center for Computational Sciences: History and Resources

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  10. Data management and its role in delivering science at DOE BES user facilities - Past, Present, and Future

    NASA Astrophysics Data System (ADS)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.

    2009-07-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for yet some time. Thus users face a quandary of how to manage today's data complexity and size, as these may exceed the computing resources users have available to them. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby providing users access to resources they need [2]. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier, cross-instrument and cross-facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community that has been working since the early 1990s to integrate data from across multiple modalities to achieve better diagnoses [3]; similarly, data fusion across BES facilities will lead to new scientific discoveries.

  11. LBNL Computational Research and Theory Facility Groundbreaking - Full Press Conference. Feb 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2018-01-24

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  12. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yelick, Kathy

    2012-02-02

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  13. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2017-12-09

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  14. Berkeley Lab - Materials Sciences Division

    Science.gov Websites

    Materials Sciences Division (MSD) facilities and science centers include the Center for Computational Study of Excited-State Phenomena in Energy Materials and the Center for X-ray Optics.

  15. Sandia National Laboratories: Locations: Kauai Test Facility

    Science.gov Websites

  16. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) cybersecurity fundamental basic research and development challenges, strategies and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.

  17. Circus: A Replicated Procedure Call Facility

    DTIC Science & Technology

    1984-08-01

    Computer Science Laboratory, Xerox PARC, July 1982. [24] Bruce Jay Nelson. Remote Procedure Call. Ph.D. dissertation, Computer Science Department... Ph.D. dissertation, Computer Science Division, University of California, Berkeley, Xerox PARC report number CSIF 82-7, December 1982. [30]... Tandem Computers Inc. GUARDIAN Operating System Programming Manual, Volumes 1 and 2. Cupertino, California, 1982. [31] R. H. Thomas. A majority

  18. Computers as learning resources in the health sciences: impact and issues.

    PubMed Central

    Ellis, L B; Hannigan, G G

    1986-01-01

    Starting with two computer terminals in 1972, the Health Sciences Learning Resources Center of the University of Minnesota Bio-Medical Library expanded its instructional facilities to ten terminals and thirty-five microcomputers by 1985. Computer use accounted for 28% of total center circulation. The impact of these resources on health sciences curricula is described and issues related to use, support, and planning are raised and discussed. Judged by their acceptance and educational value, computers are successful health sciences learning resources at the University of Minnesota. PMID:3518843

  19. Research in progress and other activities of the Institute for Computer Applications in Science and Engineering

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics and computer science during the period April 1, 1993 through September 30, 1993. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.

  20. Research in progress in applied mathematics, numerical analysis, fluid mechanics, and computer science

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.

  1. Fusion Energy Sciences Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and Fusion Energy Sciences, January 27-29, 2016, Gaithersburg, Maryland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Choong-Seock; Greenwald, Martin; Riley, Katherine

    The additional computing power offered by the planned exascale facilities could be transformational across the spectrum of plasma and fusion research — provided that the new architectures can be efficiently applied to our problem space. The collaboration that will be required to succeed should be viewed as an opportunity to identify and exploit cross-disciplinary synergies. To assess the opportunities and requirements as part of the development of an overall strategy for computing in the exascale era, the Exascale Requirements Review meeting of the Fusion Energy Sciences (FES) community was convened January 27–29, 2016, with participation from a broad range of fusion and plasma scientists, specialists in applied mathematics and computer science, and representatives from the U.S. Department of Energy (DOE) and its major computing facilities. This report is a summary of that meeting and the preparatory activities for it and includes a wealth of detail to support the findings. Technical opportunities, requirements, and challenges are detailed in this report (and in the recent report on the Workshop on Integrated Simulation). Science applications are described, along with mathematical and computational enabling technologies. Also see http://exascaleage.org/fes/ for more information.

  2. ICASE

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in the areas of (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving Langley facilities and scientists; and (4) computer science.

  3. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than those of present day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences on deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that will arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled fault-tolerant computing in order to achieve large-scale computing as well as operational cost savings.
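
    A minimal sketch of the spot-market approach described above, using the AWS boto3 SDK: this is not the HySDS implementation, and the region, AMI ID, instance type, bid price, and worker count are placeholder assumptions chosen only to illustrate how opportunistic capacity can be requested and why jobs must tolerate interruption.

```python
# Sketch: bid for opportunistic EC2 spot capacity for a transient worker pool.
# Not the HySDS/OCO-2 implementation; all identifiers and prices are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # assumed region

response = ec2.request_spot_instances(
    SpotPrice="0.20",      # maximum bid in USD/hour (placeholder)
    InstanceCount=10,      # size of the transient worker pool (placeholder)
    Type="one-time",       # workers are expendable; re-request after interruption
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # placeholder worker image
        "InstanceType": "c5.xlarge",         # placeholder instance type
    },
)

# Because spot instances can be reclaimed at any time, each processing job
# should be idempotent and re-queued whenever its worker disappears before the
# job reports completion; that is what makes the cost savings usable.
for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```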

  4. 1999 NCCS Highlights

    NASA Technical Reports Server (NTRS)

    Bennett, Jerome (Technical Monitor)

    2002-01-01

    The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.

  5. Template Interfaces for Agile Parallel Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilberto Z.

    Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.
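
    The reusable-template idea can be sketched generically: an analysis is composed from a small set of patterns such as a sequence of tasks and a parallel fan-out. The sketch below is illustrative Python, not the Tigres API; all function and task names are hypothetical.

```python
# Generic illustration of composing an analysis from "template" patterns
# (sequence and parallel). Hypothetical names; not the Tigres API.
from concurrent.futures import ProcessPoolExecutor
from typing import Any, Callable, Iterable, List


def sequence(data: Any, tasks: Iterable[Callable[[Any], Any]]) -> Any:
    """Run tasks one after another, feeding each result to the next."""
    for task in tasks:
        data = task(data)
    return data


def parallel(items: Iterable[Any], task: Callable[[Any], Any]) -> List[Any]:
    """Apply one task to many inputs concurrently (fan-out / fan-in)."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(task, items))


def calibrate(chunk):          # placeholder analysis step
    return [x * 1.05 for x in chunk]


def reduce_chunk(chunk):       # placeholder analysis step
    return sum(chunk) / len(chunk)


def per_chunk(chunk):
    """A sequence template nested inside the parallel template."""
    return sequence(chunk, [calibrate, reduce_chunk])


if __name__ == "__main__":
    chunks = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # placeholder data set split
    partials = parallel(chunks, per_chunk)
    print(sum(partials) / len(partials))
```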

  6. Expanding the Scope of High-Performance Computing Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uram, Thomas D.; Papka, Michael E.

    The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.

  7. Transformative Connections: Community-Based K-12 Computing Program Strives to Strengthen Academic and Career Aspirations of Its Participants

    ERIC Educational Resources Information Center

    Roach, Ronald

    2005-01-01

    The Joint Educational Facilities Inc. (JEF) computer science program has as its goal to acquaint minority and socially disadvantaged K-12 students with computer science basics and the innovative subdisciplines within the field, and to reinforce the college ambitions of participants or help them consider college as an option. A non-profit…

  8. Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L. (Editor)

    1993-01-01

    The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. The purpose is to establish and discuss Laboratory objectives for computing and networking in support of science. The purpose is also to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.

  9. The OSG open facility: A sharing ecosystem

    DOE PAGES

    Jayatilaka, B.; Levshina, T.; Rynge, M.; ...

    2015-12-23

    The Open Science Grid (OSG) ties together individual experiments’ computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and increased delivery of Distributed High Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites; this is primarily accomplished by harvesting and organizing the temporarily unused capacity (i.e. opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to make sure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues in expanding this service.

  10. The Center for Nanophase Materials Sciences

    NASA Astrophysics Data System (ADS)

    Lowndes, Douglas

    2005-03-01

    The Center for Nanophase Materials Sciences (CNMS) located at Oak Ridge National Laboratory (ORNL) will be the first DOE Nanoscale Science Research Center to begin operation, with construction to be completed in April 2005 and initial operations in October 2005. The CNMS' scientific program has been developed through workshops with the national community, with the goal of creating a highly collaborative research environment to accelerate discovery and drive technological advances. Research at the CNMS is organized under seven Scientific Themes selected to address challenges to understanding and to exploit particular ORNL strengths (see http://cnms.ornl.gov). These include extensive synthesis and characterization capabilities for soft, hard, nanostructured, magnetic and catalytic materials and their composites; neutron scattering at the Spallation Neutron Source and High Flux Isotope Reactor; computational nanoscience in the CNMS' Nanomaterials Theory Institute and utilizing facilities and expertise of the Center for Computational Sciences and the new Leadership Scientific Computing Facility at ORNL; a new CNMS Nanofabrication Research Laboratory; and a suite of unique and state-of-the-art instruments to be made reliably available to the national community for imaging, manipulation, and properties measurements on nanoscale materials in controlled environments. The new research facilities will be described together with the planned operation of the user research program, the latter illustrated by the current "jump start" user program that utilizes existing ORNL/CNMS facilities.

  11. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including real-time simulations, immersive systems, collaborative engineering environments, Web-based tools, and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  12. Facilities | Computational Science | NREL

    Science.gov Websites

    These facilities advance technology innovation by providing scientists and engineers the ability to tackle energy challenges and to take full advantage of advanced computing hardware and software resources.

  13. X-ray ptychography, fluorescence microscopy combo sheds new light on trace

    Science.gov Websites

  14. Langley Aerospace Research Summer Scholars. Part 2

    NASA Technical Reports Server (NTRS)

    Schwan, Rafaela (Compiler)

    1995-01-01

    The Langley Aerospace Research Summer Scholars (LARSS) Program was established by Dr. Samuel E. Massenberg in 1986. The program has increased from 20 participants in 1986 to 114 participants in 1995. The program is LaRC-unique and is administered by Hampton University. The program was established for the benefit of undergraduate juniors and seniors and first-year graduate students who are pursuing degrees in aeronautical engineering, mechanical engineering, electrical engineering, materials science, computer science, atmospheric science, astrophysics, physics, and chemistry. Two primary elements of the LARSS Program are: (1) a research project to be completed by each participant under the supervision of a researcher who will assume the role of a mentor for the summer, and (2) technical lectures by prominent engineers and scientists. Additional elements of this program include tours of LaRC wind tunnels, computational facilities, and laboratories. Library and computer facilities will be available for use by the participants.

  15. Technical Reports: Langley Aerospace Research Summer Scholars. Part 1

    NASA Technical Reports Server (NTRS)

    Schwan, Rafaela (Compiler)

    1995-01-01

    The Langley Aerospace Research Summer Scholars (LARSS) Program was established by Dr. Samuel E. Massenberg in 1986. The program has increased from 20 participants in 1986 to 114 participants in 1995. The program is LaRC-unique and is administered by Hampton University. The program was established for the benefit of undergraduate juniors and seniors and first-year graduate students who are pursuing degrees in aeronautical engineering, mechanical engineering, electrical engineering, materials science, computer science, atmospheric science, astrophysics, physics, and chemistry. Two primary elements of the LARSS Program are: (1) a research project to be completed by each participant under the supervision of a researcher who will assume the role of a mentor for the summer, and (2) technical lectures by prominent engineers and scientists. Additional elements of this program include tours of LaRC wind tunnels, computational facilities, and laboratories. Library and computer facilities will be available for use by the participants.

  16. Apollo experience report: Apollo lunar surface experiments package data processing system

    NASA Technical Reports Server (NTRS)

    Eason, R. L.

    1974-01-01

    Apollo Program experience in the processing of scientific data from the Apollo lunar surface experiments package, in which computers and associated hardware and software were used, is summarized. The facility developed for the preprocessing of the lunar science data is described, as are several computer facilities and programs used by the Principal Investigators. The handling, processing, and analyzing of lunar science data and the interface with the Principal Investigators are discussed. Pertinent problems that arose in the development of the data processing schemes are discussed so that future programs may benefit from the solutions to the problems. The evolution of the data processing techniques for lunar science data is related to recommendations for future programs of this type.

  17. Sandia QIS Capabilities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Richard P.

    2017-07-01

    Sandia National Laboratories has developed a broad set of capabilities in quantum information science (QIS), including elements of quantum computing, quantum communications, and quantum sensing. The Sandia QIS program is built atop unique DOE investments at the laboratories, including the MESA microelectronics fabrication facility, the Center for Integrated Nanotechnologies (CINT) facilities (joint with LANL), the Ion Beam Laboratory, and ASC High Performance Computing (HPC) facilities. Sandia has invested $75 M of LDRD funding over 12 years to develop unique, differentiating capabilities that leverage these DOE infrastructure investments.

  18. GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles Density Functional Theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first-principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
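
    Much of the arithmetic in a multiple-scattering calculation of this kind is dense complex linear algebra, so offloading such a kernel to a GPU conveys where the reported speedup comes from. The CuPy sketch below is an illustration under that assumption, not the LSMS implementation; the block size is a placeholder.

```python
# Illustration: offload a dense complex matrix inversion to the GPU with CuPy.
# Not the LSMS code; the block size is a placeholder, and treating inversion as
# the hot kernel is an assumption stated in the accompanying text.
import numpy as np
import cupy as cp

n = 2048  # placeholder block dimension

rng = np.random.default_rng(0)
a_host = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

a_gpu = cp.asarray(a_host)          # host -> device transfer
inv_gpu = cp.linalg.inv(a_gpu)      # dense inversion runs on the GPU

# Residual check performed on the GPU before copying anything back.
residual = cp.linalg.norm(a_gpu @ inv_gpu - cp.eye(n, dtype=a_gpu.dtype))
print(float(residual))
```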

  19. Multiscale Computation. Needs and Opportunities for BER Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheibe, Timothy D.; Smith, Jeremy C.

    2015-01-01

    The Environmental Molecular Sciences Laboratory (EMSL), a scientific user facility managed by Pacific Northwest National Laboratory for the U.S. Department of Energy, Office of Biological and Environmental Research (BER), conducted a one-day workshop on August 26, 2014 on the topic of “Multiscale Computation: Needs and Opportunities for BER Science.” Twenty invited participants, from various computational disciplines within the BER program research areas, were charged with the following objectives: (1) identify BER-relevant models and their potential cross-scale linkages that could be exploited to better connect molecular-scale research to BER research at larger scales; and (2) identify critical science directions that will motivate EMSL decisions regarding future computational (hardware and software) architectures.

  20. Berkeley Lab - Materials Sciences Division

    Science.gov Websites

    Materials Sciences Division (MSD) facilities and centers include the Center for Computational Study of Excited-State Phenomena in Energy Materials and the Center for X-ray Optics (Patrick Naulleau, Director).

  1. Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nugent, Peter E.; Simonson, J. Michael

    2011-10-24

    This report is based on the Department of Energy (DOE) Workshop on “Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery” that was held at the Bethesda Marriott in Maryland on October 24-25, 2011. The workshop brought together leading researchers from the Basic Energy Sciences (BES) facilities and Advanced Scientific Computing Research (ASCR). The workshop was co-sponsored by these two Offices to identify opportunities and needs for data analysis, ownership, storage, mining, provenance and data transfer at light sources, neutron sources, microscopy centers and other facilities. Their charge was to identify current and anticipated issues in the acquisition, analysis, communication and storage of experimental data that could impact the progress of scientific discovery, ascertain what knowledge, methods and tools are needed to mitigate present and projected shortcomings, and to create the foundation for information exchanges and collaboration between ASCR and BES supported researchers and facilities. The workshop was organized in the context of the impending data tsunami that will be produced by DOE’s BES facilities. Current facilities, like SLAC National Accelerator Laboratory’s Linac Coherent Light Source, can produce up to 18 terabytes (TB) per day, while upgraded detectors at Lawrence Berkeley National Laboratory’s Advanced Light Source will generate ~10TB per hour. The expectation is that these rates will increase by over an order of magnitude in the coming decade. The urgency to develop new strategies and methods in order to stay ahead of this deluge and extract the most science from these facilities was recognized by all. The four focus areas addressed in this workshop were: Workflow Management (Experiment to Science): identifying and managing the data path from experiment to publication. Theory and Algorithms: recognizing the need for new tools for computation at scale, supporting large data sets and realistic theoretical models. Visualization and Analysis: supporting near-real-time feedback for experiment optimization and new ways to extract and communicate critical information from large data sets. Data Processing and Management: outlining needs in computational and communication approaches and infrastructure needed to handle unprecedented data volume and information content. It should be noted that almost all participants recognized that there were unlikely to be any turn-key solutions available due to the unique, diverse nature of the BES community, where research at adjacent beamlines at a given light source facility often spans everything from biology to materials science to chemistry using scattering, imaging and/or spectroscopy. However, it was also noted that advances supported by other programs in data research, methodologies, and tool development could be implemented on reasonable time scales with modest effort. Adapting available standard file formats, robust workflows, and in-situ analysis tools for user facility needs could pay long-term dividends. Workshop participants assessed current requirements as well as future challenges and made the following recommendations in order to achieve the ultimate goal of enabling transformative science in current and future BES facilities: integrate theory and analysis components seamlessly within the experimental workflow; develop new algorithms for data analysis based on common data formats and toolsets; move the analysis closer to the experiment to enable real-time (in-situ) streaming capabilities, live visualization of the experiment, and an increase of the overall experimental efficiency; and match data management access and capabilities with advancements in detectors and sources, removing bottlenecks, providing interoperability across different facilities/beamlines, and applying forefront mathematical techniques to more efficiently extract science from the experiments. This workshop report examines and reviews the status of several BES facilities and highlights the successes and shortcomings of the current data and communication pathways for scientific discovery. It then ascertains what methods and tools are needed to mitigate present and projected data bottlenecks to science over the next 10 years. The goal of this report is to create the foundation for information exchanges and collaborations among ASCR and BES supported researchers, the BES scientific user facilities, and ASCR computing and networking facilities. To jumpstart these activities, there was a strong desire to see a joint effort between ASCR and BES along the lines of the highly successful Scientific Discovery through Advanced Computing (SciDAC) program, in which integrated teams of engineers, scientists and computer scientists were engaged to tackle a complete end-to-end workflow solution at one or more beamlines, to ascertain what challenges will need to be addressed in order to handle future increases in data.
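
    One recommendation above, basing analysis on common data formats and toolsets, can be made concrete with a self-describing container such as HDF5, which lets detector data travel together with its metadata between beamline, archive, and analysis code. The sketch below is a generic h5py illustration; the dataset paths, attribute names, and shapes are made-up placeholders, not a schema adopted by any BES facility.

```python
# Sketch: write and read a self-describing HDF5 file with h5py.
# Dataset paths, attributes, and shapes are illustrative placeholders only.
import numpy as np
import h5py

frames = np.random.randint(0, 65535, size=(10, 256, 256), dtype=np.uint16)

with h5py.File("scan_0001.h5", "w") as f:
    det = f.create_group("entry/detector")
    det.create_dataset("frames", data=frames, compression="gzip")
    det.attrs["exposure_time_s"] = 0.05            # placeholder metadata
    f["entry"].attrs["beamline"] = "example-1-ID"  # placeholder identifier

with h5py.File("scan_0001.h5", "r") as f:
    data = f["entry/detector/frames"][...]
    print(data.shape, dict(f["entry/detector"].attrs))
```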

  2. Extraordinary Tools for Extraordinary Science: The Impact ofSciDAC on Accelerator Science&Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert D.

    2006-08-10

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook''. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  3. Extraordinary tools for extraordinary science: the impact of SciDAC on accelerator science and technology

    NASA Astrophysics Data System (ADS)

    Ryne, Robert D.

    2006-09-01

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook.'' Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  4. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    NASA Astrophysics Data System (ADS)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  5. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Wooley; Herbert S. Lin

    This study is the first comprehensive NRC study that suggests a high-level intellectual structure for Federal agencies for supporting work at the biology/computing interface. The report seeks to establish the intellectual legitimacy of a fundamentally cross-disciplinary collaboration between biologists and computer scientists. That is, while some universities are increasingly favorable to research at the intersection, life science researchers at other universities are strongly impeded in their efforts to collaborate. This report addresses these impediments and describes proven strategies for overcoming them. An important feature of the report is the use of well-documented examples that describe clearly to individuals not trained in computer science the value and usage of computing across the biological sciences, from genes and proteins to networks and pathways, from organelles to cells, and from individual organisms to populations and ecosystems. It is hoped that these examples will be useful to students in the life sciences to motivate (continued) study in computer science that will enable them to be more facile users of computing in their future biological studies.

  7. WORLDWIDE COLLECTION AND EVALUATION OF EARTHQUAKE DATA

    DTIC Science & Technology

    During the reporting period, the hypocenter and magnitude programs were tested and then used to process January 1964 data at the computer facilities of the Environmental Science Services Administration (ESSA), Suitland, Maryland, using the CDC 6600 computer. Results of this processing are shown.

  8. High Energy Physics Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and High Energy Physics, June 10-12, 2015, Bethesda, Maryland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Roser, Robert; Gerber, Richard

    The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greater — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

  9. Los Alamos Science Facilities

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  10. Sandia National Laboratories: Careers: Hiring Process

    Science.gov Websites

    Suppliers iSupplier Account Accounts Payable Contract Information Construction & Facilities Contract Foundations Bioscience Computing & Information Science Electromagnetics Engineering Science Geoscience notifications. Visit our Careers tool to search for jobs and register for an account. Registering will enable

  11. Architectural Aspects of Grid Computing and its Global Prospects for E-Science Community

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    2008-05-01

    The paper reviews the imminent architectural aspects of Grid Computing for the e-Science community, for scientific research and business/commercial collaboration beyond physical boundaries. Grid Computing provides all the needed facilities: hardware, software, communication interfaces, high-speed internet, safe authentication, and a secure environment for collaboration on research projects around the globe. It provides a very fast compute engine for those scientific and engineering research projects and business/commercial applications that are heavily compute intensive and/or require enormous amounts of data. It also makes possible the use of very advanced methodologies, simulation models, expert systems, and the treasure of knowledge available around the globe under the umbrella of knowledge sharing. It thus helps realize the dream of a global village for the benefit of the e-Science community across the globe.

  12. Abstracts of Research, July 1975-June 1976.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Computer and Information Science Research Center.

    Abstracts of research papers in computer and information science are given for 62 papers in the areas of information storage and retrieval; computer facilities; information analysis; linguistic analysis; artificial intelligence; information processes in physical, biological, and social systems; mathematical techniques; systems programming;…

  13. Berkeley Lab - Materials Sciences Division

    Science.gov Websites

    Computational Study of Excited-State Phenomena in Energy Materials Center for X-ray Optics MSD Facilities Ion Beam Analysis Behavior of Lithium Metal across a Rigid Block Copolymer Electrolyte Membrane. Journal of the

  14. Science & Technology Review: September 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogt, Ramona L.; Meissner, Caryn N.; Chinn, Ken B.

    2016-09-30

    This is the September issue of the Lawrence Livermore National Laboratory's Science & Technology Review, which communicates, to a broad audience, the Laboratory’s scientific and technological accomplishments in fulfilling its primary missions. This month, there are features on "Laboratory Investments Drive Computational Advances" and "Laying the Groundwork for Extreme-Scale Computing." Research highlights include "Nuclear Data Moves into the 21st Century", "Peering into the Future of Lick Observatory", and "Facility Drives Hydrogen Vehicle Innovations."

  15. Cumulative reports and publications

    NASA Technical Reports Server (NTRS)

    1993-01-01

    A complete list of Institute for Computer Applications in Science and Engineering (ICASE) reports is given. Since ICASE reports are intended to be preprints of articles that will appear in journals or conference proceedings, the published reference is included when it is available. The major categories of the current ICASE research program are: applied and numerical mathematics, including numerical analysis and algorithm development; theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and computer science.

  16. PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah

    2009-12-01

    In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption promises to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken.
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.

  17. Workflow Management Systems for Molecular Dynamics on Leadership Computers

    NASA Astrophysics Data System (ADS)

    Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu

    Molecular Dynamics (MD) simulations play an important role in a range of disciplines from materials science to biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly, science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.
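
    As a rough illustration of the "many MD simulations" workload pattern discussed above (not the PanDA-based system itself), the sketch below fans a set of independent simulation tasks out over local worker processes; the run_md command and its input decks are hypothetical.

    ```python
    # Minimal sketch of a many-task MD campaign: launch N independent simulation
    # runs and harvest their exit codes. This illustrates the workload pattern
    # only, not the PanDA-based system described in the abstract.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor, as_completed

    INPUTS = [f"config_{i:03d}.in" for i in range(32)]   # hypothetical input decks

    def run_one(config_file: str) -> tuple[str, int]:
        """Run a single MD task; 'run_md' is a placeholder simulation kernel."""
        proc = subprocess.run(["run_md", "--input", config_file],
                              capture_output=True, text=True)
        return config_file, proc.returncode

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=8) as pool:
            futures = [pool.submit(run_one, cfg) for cfg in INPUTS]
            for fut in as_completed(futures):
                cfg, rc = fut.result()
                status = "ok" if rc == 0 else f"failed (rc={rc})"
                print(f"{cfg}: {status}")
    ```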

  18. Automated smear counting and data processing using a notebook computer in a biomedical research facility.

    PubMed

    Ogata, Y; Nishizawa, K

    1995-10-01

    An automated smear counting and data processing system for a life science laboratory was developed to facilitate routine surveys and eliminate human errors by using a notebook computer. This system was composed of a personal computer, a liquid scintillation counter and a well-type NaI(Tl) scintillation counter. The radioactivity of smear samples was automatically measured by these counters. The personal computer received raw signals from the counters through an interface of RS-232C. The software for the computer evaluated the surface density of each radioisotope and printed out that value along with other items as a report. The software was programmed in Pascal language. This system was successfully applied to routine surveys for contamination in our facility.
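
    The abstract does not give the evaluation formula, but a conventional removable-contamination estimate from a smear count looks roughly like the sketch below; the counting efficiency, removal fraction, and smear area are assumed illustrative values, not figures from the paper.

    ```python
    # Hedged sketch of the usual removable-contamination calculation for a smear
    # (wipe) sample. All numerical parameters are illustrative assumptions.

    def surface_density_bq_per_cm2(gross_cpm: float,
                                   background_cpm: float,
                                   counting_efficiency: float = 0.3,  # counts per decay (assumed)
                                   removal_fraction: float = 0.1,     # fraction wiped off (assumed)
                                   smear_area_cm2: float = 100.0) -> float:
        """Estimate removable surface activity density in Bq/cm^2."""
        net_cpm = max(gross_cpm - background_cpm, 0.0)
        net_cps = net_cpm / 60.0                      # counts per second
        activity_bq = net_cps / counting_efficiency   # decays per second on the smear
        total_on_surface_bq = activity_bq / removal_fraction
        return total_on_surface_bq / smear_area_cm2

    # Example: 450 cpm gross, 50 cpm background
    print(f"{surface_density_bq_per_cm2(450.0, 50.0):.4f} Bq/cm^2")
    ```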

  19. Expanding Your Laboratory by Accessing Collaboratory Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyt, David W.; Burton, Sarah D.; Peterson, Michael R.

    2004-03-01

    The Environmental Molecular Sciences Laboratory (EMSL) in Richland, Washington, is the home of a research facility set up by the United States Department of Energy (DOE). The facility is atypical because it houses over 100 cutting-edge research systems for the use of researchers all over the United States and the world. Access to the lab is requested through a peer-review proposal process and the scientists who use the facility are generally referred to as ‘users’. There are six main research facilities housed in EMSL, all of which host visiting researchers. Several of these facilities also participate in the EMSL Collaboratory, a remote access capability supported by EMSL operations funds. Of these, the High-Field Magnetic Resonance Facility (HFMRF) and Molecular Science Computing Facility (MSCF) have a significant number of their users performing remote work. The HFMRF in EMSL currently houses 12 NMR spectrometers that range in magnet field strength from 7.05T to 21.1T. Staff associated with the NMR facility offer scientific expertise in the areas of structural biology, solid-state materials/catalyst characterization, and magnetic resonance imaging (MRI) techniques. The way in which the HFMRF operates, with a high level of dedication to remote operation across the full suite of High-Field NMR spectrometers, has earned it the name “Virtual NMR Facility”. This review will focus on the operational aspects of remote research done in the High-Field Magnetic Resonance Facility and the computer tools that make remote experiments possible.

  20. Why the Petascale era will drive improvements in the management of the full lifecycle of earth science data.

    NASA Astrophysics Data System (ADS)

    Wyborn, L.

    2012-04-01

    The advent of the petascale era, in both storage and compute facilities, will offer new opportunities for earth scientists to transform the way they do their science and to undertake cross-disciplinary science at a global scale. No longer will data have to be averaged and subsampled: it can be analysed to its fullest resolution at national or even global scales. Much larger data volumes can be analysed in single passes and at higher resolution: large scale cross domain science is now feasible. However, in general, earth sciences have been slow to capitalise on the potential of these new petascale compute facilities: many struggle to even use terascale facilities. Making use of these new facilities will require a vast improvement in the management of the full life cycle of data: in reality it will need to be transformed. Many of our current issues with earth science data are historic and stem from the limitations of early data storage systems. As storage was so expensive, metadata was usually stored separate from the data and attached as a readme file. Likewise, attributes that defined uncertainty, reliability and traceability were recorded in lab notebooks and rarely stored with the data. Data were routinely transferred as files. Given the new opportunities, the traditional paradigm of discover, display, download and process locally is too limited. For data access and assimilation to be improved, data will need to be self describing. For heterogeneous data to be rapidly integrated, attributes such as reliability, uncertainty and traceability will need to be systematically recorded with each observation. The petascale era also requires that individual data files be transformed and aggregated into calibrated data arrays or data cubes. Standards become critical and are the enablers of integration. These changes are common to almost every science discipline. What makes the earth sciences unique is that many domains record time series data, particularly in the environmental geosciences areas (weathering, soil changes, climate change). The data life cycle will be measured in decades and centuries, not years. Preservation over such time spans is quite a challenge for the earth sciences, as data will have to be managed over many evolutions of software and hardware. The focus has to be on managing the data and not the media. Currently storage is not an issue, but it is predicted that data volumes will soon exceed the effective storage media that can be physically manufactured. This means that organisations will have to think about disposal and destruction of data. For the earth sciences, this will be a particularly sensitive issue. Petascale computing offers many new opportunities to the earth sciences, and by 2020 exascale computers will be a reality. To fully realise these opportunities the earth sciences need to actively and systematically rethink the ramifications these new systems will have for current practices for data storage, discovery, access and assimilation.
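
    One concrete way to make data "self describing" in the sense argued above is to carry units, uncertainty, and provenance alongside the values themselves; the following xarray/netCDF sketch is a minimal illustration under assumed variable names, not a recipe from the abstract.

    ```python
    # Minimal sketch of a self-describing data product: uncertainty and provenance
    # attributes travel with the values. Variable names, attribute choices, and the
    # synthetic numbers are illustrative assumptions, not part of the abstract.
    import numpy as np
    import pandas as pd
    import xarray as xr

    temps = 15.0 + 2.0 * np.random.randn(4, 3, 3)     # synthetic gridded observations
    ds = xr.Dataset(
        data_vars={
            "temperature": (("time", "lat", "lon"), temps,
                            {"units": "degC", "long_name": "near-surface temperature"}),
            "temperature_uncertainty": (("time", "lat", "lon"), np.full((4, 3, 3), 0.5),
                                        {"units": "degC",
                                         "long_name": "1-sigma measurement uncertainty"}),
        },
        coords={
            "time": pd.date_range("2012-01-01", periods=4, freq="MS"),
            "lat": [-10.0, 0.0, 10.0],
            "lon": [130.0, 140.0, 150.0],
        },
        attrs={  # dataset-level traceability
            "source": "synthetic example data",
            "processing_history": "generated by an illustrative script; no calibration applied",
        },
    )
    ds.to_netcdf("self_describing_example.nc")        # metadata is stored with the data
    ```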

  1. Oak Ridge National Laboratory Core Competencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberto, J.B.; Anderson, T.D.; Berven, B.A.

    1994-12-01

    A core competency is a distinguishing integration of capabilities which enables an organization to deliver mission results. Core competencies represent the collective learning of an organization and provide the capacity to perform present and future missions. Core competencies are distinguishing characteristics which offer comparative advantage and are difficult to reproduce. They exhibit customer focus, mission relevance, and vertical integration from research through applications. They are demonstrable by metrics such as level of investment, uniqueness of facilities and expertise, and national impact. The Oak Ridge National Laboratory (ORNL) has identified four core competencies which satisfy the above criteria. Each core competency represents an annual investment of at least $100M and is characterized by an integration of Laboratory technical foundations in physical, chemical, and materials sciences; biological, environmental, and social sciences; engineering sciences; and computational sciences and informatics. The ability to integrate broad technical foundations to develop and sustain core competencies in support of national R&D goals is a distinguishing strength of the national laboratories. The ORNL core competencies are: Energy Production and End-Use Technologies; Biological and Environmental Sciences and Technology; Advanced Materials Synthesis, Processing, and Characterization; and Neutron-Based Science and Technology. The distinguishing characteristics of each ORNL core competency are described. In addition, written material is provided for two emerging competencies: Manufacturing Technologies and Computational Science and Advanced Computing. Distinguishing institutional competencies in the Development and Operation of National Research Facilities, R&D Integration and Partnerships, Technology Transfer, and Science Education are also described. Finally, financial data for the ORNL core competencies are summarized in the appendices.

  2. Toward a Big Data Science: A challenge of "Science Cloud"

    NASA Astrophysics Data System (ADS)

    Murata, Ken T.; Watanabe, Hidenobu

    2013-04-01

    Over the past 50 years, along with the appearance and development of high-performance computers (and supercomputers), numerical simulation has come to be considered a third methodology for science, following the theoretical (first) and experimental and/or observational (second) approaches. The variety of data yielded by the second approach keeps growing, owing to progress in experimental and observational technologies. The amount of data generated by the third methodology keeps growing as well, because of the tremendous development of supercomputers and their programming techniques. Most of the data files created by both experiments/observations and numerical simulations are saved in digital formats and analyzed on computers. The researchers (domain experts) are interested not only in how to carry out experiments and/or observations or perform numerical simulations, but in what information (new findings) to extract from the data. However, data do not usually tell anything about the science by themselves; the science is implicitly hidden in the data. Researchers have to extract information from the data files to find new science. This is the basic concept of data-intensive (data-oriented) science for Big Data. As the scales of experiments and/or observations and numerical simulations grow, new techniques and facilities are required to extract information from large numbers of data files. This technique, informatics, can be regarded as a fourth methodology for new science. Any methodology must run on its own facility: in space science, for example, the space environment is observed via spacecraft and numerical simulations are performed on supercomputers. The facility for informatics, which deals with large-scale data, is a computational cloud system for science. This paper proposes a cloud system for informatics that has been developed at NICT (National Institute of Information and Communications Technology), Japan. The NICT science cloud, which we named OneSpaceNet (OSN), is the first open cloud system for scientists who are going to carry out informatics for their own science. The science cloud is not for simple uses; many functions are expected of it, such as data standardization, data collection and crawling, a large and distributed data storage system, security and reliability, databases and meta-databases, data stewardship, long-term data preservation, data rescue and preservation, data mining, parallel processing, data publication and provision, the semantic web, 3D and 4D visualization, outreach and in-reach, and capacity building. A schematic picture of the NICT science cloud (figure not shown here) illustrates how both types of data, from observation and from simulation, are stored in the storage system of the science cloud. It should be noted that there are two types of observational data. One comes from archive sites outside the cloud and is downloaded through the Internet to the cloud. The other comes from equipment directly connected to the science cloud, often called a sensor cloud. In the present talk, we first introduce the NICT science cloud. We then demonstrate its efficiency, showing several scientific results achieved with this cloud system. Through these discussions and demonstrations, the potential of the science cloud for any research field will be revealed.

  3. The 1984 NASA/ASEE summer faculty fellowship program

    NASA Technical Reports Server (NTRS)

    Mcinnis, B. C.; Duke, M. B.; Crow, B.

    1984-01-01

    An overview is given of the program management and activities. Participants and research advisors are listed. Abstracts describe and present the results of research assignments performed by 31 fellows either at the Johnson Space Center, at the White Sands Test Facility, or at the California Space Institute in La Jolla. Disciplines studied include engineering; biology/life sciences; Earth sciences; chemistry; mathematics/statistics/computer sciences; and physics/astronomy.

  4. How Data Becomes Physics: Inside the RACF

    ScienceCinema

    Ernst, Michael; Rind, Ofer; Rajagopalan, Srini; Lauret, Jerome; Pinkenburg, Chris

    2018-06-22

    The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF serve these scientists to turn petabytes of raw data into physics discoveries.

  5. MaRIE: A facility for time-dependent materials science at the mesoscale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, Cris William; Kippen, Karen Elizabeth

    To meet new and emerging national security issues the Laboratory is stepping up to meet another grand challenge—transitioning from observing to controlling a material’s performance. This challenge requires the best of experiment, modeling, simulation, and computational tools. MaRIE is the Laboratory’s proposed flagship experimental facility intended to meet the challenge.

  6. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

    The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009); and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current generation Petascale capable simulation codes towards the performance levels required for running on future Exascale systems. One of the techniques pursued by ECMWF is to use Fortran2008 coarrays to overlap computations and communications and to reduce the total volume of data communicated. Use of Titan has enabled ECMWF to plan future scalability developments and resource requirements. We will also discuss the best practices developed over the years in navigating logistical, legal and regulatory hurdles involved in supporting the facility's diverse user community.
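
    The ECMWF work described above uses Fortran 2008 coarrays; purely as an analogous illustration of overlapping computation with communication (not the IFS implementation), the mpi4py sketch below posts a non-blocking exchange, does independent work while the messages are in flight, and only then waits for the received value.

    ```python
    # Analogous sketch in Python/mpi4py (the actual IFS work uses Fortran 2008
    # coarrays): start a non-blocking exchange, compute on data that does not
    # depend on it, then wait before using the received value.
    # Run with e.g.: mpiexec -n 2 python overlap_sketch.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size

    local = np.full(1000, float(rank))
    recv_buf = np.empty(1, dtype=np.float64)

    # Post a non-blocking send/receive of a single boundary value.
    reqs = [comm.Isend(local[-1:], dest=right, tag=0),
            comm.Irecv(recv_buf, source=left, tag=0)]

    interior_sum = local[1:].sum()       # work that overlaps the communication

    MPI.Request.Waitall(reqs)            # ensure the boundary value has arrived
    total = interior_sum + recv_buf[0]
    print(f"rank {rank}: total = {total}")
    ```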

  7. The space physics analysis network

    NASA Astrophysics Data System (ADS)

    Green, James L.

    1988-04-01

    The Space Physics Analysis Network, or SPAN, is emerging as a viable method for solving an immediate communication problem for space and Earth scientists and has been operational for nearly 7 years. SPAN, with its extension into Europe, utilizes computer-to-computer communications, allowing mail, binary and text file transfer, and remote logon capability to over 1000 space science computer systems. The network has been used to successfully transfer real-time data to remote researchers for rapid data analysis, but its primary function is for non-real-time applications. One of the major advantages of using SPAN is its spacecraft mission independence. Space science researchers using SPAN are located in universities, industries and government institutions all across the United States and Europe. These researchers are in such fields as magnetospheric physics, astrophysics, ionospheric physics, atmospheric physics, climatology, meteorology, oceanography, planetary physics and solar physics. SPAN users have access to space and Earth science databases, mission planning and information systems, and computational facilities for the purpose of facilitating correlative space data exchange, data analysis and space research. For example, the National Space Science Data Center (NSSDC), which manages the network, is providing facilities on SPAN such as the Network Information Center (SPAN NIC). SPAN has interconnections with several national and international networks such as HEPNET and TEXNET, forming a transparent DECnet network. The total number of computers now reachable over these combined networks is about 2000. In addition, SPAN supports full function capabilities over the international public packet switched networks (e.g. TELENET) and has mail gateways to ARPANET, BITNET and JANET.

  8. Laboratory Directed Research and Development Program FY 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen

    2007-03-08

    The Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab or LBNL) is a multi-program national research facility operated by the University of California for the Department of Energy (DOE). As an integral element of DOE's National Laboratory System, Berkeley Lab supports DOE's missions in fundamental science, energy resources, and environmental quality. Berkeley Lab programs advance four distinct goals for DOE and the nation: (1) To perform leading multidisciplinary research in the computing sciences, physical sciences, energy sciences, biosciences, and general sciences in a manner that ensures employee and public safety and protection of the environment. (2) To develop and operate unique national experimental facilities for qualified investigators. (3) To educate and train future generations of scientists and engineers to promote national science and education goals. (4) To transfer knowledge and technological innovations and to foster productive relationships among Berkeley Lab's research programs, universities, and industry in order to promote national economic competitiveness.

  9. Interdisciplinary Facilities that Support Collaborative Teaching and Learning

    ERIC Educational Resources Information Center

    Asoodeh, Mike; Bonnette, Roy

    2006-01-01

    It has become widely accepted that the computer is an indispensable tool in the study of science and technology. Thus, in recent years curricular programs such as Industrial Technology and associated scientific disciplines have been adopting and adapting the computer as a tool in new and innovative ways to support teaching, learning, and research.…

  10. ODU-CAUSE: Computer Based Learning Lab.

    ERIC Educational Resources Information Center

    Sachon, Michael W.; Copeland, Gary E.

    This paper describes the Computer Based Learning Lab (CBLL) at Old Dominion University (ODU) as a component of the ODU-Comprehensive Assistance to Undergraduate Science Education (CAUSE) Project. Emphasis is directed to the structure and management of the facility and to the software under development by the staff. Serving the ODU-CAUSE User Group…

  11. ISTP Science Data Systems and Products

    NASA Astrophysics Data System (ADS)

    Mish, William H.; Green, James L.; Reph, Mary G.; Peredo, Mauricio

    1995-02-01

    The International Solar-Terrestrial Physics (ISTP) program will provide simultaneous coordinated scientific measurements from most of the major areas of geospace including specific locations on the Earth's surface. This paper describes the comprehensive ISTP ground science data handling system which has been developed to promote optimal mission planning and efficient data processing, analysis and distribution. The essential components of this ground system are the ISTP Central Data Handling Facility (CDHF), the Information Processing Division's Data Distribution Facility (DDF), the ISTP/Global Geospace Science (GGS) Science Planning and Operations Facility (SPOF) and the NASA Data Archive and Distribution Service (NDADS). The ISTP CDHF is the one place in the program where measurements from this wide variety of geospace and ground-based instrumentation and theoretical studies are brought together. Subsequently, these data will be distributed, along with ancillary data, in a unified fashion to the ISTP Principal Investigator (PI) and Co-Investigator (CoI) teams for analysis on their local systems. The CDHF ingests the telemetry streams, orbit, attitude, and command history from the GEOTAIL, WIND, POLAR, SOHO, and IMP-8 Spacecraft; computes summary data sets, called Key Parameters (KPs), for each scientific instrument; ingests pre-computed KPs from other spacecraft and ground-based investigations; provides a computational platform for parameterized modeling; and provides a number of "data services" for the ISTP community of investigators. The DDF organizes the KPs, decommutated telemetry, and associated ancillary data into products for distribution to the ISTP community on CD-ROMs. The SPOF is the component of the GGS program responsible for the development and coordination of ISTP science planning operations. The SPOF operates under the direction of the ISTP Project Scientist and is responsible for the development and coordination of the science plan for ISTP spacecraft. Instrument command requests for the WIND and POLAR investigations are submitted by the PIs to the SPOF where they are checked for science conflicts, forwarded to the GSFC Command Management System/Payload Operations Control Center (CMS/POCC) for engineering conflict validation, and finally incorporated into the conflict-free science operations plan. Conflict resolution is accomplished through iteration between the PIs, SPOF and CMS and in consultation with the Project Scientist when necessary. The long term archival of ISTP KP and level-zero data will be undertaken by NASA's National Space Science Data Center using the NASA Data Archive and Distribution Service (NDADS). This on-line archive facility will provide rapid access to archived KPs and event data and includes security features to restrict access to the data during the time they are proprietary.
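
    Key Parameters are, in essence, low-resolution survey summaries computed from the full-rate instrument data (typically averages on a roughly minute-scale cadence); the pandas sketch below shows that kind of reduction under assumed column names and is not drawn from the actual CDHF software.

    ```python
    # Illustrative reduction of high-rate instrument samples to one-minute
    # "key parameter" style averages. Column names and the 1-minute cadence are
    # assumptions for the example, not a description of the actual CDHF code.
    import numpy as np
    import pandas as pd

    # Fake 1-second magnetometer-like samples covering ten minutes.
    times = pd.date_range("1995-02-01 00:00:00", periods=600, freq="1s")
    raw = pd.DataFrame({"bx": np.random.randn(600),
                        "by": np.random.randn(600),
                        "bz": 5.0 + np.random.randn(600)}, index=times)

    # One-minute means and standard deviations as the summary product.
    kp_mean = raw.resample("1min").mean().add_suffix("_mean")
    kp_std = raw.resample("1min").std().add_suffix("_std")
    key_parameters = kp_mean.join(kp_std)
    print(key_parameters.head())
    ```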

  12. Issues and recommendations associated with distributed computation and data management systems for the space sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The primary purpose of the report is to explore management approaches and technology developments for computation and data management systems designed to meet future needs in the space sciences. The report builds on work presented in previous solar-terrestrial and planetary reports, broadening the outlook to all of the space sciences, and considering policy issues related to coordination between data centers, missions, and ongoing research activities, because it is perceived that the rapid growth of data and the wide geographic distribution of relevant facilities will present especially troublesome problems for data archiving, distribution, and analysis.

  13. The open science grid

    NASA Astrophysics Data System (ADS)

    Pordes, Ruth; OSG Consortium; Petravick, Don; Kramer, Bill; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Würthwein, Frank; Foster, Ian; Gardner, Rob; Wilde, Mike; Blatecky, Alan; McGee, John; Quick, Rob

    2007-07-01

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.

  14. Addendum report to atmospheric science facility pallet-only mode space transportation system payload feasibility study, volume 3, revision A

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The feasibility of accomplishing selected atmospheric science missions using a pallet-only mode was studied. Certain unresolved issues were identified. The first issue was that of assuring that the on-board computer facility was adequate to process scientific data, control subsystems such as instrument pointing, provide mission operational program capability, and accomplish display and control. The second issue evolved from an investigation of the availability of existing substitute instruments that could be used instead of the prime instrumentation where the development tests and schedules are incompatible with the realistic budgets and shuttle vehicle schedules. Some effort was expended on identifying candidate substitute instruments, and the performance, cost, and development schedule trade-offs found during that effort were significant enough to warrant a follow-on investigation. This addendum documents the results of that follow-on effort, as it applies to the Atmospheric Sciences Facility.

  15. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Klimentov, A

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
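
    The light-weight MPI wrapper idea mentioned above, with one MPI rank per core each running an independent single-threaded payload, can be sketched as follows; the simulate_event command and its arguments are placeholders rather than actual PanDA pilot code.

    ```python
    # Sketch of a light-weight MPI wrapper: every rank launches one independent,
    # single-threaded payload, so a whole multi-core node is filled from a single
    # batch job. The "simulate_event" command is a placeholder, not PanDA code.
    # Run with e.g.: mpiexec -n 16 python mpi_wrapper_sketch.py
    import subprocess
    import sys
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank gets its own work item, derived here simply from its rank number.
    workdir = f"payload_{rank:04d}"
    cmd = ["simulate_event", "--seed", str(rank), "--outdir", workdir]

    proc = subprocess.run(cmd, capture_output=True, text=True)
    status = 0 if proc.returncode == 0 else 1

    # Gather per-rank status on rank 0 so the batch job can report overall success.
    statuses = comm.gather(status, root=0)
    if rank == 0:
        failed = sum(statuses)
        print(f"{len(statuses) - failed} payloads succeeded, {failed} failed")
        sys.exit(1 if failed else 0)
    ```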

  16. Aeronautical engineering: A continuing bibliography with indexes (supplement 280)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This bibliography lists 647 reports, articles, and other documents introduced into the NASA scientific and technical information system in June, 1991. Subject coverage includes: aerodynamics, air transportation safety, aircraft communication and navigation, aircraft design and performance, aircraft instrumentation, aircraft propulsion, aircraft stability and control, research facilities, astronautics, chemistry and materials, engineering, geosciences, computer sciences, physics, and social sciences.

  17. New project to support scientific collaboration electronically

    NASA Astrophysics Data System (ADS)

    Clauer, C. R.; Rasmussen, C. E.; Niciejewski, R. J.; Killeen, T. L.; Kelly, J. D.; Zambre, Y.; Rosenberg, T. J.; Stauning, P.; Friis-Christensen, E.; Mende, S. B.; Weymouth, T. E.; Prakash, A.; McDaniel, S. E.; Olson, G. M.; Finholt, T. A.; Atkins, D. E.

    A new multidisciplinary effort is linking research in the upper atmospheric and space, computer, and behavioral sciences to develop a prototype electronic environment for conducting team science worldwide. A real-world electronic collaboration testbed has been established to support scientific work centered around the experimental operations being conducted with instruments from the Sondrestrom Upper Atmospheric Research Facility in Kangerlussuaq, Greenland. Such group computing environments will become an important component of the National Information Infrastructure initiative, which is envisioned as the high-performance communications infrastructure to support national scientific research.

  18. The Argonne Leadership Computing Facility 2010 annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drugan, C.

    Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale-class computers by a factor of a thousand. Pete Beckman, who served as the ALCF's Director for the past few years, has been named director of the newly created Exascale Technology and Computing Institute (ETCi). The institute will focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems. Just as Pete's leadership propelled the ALCF to great success, we know that ETCi will benefit immensely from his expertise and experience. Without question, the future of supercomputing is certainly in good hands. I would like to thank Pete for all his effort over the past two years, during which he oversaw the establishment of ALCF2, the deployment of the Magellan project, and increases in utilization, availability, and the number of projects using ALCF1. He managed the rapid growth of ALCF staff and made the facility what it is today. All the staff and users are better for Pete's efforts.

  19. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    NASA Astrophysics Data System (ADS)

    Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.

    2012-12-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, we currently have several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM&MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  20. Science and technology review, March 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, R.

    The articles in this month's issue are entitled Site 300's New Contained Firing Facility, Computational Electromagnetics: Codes and Capabilities, Ergonomics Research: Impact on Injuries, and The Linear Electric Motor: Instability at 1,000 g's.

  1. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  2. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Allcock, William; Beggio, Chris

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  3. Separating Added Value from Hype: Some Experiences and Prognostications

    NASA Astrophysics Data System (ADS)

    Reed, Dan

    2004-03-01

    These are exciting times for the interplay of science and computing technology. As new data archives, instruments, and computing facilities are connected nationally and internationally, a new model of distributed scientific collaboration is emerging. However, any new technology brings both opportunities and challenges -- Grids are no exception. In this talk, we will discuss some of the experiences deploying Grid software in production environments, illustrated with experiences from the NSF PACI Alliance, the NSF Extensible Terascale Facility (ETF), and other Grid projects. From these experiences, we derive some guidelines for deployment and some suggestions for community engagement, software development, and infrastructure.

  4. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Hua, H.

    2016-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are an order of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional means of procuring hardware on-premise are already limited by facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75-90% cost savings but with an unpredictable computing environment driven by market forces.
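    As a rough illustration of the spot-market approach mentioned above, the sketch below requests interruptible EC2 capacity with boto3. The AMI ID, instance type, bid price, key name, and worker count are placeholders, and real deployments (HySDS included) would layer autoscaling and retry logic on top to cope with market-driven interruptions.

```python
# Minimal sketch of bidding for interruptible compute capacity on the
# EC2 spot market; all identifiers and prices below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.request_spot_instances(
    SpotPrice="0.10",          # maximum price we are willing to pay ($/hr)
    InstanceCount=10,          # number of worker instances to request
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # placeholder worker AMI
        "InstanceType": "c5.4xlarge",
        "KeyName": "my-keypair",             # placeholder key pair
    },
)

# Print the request IDs and their current state so an external autoscaler
# (not shown) can track fulfillment or interruption.
for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```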

  5. JPRS Report, Science & Technology, USSR: Computers

    DTIC Science & Technology

    1987-07-15

    Algebras and Multilevel Program Planning (G. Ye. Tseytlin; PROGRAMMIROVANIYE, No 3, May-Jun 86) 36 Linguistic Facilities for Programming...scientific production associations which, jointly with the USSR Academy of Sciences, will solve basic and applied problems in the informatics industry...especially the establishment of complex, interdisciplinary problems and directions), the change in the style of the scientific thought of the epoch, and

  6. CILogon-HA. Higher Assurance Federated Identities for DOE Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basney, James

    The CILogon-HA project extended the existing open source CILogon service (initially developed with funding from the National Science Foundation) to provide credentials at multiple levels of assurance to users of DOE facilities for collaborative science. CILogon translates mechanism and policy across higher education and grid trust federations, bridging from the InCommon identity federation (which federates university and DOE lab identities) to the Interoperable Global Trust Federation (which defines standards across the Worldwide LHC Computing Grid, the Open Science Grid, and other cyberinfrastructure). The CILogon-HA project expanded the CILogon service to support over 160 identity providers (including 6 DOE facilities) and 3 internationally accredited certification authorities. To provide continuity of operations upon the end of the CILogon-HA project period, project staff transitioned the CILogon service to operation by XSEDE.

  7. Aeronautical Engineering: A continuing bibliography with indexes (supplement 175)

    NASA Technical Reports Server (NTRS)

    1984-01-01

    This bibliography lists 467 reports, articles and other documents introduced into the NASA scientific and technical information system in May 1984. Topics cover varied aspects of aeronautical engineering, geoscience, physics, astronomy, computer science, and support facilities.

  8. Basic energy sciences: Summary of accomplishments

    NASA Astrophysics Data System (ADS)

    1990-05-01

    For more than four decades, the Department of Energy, including its predecessor agencies, has supported a program of basic research in nuclear- and energy-related sciences, known as Basic Energy Sciences. The purpose of the program is to explore fundamental phenomena, create scientific knowledge, and provide unique user facilities necessary for conducting basic research. Its technical interests span the range of scientific disciplines: physical and biological sciences, geological sciences, engineering, mathematics, and computer sciences. Its products and facilities are essential to technology development in many of the more applied areas of the Department's energy, science, and national defense missions. The accomplishments of Basic Energy Sciences research are numerous and significant. Not only have they contributed to Departmental missions, but they have also significantly aided the development of technologies that now serve modern society daily in business, industry, science, and medicine. In a series of stories, this report highlights 22 accomplishments, selected because of their particularly noteworthy contributions to modern society. A full accounting of all the accomplishments would be voluminous. Detailed documentation of the research results can be found in many thousands of articles published in the peer-reviewed technical literature.

  9. Basic Energy Sciences: Summary of Accomplishments

    DOE R&D Accomplishments Database

    1990-05-01

    For more than four decades, the Department of Energy, including its predecessor agencies, has supported a program of basic research in nuclear- and energy-related sciences, known as Basic Energy Sciences. The purpose of the program is to explore fundamental phenomena, create scientific knowledge, and provide unique "user" facilities necessary for conducting basic research. Its technical interests span the range of scientific disciplines: physical and biological sciences, geological sciences, engineering, mathematics, and computer sciences. Its products and facilities are essential to technology development in many of the more applied areas of the Department's energy, science, and national defense missions. The accomplishments of Basic Energy Sciences research are numerous and significant. Not only have they contributed to Departmental missions, but they have also significantly aided the development of technologies that now serve modern society daily in business, industry, science, and medicine. In a series of stories, this report highlights 22 accomplishments, selected because of their particularly noteworthy contributions to modern society. A full accounting of all the accomplishments would be voluminous. Detailed documentation of the research results can be found in many thousands of articles published in the peer-reviewed technical literature.

  10. Remote Internet access to advanced analytical facilities: a new approach with Web-based services.

    PubMed

    Sherry, N; Qin, J; Fuller, M Suominen; Xie, Y; Mola, O; Bauer, M; McIntyre, N S; Maxwell, D; Liu, D; Matias, E; Armstrong, C

    2012-09-04

    Over the past decade, the increasing availability of the World Wide Web has held out the possibility that the efficiency of scientific measurements could be enhanced in cases where experiments were being conducted at distant facilities. Examples of early successes have included X-ray diffraction (XRD) experimental measurements of protein crystal structures at synchrotrons and access to scanning electron microscopy (SEM) and NMR facilities by users from institutions that do not possess such advanced capabilities. Experimental control, visual contact, and receipt of results have used some form of X forwarding and/or VNC (virtual network computing) software that transfers the screen image of a server at the experimental site to that of the users' home site. A more recent development is a web services platform called Science Studio that provides teams of scientists with secure links to experiments at one or more advanced research facilities. The software provides a widely distributed team with a set of controls and screens to operate, observe, and record essential parts of the experiment. As well, Science Studio provides high speed network access to computing resources to process the large data sets that are often involved in complex experiments. The simple web browser and the rapid transfer of experimental data to a processing site allow efficient use of the facility and assist decision making during the acquisition of the experimental results. The software provides users with a comprehensive overview and record of all parts of the experimental process. A prototype network is described involving X-ray beamlines at two different synchrotrons and an SEM facility. An online parallel processing facility has been developed that analyzes the data in near-real time using stream processing. Science Studio can be expanded to include many other analytical applications, providing teams of users with rapid access to processed results along with the means for detailed discussion of their significance.
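    The near-real-time stream processing mentioned above can be sketched as an incremental pipeline: rather than waiting for a complete data set, the analysis consumes fixed-size chunks as they arrive from the beamline or microscope and updates a running summary. The chunk source and the summary statistic below are purely illustrative, not part of Science Studio itself.

```python
# Illustrative stream-processing loop: consume data chunks as they arrive
# and maintain a running summary so users see results during acquisition.
from typing import Iterable, Iterator


def chunks_from_acquisition() -> Iterator[list[float]]:
    """Stand-in for a network feed of detector readouts (placeholder data)."""
    for i in range(5):
        yield [float(i * 10 + j) for j in range(10)]


def running_mean(stream: Iterable[list[float]]) -> Iterator[float]:
    """Update the mean incrementally after each chunk arrives."""
    count, mean = 0, 0.0
    for chunk in stream:
        for x in chunk:
            count += 1
            mean += (x - mean) / count
        yield mean  # partial result available in near-real time


if __name__ == "__main__":
    for partial in running_mean(chunks_from_acquisition()):
        print(f"running mean after latest chunk: {partial:.2f}")
```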

  11. MIT Laboratory for Computer Science Progress Report, July 1984-June 1985

    DTIC Science & Technology

    1985-06-01

    larger (up to several thousand machines) multiprocessor systems. This facility, funded by the newly formed Strategic Computing Program of the Defense...Szolovits, Group Leader R. Patil Collaborating Investigators M. Criscitiello, M.D., Tufts-New England Medical Center Hospital J. Dzierzanowski, Ph.D., Dept...COMPUTATION STRUCTURES Academic Staff J. B. Dennis, Group Leader Research Staff W. B. Ackerman G. A. Boughton W. Y-P. Lim Graduate Students T-A. Chu S

  12. Unique life sciences research facilities at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Mulenburg, G. M.; Vasques, M.; Caldwell, W. F.; Tucker, J.

    1994-01-01

    The Life Science Division at NASA's Ames Research Center has a suite of specialized facilities that enable scientists to study the effects of gravity on living systems. This paper describes some of these facilities and their use in research. Seven centrifuges, each with its own unique abilities, allow testing of a variety of parameters on test subjects ranging from single cells through hardware to humans. The Vestibular Research Facility allows the study of both centrifugation and linear acceleration on animals and humans. The Biocomputation Center uses computers for 3D reconstruction of physiological systems, and interactive research tools for virtual reality modeling. Psycophysiological, cardiovascular, exercise physiology, and biomechanical studies are conducted in the 12 bed Human Research Facility and samples are analyzed in the certified Central Clinical Laboratory and other laboratories at Ames. Human bedrest, water immersion and lower body negative pressure equipment are also available to study physiological changes associated with weightlessness. These and other weightlessness models are used in specialized laboratories for the study of basic physiological mechanisms, metabolism and cell biology. Visual-motor performance, perception, and adaptation are studied using ground-based models as well as short term weightlessness experiments (parabolic flights). The unique combination of Life Science research facilities, laboratories, and equipment at Ames Research Center are described in detail in relation to their research contributions.

  13. Computational Accelerator Physics. Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisognano, J.J.; Mondelli, A.A.

    1997-04-01

    The sixty-two papers appearing in this volume were presented at CAP96, the Computational Accelerator Physics Conference held in Williamsburg, Virginia, from September 24-27, 1996. Science Applications International Corporation (SAIC) and the Thomas Jefferson National Accelerator Facility (Jefferson Lab) jointly hosted CAP96, with financial support from the U.S. Department of Energy's Office of Energy Research and the Office of Naval Research. Topics ranged from descriptions of specific codes to advanced computing techniques and numerical methods. Update talks were presented on nearly all of the accelerator community's major electromagnetic and particle tracking codes. Among all the papers, thirty are abstracted for the Energy Science and Technology database. (AIP)

  14. 2009 ALCF annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, P.; Martin, D.; Drugan, C.

    2010-11-23

    This year the Argonne Leadership Computing Facility (ALCF) delivered nearly 900 million core-hours of science. The research conducted at this leadership-class facility touched our lives in both minute and massive ways - whether it was studying the catalytic properties of gold nanoparticles, predicting protein structures, or unearthing the secrets of exploding stars. The authors remained true to their vision to act as the forefront computational center in extending science frontiers by solving pressing problems for our nation. Our success in this endeavor was due mainly to the Department of Energy's (DOE) INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program. The program awards significant amounts of computing time to computationally intensive, unclassified research projects that can make high-impact scientific advances. This year, DOE allocated 400 million hours of time to 28 research projects at the ALCF. Scientists from around the world conducted the research, representing such esteemed institutions as the Princeton Plasma Physics Laboratory, the National Institute of Standards and Technology, and the European Center for Research and Advanced Training in Scientific Computation. Argonne also provided Director's Discretionary allocations for research challenges, addressing such issues as reducing aerodynamic noise, critical for next-generation 'green' energy systems. Intrepid - the ALCF's 557-teraflops IBM Blue Gene/P supercomputer - enabled astounding scientific solutions and discoveries. Intrepid went into full production five months ahead of schedule. As a result, the ALCF nearly doubled the days of production computing available to the DOE Office of Science, INCITE awardees, and Argonne projects. One of the fastest supercomputers in the world for open science, the energy-efficient system uses about one-third as much electricity as a machine of comparable size built with more conventional parts. In October 2009, President Barack Obama recognized the excellence of the entire Blue Gene series by awarding it the National Medal of Technology and Innovation. Other noteworthy achievements included the ALCF's collaboration with the National Energy Research Scientific Computing Center (NERSC) to examine cloud computing as a potential new computing paradigm for scientists. Named Magellan, the DOE-funded initiative will explore which science application programming models work well within the cloud, as well as evaluate the challenges that come with this new paradigm. The ALCF obtained approval for its next-generation machine, a 10-petaflops system to be delivered in 2012. This system will allow us to resolve ever more pressing problems, even more expeditiously, through breakthrough science in the years to come.

  15. Laboratory directed research and development program FY 1999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Todd; Levy, Karin

    2000-03-08

    The Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab or LBNL) is a multi-program national research facility operated by the University of California for the Department of Energy (DOE). As an integral element of DOE's National Laboratory System, Berkeley Lab supports DOE's missions in fundamental science, energy resources, and environmental quality. Berkeley Lab programs advance four distinct goals for DOE and the nation: (1) To perform leading multidisciplinary research in the computing sciences, physical sciences, energy sciences, biosciences, and general sciences in a manner that ensures employee and public safety and protection of the environment. (2) To develop and operate unique national experimental facilities for qualified investigators. (3) To educate and train future generations of scientists and engineers to promote national science and education goals. (4) To transfer knowledge and technological innovations and to foster productive relationships among Berkeley Lab's research programs, universities, and industry in order to promote national economic competitiveness. This is the annual report on the Laboratory Directed Research and Development (LDRD) program for FY99.

  16. Jackson State University's Center for Spatial Data Research and Applications: New facilities and new paradigms

    NASA Technical Reports Server (NTRS)

    Davis, Bruce E.; Elliot, Gregory

    1989-01-01

    Jackson State University recently established the Center for Spatial Data Research and Applications, a Geographical Information System (GIS) and remote sensing laboratory. Taking advantage of new technologies and new directions in the spatial (geographic) sciences, JSU is building a Center of Excellence in Spatial Data Management. New opportunities for research, applications, and employment are emerging. GIS requires fundamental shifts and new demands in traditional computer science and geographic training. The Center is not merely another computer lab but one setting the pace in a new applied frontier. GIS and its associated technologies are discussed. The Center's facilities are described. An ARC/INFO GIS runs on a VAX mainframe, with numerous workstations. Image processing packages include ELAS, LIPS, VICAR, and ERDAS. A host of hardware and software peripherals are used in support. Numerous projects are underway, such as the construction of a Gulf of Mexico environmental database, development of AI in image processing, a land use dynamics study of metropolitan Jackson, and others. A new academic interdisciplinary program in Spatial Data Management is under development, combining courses in Geography and Computer Science. The broad range of JSU's GIS and remote sensing activities is addressed. The impacts on changing paradigms in the university and in the professional world conclude the discussion.

  17. On the Reaction Mechanism of Acetaldehyde Decomposition on Mo(110)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Donghai; Karim, Ayman M.; Wang, Yong

    2012-02-16

    The strong Mo-O bond provides Mo-based catalysts with promising reactivity for the deoxygenation of biomass-derived oxygenates. Combining the novel dimer saddle-point searching method with periodic spin-polarized density functional theory calculations, we investigated the reaction pathways of acetaldehyde decomposition on the clean Mo(110) surface. Two reaction pathways were identified: a selective deoxygenation pathway and a nonselective fragmentation pathway. We found that acetaldehyde preferentially adsorbs at the pseudo 3-fold hollow site in the η2(C,O) configuration on Mo(110). Among the four possible bond cleavages (β-C-H, γ-C-H, C-O, and C-C), the initial decomposition of the adsorbed acetaldehyde produces either ethylidene via C-O bond scission or acetyl via β-C-H bond scission, while the C-C and γ-C-H bond cleavages of acetaldehyde, leading to the formation of methyl (and formyl) and formylmethyl, are unlikely. Further dehydrogenations of ethylidene into either ethylidyne or vinyl are competing and very facile, with low activation barriers of 0.24 and 0.31 eV, respectively. Concurrently, the formed acetyl would deoxygenate into ethylidyne via C-O cleavage rather than breaking the C-C or C-H bonds. The selective deoxygenation of acetaldehyde forming ethylene is inhibited by the relatively weak hydrogenation capability of the Mo(110) surface. Instead, the nonselective pathway via vinyl and vinylidene dehydrogenations to ethynyl as the final hydrocarbon fragment is kinetically favorable. On the other hand, the strong interaction between ethylene and the Mo(110) surface also leads to ethylene decomposition instead of desorption into the gas phase. This work was financially supported by the National Advanced Biofuels Consortium (NABC). Computing time was granted by a user project (emsl42292) at the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL). The EMSL is a U.S. Department of Energy (DOE) national scientific user facility located at Pacific Northwest National Laboratory (PNNL) and supported by the DOE Office of Biological and Environmental Research. Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy.
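    To put the computed barriers in perspective, a simple transition-state-theory estimate (assuming equal pre-exponential factors, which the record does not report) shows how a modest barrier difference translates into a rate ratio. At 298 K, the 0.07 eV gap between the competing ethylidene dehydrogenation channels corresponds to roughly an order-of-magnitude rate difference:

```latex
% Hedged estimate: equal prefactors assumed; k_B T \approx 0.0257 eV at 298 K.
\frac{k_{0.24\,\mathrm{eV}}}{k_{0.31\,\mathrm{eV}}}
  = \exp\!\left(\frac{E_2 - E_1}{k_B T}\right)
  = \exp\!\left(\frac{0.31 - 0.24}{0.0257}\right)
  \approx \exp(2.7) \approx 15 .
```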

  18. Student science enrichment training program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, S.S.

    1994-08-01

    This is a report on the Student Science Enrichment Training Program, with special emphasis on the chemical and computer science fields. The residential summer session was held at the campus of Claflin College, Orangeburg, SC, for six weeks during the summer of 1993, to run concomitantly with the college's summer school. The fifty participants selected for this program included high school sophomores, juniors, and seniors. The students came from rural South Carolina and adjoining states which, presently, have limited science and computer science facilities. The program focused on high-ability minority students with high potential for science, engineering, and mathematical careers. The major objective was to increase the pool of well-qualified college-entering minority students who would elect to go into science, engineering, and mathematical careers. The Division of Natural Sciences and Mathematics and Engineering at Claflin College received major benefits from this program, as it helped them to expand the Departments of Chemistry, Engineering, Mathematics, and Computer Science as a result of additional enrollment. It also established an expanded pool of well-qualified minority science and mathematics graduates, who were recruited by the federal agencies and private corporations visiting the Claflin College campus. The Department of Energy's relationship with Claflin College increased public awareness of energy-related job opportunities in the public and private sectors.

  19. Real science at the petascale.

    PubMed

    Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V

    2009-06-28

    We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at the Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32,768 cores for certain of our codes in the so-called 'capability computing' category, as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65,536 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.

  20. Preface: SciDAC 2008

    NASA Astrophysics Data System (ADS)

    Stevens, Rick

    2008-07-01

    The fourth annual Scientific Discovery through Advanced Computing (SciDAC) Conference was held June 13-18, 2008, in Seattle, Washington. The SciDAC conference series is the premier communitywide venue for presentation of results from the DOE Office of Science's interdisciplinary computational science program. Started in 2001 and renewed in 2006, the DOE SciDAC program is the country's - and arguably the world's - most significant interdisciplinary research program supporting the development of advanced scientific computing methods and their application to fundamental and applied areas of science. SciDAC supports computational science across many disciplines, including astrophysics, biology, chemistry, fusion sciences, and nuclear physics. Moreover, the program actively encourages the creation of long-term partnerships among scientists focused on challenging problems and computer scientists and applied mathematicians developing the technology and tools needed to address those problems. The SciDAC program has played an increasingly important role in scientific research by allowing scientists to create more accurate models of complex processes, simulate problems once thought to be impossible, and analyze the growing amount of data generated by experiments. To help further the research community's ability to tap into the capabilities of current and future supercomputers, Under Secretary for Science, Raymond Orbach, launched the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program in 2003. The INCITE program was conceived specifically to seek out computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. The program encourages proposals from universities, other research institutions, and industry. During the first two years of the INCITE program, 10 percent of the resources at NERSC were allocated to INCITE awardees. However, demand for supercomputing resources far exceeded available systems; and in 2003, the Office of Science identified increasing computing capability by a factor of 100 as the second priority on its Facilities of the Future list. The goal was to establish leadership-class computing resources to support open science. As a result of a peer reviewed competition, the first leadership computing facility was established at Oak Ridge National Laboratory in 2004. A second leadership computing facility was established at Argonne National Laboratory in 2006. This expansion of computational resources led to a corresponding expansion of the INCITE program. In 2008, Argonne, Lawrence Berkeley, Oak Ridge, and Pacific Northwest national laboratories all provided resources for INCITE. By awarding large blocks of computer time on the DOE leadership computing facilities, the INCITE program enables the largest-scale computations to be pursued. In 2009, INCITE will award over half a billion node-hours of time. The SciDAC conference celebrates progress in advancing science through large-scale modeling and simulation. Over 350 participants attended this year's talks, poster sessions, and tutorials, spanning the disciplines supported by DOE. While the principal focus was on SciDAC accomplishments, this year's conference also included invited presentations and posters from DOE INCITE awardees. 
Another new feature in the SciDAC conference series was an electronic theater and video poster session, which provided an opportunity for the community to see over 50 scientific visualizations in a venue equipped with many high-resolution large-format displays. To highlight the growing international interest in petascale computing, this year's SciDAC conference included a keynote presentation by Herman Lederer from the Max Planck Institut, one of the leaders of the DEISA (Distributed European Infrastructure for Supercomputing Applications) project and a member of the PRACE consortium, Europe's main petascale project. We also heard excellent talks from several European groups, including Laurent Gicquel of CERFACS, who spoke on `Large-Eddy Simulations of Turbulent Reacting Flows of Real Burners: Status and Challenges', and Jean-Francois Hamelin from EDF, who presented a talk on `Getting Ready for Petaflop Capacities and Beyond: A Utility Perspective'. Two other compelling addresses gave attendees a glimpse into the future. Tomas Diaz de la Rubia of Lawrence Livermore National Laboratory spoke on a vision for a fusion/fission hybrid reactor known as the `LIFE Engine' and discussed some of the materials and modeling challenges that need to be overcome to realize the vision for a 1000-year greenhouse-gas-free power source. Dan Reed from Microsoft gave a capstone talk on the convergence of technology, architecture, and infrastructure for cloud computing, data-intensive computing, and exascale computing (10^18 flops/sec). High-performance computing is making rapid strides. The SciDAC community's computational resources are expanding dramatically. In the summer of 2008, the first general-purpose petascale system (the IBM Cell-based Roadrunner at Los Alamos National Laboratory) was recognized in the Top 500 list of fastest machines, heralding the dawn of the petascale era. The DOE's leadership computing facility at Argonne reached number three on the Top 500 and is at the moment the most capable open science machine, based on an IBM BG/P system with a peak performance of over 550 teraflops. Later this year Oak Ridge is expected to deploy a 1 petaflops Cray XT system. And even before the scientific community has had an opportunity to make significant use of petascale systems, the computer science research community is forging ahead with ideas and strategies for development of systems that may, by the end of the next decade, sustain exascale performance. Several talks addressed barriers to, and strategies for, achieving exascale capabilities. The last day of the conference was devoted to tutorials hosted by Microsoft Research at a new conference facility in Redmond, Washington. Over 90 people attended the tutorials, which covered topics ranging from an introduction to BG/P programming to advanced numerical libraries. The SciDAC and INCITE programs and the DOE Office of Advanced Scientific Computing Research core program investments in applied mathematics, computer science, and computational and networking facilities provide a nearly optimum framework for advancing computational science for DOE's Office of Science. At a broader level, this framework is also benefiting the entire American scientific enterprise. As we look forward, it is clear that computational approaches will play an increasingly significant role in addressing challenging problems in basic science, energy, and environmental research. 
It takes many people to organize and support the SciDAC conference, and I would like to thank as many of them as possible. The backbone of the conference is the technical program; and the task of selecting, vetting, and recruiting speakers is the job of the organizing committee. I thank the members of this committee for all the hard work and the many tens of conference calls that enabled a wonderful program to be assembled. This year the following people served on the organizing committee: Jim Ahrens, LANL; David Bader, LLNL; Bryan Barnett, Microsoft; Peter Beckman, ANL; Vincent Chan, GA; Jackie Chen, SNL; Lori Diachin, LLNL; Dan Fay, Microsoft; Ian Foster, ANL; Mark Gordon, Ames; Mohammad Khaleel, PNNL; David Keyes, Columbia University; Bob Lucas, University of Southern California; Tony Mezzacappa, ORNL; Jeff Nichols, ORNL; David Nowak, ANL; Michael Papka, ANL; Thomas Schultess, ORNL; Horst Simon, LBNL; David Skinner, LBNL; Panagiotis Spentzouris, Fermilab; Bob Sugar, UCSB; and Kathy Yelick, LBNL. I owe a special thanks to Mike Papka and Jim Ahrens for handling the electronic theater. I also thank all those who submitted videos. It was a highly successful experiment. Behind the scenes an enormous amount of work is required to make a large conference go smoothly. First I thank Cheryl Zidel for her tireless efforts as organizing committee liaison and posters chair and, in general, handling all of my end of the program and keeping me calm. I also thank Gail Pieper for her work in editing the proceedings, Beth Cerny Patino for her work on the Organizing Committee website and electronic theater, and Ken Raffenetti for his work in keeping that website working. Jon Bashor and John Hules did an excellent job in handling conference communications. I thank Caitlin Youngquist for the striking graphic design; Dan Fay for tutorials arrangements; and Lynn Dory, Suzanne Stevenson, Sarah Pebelske and Sarah Zidel for on-site registration and conference support. We all owe Yeen Mankin an extra-special thanks for choosing the hotel, handling contracts, arranging menus, securing venues, and reassuring the chair that everything was under control. We are pleased to have obtained corporate sponsorship from Cray, IBM, Intel, HP, and SiCortex. I thank all the speakers and panel presenters. I also thank the former conference chairs Tony Mezzacappa, Bill Tang, and David Keyes, who were never far away for advice and encouragement. Finally, I offer my thanks to Michael Strayer, without whose leadership, vision, and persistence the SciDAC program would not have come into being and flourished. I am honored to be part of his program and his friend. Rick Stevens, Seattle, Washington, July 18, 2008

  1. The growth of the UniTree mass storage system at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Tarshish, Adina; Salmon, Ellen

    1993-01-01

    In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw the growth of near-online data from nil to nearly three terabytes, a doubling of the number of CPUs on the facility's Cray Y-MP (the primary data source for UniTree), and the necessity for an aggressive regimen of repacking sparse tapes and hierarchically 'vaulting' old files to freestanding tape. Connectivity was enhanced as well with the addition of UltraNet HiPPI. This paper describes the increasing demands placed on the storage system's performance and throughput that resulted from the significant augmentation of compute-server processor power and network speed.
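    The "vaulting" regimen described above is, in essence, an age-based migration policy. The sketch below shows the general idea with hypothetical paths and thresholds (it is not the UniTree implementation): files in the near-online cache that have not been accessed for a configurable number of days are moved to a slower, freestanding-tape tier.

```python
# Hypothetical age-based migration ("vaulting") policy for a hierarchical
# storage cache; paths and the age threshold are illustrative only.
import os
import shutil
import time

CACHE_DIR = "/unitree/cache"   # placeholder near-online cache
VAULT_DIR = "/unitree/vault"   # placeholder freestanding-tape staging area
MAX_IDLE_DAYS = 180            # illustrative policy threshold


def vault_old_files(cache_dir: str, vault_dir: str, max_idle_days: int) -> None:
    """Move files whose last access is older than the threshold to the vault tier."""
    cutoff = time.time() - max_idle_days * 86400
    for entry in os.scandir(cache_dir):
        if entry.is_file() and entry.stat().st_atime < cutoff:
            shutil.move(entry.path, os.path.join(vault_dir, entry.name))
            print(f"vaulted {entry.name}")


if __name__ == "__main__":
    vault_old_files(CACHE_DIR, VAULT_DIR, MAX_IDLE_DAYS)
```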

  2. The Kinetics of Evolution of Water Vapor Clusters in Air

    DTIC Science & Technology

    1975-12-01

    Academy, Annapolis, Maryland 21402. Work supported by: Power Branch and Atmospheric Sciences Program, Office of Naval Research and Naval Air...to experiments in supersonic nozzles. The patient support of the Power Branch and the Atmospheric Sciences Program, Office of Naval Research over...the start by relying on the digital computer from the start of development. Time-shared computer facilities were provided by the Naval Weapons Lab

  3. JPRS report: Science and Technology. Europe and Latin America

    NASA Astrophysics Data System (ADS)

    1988-01-01

    Articles from the popular and trade press are included on the following subjects: advanced materials, aerospace industry, automotive industry, biotechnology, computers, factory automation and robotics, microelectronics, and science and technology policy. The aerospace articles briefly discuss, in a nontechnical way, the SAGEM bubble memories for space applications, new Ariane V testing facilities, innovative technologies of the TDF-1 satellite, and the restructuring of the Aviation Division at France's Aerospatiale.

  4. Capabilities: Science Pillars

    Science.gov Websites


  5. Science Briefs

    Science.gov Websites


  6. Office of Science

    Science.gov Websites


  7. Bradbury Science Museum

    Science.gov Websites


  8. Sandia National Laboratories: National Security Missions: Nuclear Weapons

    Science.gov Websites

    ...in which fundamental science, computer models, and unique experimental facilities come together...

  9. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE PAGES

    Klimentov, A.; Buncic, P.; De, K.; ...

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) to manage the workflow for all data processing at hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013, we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  10. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimentov, A.; Buncic, P.; De, K.

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) to manage the workflow for all data processing at hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013, we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  11. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2018-02-13

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. Mira is a machine for open science: any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
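    A quick back-of-the-envelope check of the "billions of core-hours per year" figure, assuming Mira's widely reported count of roughly 786,432 cores (a number not stated in the record itself) and ignoring downtime:

```latex
% Approximate annual capacity of the full machine.
786{,}432~\text{cores} \times 24~\tfrac{\text{h}}{\text{day}} \times 365~\text{days}
  \approx 6.9 \times 10^{9}~\text{core-hours per year}.
```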

  12. Mira: Argonne's 10-petaflops supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papka, Michael; Coghlan, Susan; Isaacs, Eric

    2013-07-03

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. Mira is a machine for open science: any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  13. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    NASA Astrophysics Data System (ADS)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. Eighteen invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in Quantum Field Theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open source, knowledge sharing, and scientific collaboration stimulated further thinking about these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS), and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all of the workshop's activities. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolic, R J

    This month's issue has the following articles: (1) Dawn of a New Era of Scientific Discovery - Commentary by Edward I. Moses; (2) At the Frontiers of Fundamental Science Research - Collaborators from national laboratories, universities, and international organizations are using the National Ignition Facility to probe key fundamental science questions; (3) Livermore Responds to Crisis in Post-Earthquake Japan - More than 70 Laboratory scientists provided round-the-clock expertise in radionuclide analysis and atmospheric dispersion modeling as part of the nation's support to Japan following the March 2011 earthquake and nuclear accident; (4) A Comprehensive Resource for Modeling, Simulation, and Experiments - A new Web-based resource called MIDAS is a central repository for material properties, experimental data, and computer models; and (5) Finding Data Needles in Gigabit Haystacks - Livermore computer scientists have developed a novel computer architecture based on 'persistent' memory to ease data-intensive computations.

  15. National Synchrotron Light Source annual report 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulbert, S.L.; Lazarz, N.M.

    1992-04-01

    This report discusses the following research conducted at NSLS: atomic and molecular science; energy dispersive diffraction; lithography, microscopy and tomography; nuclear physics; UV photoemission and surface science; x-ray absorption spectroscopy; x-ray scattering and crystallography; x-ray topography; workshop on surface structure; workshop on electronic and chemical phenomena at surfaces; workshop on imaging; UV FEL machine reviews; VUV machine operations; VUV beamline operations; VUV storage ring parameters; x-ray machine operations; x-ray beamline operations; x-ray storage ring parameters; superconducting x-ray lithography source; SXLS storage ring parameters; the accelerator test facility; proposed UV-FEL user facility at the NSLS; global orbit feedback systems; and NSLS computer system.

  16. Next Generation Cloud-based Science Data Systems and Their Implications on Data and Software Stewardship, Preservation, and Provenance

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.

    2017-12-01

    NASA's upcoming missions are expected to generate data volumes at least an order of magnitude larger than those of current missions. A significant increase in data processing, data rates, data volumes, and long-term data archive capabilities is needed. Consequently, new challenges are emerging that impact traditional data and software management approaches. At large scales, next generation science data systems are exploring the move onto cloud computing paradigms to support these increased needs. New considerations such as costs, data movement, collocation of data systems and archives, and moving processing closer to the data may result in changes to the stewardship, preservation, and provenance of science data and software. With more science data systems being on-boarded onto cloud computing facilities, we can expect more Earth science data records to be both generated and kept in the cloud. But at large scales, the cost of processing and storing global data may impact architectural and system designs. Data systems will trade off the cost of keeping data in the cloud against data life-cycle approaches that move "colder" data back to traditional on-premise facilities. How will this impact data citation and processing software stewardship? What are the impacts of cloud-based on-demand processing, and what is its effect on reproducibility and provenance? Similarly, with more science processing software being moved onto cloud, virtual machine, and container-based approaches, more opportunities arise for improved stewardship and preservation. But will the science community trust data reprocessed years or decades later? We will also explore emerging questions about the stewardship of the science data system software that generates the science data records, both during and after the life of the mission.
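    The cloud-versus-on-premise trade-off described above can be framed as a simple cost comparison per data set: keep data in cloud storage while the projected cloud cost over the remaining retention period is below the one-time cost of egressing it plus archiving it on premise. The sketch below uses entirely hypothetical prices to illustrate the decision, not actual provider rates.

```python
# Hypothetical "hot vs. cold" placement decision for an Earth-science data set;
# all prices are illustrative placeholders, not real provider rates.

CLOUD_STORAGE_PER_TB_MONTH = 23.0   # $/TB/month, placeholder
ONPREM_STORAGE_PER_TB_MONTH = 5.0   # $/TB/month, placeholder
EGRESS_PER_TB = 90.0                # one-time $/TB to move data out, placeholder


def cheaper_to_repatriate(size_tb: float, months_remaining: int) -> bool:
    """Return True if moving the data set on premise costs less over its lifetime."""
    stay_in_cloud = size_tb * CLOUD_STORAGE_PER_TB_MONTH * months_remaining
    move_on_prem = size_tb * (EGRESS_PER_TB
                              + ONPREM_STORAGE_PER_TB_MONTH * months_remaining)
    return move_on_prem < stay_in_cloud


if __name__ == "__main__":
    # Example: a 500 TB "cold" collection retained for another 5 years.
    print(cheaper_to_repatriate(size_tb=500, months_remaining=60))
```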

  17. Intelligent Monitoring of Rocket Test Systems

    NASA Technical Reports Server (NTRS)

    Duran, Esteban; Rocha, Stephanie; Figueroa, Fernando

    2016-01-01

    Stephanie Rocha is an undergraduate student pursuing a degree in Mechanical Engineering. Esteban Duran is pursuing a degree in Computer Science. Our mentor is Fernando Figueroa. Our project involved developing Intelligent Health Monitoring at the High Pressure Gas Facility (HPGF) using the Gensym G2 software.

  18. Frontiers in Science Lectures

    Science.gov Websites


  19. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacon, Charles; Bell, Greg; Canon, Shane

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  20. ALCF Data Science Program: Productive Data-centric Supercomputing

    NASA Astrophysics Data System (ADS)

    Romero, Nichols; Vishwanath, Venkatram

    The ALCF Data Science Program (ADSP) is targeted at big data science problems that require leadership computing resources. The goal of the program is to explore and improve a variety of computational methods that will enable data-driven discoveries across all scientific disciplines. The projects will focus on data science techniques covering a wide area of discovery, including but not limited to uncertainty quantification, statistics, machine learning, deep learning, databases, pattern recognition, image processing, graph analytics, data mining, real-time data analysis, and complex and interactive workflows. Project teams will be among the first to access Theta, the ALCF's forthcoming 8.5-petaflops Intel/Cray system. The program will transition to the 200-petaflops Aurora supercomputing system when it becomes available. In 2016, four projects were selected to kick off the ADSP. The selected projects span experimental and computational sciences and range from modeling the brain to discovering new materials for solar-powered windows to simulating collision events at the Large Hadron Collider (LHC). The program will have a regular call for proposals, with the next call expected in Spring 2017. See http://www.alcf.anl.gov/alcf-data-science-program. This research used resources of the ALCF, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

  1. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing at over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCF's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
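    The light-weight MPI wrapper idea described above, launching many independent single-threaded payloads so that one batch allocation fills a supercomputer's multi-core nodes, can be sketched with mpi4py. The payload command and task list below are placeholders, not the production PanDA pilot code.

```python
# Minimal MPI wrapper sketch: each rank runs a subset of independent,
# single-threaded payloads so one batch job fills many cores.
# The payload command and task list are placeholders.
import subprocess
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Illustrative list of independent work units (e.g. Monte Carlo seeds).
tasks = [f"./payload.sh --seed {seed}" for seed in range(1024)]

# Round-robin assignment: rank r handles tasks r, r+size, r+2*size, ...
for cmd in tasks[rank::size]:
    subprocess.run(cmd, shell=True, check=False)

comm.Barrier()  # wait for every rank to finish its share
if rank == 0:
    print(f"all {len(tasks)} payloads dispatched across {size} ranks")
```

    A wrapper like this would typically be launched inside a single batch allocation, e.g. with something along the lines of `mpiexec -n 16 python wrapper.py` (launcher name and rank count are illustrative).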

  2. Science and Innovation at Los Alamos

    Science.gov Websites


  3. STAF: A Powerful and Sophisticated CAI System.

    ERIC Educational Resources Information Center

    Loach, Ken

    1982-01-01

    Describes the STAF (Science Teacher's Authoring Facility) computer-assisted instruction system developed at Leeds University (England), focusing on STAF language and major program features. Although programs for the system emphasize physical chemistry and organic spectroscopy, the system and language are general purpose and can be used in any…

  4. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Ashley D.; Bernholdt, David E.; Bland, Arthur S.

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example, Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by producing the highest-resolution CyberShake map for Southern California to date. The Titan system provides the largest extant heterogeneous architecture for computing and computational science. Usage is high, delivering on the promise of a system well-suited for capability simulations for science. This success is due in part to innovations in tracking and reporting the activity on the compute nodes, and using this information to further enable and optimize applications, extending and balancing workload across the entire node. The OLCF continues to invest in innovative processes, tools, and resources necessary to meet continuing user demand. The facility’s leadership in data analysis and workflows was featured at the Department of Energy (DOE) booth at SC15, for the second year in a row, highlighting work with researchers from the National Library of Medicine coupled with unique computational and data resources serving experimental and observational data across facilities. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. Building on the exemplary year of 2014, as shown by the 2014 Operational Assessment Report (OAR) review committee response in Appendix A, this OAR delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a multi-petaflop resource for cutting-edge research. This report covers CY 2015, which, unless otherwise specified, denotes January 1, 2015, through December 31, 2015.

  5. Multi-year Content Analysis of User Facility Related Publications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Robert M; Stahl, Christopher G; Hines, Jayson

    2013-01-01

    Scientific user facilities provide resources and support that enable scientists to conduct experiments or simulations pertinent to their respective research. Consequently, it is critical to have an informed understanding of the impact and contributions that these facilities have on scientific discoveries. Leveraging insight into scientific publications that acknowledge the use of these facilities enables facility management and sponsors to make more informed decisions regarding policy, resource allocation, and the direction of science, as well as to understand more effectively the impact of a scientific user facility. This work discusses preliminary results of mining scientific publications that utilized resources at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL). These results show promise in identifying and leveraging multi-year trends and providing a higher resolution view of the impact that a scientific user facility may have on scientific discoveries.
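
    As a hedged illustration of the kind of content analysis described above (not the authors' actual pipeline), the sketch below scans publication acknowledgment text for facility mentions and tallies them by year; the record structure and regular expressions are assumptions for the example.

        # Minimal illustration only: count publications per year whose acknowledgment
        # text mentions a user facility, given records shaped like
        # {"year": 2012, "acknowledgments": "...Oak Ridge Leadership Computing Facility..."}.
        import re
        from collections import Counter
        from typing import Iterable

        FACILITY_PATTERNS = {
            "OLCF": re.compile(r"Oak Ridge Leadership Computing Facility|OLCF", re.IGNORECASE),
            "ALCF": re.compile(r"Argonne Leadership Computing Facility|ALCF", re.IGNORECASE),
        }


        def facility_mentions_by_year(records: Iterable[dict]) -> dict:
            """Return {facility: Counter({year: number_of_publications})}."""
            counts = {name: Counter() for name in FACILITY_PATTERNS}
            for rec in records:
                text = rec.get("acknowledgments", "")
                for name, pattern in FACILITY_PATTERNS.items():
                    if pattern.search(text):
                        counts[name][rec["year"]] += 1
            return counts


        if __name__ == "__main__":
            sample = [
                {"year": 2011, "acknowledgments": "Resources of the OLCF were used."},
                {"year": 2012, "acknowledgments": "We thank the Argonne Leadership Computing Facility."},
            ]
            print(facility_mentions_by_year(sample))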

  6. Laboratory for Computer Science Progress Report 21, July 1983-June 1984.

    DTIC Science & Technology

    1984-06-01

    Systems; 4. Distributed Consensus; 5. Election of a Leader in a Distributed Ring of Processors; 6. Distributed Network Algorithms; 7. Diagnosis...multiprocessor systems. This facility, funded by the newly formed Strategic Computing Program of the Defense Advanced Research Projects Agency, will enable...Academic Staff: P. Szolovits, Group Leader; R. Patil. Collaborating Investigators: M. Criscitiello, M.D., Tufts-New England Medical Center Hospital; R

  7. Introduction to the Space Physics Analysis Network (SPAN)

    NASA Technical Reports Server (NTRS)

    Green, J. L. (Editor); Peters, D. J. (Editor)

    1985-01-01

    The Space Physics Analysis Network or SPAN is emerging as a viable method for solving an immediate communication problem for the space scientist. SPAN provides low-rate communication capability with co-investigators and colleagues, and access to space science data bases and computational facilities. SPAN utilizes up-to-date hardware and software for computer-to-computer communications, allowing binary file transfer and remote log-on capability to over 25 nationwide space science computer systems. SPAN is not discipline or mission dependent, with participation from scientists in such fields as magnetospheric, ionospheric, planetary, and solar physics. Basic information on the network and its use is provided. It is anticipated that SPAN will grow rapidly over the next few years, not only in the number of network nodes but also in capability, as scientists become more proficient in the use of telescience and demand increases.

  8. Networking Cyberinfrastructure Resources to Support Global, Cross-disciplinary Science

    NASA Astrophysics Data System (ADS)

    Lehnert, K.; Ramamurthy, M. K.

    2016-12-01

    The geosciences are globally connected by nature, and grand challenge problems such as climate change, ocean circulation, seasonal prediction, and the impact of volcanic eruptions all transcend disciplinary and geographic boundaries, requiring cross-disciplinary and international partnerships. Cross-disciplinary and international collaborations are also needed to unleash the power of cyber- (or e-) infrastructure (CI) by networking globally distributed, multi-disciplinary data, software, and computing resources to accelerate new scientific insights and discoveries. While the promises of a global and cross-disciplinary CI are exhilarating and real, a range of technical, organizational, and social challenges needs to be overcome in order to achieve alignment and linking of operational data systems, software tools, and computing facilities. New modes of collaboration require agreement on and governance of technical standards and best practices, and funding for necessary modifications. This presentation will contribute the perspective of domain-specific data facilities to the discussion of cross-disciplinary and international collaboration in CI development and deployment, in particular those of IEDA (Interdisciplinary Earth Data Alliance) serving the solid Earth sciences and Unidata serving the atmospheric sciences. Both facilities are closely involved with the US NSF EarthCube program that aims to network and augment existing Geoscience CI capabilities "to make disciplinary boundaries permeable, nurture and facilitate knowledge sharing, …, and enhance collaborative pursuit of cross-disciplinary research" (EarthCube Strategic Vision), while also collaborating internationally to network domain-specific and cross-disciplinary CI resources. These collaborations are driven by the substantial benefits to the science community, but they create challenges when operational and funding constraints must be balanced against adjustments to new joint data curation practices and interoperability standards.

  9. A SLAM II simulation model for analyzing space station mission processing requirements

    NASA Technical Reports Server (NTRS)

    Linton, D. G.

    1985-01-01

    Space station mission processing is modeled via the SLAM II simulation language on an IBM 4381 mainframe and an IBM PC microcomputer with 620K RAM, two double-sided disk drives, and an 8087 coprocessor chip. Using a time-phased mission (payload) schedule and parameters drawn from the mission, orbiter (space shuttle), and ground facility databases, estimates of ground facility utilization are computed. Simulation output associated with the science and applications database is used to assess alternative mission schedules.
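
    The abstract describes a discrete-event model that turns a mission schedule and facility parameters into utilization estimates. The sketch below is a minimal analogue in Python/SimPy rather than SLAM II, with a purely hypothetical schedule and facility capacity, to show the shape of such a model.

        # Hedged, minimal discrete-event analogue (Python/SimPy, not SLAM II) of the
        # model described above: missions arrive on a time-phased schedule, occupy a
        # ground processing facility for some duration, and utilization is estimated
        # from accumulated busy time. All numbers and names are illustrative.
        import simpy

        PROCESSING_BAYS = 2          # hypothetical ground facility capacity
        SIM_HORIZON_DAYS = 365.0

        # (mission name, scheduled arrival day, required processing days) -- illustrative
        MISSION_SCHEDULE = [
            ("payload-A", 10.0, 20.0),
            ("payload-B", 25.0, 15.0),
            ("payload-C", 60.0, 30.0),
        ]

        busy_days = 0.0


        def mission(env, name, arrival, duration, facility):
            global busy_days
            yield env.timeout(arrival)                 # wait until scheduled arrival
            with facility.request() as req:
                yield req                              # queue for a processing bay
                start = env.now
                yield env.timeout(duration)            # occupy the bay
                busy_days += env.now - start
                print(f"{name}: waited {start - arrival:.1f} d, done at day {env.now:.1f}")


        env = simpy.Environment()
        facility = simpy.Resource(env, capacity=PROCESSING_BAYS)
        for name, arrival, duration in MISSION_SCHEDULE:
            env.process(mission(env, name, arrival, duration, facility))
        env.run(until=SIM_HORIZON_DAYS)

        utilization = busy_days / (PROCESSING_BAYS * SIM_HORIZON_DAYS)
        print(f"Estimated facility utilization: {utilization:.1%}")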

  10. Proceedings of the nineteenth LAMPF Users Group meeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradbury, J.N.

    1986-02-01

    Separate abstracts were prepared for eight invited talks on various aspects of nuclear and particle physics as well as status reports on LAMPF and discussions of upgrade options. Also included in these proceedings are the minutes of the working groups for: energetic pion channel and spectrometer; high resolution spectrometer; high energy pion channel; neutron facilities; low-energy pion work; nucleon physics laboratory; stopped muon physics; solid state physics and material science; nuclear chemistry; and computing facilities. Recent LAMPF proposals are also briefly summarized. (LEW)

  11. National Synchrotron Light Source annual report 1991. Volume 1, October 1, 1990--September 30, 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulbert, S.L.; Lazarz, N.M.

    1992-04-01

    This report discusses the following research conducted at NSLS: atomic and molecular science; energy dispersive diffraction; lithography, microscopy and tomography; nuclear physics; UV photoemission and surface science; x-ray absorption spectroscopy; x-ray scattering and crystallography; x-ray topography; workshop on surface structure; workshop on electronic and chemical phenomena at surfaces; workshop on imaging; UV FEL machine reviews; VUV machine operations; VUV beamline operations; VUV storage ring parameters; x-ray machine operations; x-ray beamline operations; x-ray storage ring parameters; superconducting x-ray lithography source; SXLS storage ring parameters; the accelerator test facility; proposed UV-FEL user facility at the NSLS; global orbit feedback systems; and NSLS computer system.

  12. Annual Report and Abstracts of Research, July 1977-June 1978.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Dept. of Computer and Information Science.

    This annual report of the Department of Computer and Information Science at Ohio State University for July 1977-June 1978 covers the department's organizational structure, objectives, highlights of department activities (such as grants and faculty appointments), instructional programs/course offerings, and facilities. In the second half of the…

  13. Opening Comments: SciDAC 2008

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2008-07-01

    Welcome to Seattle and the 2008 SciDAC Conference. This conference, the fourth in the series, is a continuation of the PI meetings we first began under SciDAC-1. I would like to start by thanking the organizing committee, and Rick Stevens in particular, for organizing this year's meeting. This morning I would like to look briefly at SciDAC, to give you a brief history of SciDAC and also look ahead to see where we plan to go over the next few years. I think the best description of SciDAC, at least the simulation part, comes from a quote from Dr Ray Orbach, DOE's Under Secretary for Science and Director of the Office of Science. In an interview that appeared in the SciDAC Review magazine, Dr Orbach said, `SciDAC is unique in the world. There isn't any other program like it anywhere else, and it has the remarkable ability to do science by bringing together physical scientists, mathematicians, applied mathematicians, and computer scientists who recognize that computation is not something you do at the end, but rather it needs to be built into the solution of the very problem that one is addressing'. Of course, that is extended not just to physical scientists, but also to biological scientists. This is a theme of computational science, this partnership among disciplines, which goes all the way back to the early 1980s and Ken Wilson. It's a unique thread within the Department of Energy. SciDAC-1, launched around the turn of the millennium, created a new generation of scientific simulation codes. It advocated building out mathematical and computing system software in support of science and a new collaboratory software environment for data. The original concept for SciDAC-1 had topical centers for the execution of the various science codes, but several corrections and adjustments were needed. The ASCR scientific computing infrastructure was also upgraded, providing the hardware facilities for the program. The computing facility that we had at that time was the big 3 teraflop/s center at NERSC and that had to be shared with the programmatic side supporting research across DOE. At the time, ESnet was just slightly over half a gig per sec of bandwidth; and the science being addressed was accelerator science, climate, chemistry, fusion, astrophysics, materials science, and QCD. We built out the national collaboratories from the ASCR office, and in addition we built Integrated Software Infrastructure Centers (ISICs). Of these, three were in applied mathematics, four in computer science (including a performance evaluation research center), and four were collaboratories or Grid projects having to do with data management. For science, there were remarkable breakthroughs in simulation, such as full 3D laboratory scale flame simulation. There were also significant improvements in application codes - from factors of almost 3 to more than 100 - and code improvement as people began to realize they had to integrate mathematics tools and computer science tools into their codes to take advantage of the parallelism of the day. The SciDAC data-mining tool, Sapphire, received a 2006 R&D 100 award. And the community as a whole worked well together and began building a publication record that was substantial. In 2006, we recompeted the program with similar goals - SciDAC-1 was very successful, and we wanted to continue that success and extend what was happening under SciDAC to the broader science community. We opened up the partnership to all of the Offices of Science and the NSF and the NNSA. 
The goal was to create comprehensive scientific computing software and the infrastructure for the software to enable scientific discovery in the physical, biological, and environmental sciences and take the simulations to an extreme scale, in this case petascale. We would also build out a new generation of data management tools. What we observed during SciDAC-1 was that the data and the data communities - both experimental data from large experimental facilities and observational data, along with simulation data - were expanding at a rate significantly faster than Moore's law. In the past few weeks, the FastBit indexing technology software tool for data analyses and data mining developed under SciDAC's Scientific Data Management project was recognized with an R&D 100 Award, selected by an independent judging panel and editors of R&D Magazine as one of the 100 most technologically significant products introduced into the marketplace over the past year. For SciDAC-2 we had nearly 250 proposals requesting a total of slightly over 1 billion in funding. Of course, we had nowhere near 1 billion. The facilities and the science we ended up with were not significantly different from what we had in SciDAC-1. But we had put in place substantially increased facilities for science. When SciDAC-1 was originally executed with the facilities at NERSC, there was significant impact on the resources at NERSC, because not only did we have an expanding portfolio of programmatic science, but we had the SciDAC projects that also needed to run at NERSC. Suddenly, NERSC was incredibly oversubscribed. With SciDAC-2, we had in place leadership-class computing facilities at Argonne with slightly more than half a petaflop and at Oak Ridge with slightly more than a quarter petaflop with an upgrade planned at the end of this year for a petaflop. And we increased the production computing capacity at NERSC to 104 teraflop/s just so that we would not impact the programmatic research and so that we would have a startup facility for SciDAC. At the end of the summer, NERSC will be at 360 teraflop/s. Both the Oak Ridge system and the principal resource at NERSC are Cray systems; Argonne has a different architecture, an IBM Blue Gene/P. At the same time, ESnet has been built out, and we are on a path where we will have dual rings around the country, from 10 to 40 gigabits per second - a factor of 20 to 80 over what was available during SciDAC-1. The science areas include accelerator science and simulation, astrophysics, climate modeling and simulation, computational biology, fusion science, high-energy physics, petabyte high-energy/ nuclear physics, materials science and chemistry, nuclear physics, QCD, radiation transport, turbulence, and groundwater reactive transport modeling and simulation. They were supported by new enabling technology centers and university-based institutes to develop an educational thread for the SciDAC program. There were four mathematics projects and four computer science projects; and under data management, we see a significant difference in that we are bringing up new visualization projects to support and sustain data-intensive science. When we look at the budgets, we see growth in the budget from just under 60 million for SciDAC-1 to just over 80 for SciDAC-2. Part of the growth is due to bringing in NSF and NNSA as new partners, and some of the growth is due to some program offices increasing their investment in SciDAC, while other program offices are constant or have decreased their investment. 
This is not a reflection of their priorities per se but, rather, a reflection of the budget process and the difficult times in Washington during the past two years. New activities are under way in SciDAC - the annual PI meeting has turned into what I would describe as the premier interdisciplinary computational science meeting, one of the best in the world. Doing interdisciplinary meetings is difficult because people tend to develop a focus for their particular subject area. But this is the fourth in the series; and since the first meeting in San Francisco, these conferences have been remarkably successful. For SciDAC-2 we also created an outreach magazine, SciDAC Review, which highlights scientific discovery as well as high-performance computing. It's been very successful in telling the non-practitioners what SciDAC and computational science are all about. The other new instrument in SciDAC-2 is an outreach center. As we go from computing at the terascale to computing at the petascale, we face the problem of narrowing our research community. The number of people who are `literate' enough to compute at the terascale is more than the number of those who can compute at the petascale. To address this problem, we established the SciDAC Outreach Center to bring people into the fold and educate them as to how we do SciDAC, how the teams are composed, and what it really means to compute at scale. The resources I have mentioned don't come for free. As part of the HECRTF law of 2005, Congress mandated that the Secretary would ensure that leadership-class facilities would be open to everyone across all agencies. So we took Congress at its word, and INCITE is our instrument for making allocations at the leadership-class facilities at Argonne and Oak Ridge, as well as smaller allocations at NERSC. Therefore, the selected proposals are very large projects that are computationally intensive, that compute at scale, and that have a high science impact. An important feature is that INCITE is completely open to anyone - there is no requirement of DOE Office of Science funding, and proposals are rigorously reviewed for both the science and the computational readiness. In 2008, more than 100 proposals were received, requesting about 600 million processor-hours. We allocated just over a quarter of a billion processor-hours. Astrophysics, materials science, lattice gauge theory, and high energy and nuclear physics were the major areas. These were the teams that were computationally ready for the big machines and that had significant science they could identify. In 2009, there will be a significant increase in the amount of time to be allocated, over half a billion processor-hours. The deadline is August 11 for new proposals and September 12 for renewals. We anticipate a significant increase in the number of requests this year. We expect you - as successful SciDAC centers, institutes, or partnerships - to compete for and win INCITE program allocation awards. If you have a successful SciDAC proposal, we believe it will make you successful in the INCITE review. We have the expectation that you will be among those most prepared and most ready to use the machines and to compute at scale. Over the past 18 months, we have assembled a team to look across our computational science portfolio and to judge what are the 10 most significant science accomplishments. The ASCR office, as it goes forward with OMB, the new administration, and Congress, will be judged by the science we have accomplished.
All of our proposals - such as for increasing SciDAC, increasing applied mathematics, and so on - are tied to what we have accomplished in science. And so these 10 big accomplishments are key to establishing credibility for new budget requests. Tony Mezzacappa, who chaired the committee, will also give a presentation on the ranking of these top 10, how they got there, and what the science is all about. Here is the list - numbers 2, 5, 6, 7, 9, and 10 are all SciDAC projects: (1) Modeling the Molecular Basis of Parkinson's Disease (Tsigelny); (2) Discovery of the Standing Accretion Shock Instability and Pulsar Birth Mechanism in a Core-Collapse Supernova Evolution and Explosion (Blondin); (3) Prediction and Design of Macromolecular Structures and Functions (Baker); (4) Understanding How Lifted Flame Stabilized in a Hot Coflow (Yoo); (5) New Insights from LCF-enabled Advanced Kinetic Simulations of Global Turbulence in Fusion Systems (Tang); (6) High Transition Temperature Superconductivity: A High-Temperature Superconductive State and a Pairing Mechanism in 2-D Hubbard Model (Scalapino); (7) PETSc: Providing the Solvers for DOE High-Performance Simulations (Smith); (8) Via Lactea II, A Billion Particle Simulation of the Dark Matter Halo of the Milky Way (Madau); (9) Probing the Properties of Water through Advanced Computing (Galli); (10) First Provably Scalable Maxwell Solver Enables Scalable Electromagnetic Simulations (Kovel). So, what's the future going to look like for us? The office is putting together an initiative with the community, which we call the E3 Initiative. We're looking for a 10-year horizon for what's going to happen. Through the series of town hall meetings, which many of you participated in, we have produced a document on `Transforming Energy, the Environment and Science through simulations at the eXtreme Scale'; it can be found at http://www.science.doe.gov/ascr/ProgramDocuments/TownHall.pdf. We sometimes call it the Exascale initiative. Exascale computing is the gold-ring level of computing that seems just out of reach; but if we work hard and stretch, we just might be able to reach it. We envision that there will be a SciDAC-X, working at the extreme scale, with SciDAC teams that will perform and carry out science in the areas that will have a great societal impact, such as alternative fuels and transportation, combustion, climate, fusion science, high-energy physics, advanced fuel cycles, carbon management, and groundwater. We envision institutes for applied mathematics and computer science that probably will segue into algorithms because, at the extreme scale, we see the distinction between the applied math and the algorithm per se and its implementation in computer science as being inseparable. We envision an INCITE-X with multi-petaflop platforms, perhaps even exaflop computing resources. ESnet will be best in class - our 10-year plan calls for having 400 terabits per second capacity available in dual rings around the country, an enormously fast data communications network for moving large amounts of data. In looking at where we've been and where we are going, we can see that the gigaflops and teraflops era was a regime where we were following Moore's law through advances in clock speed. In the current regime, we're introducing massive parallelism, which I think is exemplified by Intel's announcement of their teraflop chip, where they envision more than a thousand cores on a chip.
But in order to reach exascale, extrapolations from current architectures point to machines that would require 100 megawatts of power. It's clearly going to require novel architectures, things we have perhaps not yet envisioned. It is of course an era of challenge. There will be an unpredictable evolution of hardware if we are to reach the exascale; and there will clearly be multilevel heterogeneous parallelism, including multilevel memory hierarchies. We have no idea right now as to the programming models needed to execute at such an extreme scale. We have been incredibly successful at the petascale - we know that already. Managing data and just getting communications to scale is an enormous challenge. And it's not just the extreme scaling. It's the rapid increase in complexity that represents the challenge. Let me end with a metaphor. In previous meetings we have talked about the road to petascale. Indeed, we have seen in hindsight that it was a road well traveled. But perhaps the road to exascale is not a road at all. Perhaps the metaphor will be akin to scaling the south face of K2. That's clearly not something all of us will be able to do, and probably computing at the exascale is not something all of us will do. But if we achieve that goal, perhaps the words of Emily Dickinson will best summarize where we will be. Perhaps in her words, looking backward and down, you will say: I climb the `Hill of Science' I view the landscape o'er; Such transcendental prospect I ne'er beheld before!

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, Reinhold C.

    This is the first formal progress report issued by the ORNL Life Sciences Division. It covers the period from February 1997 through December 1998, which has been critical in the formation of our new division. The legacy of 50 years of excellence in biological research at ORNL has been an important driver for everyone in the division to do their part so that this new research division can realize the potential it has to make seminal contributions to the life sciences for years to come. This reporting period is characterized by intense assessment and planning efforts. They included thorough scrutiny of our strengths and weaknesses, analyses of our situation with respect to comparative research organizations, and identification of major thrust areas leading to core research efforts that take advantage of our special facilities and expertise. Our goal is to develop significant research and development (R&D) programs in selected important areas to which we can make significant contributions by combining our distinctive expertise and resources in the biological sciences with those in the physical, engineering, and computational sciences. Significant facilities in mouse genomics, mass spectrometry, neutron science, bioanalytical technologies, and high performance computing are critical to the success of our programs. Research and development efforts in the division are organized in six sections. These cluster into two broad areas of R&D: systems biology and technology applications. The systems biology part of the division encompasses our core biological research programs. It includes the Mammalian Genetics and Development Section, the Biochemistry and Biophysics Section, and the Computational Biosciences Section. The technology applications part of the division encompasses the Assessment Technology Section, the Environmental Technology Section, and the Toxicology and Risk Analysis Section. These sections are the stewards of the division's core competencies. The common mission of the division is to advance science and technology to understand complex biological systems and their relationship with human health and the environment.

  15. Games and Simulations for Climate, Weather and Earth Science Education

    NASA Astrophysics Data System (ADS)

    Russell, R. M.; Clark, S.

    2015-12-01

    We will demonstrate several interactive, computer-based simulations, games, and other interactive multimedia. These resources were developed for weather, climate, atmospheric science, and related Earth system science education. The materials were created by the UCAR Center for Science Education. These materials have been disseminated via our web site (SciEd.ucar.edu), webinars, online courses, teacher workshops, and large touchscreen displays in weather and Sun-Earth connections exhibits in NCAR's Mesa Lab facility in Boulder, Colorado. Our group has also assembled a web-based list of similar resources, especially simulations and games, from other sources that touch upon weather, climate, and atmospheric science topics. We'll briefly demonstrate this directory.

  16. Promoting Pre-college Science Education

    NASA Astrophysics Data System (ADS)

    Taylor, P. L.; Lee, R. L.

    2000-10-01

    The Fusion Education Program, with continued support from DOE, has strengthened its interactions with educators in promoting pre-college science education for students. Projects aggressively pursued this year include an on-site, college-credited, laboratory-based 10-day educator workshop on plasma and fusion science; completion of `Starpower', a fusion power plant simulation on interactive CD; expansion of scientist visits to classrooms; broadened participation in an internet-based science olympiad; and enhancements to the tours of the DIII-D Facility. In the workshop, twelve teachers used benchtop devices to explore basic plasma physics. Also included were radiation experiments, computer-aided drafting, techniques to integrate fusion science and technology in the classroom, and visits to a University Physics lab and the San Diego Supercomputer Center. Our ``Scientist in a Classroom'' program reached more than 2200 students at 20 schools. Our `Starpower' CD allows a range of interactive learning from the effects of electric and magnetic fields on charged particles to operation of a Tokamak-based power plant. Continuing tours of the DIII-D facility were attended by more than 800 students this past year.

  17. Working Towards New Transformative Geoscience Analytics Enabled by Petascale Computing

    NASA Astrophysics Data System (ADS)

    Woodcock, R.; Wyborn, L.

    2012-04-01

    Currently the top 10 supercomputers in the world are petascale, and exascale computers are already being planned. Cloud computing facilities are becoming mainstream, either as private or commercial investments. These computational developments will provide abundant opportunities for the earth science community to tackle the data deluge which has resulted from new instrumentation enabling data to be gathered at a greater rate and at higher resolution. Combined, the new computational environments should enable the earth sciences to be transformed. However, experience in Australia and elsewhere has shown that it is not easy to scale existing earth science methods, software, and analytics to take advantage of the increased computational capacity that is now available. It is not simply a matter of 'transferring' current work practices to the new facilities: they have to be extensively 'transformed'. In particular, new geoscientific methods will need to be developed using advanced data mining, assimilation, machine learning, and integration algorithms. Software will have to be capable of operating in highly parallelised environments, and will also need to be able to scale as the compute systems grow. Data access will have to improve, and the earth science community needs to move from the file discovery, display, and local download paradigm to self-describing data cubes and data arrays that are available as online resources from either major data repositories or the cloud. In the new transformed world, rather than analysing satellite data scene by scene, sensor-agnostic data cubes of calibrated earth observation data will enable researchers to move across data from multiple sensors at varying spatial resolutions. In using geophysics to characterise basement and cover, rather than analysing individual gridded airborne geophysical data sets and then combining the results, petascale computing will enable analysis of multiple data types collected at varying resolutions, with integration and validation across data-type boundaries. Increased capacity of storage and compute will mean that the uncertainty and reliability of individual observations will consistently be taken into account and propagated throughout the processing chain. If these data access difficulties can be overcome, the increased compute capacity will also mean that larger-scale, more complex models can be run at higher resolution; instead of single-pass modelling runs, ensembles of models will be run to test multiple hypotheses simultaneously. Petascale computing and high-performance data offer more than "bigger, faster": they offer an opportunity for a transformative change in the way geoscience research is routinely conducted.
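
    The shift from per-scene files to queryable data cubes lends itself to a short illustration. The hedged sketch below builds a tiny synthetic, analysis-ready cube in memory (in a real deployment it would be served by a repository or cloud store) and shows the access pattern: select a space-time region by coordinate values, mask on a quality flag carried in the cube, and reduce over time. All variable names and ranges are assumptions for the example.

        # Hedged sketch of data-cube style access using xarray; the cube here is
        # synthetic so the example is self-contained. Real cubes would be calibrated
        # earth observation data served online rather than random numbers.
        import numpy as np
        import pandas as pd
        import xarray as xr

        rng = np.random.default_rng(0)
        time = pd.date_range("2000-01-01", periods=24, freq="MS")
        lat = np.arange(-30.0, -20.0, 1.0)
        lon = np.arange(120.0, 130.0, 1.0)

        ds = xr.Dataset(
            {
                "reflectance": (("time", "lat", "lon"),
                                rng.random((time.size, lat.size, lon.size))),
                "quality_flag": (("time", "lat", "lon"),
                                 rng.integers(0, 2, (time.size, lat.size, lon.size))),
            },
            coords={"time": time, "lat": lat, "lon": lon},
        )

        # Select a sub-region and period directly by coordinate values, mask
        # poor-quality pixels, and reduce over time -- no per-scene files involved.
        region = ds.sel(
            time=slice("2000-01-01", "2000-12-31"),
            lat=slice(-28.0, -24.0),
            lon=slice(122.0, 126.0),
        )
        good = region["reflectance"].where(region["quality_flag"] == 0)
        annual_mean = good.mean(dim="time", skipna=True)
        print(annual_mean)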

  18. Workforce Retention Study in Support of the U.S. Army Aberdeen Test Center Human Capital Management Strategy

    DTIC Science & Technology

    2016-09-01

    Sciences Group 6%; 1550s Computer Scientists Group 5%; Other 1500s ORSA, Mathematics, & Statistics Group 3%; 1600s Equipment & Facilities Group 4...Employee removal based on misconduct, delinquency, suitability, unsatisfactory performance, or failure to qualify for conversion to a career appointment...average of 10.4% in many areas, but over double the average for the 1550s (Computer Scientists) and other 1500s (ORSA, Mathematics, and Statistics). Also

  19. IEDA: Making Small Data BIG Through Interdisciplinary Partnerships Among Long-tail Domains

    NASA Astrophysics Data System (ADS)

    Lehnert, K. A.; Carbotte, S. M.; Arko, R. A.; Ferrini, V. L.; Hsu, L.; Song, L.; Ghiorso, M. S.; Walker, D. J.

    2014-12-01

    The Big Data world in the Earth Sciences so far exists primarily for disciplines that generate massive volumes of observational or computed data using large-scale, shared instrumentation such as global sensor networks, satellites, or high-performance computing facilities. These data are typically managed and curated by well-supported community data facilities that also provide the tools for exploring the data through visualization or statistical analysis. In many other domains, especially those where data are primarily acquired by individual investigators or small teams (known as 'Long-tail data'), data are poorly shared and integrated, lacking a community-based data infrastructure that ensures persistent access, quality control, standardization, and integration of data, as well as appropriate tools to fully explore and mine the data within the context of broader Earth Science datasets. IEDA (Integrated Earth Data Applications, www.iedadata.org) is a data facility funded by the US NSF to develop and operate data services that support data stewardship throughout the full life cycle of observational data in the solid earth sciences, with a focus on the data management needs of individual researchers. IEDA builds on a strong foundation of mature disciplinary data systems for marine geology and geophysics, geochemistry, and geochronology. These systems have dramatically advanced data resources in those long-tail Earth science domains. IEDA has strengthened these resources by establishing a consolidated, enterprise-grade infrastructure that is shared by the domain-specific data systems, and implementing joint data curation and data publication services that follow community standards. In recent years, other domain-specific data efforts have partnered with IEDA to take advantage of this infrastructure and improve data services to their respective communities with formal data publication, long-term preservation of data holdings, and better sustainability. IEDA hopes to foster such partnerships with streamlined data services, including user-friendly, single-point interfaces for data submission, discovery, and access across the partner systems to support interdisciplinary science.

  20. NASA Tech Briefs, April 1995. Volume 19, No. 4

    NASA Technical Reports Server (NTRS)

    1995-01-01

    This issue of NASA Tech Briefs has a special focus section on video and imaging, a feature on the NASA invention of the year, and a resource report on the Dryden Flight Research Center. The issue also contains articles on electronic components and circuits, electronic systems, physical sciences, materials, computer programs, mechanics, machinery, manufacturing/fabrication, mathematics and information sciences, and life sciences. In addition to the standard articles, this issue contains a supplement entitled "Laser Tech Briefs," which features an article on the National Ignition Facility and other articles on the use of lasers.

  1. ASC FY17 Implementation Plan, Rev. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, P. G.

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report the 300 in this review that are consistent with the guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbomachinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbomachinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of fuel.

  3. LSST Resources for the Community

    NASA Astrophysics Data System (ADS)

    Jones, R. Lynne

    2011-01-01

    LSST will generate 100 petabytes of images and 20 petabytes of catalogs, covering 18,000-20,000 square degrees of area sampled every few days over a total of ten years -- all publicly available and exquisitely calibrated. The primary access to this data will be through Data Access Centers (DACs). DACs will provide access to catalogs of sources (single detections from individual images) and objects (associations of sources from multiple images). Simple user interfaces or direct SQL queries at the DAC can return user-specified portions of data from catalogs or images. More complex manipulations of the data, such as calculating multi-point correlation functions or creating alternative photo-z measurements on terabyte-scale data, can be completed with the DAC's own resources. Even more data-intensive computations requiring access to large numbers of image pixels at the petabyte scale could also be conducted at the DAC, using compute resources allocated in a similar manner to a TAC. DAC resources will be available to all individuals in member countries or institutes and LSST science collaborations. DACs will also assist investigators with requests for allocations at national facilities such as the Petascale Computing Facility, TeraGrid, and Open Science Grid. Using data on this scale requires new approaches to accessibility and analysis which are being developed through interactions with the LSST Science Collaborations. We are producing simulated images (as might be acquired by LSST) based on models of the universe and generating catalogs from these images (as well as from the base model) using the LSST data management framework in a series of data challenges. The resulting images and catalogs are being made available to the science collaborations to verify the algorithms and develop user interfaces. All LSST software is open source and available online, including preliminary catalog formats. We encourage feedback from the community.
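
    As a hedged illustration of the catalog-query access pattern described above, the sketch below issues the kind of region-plus-magnitude SQL selection a Data Access Center might serve. The table layout and column names are illustrative assumptions, not the LSST schema, and an in-memory SQLite database stands in for the DAC so the example runs on its own.

        # Illustrative only: the kind of catalog query a Data Access Center might
        # accept. Table and column names are hypothetical; a local in-memory SQLite
        # database stands in for the remote catalog so the sketch is runnable.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute(
            "CREATE TABLE object (object_id INTEGER, ra REAL, dec REAL, r_mag REAL)"
        )
        conn.executemany(
            "INSERT INTO object VALUES (?, ?, ?, ?)",
            [
                (1, 150.10, 2.20, 21.3),
                (2, 150.15, 2.25, 24.9),
                (3, 150.40, 2.60, 19.8),
            ],
        )

        # Retrieve objects in a small sky region brighter than a magnitude cut --
        # the "user-specified portion of the catalog" pattern described above.
        query = """
            SELECT object_id, ra, dec, r_mag
            FROM object
            WHERE ra BETWEEN 150.0 AND 150.2
              AND dec BETWEEN 2.1 AND 2.3
              AND r_mag < 22.0
            ORDER BY r_mag
        """
        for row in conn.execute(query):
            print(row)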

  4. Pacific Northwest National Laboratory Annual Site Environmental Report for Calendar Year 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duncan, Joanne P.; Sackschewsky, Michael R.; Tilden, Harold T.

    2014-09-30

    Pacific Northwest National Laboratory (PNNL), one of the U.S. Department of Energy (DOE) Office of Science’s 10 national laboratories, provides innovative science and technology development in the areas of energy and the environment, fundamental and computational science, and national security. DOE’s Pacific Northwest Site Office (PNSO) is responsible for oversight of PNNL at its campus in Richland, Washington, as well as its facilities in Sequim, Seattle, and North Bonneville, Washington, and Corvallis and Portland, Oregon.

  5. OPENING REMARKS: SciDAC: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2005-01-01

    Good morning. Welcome to SciDAC 2005 and San Francisco. SciDAC is all about computational science and scientific discovery. In a large sense, computational science characterizes SciDAC and its intent is change. It transforms both our approach and our understanding of science. It opens new doors and crosses traditional boundaries while seeking discovery. In terms of twentieth century methodologies, computational science may be said to be transformational. There are a number of examples to this point. First are the sciences that encompass climate modeling. The application of computational science has in essence created the field of climate modeling. This community is now international in scope and has provided precision results that are challenging our understanding of our environment. A second example is that of lattice quantum chromodynamics. Lattice QCD, while adding precision and insight to our fundamental understanding of strong interaction dynamics, has transformed our approach to particle and nuclear science. The individual investigator approach has evolved to teams of scientists from different disciplines working side-by-side towards a common goal. SciDAC is also undergoing a transformation. This meeting is a prime example. Last year it was a small programmatic meeting tracking progress in SciDAC. This year, we have a major computational science meeting with a variety of disciplines and enabling technologies represented. SciDAC 2005 should position itself as a new corner stone for Computational Science and its impact on science. As we look to the immediate future, FY2006 will bring a new cycle to SciDAC. Most of the program elements of SciDAC will be re-competed in FY2006. The re-competition will involve new instruments for computational science, new approaches for collaboration, as well as new disciplines. There will be new opportunities for virtual experiments in carbon sequestration, fusion, and nuclear power and nuclear waste, as well as collaborations with industry and virtual prototyping. New instruments of collaboration will include institutes and centers while summer schools, workshops and outreach will invite new talent and expertise. Computational science adds new dimensions to science and its practice. Disciplines of fusion, accelerator science, and combustion are poised to blur the boundaries between pure and applied science. As we open the door into FY2006 we shall see a landscape of new scientific challenges: in biology, chemistry, materials, and astrophysics to name a few. The enabling technologies of SciDAC have been transformational as drivers of change. Planning for major new software systems assumes a base line employing Common Component Architectures and this has become a household word for new software projects. While grid algorithms and mesh refinement software have transformed applications software, data management and visualization have transformed our understanding of science from data. The Gordon Bell prize now seems to be dominated by computational science and solvers developed by TOPS ISIC. The priorities of the Office of Science in the Department of Energy are clear. The 20 year facilities plan is driven by new science. High performance computing is placed amongst the two highest priorities. Moore's law says that by the end of the next cycle of SciDAC we shall have peta-flop computers. The challenges of petascale computing are enormous. 
These and the associated computational science are the highest priorities for computing within the Office of Science. Our effort in Leadership Class computing is just a first step towards this goal. Clearly, computational science at this scale will face enormous challenges and possibilities. Performance evaluation and prediction will be critical to unraveling the needed software technologies. We must not lose sight of our overarching goal—that of scientific discovery. Science does not stand still and the landscape of science discovery and computing holds immense promise. In this environment, I believe it is necessary to institute a system of science based performance metrics to help quantify our progress towards science goals and scientific computing. As a final comment I would like to reaffirm that the shifting landscapes of science will force changes to our computational sciences, and leave you with the quote from Richard Hamming, 'The purpose of computing is insight, not numbers'.

  6. Public Outreach at RAL: Engaging the Next Generation of Scientists and Engineers

    NASA Astrophysics Data System (ADS)

    Corbett, G.; Ryall, G.; Palmer, S.; Collier, I. P.; Adams, J.; Appleyard, R.

    2015-12-01

    The Rutherford Appleton Laboratory (RAL) is part of the UK's Science and Technology Facilities Council (STFC). As part of the Royal Charter that established the STFC, the organisation is required to generate public awareness and encourage public engagement and dialogue in relation to the science undertaken. The staff at RAL firmly support this activity as it is important to encourage the next generation of students to consider studying Science, Technology, Engineering, and Mathematics (STEM) subjects, providing the UK with a highly skilled work-force in the future. To this end, the STFC undertakes a variety of outreach activities. This paper will describe the outreach activities undertaken by RAL, particularly focussing on those of the Scientific Computing Department (SCD). These activities include: an Arduino based activity day for 12-14 year-olds to celebrate Ada Lovelace day; running a centre as part of the Young Rewired State - encouraging 11-18 year-olds to create web applications with open data; sponsoring a team in the Engineering Education Scheme - supporting a small team of 16-17 year-olds to solve a real world engineering problem; as well as the more traditional tours of facilities. These activities could serve as an example for other sites involved in scientific computing around the globe.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlburg, Jill; Corones, James; Batchelor, Donald

    Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range in temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality, which makes it impossible to parallelize over time, makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individual features of magnetically confined plasmas, and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world’s energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general. This science-based predictive capability, which was cited in the FESAC integrated planning document (IPPA, 2000), represents a significant opportunity for the DOE Office of Science to further the understanding of fusion plasmas to a level unparalleled worldwide.

  8. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papka, M.; Messina, P.; Coffey, R.

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms. The Data Analytics and Visualization Team lends expertise in tools and methods for high-performance, post-processing of large datasets, interactive data exploration, batch visualization, and production visualization. The Operations Team ensures that system hardware and software work reliably and optimally; system tools are matched to the unique system architectures and scale of ALCF resources; the entire system software stack works smoothly together; and I/O performance issues, bug fixes, and requests for system software are addressed. The User Services and Outreach Team offers frontline services and support to existing and potential ALCF users. The team also provides marketing and outreach to users, DOE, and the broader community.

  9. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaborating with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials, to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are given in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1), and where appropriate, changes in Center metrics were introduced. This report covers CY 2010 and CY 2011 Year to Date (YTD), which, unless otherwise specified, denotes January 1, 2011 through June 30, 2011. User Support remains an important element of OLCF operations, with the philosophy of doing 'whatever it takes' to enable successful research. The impact of this center-wide activity is reflected in user survey results showing that users are 'very satisfied.' The OLCF continues to aggressively pursue outreach and training activities to promote awareness - and effective use - of U.S. leadership-class resources (Reference Section 2). The OLCF continues to meet and in many cases exceed DOE metrics for capability usage (35% target in CY 2010, delivered 39%; 40% target in CY 2011, 54% January 1, 2011 through June 30, 2011). The Scheduled Availability (SA) and Overall Availability (OA) for Jaguar were exceeded in CY 2010. Given the solution to the VRM problem, the SA and OA for Jaguar in CY 2011 are expected to exceed the target metrics of 95% and 90%, respectively (Reference Section 3). Numerous and wide-ranging research accomplishments, scientific support, and technological innovations are more fully described in Sections 4 and 6 and reflect OLCF leadership in enabling high-impact science solutions and vision in creating an exascale-ready center. Financial Management (Section 5) and Risk Management (Section 7) are carried out using best practices approved by DOE. The OLCF has a valid cyber security plan and Authority to Operate (Section 8). The proposed metrics for 2012 are reflected in Section 9.
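
    The capability-usage figures quoted above can be illustrated with a small, purely hypothetical calculation; the 20%-of-system threshold, the system size, and the job records below are assumptions for illustration only, not the report's official definition:

      # Sketch of computing a capability-usage metric from a job log.
      # The threshold, system size, and job records are assumptions.
      TOTAL_CORES = 224_256            # illustrative system size
      CAPABILITY_FRACTION = 0.20       # assumed threshold for a "capability" job

      # (cores used, core-hours delivered) for a handful of hypothetical jobs
      jobs = [(150_000, 1.2e6), (40_000, 0.8e6), (90_000, 2.5e6), (8_000, 0.4e6)]

      capability_hours = sum(hours for cores, hours in jobs
                             if cores >= CAPABILITY_FRACTION * TOTAL_CORES)
      total_hours = sum(hours for _, hours in jobs)
      print(f"capability usage: {100 * capability_hours / total_hours:.1f}%")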

  10. Fermilab | Science | Questions for the Universe | The Particle World | Why

    Science.gov Websites

    effects observed so far are insufficient to explain this predominance. The current program of experiments suggests significant effects in the bound state with the strange quark, Bs. Physicists at the Tevatron made [...]. Lattice Computational Facilities offer great promise for the calculation of the effects of the strong [...]

  11. Games and Simulations for Climate, Weather and Earth Science Education

    NASA Astrophysics Data System (ADS)

    Russell, R. M.

    2014-12-01

    We will demonstrate several interactive, computer-based simulations, games, and other interactive multimedia. These resources were developed for weather, climate, atmospheric science, and related Earth system science education. The materials were created by the UCAR Center for Science Education. These materials have been disseminated via our web site (SciEd.ucar.edu), webinars, online courses, teacher workshops, and large touchscreen displays in weather and Sun-Earth connections exhibits in NCAR's Mesa Lab facility in Boulder, Colorado. Our group has also assembled a web-based list of similar resources, especially simulations and games, from other sources that touch upon weather, climate, and atmospheric science topics. We'll briefly demonstrate this directory. More info available at: scied.ucar.edu/events/agu-2014-games-simulations-sessions

  12. Games and Simulations for Climate, Weather and Earth Science Education

    NASA Astrophysics Data System (ADS)

    Russell, R. M.

    2013-12-01

    We will demonstrate several interactive, computer-based simulations, games, and other interactive multimedia. These resources were developed for weather, climate, atmospheric science, and related Earth system science education. The materials were created by education groups at NCAR/UCAR in Boulder, primarily Spark and the COMET Program. These materials have been disseminated via Spark's web site (spark.ucar.edu), webinars, online courses, teacher workshops, and large touchscreen displays in weather and Sun-Earth connections exhibits in NCAR's Mesa Lab facility. Spark has also assembled a web-based list of similar resources, especially simulations and games, from other sources that touch upon weather, climate, and atmospheric science topics. We'll briefly demonstrate this directory.

  13. Research briefing on contemporary problems in plasma science

    NASA Technical Reports Server (NTRS)

    1991-01-01

    An overview is presented of the broad perspective of all plasma science. Detailed discussions are given of scientific opportunities in various subdisciplines of plasma science. The first subdiscipline to be discussed is the area where the contemporary applications of plasma science are the most widespread: low temperature plasma science. Opportunities for new research and technology development that have emerged as byproducts of research in magnetic and inertial fusion are then highlighted. This is followed by a discussion of new opportunities in ultrafast plasma science opened up by recent developments in laser and particle beam technology. Next, research that uses smaller scale facilities is discussed, covering first non-neutral plasmas and then basic plasma experiments. Discussions of analytic theory and computational plasma physics and of space and astrophysical plasma physics are then presented.

  14. Power monitoring and control for large scale projects: SKA, a case study

    NASA Astrophysics Data System (ADS)

    Barbosa, Domingos; Barraca, João. Paulo; Maia, Dalmiro; Carvalho, Bruno; Vieira, Jorge; Swart, Paul; Le Roux, Gerhard; Natarajan, Swaminathan; van Ardenne, Arnold; Seca, Luis

    2016-07-01

    Large sensor-based science infrastructures for radio astronomy like the SKA will be among the most intensive data-driven projects in the world, facing very demanding computation, storage, management, and above all power requirements. The geographically wide distribution of the SKA and its associated processing requirements, in the form of tailored High Performance Computing (HPC) facilities, require a greener approach to the Information and Communications Technologies (ICT) adopted for data processing, to enable operational compliance with potentially strict power budgets. Reducing electricity costs, improving system power monitoring, and managing the generation and use of electricity at the system level are paramount to avoid future inefficiencies and higher costs and to enable fulfillment of the Key Science Cases. Here we outline major characteristics and innovation approaches to address power efficiency and long-term power sustainability for radio astronomy projects, focusing on green ICT for science and smart power monitoring and control.
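
    A minimal sketch of the kind of system-level power monitoring loop described above (illustrative only; the node names, readings, budget, and throttle action are all assumptions, not part of the SKA design):

      # Minimal power-budget monitoring sketch (illustrative only).
      # Node names, readings, and the throttle action are hypothetical.
      import random
      import time

      POWER_BUDGET_KW = 500.0          # assumed facility-level budget
      NODES = [f"hpc-node-{i:03d}" for i in range(16)]

      def read_node_power_kw(node: str) -> float:
          """Stand-in for a real telemetry query (e.g. out-of-band sensors)."""
          return random.uniform(20.0, 40.0)

      def monitor_once() -> None:
          readings = {node: read_node_power_kw(node) for node in NODES}
          total = sum(readings.values())
          print(f"total draw: {total:.1f} kW / budget {POWER_BUDGET_KW:.1f} kW")
          if total > POWER_BUDGET_KW:
              # In a real system this could cap CPU frequency or defer jobs.
              worst = max(readings, key=readings.get)
              print(f"over budget: would throttle {worst} ({readings[worst]:.1f} kW)")

      if __name__ == "__main__":
          for _ in range(3):
              monitor_once()
              time.sleep(1)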

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.

  16. NASA Advanced Supercomputing Facility Expansion

    NASA Technical Reports Server (NTRS)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  17. Library Automation Design for Visually Impaired People

    ERIC Educational Resources Information Center

    Yurtay, Nilufer; Bicil, Yucel; Celebi, Sait; Cit, Guluzar; Dural, Deniz

    2011-01-01

    Speech synthesis is a technology used in many different areas of computer science. This technology can bring a solution to the reading activity of visually impaired people due to its text-to-speech conversion. Based on this problem, in this study, a system is designed that is needed for a visually impaired person to make use of all the library facilities in…

  18. Opening Comments: SciDAC 2009

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2009-07-01

    Welcome to San Diego and the 2009 SciDAC conference. Over the next four days, I would like to present an assessment of the SciDAC program. We will look at where we've been, how we got to where we are and where we are going in the future. Our vision is to be first in computational science, to be best in class in modeling and simulation. When Ray Orbach asked me what I would do, in my job interview for the SciDAC Director position, I said we would achieve that vision. And with our collective dedicated efforts, we have managed to achieve this vision. In the last year, we have now the most powerful supercomputer for open science, Jaguar, the Cray XT system at the Oak Ridge Leadership Computing Facility (OLCF). We also have NERSC, probably the best-in-the-world program for productivity in science that the Office of Science so depends on. And the Argonne Leadership Computing Facility offers architectural diversity with its IBM Blue Gene/P system as a counterbalance to Oak Ridge. There is also ESnet, which is often understated—the 40 gigabit per second dual backbone ring that connects all the labs and many DOE sites. In the President's Recovery Act funding, there is exciting news that ESnet is going to build out to a 100 gigabit per second network using new optical technologies. This is very exciting news for simulations and large-scale scientific facilities. But as one noted SciDAC luminary said, it's not all about the computers—it's also about the science—and we are also achieving our vision in this area. Together with having the fastest supercomputer for science, at the SC08 conference, SciDAC researchers won two ACM Gordon Bell Prizes for the outstanding performance of their applications. The DCA++ code, which solves some very interesting problems in materials, achieved a sustained performance of 1.3 petaflops, an astounding result and a mark I suspect will last for some time. The LS3DF application for studying nanomaterials also required the development of a new and novel algorithm to produce results up to 400 times faster than a similar application, and was recognized with a prize for algorithm innovation—a remarkable achievement. Day one of our conference will include examples of petascale science enabled at the OLCF. Although Jaguar has not been officially commissioned, it has gone through its acceptance tests, and during its shakedown phase there have been pioneer applications used for the acceptance tests, and they are running at scale. These include applications in the areas of astrophysics, biology, chemistry, combustion, fusion, geosciences, materials science, nuclear energy and nuclear physics. We also have a whole compendium of science we do at our facilities; these have been documented and reviewed at our last SciDAC conference. Many of these were highlighted in our Breakthroughs Report. One session at this week's conference will feature a cross-section of these breakthroughs. In the area of scalable electromagnetic simulations, the Auxiliary-space Maxwell Solver (AMS) uses specialized finite element discretizations and multigrid-based techniques, which decompose the original problem into easier-to-solve subproblems. Congratulations to the mathematicians on this. Another application on the list of breakthroughs was the authentication of PETSc, which provides scalable solvers used in many DOE applications and has solved problems with over 3 billion unknowns and scaled to over 16,000 processors on DOE leadership-class computers. 
This is becoming a very versatile and useful toolkit to achieve performance at scale. With the announcement of SIAM's first class of Fellows, we are remarkably well represented. Of the group of 191, more than 40 of these Fellows are in the 'DOE space.' We are so delighted that SIAM has recognized them for their many achievements. In the coming months, we will illustrate our leadership in applied math and computer science by looking at our contributions in the areas of programming models, development and performance tools, math libraries, system software, collaboration, and visualization and data analytics. This is a large and diverse list of libraries. We have asked for two panels, one chaired by David Keyes and composed of many of the nation's leading mathematicians, to produce a report on the most significant accomplishments in applied mathematics over the last eight years, taking us back to the start of the SciDAC program. In addition, we have a similar panel in computer science to be chaired by Kathy Yelick. They are going to identify the computer science accomplishments of the past eight years. These accomplishments are difficult to get a handle on, and I'm looking forward to this report. We will also have a follow-on to our report on breakthroughs in computational science and this will also go back eight years, looking at the many accomplishments under the SciDAC and INCITE programs. This will be chaired by Tony Mezzacappa. So, where are we going in the SciDAC program? It might help to take a look at computational science and how it got started. I go back to Ken Wilson, who made the model and has written on computational science and computational science education. His model was thus: The computational scientist plays the role of the experimentalist, and the math and CS researchers play the role of theorists, and the computers themselves are the experimental apparatus. And that in simulation science, we are carrying out numerical experiments as to the nature of physical and biological sciences. Peter Lax, in the same time frame, developed a report on large-scale computing in science and engineering. Peter remarked, 'Perhaps the most important applications of scientific computing come not in the solution of old problems, but in the discovery of new phenomena through numerical experimentation.' And in the early years, I think the person who provided the most guidance, the most innovation and the most vision for where the future might lie was Ed Oliver. Ed Oliver died last year. Ed did a number of things in science. He had this personality where he knew exactly what to do, but he preferred to stay out of the limelight so that others could enjoy the fruits of his vision. We in the SciDAC program and ASCR Facilities are still enjoying the benefits of his vision. We will miss him. Twenty years after Ken Wilson, Ray Orbach laid out the fundamental premise for SciDAC in an interview that appeared in SciDAC Review: 'SciDAC is unique in the world. There isn't any other program like it anywhere else, and it has the remarkable ability to do science by bringing together physical scientists, mathematicians, applied mathematicians, and computer scientists who recognize that computation is not something you do at the end, but rather it needs to be built into the solution of the very problem that one is addressing. 
' As you look at the Lax report from 1982, it talks about how 'Future significant improvements may have to come from architectures embodying parallel processing elements—perhaps several thousands of processors.' And it continues, 'Research in languages, algorithms and numerical analysis will be crucial in learning to exploit these new architectures fully.' In the early '90s, Sterling, Messina and Smith developed a workshop report on petascale computing and concluded, 'A petaflops computer system will be feasible in two decades, or less, and rely in part on the continual advancement of the semiconductor industry both in speed enhancement and cost reduction through improved fabrication processes.' So they were not wrong, and today we are embarking on a forward look that is at a different scale, the exascale, going to 10^18 flops. In 2007, Stevens, Simon and Zacharia chaired a series of town hall meetings looking at exascale computing, and in their report wrote, 'Exascale computer systems are expected to be technologically feasible within the next 15 years, or perhaps sooner. These systems will push the envelope in a number of important technologies: processor architecture, scale of multicore integration, power management and packaging.' The concept of computing on the Jaguar computer involves hundreds of thousands of cores, as do the IBM systems that are currently out there. So the scale of computing with systems with billions of processors is staggering to me, and I don't know how the software and math folks feel about it. We have now embarked on a road toward extreme scale computing. We have created a series of town hall meetings and we are now in the process of holding workshops that address what I call, in DOE speak, 'the mission need,' or what is the scientific justification for computing at that scale. We are going to have a total of 13 workshops. The workshops on climate, high energy physics, nuclear physics, fusion, and nuclear energy have been held. The report from the workshop on climate is actually out and available, and the other reports are being completed. The upcoming workshops are on biology, materials, and chemistry; and workshops that engage science for nuclear security are a partnership between NNSA and ASCR. There are additional workshops on applied math, computer science, and architecture that are needed for computing at the exascale. These extreme scale workshops will provide the foundation in our office, the Office of Science, the NNSA and DOE, and we will engage the National Science Foundation and the Department of Defense as partners. We envision a 10-year program for an exascale initiative. It will be an integrated R&D program initially—you can think about five years for research and development—that would be in hardware, operating systems, file systems, networking and so on, as well as software for applications. Application software and the operating system and the hardware all need to be bundled in this period so that at the end the system will execute the science applications at scale. We also believe that this process will have to have considerable investment from the manufacturers and vendors to be successful. We have formed laboratory, university and industry working groups to start this process and formed a panel to look at where SciDAC needs to go to compute at the extreme scale, and we have formed an executive committee within the Office of Science and the NNSA to focus on these activities. We will have outreach to DoD in the next few months. 
We are anticipating a solicitation within the next two years in which we will compete this bundled R&D process. We don't know how we will incorporate SciDAC into extreme scale computing, but we do know there will be many challenges. And as we have shown over the years, we have the expertise and determination to surmount these challenges.

  19. Design concepts for the Centrifuge Facility Life Sciences Glovebox

    NASA Technical Reports Server (NTRS)

    Sun, Sidney C.; Horkachuck, Michael J.; Mckeown, Kellie A.

    1989-01-01

    The Life Sciences Glovebox will provide the bioisolated environment to support on-orbit operations involving non-human live specimens and samples for human life sciences experiments. It will be part of the Centrifuge Facility, in which animal and plant specimens are housed in bioisolated Habitat modules and transported to the Glovebox as part of the experiment protocols supported by the crew. At the Glovebox, up to two crew members and two habitat modules must be accommodated to provide flexibility and support optimal operations. This paper will present several innovative design concepts that attempt to satisfy the basic Glovebox requirements. These concepts were evaluated for ergonomics and ease of operations using computer modeling and full-scale mockups. The more promising ideas were presented to scientists and astronauts for their evaluation. Their comments and the results from other evaluations are presented. Based on the evaluations, the authors recommend designs and features that will help optimize crew performance and facilitate science accommodations, and specify problem areas that require further study.

  20. Networking Technologies Enable Advances in Earth Science

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory; Freeman, Kenneth; Gilstrap, Raymond; Beck, Richard

    2004-01-01

    This paper describes an experiment to prototype a new way of conducting science by applying networking and distributed computing technologies to an Earth Science application. A combination of satellite, wireless, and terrestrial networking provided geologists at a remote field site with interactive access to supercomputer facilities at two NASA centers, thus enabling them to validate and calibrate remotely sensed geological data in near-real time. This represents a fundamental shift in the way that Earth scientists analyze remotely sensed data. In this paper we describe the experiment and the network infrastructure that enabled it, analyze the data flow during the experiment, and discuss the scientific impact of the results.

  1. Quantum Machine Learning

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak

    2018-01-01

    Quantum computing promises an unprecedented ability to solve intractable problems by harnessing quantum mechanical effects such as tunneling, superposition, and entanglement. The Quantum Artificial Intelligence Laboratory (QuAIL) at NASA Ames Research Center is the space agency's primary facility for conducting research and development in quantum information sciences. QuAIL conducts fundamental research in quantum physics but also explores how best to exploit and apply this disruptive technology to enable NASA missions in aeronautics, Earth and space sciences, and space exploration. At the same time, machine learning has become a major focus in computer science and captured the imagination of the public as a panacea to myriad big data problems. In this talk, we will discuss how classical machine learning can take advantage of quantum computing to significantly improve its effectiveness. Although we illustrate this concept on a quantum annealer, other quantum platforms could be used as well. If explored fully and implemented efficiently, quantum machine learning could greatly accelerate a wide range of tasks leading to new technologies and discoveries that will significantly change the way we solve real-world problems.
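
    As a concrete, hedged illustration of the idea above (not taken from the talk), the following toy example frames a small feature-selection task as a QUBO, the problem form a quantum annealer samples; the relevance and redundancy weights are made-up values, and the brute-force search stands in for the annealer purely for illustration:

      # Toy QUBO: minimize x^T Q x over binary x, as a quantum annealer would.
      # Weights are illustrative only; exhaustive search replaces the annealer here.
      import itertools
      import numpy as np

      n = 4
      relevance = np.array([0.9, 0.4, 0.7, 0.2])      # assumed feature "usefulness"
      redundancy = np.array([[0.0, 0.5, 0.1, 0.0],    # assumed pairwise redundancy
                             [0.5, 0.0, 0.3, 0.1],
                             [0.1, 0.3, 0.0, 0.2],
                             [0.0, 0.1, 0.2, 0.0]])

      # Diagonal rewards relevant features; off-diagonal terms penalize redundant pairs.
      Q = redundancy.copy()
      np.fill_diagonal(Q, -relevance)

      best_x, best_e = None, float("inf")
      for bits in itertools.product([0, 1], repeat=n):
          x = np.array(bits)
          e = x @ Q @ x
          if e < best_e:
              best_x, best_e = x, e

      print("selected features:", np.nonzero(best_x)[0], "energy:", best_e)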

  2. Final Report Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Patrick

    The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.
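
    A minimal sketch of the in situ pattern described above (illustrative only, not the project's actual infrastructure): the simulation calls an analysis hook while each timestep's data are still in memory, so only compact summaries ever reach storage.

      # In situ analysis sketch: analyze each timestep while it is still in memory,
      # writing only small summaries instead of the full field.
      import numpy as np

      def simulate_step(state: np.ndarray) -> np.ndarray:
          """Stand-in for one timestep of a real solver."""
          return state + 0.01 * np.random.randn(*state.shape)

      def in_situ_analysis(step: int, field: np.ndarray) -> dict:
          """Compute compact summaries in memory; the full field is never written."""
          return {"step": step, "min": float(field.min()),
                  "max": float(field.max()), "mean": float(field.mean())}

      def run(n_steps: int = 10, analyze_every: int = 2) -> list:
          field = np.zeros((256, 256))
          summaries = []
          for step in range(n_steps):
              field = simulate_step(field)
              if step % analyze_every == 0:
                  summaries.append(in_situ_analysis(step, field))
          return summaries

      if __name__ == "__main__":
          for s in run():
              print(s)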

  3. Advancing Capabilities for Understanding the Earth System Through Intelligent Systems, the NSF Perspective

    NASA Astrophysics Data System (ADS)

    Gil, Y.; Zanzerkia, E. E.; Munoz-Avila, H.

    2015-12-01

    The National Science Foundation (NSF) Directorate for Geosciences (GEO) and Directorate for Computer and Information Science and Engineering (CISE) acknowledge the significant scientific challenges involved in understanding the fundamental processes of the Earth system, within the atmospheric and geospace, Earth, ocean and polar sciences, and across those boundaries. A broad view of the opportunities and directions for GEO is described in the report "Dynamic Earth: GEO imperative and Frontiers 2015-2020." Many aspects of geosciences research, highlighted both in this document and in other community grand challenges, pose novel problems for researchers in intelligent systems. Geosciences research will require solutions for data-intensive science, advanced computational capabilities, and transformative concepts for visualizing, using, analyzing and understanding geo-phenomena and data. Opportunities for the scientific community to engage in addressing these challenges are available and being developed through NSF's portfolio of investments and activities. The NSF-wide initiative, Cyberinfrastructure Framework for 21st Century Science and Engineering (CIF21), looks to accelerate research and education through new capabilities in data, computation, software and other aspects of cyberinfrastructure. EarthCube, a joint program between GEO and the Advanced Cyberinfrastructure Division, aims to create a well-connected and facile environment to share data and knowledge in an open, transparent, and inclusive manner, thus accelerating our ability to understand and predict the Earth system. EarthCube's mission opens an opportunity for collaborative research on novel information systems enhancing and supporting geosciences research efforts. NSF encourages true, collaborative partnerships between scientists in computer sciences and the geosciences to meet these challenges.

  4. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running the PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
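
    A minimal sketch of the light-weight MPI wrapper idea mentioned above (assumptions: mpi4py is available and the payload command is a placeholder; this is not the actual PanDA pilot code). Each MPI rank launches its share of independent single-threaded payloads on its own core; run with, for example, mpiexec -n 8 python wrapper.py.

      # Light-weight MPI wrapper sketch: one single-threaded payload per rank.
      # The payload command and work list are placeholders, not PanDA internals.
      import subprocess
      import sys
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      size = comm.Get_size()

      # Hypothetical list of independent jobs; each rank takes every size-th entry.
      jobs = [f"--seed {i}" for i in range(64)]
      my_jobs = jobs[rank::size]

      for args in my_jobs:
          # Placeholder payload; in practice this would be the experiment's
          # single-threaded simulation executable.
          result = subprocess.run(
              [sys.executable, "-c", "print('payload', %r)" % args],
              capture_output=True, text=True)
          print(f"rank {rank}: {result.stdout.strip()}")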

  5. NASA Virtual Glovebox: An Immersive Virtual Desktop Environment for Training Astronauts in Life Science Experiments

    NASA Technical Reports Server (NTRS)

    Twombly, I. Alexander; Smith, Jeffrey; Bruyns, Cynthia; Montgomery, Kevin; Boyle, Richard

    2003-01-01

    The International Space Station will soon provide an unparalleled research facility for studying the near- and longer-term effects of microgravity on living systems. Using the Space Station Glovebox Facility - a compact, fully contained reach-in environment - astronauts will conduct technically challenging life sciences experiments. Virtual environment technologies are being developed at NASA Ames Research Center to help realize the scientific potential of this unique resource by facilitating the experimental hardware and protocol designs and by assisting the astronauts in training. The Virtual GloveboX (VGX) integrates high-fidelity graphics, force-feedback devices and real-time computer simulation engines to achieve an immersive training environment. Here, we describe the prototype VGX system, the distributed processing architecture used in the simulation environment, and modifications to the visualization pipeline required to accommodate the display configuration.

  6. ICESat Science Investigator led Processing System (I-SIPS)

    NASA Astrophysics Data System (ADS)

    Bhardwaj, S.; Bay, J.; Brenner, A.; Dimarzio, J.; Hancock, D.; Sherman, M.

    2003-12-01

    The ICESat Science Investigator-led Processing System (I-SIPS) generates the GLAS standard data products. It consists of two main parts: the Scheduling and Data Management System (SDMS) and the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software. The system has been operational since the successful launch of ICESat. It ingests data from the GLAS instrument, generates GLAS data products, and distributes them to the GLAS Science Computing Facility (SCF), the Instrument Support Facility (ISF), and the National Snow and Ice Data Center (NSIDC) ECS DAAC. The SDMS is the planning, scheduling, and data management system that runs the GLAS Science Algorithm Software (GSAS). GSAS is based on the Algorithm Theoretical Basis Documents provided by the Science Team and is developed independently of SDMS. The SDMS provides the processing environment to plan jobs based on existing data, control job flow, data distribution, and archiving. The SDMS design is based on a mission-independent architecture that imposes few constraints on the science code, thereby facilitating I-SIPS integration. I-SIPS currently works in an autonomous manner to ingest GLAS instrument data, distribute these data to the ISF, run the science processing algorithms to produce the GLAS standard products, reprocess data when new versions of science algorithms are released, and distribute the products to the SCF, ISF, and NSIDC. I-SIPS has a proven performance record, delivering data to the SCF within hours of initial instrument activation. The I-SIPS design philosophy gives this system a high potential for reuse in other science missions.
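
    A minimal sketch of the autonomous ingest-process-distribute pattern described above (the directory names, file suffixes, and processing step are hypothetical placeholders; this is not the SDMS or GSAS code):

      # Sketch of an ingest -> process -> distribute loop in the spirit of I-SIPS.
      # All paths and the process_granule() body are hypothetical placeholders.
      from pathlib import Path
      import shutil

      INCOMING = Path("incoming")                     # raw instrument granules
      DESTINATIONS = {name: Path(name) for name in ("scf", "isf", "nsidc")}

      def process_granule(raw: Path) -> Path:
          """Stand-in for running the science algorithm software on one granule."""
          product = raw.with_suffix(".product")
          product.write_text(f"standard product derived from {raw.name}\n")
          return product

      def run_once() -> None:
          for dest in DESTINATIONS.values():
              dest.mkdir(exist_ok=True)
          for raw in sorted(INCOMING.glob("*.raw")):
              product = process_granule(raw)
              for dest in DESTINATIONS.values():
                  shutil.copy(product, dest / product.name)
              raw.rename(raw.with_suffix(".done"))    # mark granule as ingested

      if __name__ == "__main__":
          INCOMING.mkdir(exist_ok=True)
          run_once()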

  7. The Magellan Final Report on Cloud Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coghlan, Susan; Yelick, Katherine

    The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs of the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing in terms of performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO) were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

  8. A prototype Upper Atmospheric Research Collaboratory (UARC)

    NASA Technical Reports Server (NTRS)

    Clauer, C. R.; Atkins, D. E.; Weymouth, T. E.; Olson, G. M.; Niciejewski, R.; Finholt, T. A.; Prakash, A.; Rasmussen, C. E.; Killeen, T.; Rosenberg, T. J.

    1995-01-01

    The National Collaboratory concept has great potential for enabling 'critical mass' working groups and highly interdisciplinary research projects. We report here on a new program to build a prototype collaboratory using the Sondrestrom Upper Atmospheric Research Facility in Kangerlussuaq, Greenland and a group of associated scientists. The Upper Atmospheric Research Collaboratory (UARC) is a joint venture of researchers in upper atmospheric and space science, computer science, and behavioral science to develop a testbed for collaborative remote research. We define the 'collaboratory' as an advanced information technology environment which enables teams to work together over distance and time on a wide variety of intellectual tasks. It provides: (1) human-to-human communications using shared computer tools and work spaces; (2) group access and use of a network of information, data, and knowledge sources; and (3) remote access and control of instruments for data acquisition. The UARC testbed is being implemented to support a distributed community of space scientists so that they have network access to the remote instrument facility in Kangerlussuaq and are able to interact among geographically distributed locations. The goal is to enable them to use the UARC rather than physical travel to Greenland to conduct team research campaigns. Even on short notice through the collaboratory from their home institutions, participants will be able to meet together to operate a battery of remote interactive observations and to acquire, process, and interpret the data.

  9. Challenges in integrating multidisciplinary data into a single e-infrastructure

    NASA Astrophysics Data System (ADS)

    Atakan, Kuvvet; Jeffery, Keith G.; Bailo, Daniele; Harrison, Matthew

    2015-04-01

    The European Plate Observing System (EPOS) aims to create a pan-European infrastructure for solid Earth science to support a safe and sustainable society. The mission of EPOS is to monitor and understand the dynamic and complex Earth system by relying on new e-science opportunities and integrating diverse and advanced Research Infrastructures in Europe for solid Earth science. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunamis, as well as the processes driving tectonics and Earth's surface dynamics. EPOS will improve our ability to better manage the use of the subsurface of the Earth. Through integration of data, models and facilities, EPOS will allow the Earth science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources, as well as Earth science applications to the environment and to human welfare. EPOS is now entering its Implementation Phase (EPOS-IP). One of the main challenges during the implementation phase is the integration of multidisciplinary data into a single e-infrastructure. Multidisciplinary data are organized and governed by the Thematic Core Services (TCS) and are driven by various scientific communities encompassing a wide spectrum of Earth science disciplines. TCS data, data products and services will be integrated into a platform, the ICS system, that will ensure their interoperability and access to these services by the scientific community as well as other users within society. This requires dedicated tasks for interactions with the various TCS work packages (TCS-WPs), as well as with the various distributed ICS (ICS-Ds), such as High Performance Computing (HPC) facilities, large-scale data storage facilities, complex processing and visualization tools, etc. Computational Earth Science (CES) services are identified as a transversal activity and as such need to be harmonized and provided within the ICS. In order to develop a metadata catalogue and the ICS system, the content from the entire spectrum of services included in the TCS, ICS-Ds and CES activities needs to be organized in a systematic manner, taking into account global and European IT standards while complying with user needs and data provider requirements.
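
    As a purely illustrative sketch of what registering a thematic service in a central metadata catalogue might involve (all field names and values below are assumptions for illustration, not the EPOS ICS schema):

      # Illustrative metadata record for a thematic service registered in a
      # central catalogue; field names are assumptions, not the EPOS schema.
      from dataclasses import dataclass, field, asdict
      import json

      @dataclass
      class ServiceRecord:
          identifier: str                 # persistent identifier for the service
          title: str
          thematic_core_service: str      # owning TCS community
          service_type: str               # e.g. "data access", "processing", "HPC"
          endpoint: str                   # URL of the service interface
          standards: list = field(default_factory=list)   # formats / metadata standards
          keywords: list = field(default_factory=list)

      record = ServiceRecord(
          identifier="tcs-seismology-waveform-access-001",
          title="Seismic waveform access service (example)",
          thematic_core_service="Seismology",
          service_type="data access",
          endpoint="https://example.org/waveform-service/",
          standards=["StationXML", "miniSEED"],
          keywords=["seismology", "waveforms"],
      )

      print(json.dumps(asdict(record), indent=2))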

  10. Computational biomedicine: a challenge for the twenty-first century.

    PubMed

    Coveney, Peter V; Shublaq, Nour W

    2012-01-01

    With the relentless increase of computer power and the widespread availability of digital patient-specific medical data, we are now entering an era when it is becoming possible to develop predictive models of human disease and pathology, which can be used to support and enhance clinical decision-making. The approach amounts to a grand challenge to computational science insofar as we need to be able to provide seamless yet secure access to large-scale heterogeneous personal healthcare data in a facile way, typically integrated into complex workflows, some parts of which may need to be run on high-performance computers, and into clinical decision support software. In this paper, we review the state of the art in terms of case studies drawn from neurovascular pathologies and HIV/AIDS. These studies are representative of a large number of projects currently being performed within the Virtual Physiological Human initiative. They make demands of information technology at many scales, from the desktop to national and international infrastructures for data storage and processing, linked by high-performance networks.

  11. Optimization of knowledge-based systems and expert system building tools

    NASA Technical Reports Server (NTRS)

    Yasuda, Phyllis; Mckellar, Donald

    1993-01-01

    The objectives of the NASA-AMES Cooperative Agreement were to investigate, develop, and evaluate, via test cases, the system parameters and processing algorithms that constrain the overall performance of the Information Sciences Division's Artificial Intelligence Research Facility. Written reports covering various aspects of the grant were submitted to the co-investigators for the grant. Research studies concentrated on the field of artificial intelligence knowledge-based systems technology. Activities included the following areas: (1) AI training classes; (2) merging optical and digital processing; (3) science experiment remote coaching; (4) SSF data management system tests; (5) computer integrated documentation project; (6) conservation of design knowledge project; (7) project management calendar and reporting system; (8) automation and robotics technology assessment; (9) advanced computer architectures and operating systems; and (10) honors program.

  12. TomoBank: a tomographic data repository for computational x-ray science

    DOE PAGES

    De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; ...

    2018-02-08

    There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology have made sub-second and multi-energy tomographic data collection possible [1], but have also increased the demand to develop new reconstruction methods able to handle in situ [2] and dynamic systems [3] that can be quickly incorporated in beamline production software [4]. The X-ray Tomography Data Bank, tomoBank, provides a repository of experimental and simulated datasets with the aim of fostering collaboration among computational scientists, beamline scientists, and experimentalists, and of accelerating the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
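
    A minimal sketch of reading a tomographic dataset with h5py, assuming a common Data Exchange style HDF5 layout with projections, flat fields, and dark fields under /exchange; the file name and group paths are assumptions, so the dataset's own descriptor should be consulted for the actual structure:

      # Read a tomographic dataset assumed to follow a Data Exchange HDF5 layout.
      # The file name and group paths are assumptions; check the dataset descriptor.
      import h5py

      def load_projections(path: str):
          with h5py.File(path, "r") as f:
              proj = f["/exchange/data"][...]          # projections: (angles, rows, cols)
              flat = f["/exchange/data_white"][...]    # flat-field images
              dark = f["/exchange/data_dark"][...]     # dark-field images
          return proj, flat, dark

      if __name__ == "__main__":
          proj, flat, dark = load_projections("tomo_00001.h5")  # hypothetical file
          print("projections:", proj.shape, "flats:", flat.shape, "darks:", dark.shape)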

  13. Automatic Between-Pulse Analysis of DIII-D Experimental Data Performed Remotely on a Supercomputer at Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostuk, M.; Uram, T. D.; Evans, T.

    For the first time, an automatically triggered, between-pulse fusion science analysis code was run on demand at a remotely located supercomputer at the Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into more efficient use of experimental resources and higher-quality science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using a Fourier transform. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge-localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF’s Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores; however, our work shows a path forward where codes that benefit from thousands of processors can run between pulses.
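
    A minimal sketch of the automatic-trigger pattern described above (the trigger source, queue name, job script, and submission command are hypothetical placeholders, not the actual DIII-D/ALCF service): a listener waits for a pulse-end event and then submits the analysis job to a reserved queue.

      # Sketch of a between-pulse trigger service: when a pulse-end event arrives,
      # submit the analysis job to a reserved queue. All names are placeholders.
      import subprocess
      import time

      RESERVED_QUEUE = "between-pulse"        # hypothetical reserved queue name
      ANALYSIS_SCRIPT = "run_analysis.sh"     # hypothetical job script

      def wait_for_pulse_end() -> int:
          """Stand-in for listening to the experiment's pulse-end notification."""
          time.sleep(1)
          return 123456                       # hypothetical pulse number

      def submit_analysis(pulse: int) -> None:
          # Placeholder submission command; a real deployment would call the
          # facility scheduler's interface for its reserved queue.
          cmd = ["echo", "submit", "--queue", RESERVED_QUEUE,
                 ANALYSIS_SCRIPT, f"--pulse={pulse}"]
          subprocess.run(cmd, check=True)

      if __name__ == "__main__":
          pulse = wait_for_pulse_end()
          submit_analysis(pulse)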

  14. Automatic Between-Pulse Analysis of DIII-D Experimental Data Performed Remotely on a Supercomputer at Argonne Leadership Computing Facility

    DOE PAGES

    Kostuk, M.; Uram, T. D.; Evans, T.; ...

    2018-02-01

    For the first time, an automatically triggered, between-pulse fusion science analysis code was run on demand at a remotely located supercomputer at the Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into more efficient use of experimental resources and higher-quality science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using a Fourier transform. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge-localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF’s Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores; however, our work shows a path forward where codes that benefit from thousands of processors can run between pulses.

  15. Federated data storage and management infrastructure

    NASA Astrophysics Data System (ADS)

    Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.

    2016-10-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate that storage needs will grow by orders of magnitude; this will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bioinformatics.
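
    A minimal sketch of the kind of synthetic endpoint performance measurement mentioned above (the endpoint URLs are hypothetical placeholders; real tests would use the experiments' own data access protocols and tools):

      # Synthetic read-throughput probe for a list of storage endpoints.
      # The URLs are hypothetical placeholders for federated storage doors.
      import time
      import urllib.request

      ENDPOINTS = [
          "https://example.org/federation/site-a/testfile",
          "https://example.org/federation/site-b/testfile",
      ]

      def probe(url: str, chunk: int = 1 << 20) -> None:
          start = time.time()
          nbytes = 0
          try:
              with urllib.request.urlopen(url, timeout=30) as resp:
                  while True:
                      data = resp.read(chunk)
                      if not data:
                          break
                      nbytes += len(data)
          except OSError as exc:
              print(f"{url}: failed ({exc})")
              return
          elapsed = time.time() - start
          print(f"{url}: {nbytes / 1e6:.1f} MB in {elapsed:.1f} s "
                f"({nbytes / 1e6 / max(elapsed, 1e-9):.1f} MB/s)")

      if __name__ == "__main__":
          for url in ENDPOINTS:
              probe(url)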

  16. EOS Laser Atmosphere Wind Sounder (LAWS) investigation

    NASA Technical Reports Server (NTRS)

    Emmitt, George D.

    1991-01-01

    The related activities of the contract are outlined for the first year. These include: (1) attend team member meetings; (2) support the EOS Project with science-related activities; (3) prepare an Execution Phase plan; and (4) support LAWS and EOSDIS related work. Attached to the report is an appendix, 'LAWS Algorithm Development and Evaluation Laboratory (LADEL)'. Also attached is a copy of a proposal to the NASA EOS for 'LAWS Sampling Strategies and Wind Computation Algorithms -- Storm-Top Divergence Studies. Volume I: Investigation and Technical Plan, Data Plan, Computer Facilities Plan, Management Plan.'

  17. The Future is Hera! Analyzing Astronomical Data Over the Internet

    NASA Technical Reports Server (NTRS)

    Valencic, L. A.; Chai, P.; Pence, W.; Shafer, R.; Snowden, S.

    2008-01-01

    Hera is the data processing facility provided by the High Energy Astrophysics Science Archive Research Center (HEASARC) at the NASA Goddard Space Flight Center for analyzing astronomical data. Hera provides all the pre-installed software packages, local disk space, and computing resources needed to do general processing of FITS-format data files residing on the user's local computer, and to do research using the publicly available data from the High Energy Astrophysics Division. Qualified students, educators and researchers may freely use the Hera services over the internet for research and educational purposes.
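
    A minimal, generic example of the kind of FITS-format inspection such a facility supports, using astropy; this is ordinary local FITS handling, not the Hera remote interface itself, and the file name is a placeholder.

      # Generic FITS inspection with astropy; the file name is a placeholder and
      # this is not the Hera remote-processing interface itself.
      from astropy.io import fits

      def summarize(path: str) -> None:
          with fits.open(path) as hdul:
              hdul.info()                          # list HDUs, dimensions, types
              header = hdul[0].header
              print("TELESCOP:", header.get("TELESCOP", "n/a"))
              print("OBJECT:  ", header.get("OBJECT", "n/a"))
              if len(hdul) > 1 and hdul[1].data is not None:
                  print("first extension shape:", getattr(hdul[1].data, "shape", None))

      if __name__ == "__main__":
          summarize("observation.fits")            # hypothetical file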

  18. Foreign Military Sales Pricing Principles for Electronic Technical Manuals

    DTIC Science & Technology

    2004-06-01

    Companies provide benefits such as flexible hours, flexible days, and telecommuting. This information is useful because facilities costs and overhead can be minimized or significantly reduced for companies providing this benefit. The indexed excerpt also contains fragments of an occupational wage table (occupation title, employment, median hourly wage, mean hourly wage, and mean annual wage) for Computer and Mathematical Science occupations; the remainder of the record is truncated.

  19. SPAN: Astronomy and astrophysics

    NASA Technical Reports Server (NTRS)

    Thomas, Valerie L.; Green, James L.; Warren, Wayne H., Jr.; Lopez-Swafford, Brian

    1987-01-01

    The Space Physics Analysis Network (SPAN) is a multi-mission, correlative data comparison network which links science research and data analysis computers in the U.S., Canada, and Europe. The purpose of this document is to provide Astronomy and Astrophysics scientists, currently reachable on SPAN, with basic information and contacts for access to correlative databases, star catalogs, and other astrophysical facilities accessible over SPAN.

  20. Planetary Radio Interferometry and Doppler Experiment (PRIDE) technique: A test case of the Mars Express Phobos Flyby. II. Doppler tracking: Formulation of observed and computed values, and noise budget

    NASA Astrophysics Data System (ADS)

    Bocanegra-Bahamón, T. M.; Molera Calvés, G.; Gurvits, L. I.; Duev, D. A.; Pogrebenko, S. V.; Cimò, G.; Dirkx, D.; Rosenblatt, P.

    2018-01-01

    Context. Closed-loop Doppler data obtained by deep space tracking networks, such as the NASA Deep Space Network (DSN) and the ESA tracking station network (Estrack), are routinely used for navigation and science applications. By shadow tracking the spacecraft signal, Earth-based radio telescopes involved in the Planetary Radio Interferometry and Doppler Experiment (PRIDE) can provide open-loop Doppler tracking data only when the dedicated deep space tracking facilities are operating in closed-loop mode. Aims: We explain the data processing pipeline in detail and discuss the capabilities of the technique and its potential applications in planetary science. Methods: We provide the formulation of the observed and computed values of the Doppler data in PRIDE tracking of spacecraft and demonstrate the quality of the results using an experiment with the ESA Mars Express spacecraft as a test case. Results: We find that the Doppler residuals and the corresponding noise budget of the open-loop Doppler detections obtained with the PRIDE stations compare to the closed-loop Doppler detections obtained with dedicated deep space tracking facilities.
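
    As a hedged illustration of what "observed and computed" Doppler values mean in this context (a generic first-order one-way formulation, not necessarily the paper's exact convention), the quantity analyzed is the residual between the detected and predicted received frequencies:

      f_{\mathrm{res}}(t) = f_{\mathrm{obs}}(t) - f_{\mathrm{comp}}(t), \qquad
      f_{\mathrm{comp}}(t) \approx f_{\mathrm{t}} \left( 1 - \frac{\dot{\rho}(t)}{c} \right)

    where $f_{\mathrm{t}}$ is the transmitted reference frequency, $\dot{\rho}$ the station-spacecraft range rate from the orbit and Earth-orientation models, and $c$ the speed of light; the scatter of $f_{\mathrm{res}}$ over a tracking pass is what populates the noise budget. Full formulations used in practice also include relativistic terms and propagation-media corrections.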

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The vision described here builds on the present U.S. activities in fusion plasma and materials science relevant to the energy goal and extends plasma science at the frontier of discovery. The plan is founded on recommendations made by the National Academies, a number of recent studies by the Fusion Energy Sciences Advisory Committee (FESAC), and the Administration’s views on the greatest opportunities for U.S. scientific leadership. This report highlights five areas of critical importance for the U.S. fusion energy sciences enterprise over the next decade: 1) Massively parallel computing with the goal of validated whole-fusion-device modeling will enable a transformation in predictive power, which is required to minimize risk in future fusion energy development steps; 2) Materials science as it relates to plasma and fusion sciences will provide the scientific foundations for greatly improved plasma confinement and heat exhaust; 3) Research in the prediction and control of transient events that can be deleterious to toroidal fusion plasma confinement will provide greater confidence in machine designs and operation with stable plasmas; 4) Continued stewardship of discovery in plasma science that is not expressly driven by the energy goal will address frontier science issues underpinning great mysteries of the visible universe and help attract and retain a new generation of plasma/fusion science leaders; 5) FES user facilities will be kept world-leading through robust operations support and regular upgrades. Finally, we will continue leveraging resources among agencies and institutions and strengthening our partnerships with international research facilities.

  2. Dawn Usage, Scheduling, and Governance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louis, S

    2009-11-02

    This document describes Dawn use, scheduling, and governance concerns. Users started running full-machine science runs in early April 2009 during the initial open shakedown period. Scheduling Dawn while in the Open Computing Facility (OCF) was controlled and coordinated via phone calls, emails, and a small number of controlled banks. With Dawn moving to the Secure Computing Facility (SCF) in fall of 2009, a more detailed scheduling and governance model is required. The three major objectives are: (1) Ensure Dawn resources are allocated on a program priority-driven basis; (2) Utilize Dawn resources on the job mixes for which they were intended; and (3) Minimize idle cycles through use of partitions, banks and proper job mix. The SCF workload for Dawn will be inherently different from that of Purple or BG/L, and therefore needs a different approach. Dawn's primary function is to permit adequate access for tri-lab code development in preparation for Sequoia, and in particular for weapons multi-physics codes in support of UQ. A second purpose is to provide time allocations for large-scale science runs and for UQ suite calculations to advance SSP program priorities. This proposed governance model will be the basis for initial time allocation of Dawn computing resources for the science and UQ workloads that merit priority on this class of resource, either because they cannot be reasonably attempted on any other resources due to size of problem, or because of the unavailability of sizable allocations on other ASC capability or capacity platforms. This proposed model intends to make the most effective use of Dawn without being overly constrained by more formal proposal processes such as those now used for Purple CCCs.

  3. Workers in SSPF monitor Multi-Equipment Interface Test.

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Workers in the Space Station Processing Facility control room monitor computers during a Multi-Equipment Interface Test (MEIT) in the U.S. Lab Destiny. Members of the STS-98 crew are taking part in the MEIT, checking out some of the equipment in the Lab. During the STS-98 mission, the crew will install the Lab on the station during a series of three space walks. The crew comprises five members: Commander Kenneth D. Cockrell, Pilot Mark L. Polansky, and Mission Specialists Robert L. Curbeam Jr., Thomas D. Jones (Ph.D.) and Marsha S. Ivins. The mission will provide the station with science research facilities and expand its power, life support and control capabilities. The U.S. Laboratory Module continues a long tradition of microgravity materials research, first conducted by Skylab and later Shuttle and Spacelab missions. Destiny is expected to be a major feature in future research, providing facilities for biotechnology, fluid physics, combustion, and life sciences research. The Lab is planned for launch aboard Space Shuttle Atlantis on the sixth ISS flight, currently targeted no earlier than Aug. 19, 2000.

  4. KSC-00pp0188

    NASA Image and Video Library

    2000-02-03

    Workers in the Space Station Processing Facility control room monitor computers during a Multi-Equipment Interface Test (MEIT) in the U.S. Lab Destiny. Members of the STS-98 crew are taking part in the MEIT, checking out some of the equipment in the Lab. During the STS-98 mission, the crew will install the Lab on the station during a series of three space walks. The crew comprises five members: Commander Kenneth D. Cockrell, Pilot Mark L. Polansky, and Mission Specialists Robert L. Curbeam Jr., Thomas D. Jones (Ph.D.) and Marsha S. Ivins. The mission will provide the station with science research facilities and expand its power, life support and control capabilities. The U.S. Laboratory Module continues a long tradition of microgravity materials research, first conducted by Skylab and later Shuttle and Spacelab missions. Destiny is expected to be a major feature in future research, providing facilities for biotechnology, fluid physics, combustion, and life sciences research. The Lab is planned for launch aboard Space Shuttle Atlantis on the sixth ISS flight, currently targeted no earlier than Aug. 19, 2000.

  5. Enabling Earth Science: The Facilities and People of the NCCS

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The NCCS's mass data storage system allows scientists to store and manage the vast amounts of data generated by these computations, and its high-speed network connections allow the data to be accessed quickly from the NCCS archives. Some NCCS users perform studies that are directly related to their ability to run computationally expensive and data-intensive simulations. Because the number and type of questions scientists research often are limited by computing power, the NCCS continually pursues the latest technologies in computing, mass storage, and networking. Just as important as the processors, tapes, and routers of the NCCS are the personnel who administer this hardware, create and manage accounts, maintain security, and assist the scientists, often working one on one with them.

  6. OPENING REMARKS: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about 70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such as the national and regional electricity grid, carbon sequestration, virtual engineering, and the nuclear fuel cycle. The successes of the first five years of SciDAC have demonstrated the power of using advanced computing to enable scientific discovery. One measure of this success could be found in the President’s State of the Union address in which President Bush identified ‘supercomputing’ as a major focus area of the American Competitiveness Initiative. Funds were provided in the FY 2007 President’s Budget request to increase the size of the NERSC-5 procurement to between 100-150 teraflops, to upgrade the LCF Cray XT3 at Oak Ridge to 250 teraflops and acquire a 100 teraflop IBM BlueGene/P to establish the Leadership computing facility at Argonne. We believe that we are on a path to establish a petascale computing resource for open science by 2009. We must develop software tools, packages, and libraries as well as the scientific application software that will scale to hundreds of thousands of processors. Computer scientists from universities and the DOE’s national laboratories will be asked to collaborate on the development of the critical system software components such as compilers, light-weight operating systems and file systems. Standing up these large machines will not be business as usual for ASCR. 
We intend to develop a series of interconnected projects that identify cost, schedule, risks, and scope for the upgrades at the LCF at Oak Ridge, the establishment of the LCF at Argonne, and the development of the software to support these high-end computers. The critical first step in defining the scope of the project is to identify a set of early application codes for each leadership class computing facility. These codes will have access to the resources during the commissioning phase of the facility projects and will be part of the acceptance tests for the machines. Applications will be selected, in part, by breakthrough science, scalability, and ability to exercise key hardware and software components. Possible early applications might include climate models; studies of the magnetic properties of nanoparticles as they relate to ultra-high density storage media; the rational design of chemical catalysts, the modeling of combustion processes that will lead to cleaner burning coal, and fusion and astrophysics research. I have presented just a few of the challenges that we look forward to on the road to petascale computing. Our road to petascale science might be paraphrased by the quote from e e cummings, ‘somewhere I have never traveled, gladly beyond any experience . . .’

  7. Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    NASA Technical Reports Server (NTRS)

    Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.

    1992-01-01

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

  8. Python in the NERSC Exascale Science Applications Program for Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronaghi, Zahra; Thomas, Rollin; Deslippe, Jack

    We describe a new effort at the National Energy Research Scientific Computing Center (NERSC) in performance analysis and optimization of scientific Python applications targeting the Intel Xeon Phi (Knights Landing, KNL) many-core architecture. The Python-centered work outlined here is part of a larger effort called the NERSC Exascale Science Applications Program (NESAP) for Data. NESAP for Data focuses on applications that process and analyze high-volume, high-velocity data sets from experimental/observational science (EOS) facilities supported by the US Department of Energy Office of Science. We present three case study applications from NESAP for Data that use Python. These codes vary in terms of “Python purity” from applications developed in pure Python to ones that use Python mainly as a convenience layer for scientists without expertise in lower level programming languages like C, C++ or Fortran. The science case, requirements, constraints, algorithms, and initial performance optimizations for each code are discussed. Our goal with this paper is to contribute to the larger conversation around the role of Python in high-performance computing today and tomorrow, highlighting areas for future work and emerging best practices.
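
    Performance work of this kind typically contrasts interpreted Python loops with vectorized or compiled kernels when targeting many-core hardware such as KNL. The sketch below shows one generic optimization pattern of that kind (a pure-Python loop versus a NumPy array expression); it is illustrative only and is not drawn from the NESAP codes.

```python
import time
import numpy as np

def squared_diff_loop(x, y):
    """Pure-Python reference: interpreter overhead on every element."""
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = (x[i] - y[i]) ** 2
    return out

def squared_diff_vectorized(x, y):
    """Same arithmetic as a single NumPy expression, which dispatches
    to compiled, SIMD-friendly loops."""
    return (x - y) ** 2

n = 2_000_000
x = np.random.rand(n)
y = np.random.rand(n)

t0 = time.perf_counter(); squared_diff_loop(x, y);       t1 = time.perf_counter()
t2 = time.perf_counter(); squared_diff_vectorized(x, y); t3 = time.perf_counter()
print(f"loop: {t1 - t0:.3f} s, vectorized: {t3 - t2:.3f} s")
```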

  9. Integration of Russian Tier-1 Grid Center with High Performance Computers at NRC-KI for LHC experiments and beyond HENP

    NASA Astrophysics Data System (ADS)

    Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.

    2015-12-01

    The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run2). The need for simulation, data processing and analysis would overwhelm the expected capacity of grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at Kurchatov Institute (NRC-KI) in Moscow is a part of WLCG and it will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. The delegation of even a fraction of supercomputing resources to LHC computing will notably increase the total capacity. In 2014 the development of a portal combining a Tier-1 and a supercomputer in Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments but also by other data- and compute-intensive sciences, such as biology with genome sequencing analysis and astrophysics with cosmic ray analysis and antimatter and dark matter searches.

  10. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, Wes

    2016-07-24

    The primary challenge motivating this team’s work is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to perform analysis only on a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, an approach that is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by DOE science projects. In large part, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers consisting of representatives from DOE national laboratories, academia, and industry, engaged in software technology R&D, and formed close partnerships with DOE science code teams, to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.
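
    The central idea of the report, running analysis on data while it is still resident in the simulation's memory, can be pictured with a toy callback pattern. The sketch below is schematic and uses assumed names; it is not the project's actual infrastructure, which targets production tools on DOE HPC platforms.

```python
import numpy as np

def in_situ_stats(step, field):
    """Toy in situ analysis: reduce the live field to a few scalars
    instead of writing the full array to disk."""
    return {"step": step, "min": float(field.min()),
            "max": float(field.max()), "mean": float(field.mean())}

def run_simulation(n_steps, nx, analyze_every, analysis_hook):
    """Toy time-stepping loop that hands in-memory data to an analysis hook."""
    field = np.random.rand(nx)
    results = []
    for step in range(n_steps):
        field = field + 0.01 * np.random.randn(nx)   # stand-in for real physics
        if step % analyze_every == 0:
            # The analysis sees the live array; no intermediate file is written.
            results.append(analysis_hook(step, field))
    return results

for summary in run_simulation(100, 1_000_000, 25, in_situ_stats):
    print(summary)
```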

  11. ORNL Sustainable Campus Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halford, Christopher K

    2012-01-01

    The research conducted at Oak Ridge National Laboratory (ORNL) spans many disciplines and has the potential for far-reaching impact in many areas of everyday life. ORNL researchers and operations staff work on projects in areas as diverse as nuclear power generation, transportation, materials science, computing, and building technologies. As the U.S. Department of Energy's (DOE) largest science and energy research facility, ORNL seeks to establish partnerships with industry in the development of innovative new technologies. The primary focus of this current research deals with developing technologies which improve or maintain the quality of life for humans while reducing the overall impact on the environment. In its interactions with industry, ORNL serves both as a facility for sustainable research and as a representative of DOE to the private sector. For these reasons it is important that the everyday operations of the Laboratory reflect a dedication to the concepts of stewardship and sustainability.

  12. Airborne Remote Sensing (ARS) for Agricultural Research and Commercialization Applications

    NASA Technical Reports Server (NTRS)

    Narayanan, Ram; Bowen, Brent D.; Nickerson, Jocelyn S.

    2002-01-01

    Tremendous advances in remote sensing technology and computing power over the last few decades are now providing scientists with the opportunity to investigate, measure, and model environmental patterns and processes with increasing confidence. Such advances are being pursued by the Nebraska Remote Sensing Facility, which consists of approximately 30 faculty members and is very competitive with other institutions in the depth of the work that is accomplished. The development of this facility targeted at applications, commercialization, and education programs in the area of precision agriculture provides a unique opportunity. This critical area is within the scope of the goals and objectives of NASA's Applications, Technology Transfer, Commercialization, and Education Division and the Earth Science Enterprise. This innovative integration of Aerospace (Aeronautics) Technology Enterprise applications with other NASA enterprises serves as a model of cross-enterprise transfer of science with specific commercial applications.

  13. Trace gas detection in hyperspectral imagery using the wavelet packet subspace

    NASA Astrophysics Data System (ADS)

    Salvador, Mark A. Z.

    This dissertation describes research into a new remote sensing method to detect trace gases in hyperspectral and ultra-spectral data. This new method is based on the wavelet packet transform. It attempts to improve both the computational tractability and the detection of trace gases in airborne and spaceborne spectral imagery. Atmospheric trace gas research supports various Earth science disciplines, including climatology, vulcanology, pollution monitoring, natural disasters, and intelligence and military applications. Hyperspectral and ultra-spectral data significantly increase the data glut of existing Earth science data sets. Spaceborne spectral data in particular significantly increase spectral resolution while providing daily global collections of the Earth. Application of the wavelet packet transform to the spectral space of hyperspectral and ultra-spectral imagery data potentially improves remote sensing detection algorithms. It also facilitates the parallelization of these methods for high performance computing. This research pursues two science goals: (1) developing a new spectral imagery detection algorithm, and (2) facilitating the parallelization of trace gas detection in spectral imagery data.
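
    As a rough illustration of what a wavelet packet subspace is, the sketch below recursively splits a synthetic spectrum into packet leaves using PyWavelets. The wavelet choice, decomposition depth, and synthetic spectrum are assumptions for the example; the dissertation's actual detection algorithm is not reproduced here.

```python
import numpy as np
import pywt

def wavelet_packet_leaves(signal, wavelet="db4", depth=3):
    """Recursively split a 1-D spectrum into wavelet packet subspaces.

    Returns a dict mapping a path string ('a'/'d' per level) to the
    coefficient array of that leaf node.
    """
    leaves = {"": np.asarray(signal, dtype=float)}
    for _ in range(depth):
        next_level = {}
        for path, coeffs in leaves.items():
            approx, detail = pywt.dwt(coeffs, wavelet)
            next_level[path + "a"] = approx
            next_level[path + "d"] = detail
        leaves = next_level
    return leaves

# Synthetic "spectrum": smooth background plus a narrow absorption-like feature.
wavenumber = np.linspace(0.0, 1.0, 1024)
spectrum = np.exp(-wavenumber) + 0.2 * np.exp(-((wavenumber - 0.6) / 0.01) ** 2)

for path, coeffs in wavelet_packet_leaves(spectrum).items():
    print(path, coeffs.shape, f"energy={np.sum(coeffs**2):.3f}")
```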

  14. Dehydration of 1-octadecanol over H-BEA: A combined experimental and computational study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Wenji; Liu, Yuanshuai; Barath, Eszter

    Liquid phase dehydration of 1-octadecanol, which is intermediately formed during the hydrodeoxygenation of microalgae oil, has been explored in a combined experimental and computational study. The alkyl chain of the C18 alcohol interacts with acid sites during diffusion inside the zeolite pores, resulting in an inefficient utilization of the Brønsted acid sites for samples with high acid site concentrations. The parallel intra- and intermolecular dehydration pathways having different activation energies pass through alternative reaction intermediates. Formation of surface-bound alkoxide species is the rate-limiting step during intramolecular dehydration, whereas intermolecular dehydration proceeds via a bulky dimer intermediate. Octadecene is the primary dehydration product over H-BEA at 533 K. Despite the main contribution of Brønsted acid sites towards both dehydration pathways, Lewis acid sites are also active in the formation of dioctadecyl ether. The intramolecular dehydration to octadecene and cleavage of the intermediately formed ether, however, require strong BAS. L. Wang, D. Mei and J. A. Lercher acknowledge the partial support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC). EMSL is a national scientific user facility located at Pacific Northwest National Laboratory (PNNL) and sponsored by DOE’s Office of Biological and Environmental Research.

  15. The NHERI RAPID Facility: Enabling the Next-Generation of Natural Hazards Reconnaissance

    NASA Astrophysics Data System (ADS)

    Wartman, J.; Berman, J.; Olsen, M. J.; Irish, J. L.; Miles, S.; Gurley, K.; Lowes, L.; Bostrom, A.

    2017-12-01

    The NHERI post-disaster, rapid response research (or "RAPID") facility, headquartered at the University of Washington (UW), is a collaboration between UW, Oregon State University, Virginia Tech, and the University of Florida. The RAPID facility will enable natural hazard researchers to conduct next-generation quick response research through reliable acquisition and community sharing of high-quality, post-disaster data sets that will enable characterization of civil infrastructure performance under natural hazard loads, evaluation of the effectiveness of current and previous design methodologies, understanding of socio-economic dynamics, calibration of computational models used to predict civil infrastructure component and system response, and development of solutions for resilient communities. The facility will provide investigators with the hardware, software and support services needed to collect, process and assess perishable interdisciplinary data following extreme natural hazard events. Support to the natural hazards research community will be provided through training and educational activities, field deployment services, and by promoting public engagement with science and engineering. Specifically, the RAPID facility is undertaking the following strategic activities: (1) acquiring, maintaining, and operating state-of-the-art data collection equipment; (2) developing and supporting mobile applications to support interdisciplinary field reconnaissance; (3) providing advisory services and basic logistics support for research missions; (4) facilitating the systematic archiving, processing and visualization of acquired data in DesignSafe-CI; (5) training a broad user base through workshops and other activities; and (6) engaging the public through citizen science, as well as through community outreach and education. The facility commenced operations in September 2016 and will begin field deployments in September 2018. This poster will provide an overview of the vision for the RAPID facility, the equipment that will be available for use, the facility's operations, and opportunities for user training and facility use.

  16. Experimental Physical Sciences Vistas: MaRIE (draft)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shlachter, Jack

    To achieve breakthrough scientific discoveries in the 21st century, a convergence and integration of world-leading experimental facilities and capabilities with theory, modeling, and simulation is necessary. In this issue of Experimental Physical Sciences Vistas, I am excited to present our plans for Los Alamos National Laboratory's future flagship experimental facility, MaRIE (Matter-Radiation Interactions in Extremes). MaRIE is a facility that will provide transformational understanding of matter in extreme conditions required to reduce or resolve key weapons performance uncertainties, develop the materials needed for advanced energy systems, and transform our ability to create materials by design. Our unique role in materials science starting with the Manhattan Project has positioned us well to develop a contemporary materials strategy pushing the frontiers of controlled functionality - the design and tailoring of a material for the unique demands of a specific application. Controlled functionality requires improvement in understanding of the structure and properties of materials in order to synthesize and process materials with unique characteristics. In the nuclear weapons program today, improving data and models to increase confidence in the stockpile can take years from concept to new knowledge. Our goal with MaRIE is to accelerate this process by enhancing predictive capability - the ability to compute a priori the observables of an experiment or test and pertinent confidence intervals using verified and validated simulation tools. It is a science-based approach that includes the use of advanced experimental tools, theoretical models, and multi-physics codes, simultaneously dealing with multiple aspects of physical operation of a system that are needed to develop an increasingly mature predictive capability. This same approach is needed to accelerate improvements to other systems such as nuclear reactors. MaRIE will be valuable to many national security science challenges. Our first issue of Vistas focused on our current national user facilities (the Los Alamos Neutron Science Center [LANSCE], the National High Magnetic Field Laboratory-Pulsed Field Facility, and the Center for Integrated Nanotechnologies) and the vitality they bring to our Laboratory. These facilities are a magnet for students, postdoctoral researchers, and staff members from all over the world. This, in turn, allows us to continue to develop and maintain our strong staff across the relevant disciplines and conduct world-class discovery science. The second issue of Vistas was devoted entirely to the Laboratory's materials strategy - one of the three strategic science thrusts for the Laboratory. This strategy has helped focus our thinking for MaRIE. We believe there is a bright future in cutting-edge experimental materials research, and that a 21st-century facility with unique capability is necessary to fulfill this goal. The Laboratory has spent the last several years defining MaRIE, and this issue of Vistas presents our current vision of that facility. MaRIE will leverage LANSCE and our other user facilities, as well as our internal and external materials community for decades to come, giving Los Alamos a unique competitive advantage, advancing materials science for the Laboratory's missions and attracting and recruiting scientists of international stature.
MaRIE will give the international materials research community a suite of tools capable of meeting a broad range of outstanding grand challenges.

  17. Life sciences space station planning document: A reference payload for the exobiology research facilities

    NASA Technical Reports Server (NTRS)

    1987-01-01

    The Cosmic Dust Collection and Gas Grain Simulation Facilities represent collaborative efforts between the Life Sciences and Solar System Exploration Divisions designed to strengthen a natural exobiology/Planetary Sciences connection. The Cosmic Dust Collection Facility is a Planetary Science facility, with Exobiology a primary user. Conversely, the Gas Grain Simulation Facility is an exobiology facility, with Planetary Science a primary user. Requirements for the construction and operation of the two facilities, contained herein, were developed through joint workshops between the two disciplines, as were representative experiments comprising the reference payloads. In the case of the Gas Grain Simulation Facility, the Astrophysics Division is an additional potential user, having participated in the workshop to select experiments and define requirements.

  18. Environmental control and life support systems analysis for a Space Station life sciences animal experiment

    NASA Technical Reports Server (NTRS)

    So, Kenneth T.; Hall, John B., Jr.; Thompson, Clifford D.

    1987-01-01

    NASA's Langley and Goddard facilities have evaluated the effects of animal science experiments on the Space Station's Environmental Control and Life Support System (ECLSS) by means of computer-aided analysis, assuming an animal colony consisting of 96 rodents and eight squirrel monkeys. Thirteen ECLSS options were established for the reclamation of metabolic oxygen and waste water. Minimum cost and weight impacts on the ECLSS are found to accrue to the system's operation in off-nominal mode, using electrochemical CO2 removal and a static feed electrolyzer for O2 generation.

  19. Inner-shell photoionization of atomic chlorine near the 2p-1 edge: a Breit-Pauli R-matrix calculation

    NASA Astrophysics Data System (ADS)

    Felfli, Z.; Deb, N. C.; Manson, S. T.; Hibbert, A.; Msezane, A. Z.

    2009-05-01

    An R-matrix calculation which takes into account relativistic effects via the Breit-Pauli (BP) operator is performed for photoionization cross sections of atomic Cl near the 2p threshold. The wavefunctions are constructed with orbitals generated from a careful large scale configuration interaction (CI) calculation with relativistic corrections using the CIV3 code of Hibbert [1] and Glass and Hibbert [2]. The results are contrasted with the calculation of Martins [3], which uses a CI with relativistic corrections, and compared with the most recent measurements [4]. [1] A. Hibbert, Comput. Phys. Commun. 9, 141 (1975) [2] R. Glass and A. Hibbert, Comput. Phys. Commun. 16, 19 (1978) [3] M. Martins, J. Phys. B 34, 1321 (2001) [4] D. Lindle et al (private communication) Research supported by U.S. DOE, Division of Chemical Sciences, NSF and CAU CFNM, NSF-CREST Program. Computing facilities at Queen's University of Belfast, UK and of DOE Office of Science, NERSC are appreciated.

  20. Computational methods in the exploration of the classical and statistical mechanics of celestial scale strings: Rotating Space Elevators

    NASA Astrophysics Data System (ADS)

    Knudsen, Steven; Golubovic, Leonardo

    2015-04-01

    With the advent of ultra-strong materials, the Space Elevator has changed from science fiction to real science. We discuss computational and theoretical methods we developed to explore the classical and statistical mechanics of rotating Space Elevators (RSE). An RSE is a loopy string reaching deep into outer space. The floppy RSE loop executes a motion which is nearly a superposition of two rotations: a geosynchronous rotation around the Earth, and a faster rotation of the string about a line perpendicular to the Earth at its equator. Strikingly, objects sliding along the RSE loop spontaneously oscillate between two turning points, one close to the Earth (the starting point) and the other deep in outer space. The RSE concept thus solves a major problem in space elevator science, namely how to supply energy to the climbers moving along space elevator strings. The exploration of the dynamics of a floppy string interacting with objects sliding along it has required the development of novel finite element algorithms, described in this presentation. We thank Prof. Duncan Lorimer of WVU for kindly providing us access to his computational facility.
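
    The abstract describes the RSE motion as approximately a superposition of two rotations. The following is a minimal kinematic sketch of that composition only, ignoring the string elasticity and the finite element dynamics the authors actually solve; the angular rates, radii, and axis choices are illustrative assumptions.

```python
import numpy as np

def rot_z(angle):
    """Rotation about the z axis (taken here as Earth's spin axis)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(angle):
    """Rotation about the x axis (taken here as the radial line at the
    equator about which the loop spins)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rse_point_position(t, r_center=5.0e7, loop_radius=1.0e7,
                       omega_geo=7.2921e-5, omega_loop=1.0e-3):
    """Approximate position (meters) of one material point of the loop.

    r_center    : geocentric distance of the loop's reference center (placeholder)
    loop_radius : offset of the chosen point from that center (placeholder)
    omega_geo   : geosynchronous angular rate [rad/s]
    omega_loop  : assumed faster spin of the loop about the radial line [rad/s]
    """
    offset = rot_x(omega_loop * t) @ np.array([0.0, loop_radius, 0.0])
    local = np.array([r_center, 0.0, 0.0]) + offset
    return rot_z(omega_geo * t) @ local

for t in (0.0, 600.0, 1200.0):
    print(t, rse_point_position(t))
```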

  1. Ames life science telescience testbed evaluation

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Johnson, Vicki; Vogelsong, Kristofer H.; Froloff, Walt

    1989-01-01

    Eight surrogate spaceflight mission specialists participated in a real-time evaluation of remote coaching using the Ames Life Science Telescience Testbed facility. This facility consisted of three remotely located nodes: (1) a prototype Space Station glovebox; (2) a ground control station; and (3) a principal investigator's (PI) work area. The major objective of this project was to evaluate the effectiveness of telescience techniques and hardware to support three realistic remote coaching science procedures: plant seed germinator charging, plant sample acquisition and preservation, and remote plant observation with ground coaching. Each scenario was performed by a subject acting as flight mission specialist, interacting with a payload operations manager and a principal investigator expert. All three groups were physically isolated from each other yet linked by duplex audio and color video communication channels and networked computer workstations. Workload ratings were made by the flight and ground crewpersons immediately after completing their assigned tasks. Time to complete each scientific procedural step was recorded automatically. Two expert observers also made performance ratings and various error assessments. The results are presented and discussed.

  2. Life sciences utilization of Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Chambers, Lawrence P.

    1992-01-01

    Space Station Freedom will provide the United States' first permanently manned laboratory in space. It will allow, for the first time, long term systematic life sciences investigations in microgravity. This presentation provides a top-level overview of the planned utilization of Space Station Freedom by NASA's Life Sciences Division. The historical drivers for conducting life sciences research on a permanently manned laboratory in space as well as the advantages that a space station platform provides for life sciences research are discussed. This background information leads into a description of NASA's strategy for having a fully operational International Life Sciences Research Facility by the year 2000. Achieving this capability requires the development of the five discipline-focused 'common core' facilities. Once developed, these facilities will be brought to the space station during the Man-Tended Capability phase, checked out and brought into operation. Their delivery must be integrated with the Space Station Freedom manifest. At the beginning of Permanent Manned Capability, the infrastructure is expected to be completed and the Life Sciences Division's SSF Program will become fully operational. A brief facility description, anticipated launch date, and focused objective are provided for each of the life sciences facilities, including the Biomedical Monitoring and Countermeasures (BMAC) Facility, Gravitational Biology Facility (GBF), Gas Grain Simulation Facility (GGSF), Centrifuge Facility (CF), and Controlled Ecological Life Support System (CELSS) Test Facility. In addition, hardware developed by other NASA organizations and the SSF International Partners for an International Life Sciences Research Facility is also discussed.

  3. New Trends in E-Science: Machine Learning and Knowledge Discovery in Databases

    NASA Astrophysics Data System (ADS)

    Brescia, Massimo

    2012-11-01

    Data mining, or Knowledge Discovery in Databases (KDD), while being the main methodology to extract the scientific information contained in Massive Data Sets (MDS), must tackle crucial problems, since it has to orchestrate complex challenges posed by transparent access to different computing environments, scalability of algorithms, and reusability of resources. To achieve a leap forward for the progress of e-science in the data avalanche era, the community needs to implement an infrastructure capable of performing data access, processing and mining in a distributed but integrated context. The increasing complexity of modern technologies has led to a huge production of data, whose warehouse management, together with the need to optimize analysis and mining procedures, is changing how modern science is conceived. Classical data exploration, based on a user's own local data storage and limited computing infrastructure, is no longer efficient in the case of MDS spread worldwide over inhomogeneous data centres and requiring teraflop processing power. In this context, modern experimental and observational science requires a good understanding of computer science, network infrastructures, data mining, etc., i.e. of all those techniques which fall into the domain of so-called e-science (recently assessed also by the Fourth Paradigm of Science). Such understanding is almost completely absent in the older generations of scientists, and this is reflected in the inadequacy of most academic and research programs. A paradigm shift is needed: statistical pattern recognition, object-oriented programming, distributed computing, and parallel programming need to become an essential part of the scientific background. A possible practical solution is to provide the research community with easy-to-understand, easy-to-use tools, based on Web 2.0 technologies and machine learning methodology. These are tools where almost all the complexity is hidden from the final user, but which are still flexible and able to produce efficient and reliable scientific results. All these considerations are described in detail in this chapter. Moreover, examples of modern applications offering a wide variety of e-science communities a large spectrum of computational facilities to exploit the wealth of available massive data sets and powerful machine learning and statistical algorithms are also introduced.

  4. Materials Science

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Materials Science Program is structured so that NASA's headquarters is responsible for the program content and selection, through the Enterprise Scientist, and MSFC provides for implementation of ground and flight programs with a Discipline Scientist and Discipline Manager. The Discipline Working Group of eminent scientists from outside of NASA acts in an advisory capacity and writes the Discipline Document from which the NRA content is derived. The program is reviewed approximately every three years by groups such as the Committee on Microgravity Research, the National Materials Advisory Board, and the OBPR Maximization and Prioritization (ReMaP) Task Force. The flight program has had as many as twenty-six principal investigators (PIs) in flight or flight definition stage, with the numbers of PIs in the future dependent on the results of the ReMaP Task Force and internal reviews. Each project has a NASA-appointed Project Scientist, considered a half-time job, who assists the PI in understanding and preparing for internal reviews such as the Science Concept Review and Requirements Definition Review. The Project Scientist also ensures that the PI gets the maximum science support from MSFC, represents the PI to the MSFC community, and collaborates with the Project Manager to ensure the project is well-supported and remains vital. Currently available flight equipment includes the Materials Science Research Rack (MSRR-1) and Microgravity Science Glovebox. Ground-based projects fall into one or more of several categories. Intellectual Underpinning of Flight Program projects include theoretical studies backed by modeling and computer simulations; bring to maturity new research, often by young researchers, and may include preliminary short duration low gravity experiments in the KC-135 aircraft or drop tube; enable characterization of data sets from previous flights; and provide thermophysical property determinations to aid PIs. Radiation Shielding and preliminary In Situ Resource Utilization (ISRU) studies work towards future long duration missions. Biomaterials support materials issues affecting crew health. Nanostructured Materials are currently considered to be maturing new research, and Advanced Materials for Space Transportation has as yet no PIs. PIs are assigned a NASA Technical Monitor to maintain contact, a position considered to be a 5 percent per PI effort. Currently 33 PIs are supported on the 1996 NRA, which is about to expire, and 59 on the 1998 NRA. Two new NRAs, one for Radiation Shielding and one for Materials Science for Advanced Space Propulsion, are due to be announced by the 2003 fiscal year. MSFC has a number of facilities supporting materials science. These include the Microgravity Development Laboratory/SD43; Electrostatic Levitator Facility; SCN Purification Facility; Electron Microscope/Microprobe Facility; Static and Rotating Magnetic Field Facility; X-Ray Diffraction Facility; and the Furnace Development Laboratory.

  5. Life science payloads planning study integration facility survey results

    NASA Technical Reports Server (NTRS)

    Wells, G. W.; Brown, N. E.; Nelson, W. G.

    1976-01-01

    The integration facility survey effort described is structured to examine the facility resources needed to conduct life science payload (LSP) integration checkout activities at NASA-JSC. The LSP integration facility operations and functions are defined along with the LSP requirements for facility design. A description of available JSC life science facilities is presented and a comparison of accommodations versus requirements is reported.

  6. Use of a personal computer for the real-time reception and analysis of data from a sounding rocket experiment

    NASA Technical Reports Server (NTRS)

    Herrick, W. D.; Penegor, G. T.; Cotton, D. M.; Kaplan, G. C.; Chakrabarti, S.

    1990-01-01

    In September 1988 the Earth and Planetary Atmospheres Group of the Space Sciences Laboratory of the University of California at Berkeley flew an experiment on a high-altitude sounding rocket launched from the NASA Wallops Flight Facility in Virginia. The experiment, BEARS (Berkeley EUV Airglow Rocket Spectrometer), was designed to obtain spectroscopic data on the composition and structure of the earth's upper atmosphere. Consideration is given to the objectives of the BEARS experiment; the computer interface and software; the use of remote data transmission; and calibration, integration, and flight operations.

  7. Biological and Environmental Research Network Requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balaji, V.; Boden, Tom; Cowley, Dave

    2013-09-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet be a highly successful enabler of scientific discovery for over 25 years. In November 2012, ESnet and the Office of Biological and Environmental Research (BER) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the BER program office. Several key findings resulted from the review. Among them: 1) The scale of data sets available to science collaborations continues to increase exponentially. This has broad impact, both on the network and on the computational and storage systems connected to the network. 2) Many science collaborations require assistance to cope with the systems and network engineering challenges inherent in managing the rapid growth in data scale. 3) Several science domains operate distributed facilities that rely on high-performance networking for success. Key examples illustrated in this report include the Earth System Grid Federation (ESGF) and the Systems Biology Knowledgebase (KBase). This report expands on these points, and addresses others as well. The report contains a findings section as well as the text of the case studies discussed at the review.

  8. The Australian Science Facilities Program: A Study of Its Influence on Science Education in Australian Schools.

    ERIC Educational Resources Information Center

    Ainley, John G.

    This report is a study conducted by the Australian Council for Educational Research to evaluate the influence of science material resources, provided under the Australian Science Facilities Program, on science education in Australia. Under the Australian Science Facilities Program some $123 million was spent, between July 1964 and June 1975, on…

  9. TomoBank: a tomographic data repository for computational x-ray science

    NASA Astrophysics Data System (ADS)

    De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; Joost Batenburg, K.; Ludwig, Wolfgang; Mancini, Lucia; Marone, Federica; Mokso, Rajmund; Pelt, Daniël M.; Sijbers, Jan; Rivers, Mark

    2018-03-01

    There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology have made sub-second and multi-energy tomographic data collection possible (Gibbs et al 2015 Sci. Rep. 5 11824), but have also increased the demand to develop new reconstruction methods able to handle in situ (Pelt and Batenburg 2013 IEEE Trans. Image Process. 22 5238-51) and dynamic systems (Mohan et al 2015 IEEE Trans. Comput. Imaging 1 96-111) that can be quickly incorporated in beamline production software (Gürsoy et al 2014 J. Synchrotron Radiat. 21 1188-93). The x-ray tomography data bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
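
    tomoBank datasets are distributed as HDF5 files; the sketch below shows how such a file can be opened with h5py. The file name and the exact group paths ('/exchange/data', '/exchange/theta') are assumptions based on the Data Exchange convention and should be checked against each dataset's descriptor in the repository.

```python
import h5py
import numpy as np

def load_tomo_hdf5(path):
    """Read projections, flats, darks, and angles from a Data Exchange
    style HDF5 file.

    Assumed layout (verify against the dataset descriptor):
      /exchange/data        projections, shape (n_angles, n_rows, n_cols)
      /exchange/data_white  flat-field images
      /exchange/data_dark   dark-field images
      /exchange/theta       projection angles in degrees
    """
    with h5py.File(path, "r") as f:
        proj = f["exchange/data"][()]
        flat = f["exchange/data_white"][()]
        dark = f["exchange/data_dark"][()]
        theta = np.deg2rad(f["exchange/theta"][()])
    return proj, flat, dark, theta

# Hypothetical file name, for illustration only:
# proj, flat, dark, theta = load_tomo_hdf5("tomo_00001.h5")
# print(proj.shape, theta.size)
```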

  10. Heterolysis of H2 Across a Classical Lewis Pair, 2,6-Lutidine-BCl3: Synthesis, Characterization, and Mechanism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ginovska-Pangovska, Bojana; Autrey, Thomas; Parab, Kshitij K.

    We report on a combined computational and experimental study of the activation of hydrogen using 2,6-lutidine (Lut)/BCl3 Lewis pairs. Herein we describe the synthetic approach used to obtain a new FLP, Lut-BCl3, that activates molecular H2 at ~10 bar, 100 °C in toluene or lutidine as the solvent. The resulting compound is an unexpected neutral hydride, LutBHCl2, rather than the ion pair, which we attribute to ligand redistribution. The mechanism for activation was modeled with density functional theory and accurate G3(MP2)B3 theory. The dative bond in Lut-BCl3 is calculated to have a bond enthalpy of 15 kcal/mol. The separated pair is calculated to react with H2 and form the [LutH+][HBCl3–] ion pair with a barrier of 13 kcal/mol. Metathesis with LutBCl3 produces LutBHCl2 and [LutH][BCl4]. The overall reaction is exothermic by 8.5 kcal/mol. An alternative pathway was explored involving a lutidine–borenium cation pair activating H2. This work was supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Division of Chemical Sciences, Biosciences, and Geosciences, and was performed in part using the Molecular Science Computing Facility (MSCF) in the William R. Wiley Environmental Molecular Sciences Laboratory, a DOE national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at the Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle for DOE.

  11. Japan signs Ocean Agreement

    NASA Astrophysics Data System (ADS)

    The Ocean Research Institute of the University of Tokyo and the National Science Foundation (NSF) have signed a Memorandum of Understanding for cooperation in the Ocean Drilling Program (ODP). The agreement calls for Japanese participation in ODP and an annual contribution of $2.5 million in U.S. currency for the project's 9 remaining years, according to NSF.ODP is an international project whose mission is to learn more about the formation and development of the earth through the collection and examination of core samples from beneath the ocean. The program uses the drillship JOIDES Resolution, which is equipped with laboratories and computer facilities. The Joint Oceanographic Institutions for Deep Earth Sampling (JOIDES), an international group of scientists, provides overall science planning and program advice regarding ODP's science goals and objectives.

  12. The Practical Obstacles of Data Transfer: Why researchers still love scp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T

    The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low enough to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in practice, researchers often under-utilize the network and resort to painfully slow single-stream transfer methods such as scp to avoid the complexity of using multiple-stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE supported computing facilities, including both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggested optimizations, and discuss the obstacles the tools must overcome to receive widespread adoption over scp.

  13. Large Scale Computing and Storage Requirements for High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

  14. HPC Annual Report 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennig, Yasmin

    Sandia National Laboratories has a long history of significant contributions to the high performance computing community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier, propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) Prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment and application code performance.

  15. Performance evaluation of the Engineering Analysis and Data Systems (EADS) 2

    NASA Technical Reports Server (NTRS)

    Debrunner, Linda S.

    1994-01-01

    The Engineering Analysis and Data System (EADS) II (1) was installed in March 1993 to provide high performance computing for science and engineering at Marshall Space Flight Center (MSFC). EADS II increased the computing capabilities over the existing EADS facility in the areas of throughput and mass storage. EADS II includes a Vector Processor Compute System (VPCS), a Virtual Memory Compute System (CFS), a Common Output System (COS), as well as an Image Processing Station, Mini Super Computers, and Intelligent Workstations. These facilities are interconnected by a sophisticated network system. This work considers only the performance of the VPCS and the CFS. The VPCS is a Cray YMP. The CFS is implemented on an RS 6000 using the UniTree Mass Storage System. To better meet the science and engineering computing requirements, EADS II must be monitored, its performance analyzed, and appropriate modifications for performance improvement made. Implementing this approach requires tool(s) to assist in performance monitoring and analysis. In Spring 1994, PerfStat 2.0 was purchased to meet these needs for the VPCS and the CFS. PerfStat (2) is a set of tools that can be used to analyze both historical and real-time performance data. Its flexible design allows significant user customization. The user identifies what data is collected, how it is classified, and how it is displayed for evaluation. Both graphical and tabular displays are supported. The capability of the PerfStat tool was evaluated, appropriate modifications to EADS II to optimize throughput and enhance productivity were suggested and implemented, and the effects of these modifications on the system's performance were observed. In this paper, the PerfStat tool is described, then its use with EADS II is outlined briefly. Next, the evaluation of the VPCS, as well as the modifications made to the system, are described. Finally, conclusions are drawn and recommendations for future work are outlined.

  16. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crabtree, George; Glotzer, Sharon; McCurdy, Bill

    This report is based on an SC Workshop on Computational Materials Science and Chemistry for Innovation, held on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of abating, has enabled the development of computer simulations and models of unprecedented fidelity. We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power. In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing. Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness.
The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop brought together 160 experts in materials science, chemistry, and computational science representing more than 65 universities, laboratories, and industries, and four agencies. The workshop examined seven foundational challenge areas in materials science and chemistry: materials for extreme conditions, self-assembly, light harvesting, chemical reactions, designer fluids, thin films and interfaces, and electronic structure. Each of these challenge areas is critical to the development of advanced energy systems, and each can be accelerated by the integrated application of predictive capability with theory and experiment. The workshop concluded that emerging capabilities in predictive modeling and simulation have the potential to revolutionize the development of new materials and chemical processes. Coupled with world-leading materials characterization and nanoscale science facilities, this predictive capability provides the foundation for an innovation ecosystem that can accelerate the discovery, development, and deployment of new technologies, including advanced energy systems. Delivering on the promise of this innovation ecosystem requires the following: (1) Integration of synthesis, processing, characterization, theory, and simulation and modeling. Many of the newly established Energy Frontier Research Centers and Energy Hubs are exploiting this integration. (2) Achieving/strengthening predictive capability in foundational challenge areas. Predictive capability in the seven foundational challenge areas described in this report is critical to the development of advanced energy technologies. (3) Developing validated computational approaches that span vast differences in time and length scales. This fundamental computational challenge crosscuts all of the foundational challenge areas. Similarly challenging is the coupling of analytical data from the multiple instruments and techniques required to link these length and time scales. (4) Experimental validation and quantification of uncertainty in simulation and modeling. Uncertainty quantification becomes increasingly challenging as simulations become more complex. (5) Robust and sustainable computational infrastructure, including software and applications. For modeling and simulation, software equals infrastructure. To validate the computational tools, software is critical infrastructure that effectively translates huge arrays of experimental data into useful scientific understanding. An integrated approach for managing this infrastructure is essential. (6) Efficient transfer and incorporation of simulation-based engineering and science in industry. Strategies for bridging the gap between research and industrial applications and for widespread industry adoption of integrated computational materials engineering are needed.

  17. Are Case Studies a Good Teaching Tool for CS1?

    DTIC Science & Technology

    1995-01-01

    old AP/CS tests to compare our students’ performance against the results obtained by ETS. Currently, the introductory courses at CMU are taught using...Carrasquel, J., Goldenson, D. & Miller, P. L. (1985). Competency Testing in Introductory Computer Science: The Mastery Examination at Carnegie Mellon... courses is that many places do not have enough facilities (or the necessary time) required for long programming assignments. In our opinion, using case

  18. SOAR User’s Manual.

    DTIC Science & Technology

    1986-01-31

    <el>) (problem-space (p1) tname eight-puzzle) (state (sI> tblank-binding (bi> tbinding tbinding f 0 b1 )> (operator <o1) tname move-tile...that the ordenng algorithm will use in breaking ties between competing conditions. B1 . increasing the depth, the ordered productions can sometimes be...12 Copies) Computer Science Department Providence, RI 02912 ERIC Facility-Acquisitions 4833 Rugby Avenue Dr. Michelene Chi Bethesda, MD 20014 Learning

  19. Role of High-End Computing in Meeting NASA's Science and Engineering Challenges

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Tu, Eugene L.; Van Dalsem, William R.

    2006-01-01

    Two years ago, NASA was on the verge of dramatically increasing its HEC capability and capacity. With the 10,240-processor supercomputer, Columbia, now in production for 18 months, HEC has an even greater impact within the Agency, extending to partner institutions. Advanced science and engineering simulations in space exploration, shuttle operations, Earth sciences, and fundamental aeronautics research are occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. This talk describes how the integrated production environment fostered at the NASA Advanced Supercomputing (NAS) facility at Ames Research Center is accelerating scientific discovery, achieving parametric analyses of multiple scenarios, and enhancing safety for NASA missions. We focus on Columbia's impact on two key engineering and science disciplines: aerospace and climate. We also discuss future mission challenges and plans for NASA's next-generation HEC environment.

  20. Science Facilities for Mississippi Schools, Grades 1-12.

    ERIC Educational Resources Information Center

    Mississippi State Dept. of Education, Jackson. Div. of Instruction.

    Prepared to assist those planning the construction of new science facilities on the elementary, intermediate, or secondary school level. Standards are outlined and specifications detailed. A statement of fifteen general principles for planning science facilities in secondary schools precedes a discussion of--(1) special facilities for different…

  1. Charter for the ARM Climate Research Facility Science Board

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrell, W

    The objective of the ARM Science Board is to promote the Nation’s scientific enterprise by ensuring that the best quality science is conducted at the DOE’s User Facility known as the ARM Climate Research Facility. The goal of the User Facility is to serve scientific researchers by providing unique data and tools to facilitate scientific applications for improving understanding and prediction of climate science.

  2. First-principles characterization of formate and carboxyl adsorption on the stoichiometric CeO2(111) and CeO2(110) surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Donghai

    2013-05-20

    Molecular adsorption of formate and carboxyl on the stoichiometric CeO2(111) and CeO2(110) surfaces was studied using periodic density functional theory (DFT+U) calculations. Two distinguishable adsorption modes (strong and weak) of formate are identified. The bidentate configuration is more stable than the monodentate adsorption configuration. Both formate and carboxyl bind more strongly at the more open CeO2(110) surface. The calculated vibrational frequencies of the two adsorbed species are consistent with experimental measurements. Finally, the effects of U parameters on the adsorption of formate and carboxyl over both CeO2 surfaces were investigated. We found that the geometrical configurations of the two adsorbed species are not affected by using different U parameters (U=0, 5, and 7). However, the calculated adsorption energy of carboxyl increases markedly with the U value, while the adsorption energy of formate only slightly changes (<0.2 eV). Bader charge analysis shows that opposite charge transfer occurs for formate and carboxyl adsorption: the adsorbed formate is negatively charged while the adsorbed carboxyl is positively charged. Interestingly, the amount of transferred charge also increases with the U parameter. This work was supported by the Laboratory Directed Research and Development (LDRD) project of the Pacific Northwest National Laboratory (PNNL) and by a Cooperative Research and Development Agreement (CRADA) with General Motors. The computations were performed using the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), which is a U.S. Department of Energy national scientific user facility located at PNNL in Richland, Washington. Part of the computing time was also granted by the National Energy Research Scientific Computing Center (NERSC).
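    For readers comparing the adsorption energies quoted above, the quantity is conventionally defined from DFT total energies of the combined and separated systems; the generic form below is an assumption about the convention used (sign conventions vary between studies):

```latex
% Generic adsorption-energy definition (sign convention varies by study)
E_{\mathrm{ads}} = E_{\mathrm{adsorbate+slab}} - E_{\mathrm{slab}} - E_{\mathrm{adsorbate}}
% A more negative E_ads indicates stronger binding; the <0.2 eV variation with U
% quoted above refers to changes in this quantity for formate.
```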

  3. Exploring the role of pendant amines in transition metal complexes for the reduction of N2 to hydrazine and ammonia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Papri; Prokopchuk, Demyan E.; Mock, Michael T.

    2017-03-01

    This review examines the synthesis and acid reactivity of transition metal dinitrogen complexes bearing diphosphine ligands containing pendant amine groups in the second coordination sphere. This manuscript is a review of the work performed in the Center for Molecular Electrocatalysis. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR studies on Fe were performed using EMSL, a national scientific user facility sponsored by the DOE’s Office of Biological and Environmental Research and located at PNNL. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.

  4. ASCR Cybersecurity for Scientific Computing Integrity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piesert, Sean

    The Department of Energy (DOE) has the responsibility to address the energy, environmental, and nuclear security challenges that face our nation. Much of DOE’s enterprise involves distributed, collaborative teams; a significant fraction involves “open science,” which depends on multi-institutional, often international collaborations that must access or share significant amounts of information between institutions and over networks around the world. The mission of the Office of Science is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security of the United States. The ability of DOE to execute its responsibilities depends critically on its ability to assure the integrity and availability of scientific facilities and computer systems, and of the scientific, engineering, and operational software and data that support its mission.

  5. Early development of Science Opportunity Analysis tools for the Jupiter Icy Moons Explorer (JUICE) mission

    NASA Astrophysics Data System (ADS)

    Cardesin Moinelo, Alejandro; Vallat, Claire; Altobelli, Nicolas; Frew, David; Llorente, Rosario; Costa, Marc; Almeida, Miguel; Witasse, Olivier

    2016-10-01

    JUICE is the first large mission in the framework of ESA's Cosmic Vision 2015-2025 program. JUICE will survey the Jovian system with a special focus on three of the Galilean Moons: Europa, Ganymede and Callisto. The mission has recently been adopted and significant efforts are being made by the Science Operations Center (SOC) at the European Space and Astronomy Centre (ESAC) in Madrid for the development of tools to provide the necessary support to the Science Working Team (SWT) for science opportunity analysis and early assessment of science operation scenarios. This contribution will outline some of the tools being developed within ESA and in collaboration with the Navigation and Ancillary Information Facility (NAIF) at JPL. The Mission Analysis and Payload Planning Support (MAPPS) is developed by ESA and has been used by most of ESA's planetary missions to generate and validate science observation timelines for the simulation of payload and spacecraft operations. MAPPS has the capability to compute and display all the necessary geometrical information such as the distances, illumination angles and projected field-of-view of an imaging instrument on the surface of the given body, and a preliminary setup is already in place for the early assessment of JUICE science operations. NAIF provides valuable SPICE support to the JUICE mission and several tools are being developed to compute and visualize science opportunities. In particular the WebGeoCalc and Cosmographia systems are provided by NAIF to compute time windows and create animations of the observation geometry available via traditional SPICE data files, such as planet orbits, spacecraft trajectory, spacecraft orientation, instrument field-of-view "cones" and instrument footprints. Other software tools are being developed by ESA and other collaborating partners to support the science opportunity analysis for all missions, like the SOLab (Science Operations Laboratory) or new interfaces for observation definitions and opportunity window databases.
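    As a rough illustration of the kind of geometry computation that tools such as WebGeoCalc and Cosmographia automate from SPICE data, the sketch below uses the spiceypy wrapper around the NAIF SPICE toolkit. The metakernel name, the epoch, and the use of "JUICE" as an observer ID are placeholders, not actual mission products.

```python
# Illustrative geometry computation with SPICE via spiceypy (pip install spiceypy).
# "metakernel.tm" is a placeholder; real JUICE kernels are distributed by ESA/NAIF.
import spiceypy as spice

spice.furnsh("metakernel.tm")               # load ephemerides and leap seconds
et = spice.str2et("2031 JAN 01 12:00:00")   # convert UTC to ephemeris time

# Position of Ganymede relative to the spacecraft (placeholder ID "JUICE"),
# corrected for light time and stellar aberration.
pos, lt = spice.spkpos("GANYMEDE", et, "J2000", "LT+S", "JUICE")
distance_km = spice.vnorm(pos)
print(f"Range to Ganymede: {distance_km:.0f} km")

spice.kclear()
```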

  6. NHERI: Advancing the Research Infrastructure of the Multi-Hazard Community

    NASA Astrophysics Data System (ADS)

    Blain, C. A.; Ramirez, J. A.; Bobet, A.; Browning, J.; Edge, B.; Holmes, W.; Johnson, D.; Robertson, I.; Smith, T.; Zuo, D.

    2017-12-01

    The Natural Hazards Engineering Research Infrastructure (NHERI), supported by the National Science Foundation (NSF), is a distributed, multi-user national facility that provides the natural hazards research community with access to an advanced research infrastructure. NHERI comprises a Network Coordination Office (NCO), a cloud-based cyberinfrastructure (DesignSafe-CI), a computational modeling and simulation center (SimCenter), and eight Experimental Facilities (EFs), including a post-disaster, rapid response research facility (RAPID). Ultimately, NHERI enables researchers to explore and test ground-breaking concepts to protect homes, businesses and infrastructure lifelines from earthquakes, windstorms, tsunamis, and storm surge, enabling innovations that help prevent natural hazards from becoming societal disasters. When coupled with education and community outreach, NHERI will facilitate research and educational advances that contribute knowledge and innovation toward improving the resiliency of the nation's civil infrastructure to withstand natural hazards. The unique capabilities and coordinating activities over Year 1 between NHERI's DesignSafe-CI, the SimCenter, and individual EFs will be presented. Basic descriptions of each component are also found at https://www.designsafe-ci.org/facilities/. Also to be discussed are the various roles of the NCO in leading development of a 5-year multi-hazard science plan, coordinating facility scheduling and fostering the sharing of technical knowledge and best practices, leading education and outreach programs such as the recent Summer Institute and multi-facility REU program, ensuring a platform for technology transfer to practicing engineers, and developing strategic national and international partnerships to support a diverse multi-hazard research and user community.

  7. Doing Your Science While You're in Orbit

    NASA Astrophysics Data System (ADS)

    Green, Mark L.; Miller, Stephen D.; Vazhkudai, Sudharshan S.; Trater, James R.

    2010-11-01

    Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick- and thin-client and its supporting Service Oriented Architecture (SOA)-based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for accessing high performance computing, data and computational grid infrastructure, and cluster-based resources easily from a user configurable interface. The primary Orbiter system goals consist of (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. The secure Orbiter SOA authentication and authorization is achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDMS) consists of approximately 45 database tables describing 498 user accounts with 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated with the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files. Services are currently available that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) data repository NeXus file field/class name full text search capabilities within a Google-like interface, (c) a fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group-defined and shared metadata for data repository files, and (e) user, group, repository, and web 2.0-based global positioning, with additional service capabilities. The SNS-based Orbiter SOA integration progress with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, DANSE's use of the Orbiter SOA authentication, authorization, and data transfer best-practice implementations is presented.
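    The Role-Based Access Control (RBAC) idea used by the VFS services can be sketched in a few lines; the roles, users, and permissions below are hypothetical and are not drawn from the Orbiter VFS schema.

```python
# Minimal role-based access control (RBAC) sketch in the spirit of the VFS
# services described above; roles, users, and actions are hypothetical.
ROLE_PERMISSIONS = {
    "instrument_scientist": {"read", "write"},
    "external_user": {"read"},
}

USER_ROLES = {
    "alice": {"instrument_scientist"},
    "bob": {"external_user"},
}

def is_allowed(user: str, action: str) -> bool:
    """Return True if any of the user's roles grants the requested action."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_allowed("alice", "write")
assert not is_allowed("bob", "write")
```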

  8. ISCR Annual Report: Fiscal Year 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw, J R

    2005-03-03

    Large-scale scientific computation and all of the disciplines that support and help to validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of computational simulation as a tool of scientific and engineering research is underscored in the November 2004 statement of the Secretary of Energy that "high performance computing is the backbone of the nation's science and technology enterprise". LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use efficiently. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to LLNL's core missions than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In Fiscal Year 2004, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for short- and long-term visits with the aim of encouraging long-term academic research agendas that address LLNL's research priorities. Through such collaborations, ideas and software flow in both directions, and LLNL cultivates its future workforce. The Institute strives to be LLNL's "eyes and ears" in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the "feet and hands" that carry those advances into the Laboratory and incorporate them into practice. ISCR research participants are integrated into LLNL's Computing and Applied Research (CAR) Department, especially into its Center for Applied Scientific Computing (CASC). In turn, these organizations address computational challenges arising throughout the rest of the Laboratory. Administratively, the ISCR flourishes under LLNL's University Relations Program (URP). Together with the other five institutes of the URP, it navigates a course that allows LLNL to benefit from academic exchanges while preserving national security. While it is difficult to operate an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and worth the continued effort.

  9. Strengthening programs in science, engineering and mathematics. Third annual progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, S.S.

    1997-09-30

    The Division of Natural Sciences and Mathematics at Claflin College consists of the Departments of Biology, Chemistry, Computer Science, Physics, Engineering and Mathematics. It offers a variety of major and minor academic programs designed to meet the mission and objectives of the college. The division's pursuit of excellence in science education is adversely impacted by the poor academic preparation of entering students and the lack of equipment, facilities, and research participation required to impart adequate academic training and laboratory skills to the students. Funds were received from the US Department of Energy to improve the divisional facilities and laboratory equipment and establish mechanisms at the pre-college and college levels to increase (1) the pool of high school students who will enroll in Science and Mathematics courses, (2) the pool of well-qualified college freshmen who will seek careers in Science, Engineering and Mathematics, (3) the graduation rate in Science, Engineering and Mathematics at the undergraduate level, and (4) the pool of well-qualified students who can successfully compete to enter the graduate schools of their choice in the fields of science, engineering, and mathematics. The strategies that were used to achieve the mentioned objectives include: (1) Improved Mentoring and Advisement, (2) Summer Science Camp for 7th and 8th graders, (3) Summer Research Internships for Claflin SEM Seniors, (4) Summer Internships for Rising High School Seniors, (5) Development of Mathematical Skills at Pre-college/Post-secondary Levels, (6) Expansion of Undergraduate Seminars, (7) Exposure of Undergraduates to Guest Speakers/Role Models, (8) Visitations by Undergraduate Students to Graduate Schools, and (9) Expanded Academic Program in Environmental Chemistry.

  10. Automating CapCom: Pragmatic Operations and Technology Research for Human Exploration of Mars

    NASA Technical Reports Server (NTRS)

    Clancey, William J.

    2003-01-01

    During the Apollo program, NASA and the scientific community used terrestrial analog sites for understanding planetary features and for training astronauts to be scientists. More recently, computer scientists and human factors specialists have followed geologists and biologists into the field, learning how science is actually done on expeditions in extreme environments. Research stations have been constructed by the Mars Society in the Arctic and American southwest, providing facilities for hundreds of researchers to investigate how small crews might live and work on Mars. Combining these interests (science, operations, and technology) in Mars analog field expeditions provides tremendous synergy and authenticity to speculations about Mars missions. By relating historical analyses of Apollo and field science, engineers are creating experimental prototypes that provide significant new capabilities, such as a computer system that automates some of the functions of Apollo's CapCom. Thus, analog studies have created a community of practice, a new collaboration between scientists and engineers, so that technology begins with real human needs and works incrementally towards the challenges of the human exploration of Mars.

  11. Mechanisms and Dynamics of Abiotic and Biotic Interactions at Environmental Interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roso, Kevin M.

    The Stanford EMSI (SEMSI) was established in 2004 through joint funding by the National Science Foundation and the OBER-ERSD. It encompasses a number of universities and national laboratories. The PNNL component of the SEMSI is funded by ERSD and is the focus of this report. This component has the objective of providing theory support to the SEMSI by bringing computational capabilities and expertise to bear on important electron transfer problems at mineral/water and mineral/microbe interfaces. PNNL staff member Dr. Kevin Rosso, who is also "matrixed" into the Environmental Molecular Sciences Laboratory (EMSL) at PNNL, is a co-PI on the SEMSI project and the PNNL lead. The EMSL computational facilities being applied to the SEMSI project include the 11.8 teraflop massively parallel supercomputer. Science goals of this EMSL/SEMSI partnership include advancing our understanding of: (1) The kinetics of U(VI) and Cr(VI) reduction by aqueous and solid-phase Fe(II), (2) The structure of mineral surfaces in equilibrium with solution, and (3) Mechanisms of bacterial electron transfer to iron oxide surfaces via outer-membrane cytochromes.

  12. International Instrumentation Symposium, 34th, Albuquerque, NM, May 2-6, 1988, Proceedings

    NASA Astrophysics Data System (ADS)

    Various papers on aerospace instrumentation are presented. The general topics addressed include: blast and shock, wind tunnel instrumentations and controls, digital/optical sensors, software design/development, special test facilities, fiber optic techniques, electro/fiber optical measurement systems, measurement uncertainty, real time systems, pressure. Also discussed are: flight test and avionics instrumentation, data acquisition techniques, computer applications, thermal force and displacement, science and government, modeling techniques, reentry vehicle testing, strain and pressure.

  13. The Microgravity Science Glovebox

    NASA Technical Reports Server (NTRS)

    Baugher, Charles R.; Primm, Lowell (Technical Monitor)

    2001-01-01

    The Microgravity Science Glovebox (MSG) provides scientific investigators the opportunity to implement interactive experiments on the International Space Station. The facility has been designed around the concept of an enclosed scientific workbench that allows the crew to assemble and operate an experimental apparatus with participation from ground-based scientists through real-time data and video links. Workbench utilities provided to operate the experiments include power, data acquisition, computer communications, vacuum, nitrogen, and specialized tools. Because the facility work area is enclosed and held at a negative pressure with respect to the crew living area, the requirements on the experiments for containment of small parts, particulates, fluids, and gases are substantially reduced. This environment allows experiments to be constructed in close parallel with bench-type investigations performed in ground-based laboratories. Such an approach enables experimental scientists to develop hardware that more closely parallels their traditional laboratory experience and to transfer these experiments into meaningful space-based research. When delivered to the ISS, the MSG will represent a significant scientific capability that will be continuously available for a decade of evolutionary research.

  14. Data Crosscutting Requirements Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleese van Dam, Kerstin; Shoshani, Arie; Plata, Charity

    2013-04-01

    In April 2013, a diverse group of researchers from the U.S. Department of Energy (DOE) scientific community assembled to assess data requirements associated with DOE-sponsored scientific facilities and large-scale experiments. Participants in the review included facilities staff, program managers, and scientific experts from the offices of Basic Energy Sciences, Biological and Environmental Research, High Energy Physics, and Advanced Scientific Computing Research. As part of the meeting, review participants discussed key issues associated with three distinct aspects of the data challenge: 1) processing, 2) management, and 3) analysis. These discussions identified commonalities and differences among the needs of varied scientific communities. They also helped to articulate gaps between current approaches and future needs, as well as the research advances that will be required to close these gaps. Moreover, the review provided a rare opportunity for experts from across the Office of Science to learn about their collective expertise, challenges, and opportunities. The "Data Crosscutting Requirements Review" generated specific findings and recommendations for addressing large-scale data crosscutting requirements.

  15. Sharing Responsibility for Data Stewardship Between Scientists and Curators

    NASA Astrophysics Data System (ADS)

    Hedstrom, M. L.

    2012-12-01

    Data stewardship is becoming increasingly important to support accurate conclusions from new forms of data, integration of and computation across heterogeneous data types, interactions between models and data, replication of results, data governance and long-term archiving. In addition to increasing recognition of the importance of data management, data science, and data curation by US and international scientific agencies, the National Academies of Science Board on Research Data and Information is sponsoring a study on Data Curation Education and Workforce Issues. Effective data stewardship requires a distributed effort among scientists who produce data, IT staff and/or vendors who provide data storage and computational facilities and services, and curators who enhance data quality, manage data governance, provide access to third parties, and assume responsibility for long-term archiving of data. The expertise necessary for scientific data management includes a mix of knowledge of the scientific domain; an understanding of domain data requirements, standards, ontologies and analytical methods; facility with leading edge information technology; and knowledge of data governance, standards, and best practices for long-term preservation and access that rarely are found in a single individual. Rather than developing data science and data curation as new and distinct occupations, this paper examines the set of tasks required for data stewardship. The paper proposes an alternative model that embeds data stewardship in scientific workflows and coordinates hand-offs between instruments, repositories, analytical processing, publishers, distributors, and archives. This model forms the basis for defining knowledge and skill requirements for specific actors in the processes required for data stewardship and the corresponding educational and training needs.

  16. Computer vision applications for coronagraphic optical alignment and image processing.

    PubMed

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.
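    A minimal sketch of the feature-extraction-and-grouping idea follows; it is not the Gemini Planet Imager pipeline, and the synthetic frame, threshold, and use of connected-component labeling as the clustering step are illustrative assumptions.

```python
# Toy example: detect bright spots in a calibration frame and group nearby
# pixels into features, in the spirit of the techniques described above.
import numpy as np
from scipy import ndimage

frame = np.zeros((64, 64))
frame[10, 12] = frame[11, 12] = 1.0   # synthetic "satellite spot"
frame[40, 45] = frame[40, 46] = 1.0   # second spot

# Feature extraction: threshold, then label connected components (a simple
# stand-in for clustering nearby bright pixels into features).
mask = frame > 0.5
labels, n_features = ndimage.label(mask)
centroids = ndimage.center_of_mass(frame, labels, range(1, n_features + 1))

print(f"Found {n_features} features at {centroids}")
```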

  17. Computationally intensive econometrics using a distributed matrix-programming language.

    PubMed

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.
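    Level (i), in which the parallelism is explicit in the user's code, can be illustrated outside Ox with a generic Monte Carlo example; the estimator, batch sizes, and seeds below are arbitrary placeholders. The per-batch seeds echo the paper's emphasis on deterministic computing, since the result is reproducible regardless of how batches are scheduled.

```python
# Explicit, code-level parallelism (analogous to level (i) above), illustrated
# with a generic Monte Carlo estimate distributed over worker processes.
import random
from multiprocessing import Pool

def estimate_pi(seed: int, n_samples: int) -> float:
    """One worker's batch; the fixed seed keeps the run reproducible."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_samples))
    return 4.0 * hits / n_samples

if __name__ == "__main__":
    batches = [(seed, 200_000) for seed in range(8)]   # eight independent, seeded batches
    with Pool() as pool:                               # one worker process per core by default
        estimates = pool.starmap(estimate_pi, batches)
    print(sum(estimates) / len(estimates))
```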

  18. UCSB FEL user-mode adaption project. Final report, 1 Jan 86-31 Dec 90

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaccarino, V.

    1992-04-14

    This research, sponsored by the SDIO Biomedical and Materials Sciences FEL Program, had the following objectives. Provide a facility in which in-house and outside user research in the materials and biological sciences can be carried out in the Far Infrared using the unique properties of the UCSB electrostatic accelerator-driven FEL. Develop and implement new FEL concepts and FIR technology and encourage the transfer and application of this research. Train graduate students, postdoctoral researchers and technical personnel in varied aspects of scientific user disciplines, FEL science and FIR technology in a cooperative, interdisciplinary environment. In summary, a free electron laser facility has been developed that is operational from 200 GHz (6.6 cm-1) to 4.8 THz (160 cm-1), tunable under computer control, and able to deliver kilowatts of millimeter-wave and far-infrared power. This facility has a well-equipped user lab that has been used to perform ground-breaking experiments in scientific areas as diverse as biophysics. Nine graduate students and postdoctoral researchers have been trained in the operation, use and application of these free-electron lasers.
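    The quoted tuning range can be checked with the standard frequency-to-wavenumber conversion, which is all the two sets of units express:

```latex
\tilde{\nu} = \frac{f}{c}, \qquad
\frac{200\ \mathrm{GHz}}{2.998\times10^{10}\ \mathrm{cm\,s^{-1}}} \approx 6.7\ \mathrm{cm^{-1}}, \qquad
\frac{4.8\ \mathrm{THz}}{2.998\times10^{10}\ \mathrm{cm\,s^{-1}}} \approx 160\ \mathrm{cm^{-1}}
```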

  19. Remote monitoring of a Fire Protection System

    NASA Astrophysics Data System (ADS)

    Bauman, Steven; Vermeulen, Tom; Roberts, Larry; Matsushige, Grant; Gajadhar, Sarah; Taroma, Ralph; Elizares, Casey; Arruda, Tyson; Potter, Sharon; Hoffman, James

    2011-03-01

    Some years ago, CFHT proposed developing a Remote Observing Environment aimed at producing science observations at its observatory facility on Mauna Kea from its headquarters facility in Waimea, HI. This remote observing project, commonly referred to as OAP (Observatory Automation Project), was completed at the end of January 2011 and has provided the majority of science data since. This poster discusses the upgrades to the existing fire alarm protection system. With no one at the summit during nightly operations, the observatory facility required automated monitoring for the safety of personnel and equipment in case of a fire. An addressable analog fire panel was installed that utilizes a digital communication protocol (DCP), intelligent communication with other devices, and an RS-232 interface providing feedback and real-time monitoring of the system. Using the interface capabilities of the panel, it provides notifications when heat detectors, smoke sensors, manual pull stations, or the main observatory computer room fire suppression system have been activated. Notifications are sent to staff as text messages and emails, and the observing control GUI alerts the remote telescope operator with a map showing the location of the fire and the type of device that was triggered. All of this was accomplished without the need for an outside vendor to monitor the system or to issue warnings and notifications regarding the system.
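    A rough sketch of the serial monitoring and alerting loop described above is given below; the port name, baud rate, "ALARM" keyword, SMTP host, and addresses are all invented for illustration, since the real panel protocol is vendor-specific.

```python
# Hypothetical RS-232 monitor that forwards fire-panel events as email alerts.
# The port, baud rate, message format, and addresses are placeholders.
import smtplib
from email.message import EmailMessage

import serial  # pyserial

def send_alert(event_text: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = "Fire panel event"
    msg["From"] = "fire-panel@example.org"
    msg["To"] = "operator@example.org"
    msg.set_content(event_text)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=5) as port:
    while True:
        line = port.readline().decode(errors="replace").strip()
        if line and "ALARM" in line:        # vendor-specific keyword, assumed
            send_alert(line)
```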

  20. Computer-Aided Facilities Management Systems (CAFM).

    ERIC Educational Resources Information Center

    Cyros, Kreon L.

    Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…

  1. Facilitymetrics for Big Ocean Science: Towards Improved Measurement of Scientific Impact

    NASA Astrophysics Data System (ADS)

    Juniper, K.; Owens, D.; Moran, K.; Pirenne, B.; Hallonsten, O.; Matthews, K.

    2016-12-01

    Cabled ocean observatories are examples of "Big Science" facilities requiring significant public investments for installation and ongoing maintenance. Large observatory networks in Canada and the United States, for example, have been established after extensive up-front planning and hundreds of millions of dollars in start-up costs. As such, they are analogous to particle accelerators and astronomical observatories, which may often be required to compete for public funding in an environment of ever-tightening national science budget allocations. Additionally, the globalization of Big Science compels these facilities to respond to increasing demands for demonstrable productivity, excellence and competitiveness. How should public expenditures on "Big Science" facilities be evaluated and justified in terms of benefits to the countries that invest in them? Published literature counts are one quantitative measure often highlighted in the annual reports of large science facilities. But, as recent research has demonstrated, publication counts can lead to distorted characterizations of scientific impact, inviting evaluators to calculate scientific outputs in terms of costs per publication—a ratio that can be simplistically misconstrued to conclude Big Science is wildly expensive. Other commonly promoted measurements of Big Science facilities include technical reliability (a.k.a. uptime), provision of training opportunities for Highly Qualified Personnel, generation of commercialization opportunities, and so forth. "Facilitymetrics" is a new empirical focus for scientometrical studies, which has been applied to the evaluation and comparison of synchrotron facilities. This paper extends that quantitative and qualitative examination to a broader inter-disciplinary comparison of Big Science facilities in the ocean science realm to established facilities in the fields of astronomy and particle physics.

  2. Facilitymetrics for Big Ocean Science: Towards Improved Measurement of Scientific Impact

    NASA Astrophysics Data System (ADS)

    Juniper, K.; Owens, D.; Moran, K.; Pirenne, B.; Hallonsten, O.; Matthews, K.

    2016-02-01

    Cabled ocean observatories are examples of "Big Science" facilities requiring significant public investments for installation and ongoing maintenance. Large observatory networks in Canada and the United States, for example, have been established after extensive up-front planning and hundreds of millions of dollars in start-up costs. As such, they are analogous to particle accelerators and astronomical observatories, which may often be required to compete for public funding in an environment of ever-tightening national science budget allocations. Additionally, the globalization of Big Science compels these facilities to respond to increasing demands for demonstrable productivity, excellence and competitiveness. How should public expenditures on "Big Science" facilities be evaluated and justified in terms of benefits to the countries that invest in them? Published literature counts are one quantitative measure often highlighted in the annual reports of large science facilities. But, as recent research has demonstrated, publication counts can lead to distorted characterizations of scientific impact, inviting evaluators to calculate scientific outputs in terms of costs per publication—a ratio that can be simplistically misconstrued to conclude Big Science is wildly expensive. Other commonly promoted measurements of Big Science facilities include technical reliability (a.k.a. uptime), provision of training opportunities for Highly Qualified Personnel, generation of commercialization opportunities, and so forth. "Facilitymetrics" is a new empirical focus for scientometrical studies, which has been applied to the evaluation and comparison of synchrotron facilities. This paper extends that quantitative and qualitative examination to a broader inter-disciplinary comparison of Big Science facilities in the ocean science realm to established facilities in the fields of astronomy and particle physics.

  3. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Maeno, T

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data-driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing at over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS on supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
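    The light-weight MPI wrapper idea, fanning independent single-threaded payloads across a node's cores, can be sketched with mpi4py; the payload executable and input-file naming are placeholders rather than the actual PanDA pilot code.

```python
# Sketch of an MPI wrapper that runs one independent, single-threaded payload
# per rank, in the spirit of the approach described above (mpi4py + subprocess).
# The payload command and file naming are placeholders.
import subprocess
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank processes its own input file with a serial executable.
input_file = f"events_{rank:04d}.dat"
result = subprocess.run(["./simulate_payload", input_file],
                        capture_output=True, text=True)

# Gather return codes on rank 0 so the wrapper can report overall success.
codes = comm.gather(result.returncode, root=0)
if rank == 0:
    print("all payloads succeeded" if all(c == 0 for c in codes) else codes)
```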

  4. Promotional effect of surface hydroxyls on electrochemical reduction of CO2 over SnOx/Sn electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Chaonan; Han, Jinyu; Zhu, Xinli

    Tin oxide (SnOx) formation on tin-based electrode surfaces during CO2 electrochemical reduction can have a significant impact on the activity and selectivity of the reaction. In the present study, density functional theory (DFT) calculations have been performed to understand the role of SnOx in CO2 reduction using a SnO monolayer on the Sn(112) surface as a model for SnOx. Water molecules have been treated explicitly and considered to actively participate in the reaction. The results showed that H2O dissociates on the perfect SnO monolayer into two hydroxyl groups symmetrically on the surface. CO2 energetically prefers to react with the hydroxyl, forming a bicarbonate (HCO3(t)*) intermediate, which can then be reduced to either formate (HCOO*) by hydrogenating the carbon atom or carboxyl (COOH*) by protonating the oxygen atom. Both steps involve simultaneous C-O bond breaking. Further reduction of HCOO* species leads to the formation of formic acid in the acidic solution at pH < 4, while COOH* decomposes to CO and H2O via protonation. Although an oxygen vacancy (VO) may be formed by reduction of the monolayer, it can be recovered by H2O dissociation, resulting in two embedded hydroxyl groups. However, the hydroxylated surface with two symmetric hydroxyls is energetically more favorable for CO2 reduction than the hydroxylated VO surface with two embedded hydroxyls. The former has a limiting potential of -0.20 V (RHE), less negative than that of the latter (-0.74 V (RHE)). Compared to the pure Sn electrode, the formation of the SnOx monolayer under operating conditions promotes CO2 reduction more effectively by forming surface hydroxyls, thereby providing a new channel to CO formation via COOH*, although formic acid is still the major reduction product. The work was supported in part by the National Natural Science Foundation of China (Grants #21373148 and #21206117). The High Performance Computing Center of Tianjin University is acknowledged for providing services to the computing cluster. CC acknowledges the support of the China Scholarship Council (CSC). QG acknowledges the support of the NSF-CBET program (Award no. CBET-1438440). DM was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. The computations were performed in part using the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), which is a U.S. Department of Energy national scientific user facility located at Pacific Northwest National Laboratory (PNNL) in Richland, Washington.
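    The limiting potentials quoted above follow the usual computational hydrogen electrode bookkeeping, in which the applied potential U shifts the free energy of each proton-electron transfer step; the convention sketched below is an assumption, since the abstract does not spell it out:

```latex
\Delta G_i(U) = \Delta G_i(0) + eU, \qquad
U_{\mathrm{L}} = -\frac{\max_i \Delta G_i(0)}{e}
% A less negative U_L (-0.20 V vs. -0.74 V RHE above) means the most uphill
% elementary step is smaller, so reduction proceeds at a milder applied potential.
```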

  5. GPU-Accelerated Large-Scale Electronic Structure Theory on Titan with a First-Principles All-Electron Code

    NASA Astrophysics Data System (ADS)

    Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina

    Density-functional theory has been well established as the dominant quantum-mechanical computational method in the materials community. Large, accurate simulations become very challenging on small to mid-scale computers and require high-performance computing platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and shows an overall runtime speedup of 1.4x from utilization of the K20X Tesla GPUs on each Titan node, with the charge density update showing a speedup of 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL managed by UT-Battelle, LLC, for the U.S. DOE and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
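    The relation between the component and overall speedups quoted above is the usual Amdahl-type accounting. As an illustrative consistency check only (the actual runtime fractions are not given in the abstract; p = 0.6 is an assumed value), a fraction p of the runtime accelerated by a factor s gives:

```latex
S_{\mathrm{overall}} = \frac{1}{(1-p) + p/s}, \qquad
p = 0.6,\ s = 2 \;\Rightarrow\; S_{\mathrm{overall}} \approx 1.43
% roughly consistent with the reported 1.4x overall and 2x density-update speedups.
```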

  6. Active Flow Control in an Aggressive Transonic Diffuser

    NASA Astrophysics Data System (ADS)

    Skinner, Ryan W.; Jansen, Kenneth E.

    2017-11-01

    A diffuser exchanges upstream kinetic energy for higher downstream static pressure by increasing duct cross-sectional area. The resulting stream-wise and span-wise pressure gradients promote extensive separation in many diffuser configurations. The present computational work evaluates active flow control strategies for separation control in an asymmetric, aggressive diffuser of rectangular cross-section at inlet Mach 0.7 and Re 2.19M. Corner suction is used to suppress secondary flows, and steady/unsteady tangential blowing controls separation on both the single ramped face and the opposite flat face. We explore results from both Spalart-Allmaras RANS and DDES turbulence modeling frameworks; the former is found to miss key physics of the flow control mechanisms. Simulated baseline, steady, and unsteady blowing performance is validated against experimental data. Funding was provided by Northrop Grumman Corporation, and this research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
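    The opening statement can be quantified with the ideal pressure-recovery relation that follows from Bernoulli's equation plus continuity; it is only an incompressible, inviscid idealization, cited here for orientation, since the flow described above is transonic and separated:

```latex
C_p \equiv \frac{p_2 - p_1}{\tfrac{1}{2}\rho u_1^2}
      = 1 - \left(\frac{A_1}{A_2}\right)^2
% Larger exit-to-inlet area ratio raises the ideal static-pressure recovery;
% separation is what keeps real, aggressive diffusers well below this limit.
```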

  7. Use of Computed Tomography for Characterizing Materials Grown Terrestrially and in Microgravity

    NASA Technical Reports Server (NTRS)

    Gillies, Donald C.; Engel, H. P.

    2001-01-01

    The purpose behind this work is to provide NASA Principal Investigators (PIs) rapid information, nondestructively, about their samples. This information will be in the form of density values throughout the samples, especially within slices 1 mm high. With correct interpretation and good calibration, these values will enable the PI to obtain macro chemical compositional analysis for his/her samples. Alternatively, the technique will provide information about the porosity level and its distribution within the sample. Experience gained with a NASA Microgravity Research Division-sponsored Advanced Technology Development (ATD) project on this topic has brought the technique to a level of maturity at which it has become a viable characterization tool for many of the Materials Science Pls, but with equipment that could never be supported within their own facilities. The existing computed tomography (CT) facility at NASA's Kennedy Space Center (KSC) is ideally situated to furnish information rapidly and conveniently to PIs, particularly immediately before and after flight missions.
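    The "correct interpretation and good calibration" step can be thought of, in the simplest case, as a linear mapping from the reconstructed CT attenuation value to density, fitted against reference standards; the form below is a generic assumption, not the specific KSC calibration procedure:

```latex
\rho \approx a\,\mu_{\mathrm{CT}} + b
% a and b are obtained by scanning calibration phantoms of known density at the
% same energy and reconstruction settings as the sample.
```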

  8. Use of Computed Tomography for Characterizing Materials Grown Terrestrially and in Microgravity

    NASA Technical Reports Server (NTRS)

    Gillies, Donald C.; Engel, H. P.

    2000-01-01

    The purpose behind this work is to provide NASA Principal Investigators (PI) rapid information, non-destructively, about their samples. This information will be in the form of density values throughout the samples, especially within slices 1 mm high. With correct interpretation and good calibration, these values will enable the PI to obtain macro chemical compositional analysis for his/her samples. Alternatively, the technique will provide information about the porosity level and its distribution within the sample. Experience gained with a NASA MRD-sponsored Advanced Technology Development (ATD) project on this topic has brought the technique to a level of maturity at which it has become a viable characterization tool for many of the Materials Science PIs, but with equipment that could never be supported within their own facilities. The existing computed tomography (CT) facility at NASA's Kennedy Space Center (KSC) is ideally situated to furnish information rapidly and conveniently to PIs, particularly immediately before and after flight missions.

  9. Office of Science User Facilities Summary Report, Fiscal Year 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-01-01

    The U.S. Department of Energy Office of Science provides the Nation’s researchers with world-class scientific user facilities to propel the U.S. to the forefront of science and innovation. A user facility is a federally sponsored research facility available for external use to advance scientific or technical knowledge under the following conditions: open, accessible, free, collaborative, competitive, and unique.

  10. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks, such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation’s Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy’s Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for “Development of the Next-Generation CAVE Virtual Environment (NG-CAVE),” enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications enabled by the CAVE2/Blaze visual computing system are advancing scientific research and education in the U.S. and globally and helping to train the next-generation workforce.

  11. Accelerating scientific discovery : 2007 annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, P.; Dave, P.; Drugan, C.

    2008-11-14

    As a gateway for scientific discovery, the Argonne Leadership Computing Facility (ALCF) works hand in hand with the world's best computational scientists to advance research in a diverse span of scientific domains, ranging from chemistry, applied mathematics, and materials science to engineering physics and life sciences. Sponsored by the U.S. Department of Energy's (DOE) Office of Science, researchers are using the IBM Blue Gene/L supercomputer at the ALCF to study and explore key scientific problems that underlie important challenges facing our society. For instance, a research team at the University of California-San Diego/SDSC is studying the molecular basis of Parkinson's disease. The researchers plan to use the knowledge they gain to discover new drugs to treat the disease and to identify risk factors for other diseases that are equally prevalent. Likewise, scientists from Pratt & Whitney are using the Blue Gene to understand the complex processes within aircraft engines. Expanding our understanding of jet engine combustors is the secret to improved fuel efficiency and reduced emissions. Lessons learned from the scientific simulations of jet engine combustors have already led Pratt & Whitney to newer designs with unprecedented reductions in emissions, noise, and cost of ownership. ALCF staff members provide in-depth expertise and assistance to those using the Blue Gene/L and optimizing user applications. Both the Catalyst and Applications Performance Engineering and Data Analytics (APEDA) teams support users' projects. In addition to working with scientists running experiments on the Blue Gene/L, we have become a nexus for the broader global community. In partnership with the Mathematics and Computer Science Division at Argonne National Laboratory, we have created an environment where the world's most challenging computational science problems can be addressed. Our expertise in high-end scientific computing enables us to provide guidance for applications that are transitioning to petascale, as well as to produce software that facilitates their development, such as the MPICH library, which provides a portable and efficient implementation of the MPI standard--the prevalent programming model for large-scale scientific applications--and the PETSc toolkit, which provides a programming paradigm that eases the development of many scientific applications on high-end computers.
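    The MPICH library mentioned above implements the MPI message-passing standard. The following minimal sketch uses the mpi4py Python bindings (which run over an MPI implementation such as MPICH) to show the style of parallelism these applications build on; it is illustrative and not code from any of the projects described.

    ```python
    # Minimal MPI example via mpi4py; launch with e.g.:  mpiexec -n 4 python pi_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank integrates its share of 4/(1+x^2) over [0, 1]; the total approximates pi.
    n = 1_000_000
    local = 0.0
    for i in range(rank, n, size):
        x = (i + 0.5) / n
        local += 4.0 / (1.0 + x * x)
    local /= n

    pi_estimate = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"pi is approximately {pi_estimate:.6f}")
    ```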

  12. Snapshots of Proton Accommodation at a Microscopic Water Surface: Understanding the Vibrational Spectral Signatures of the Charge Defect in Cryogenically Cooled H+(H2O)n=2 – 28 Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, Joseph A.; Wolke, Conrad T.; Johnson, Mark A.

    In this Article, we review the role of gas-phase, size-selected protonated water clusters, H+(H2O)n, in the analysis of the microscopic mechanics responsible for the behavior of the excess proton in bulk water. We extend upon previous studies of the smaller, two-dimensional sheet-like structures to larger (n≥10) assemblies with three-dimensional cage morphologies which better mimic the bulk environment. Indeed, clusters in which a complete second solvation shell forms around a surface-embedded hydronium ion yield vibrational spectra where the signatures of the proton defect display strikingly similar positions and breadth to those observed in dilute acids. We investigate the effects of the local structure and intermolecular interactions on the large red shifts observed in the proton vibrational signature upon cluster growth using various theoretical methods. We show that, in addition to sizeable anharmonic couplings, the position of the excess proton vibration can be traced to large increases in the electric field exerted on the embedded hydronium ion upon formation of the first and second solvation shells. MAJ acknowledges support from the U.S. Department of Energy under Grant No. DE-FG02-06ER15800 as well as the facilities and staff of the Yale University Faculty of Arts and Sciences High Performance Computing Center, and by the National Science Foundation under Grant No. CNS 08-21132 that partially funded acquisition of the facilities. SMK and SSX acknowledge support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  13. Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities

    ERIC Educational Resources Information Center

    Atkinson, John S.; Spenneman, Dirk H. R.; Cornforth, David

    2005-01-01

    Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratory facilities into a distributed high performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…
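    A sketch of the kind of utilisation analysis such log-in records support is given below; the record format, machine names, and opening hours are assumptions made for illustration, not data from the study.

    ```python
    # Hypothetical idle-time estimate from lab-computer login records (illustrative data).
    from datetime import datetime

    # (machine, login, logout) -- assumed sample records
    sessions = [
        ("lab1-pc01", "2005-03-01 09:00", "2005-03-01 10:30"),
        ("lab1-pc01", "2005-03-01 14:00", "2005-03-01 15:00"),
        ("lab1-pc02", "2005-03-01 11:00", "2005-03-01 11:45"),
    ]
    OPEN_HOURS = 12.0  # assumed hours per day each machine is powered on

    fmt = "%Y-%m-%d %H:%M"
    used = {}
    for machine, start, end in sessions:
        hours = (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600
        used[machine] = used.get(machine, 0.0) + hours

    for machine, hours in sorted(used.items()):
        print(f"{machine}: {hours:.1f} h in use, {OPEN_HOURS - hours:.1f} h potentially available for cluster jobs")
    ```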

  14. The relationship between science classroom facility conditions and ninth grade students' attitudes toward science

    NASA Astrophysics Data System (ADS)

    Ford, Angela Y.

    Over half of the school facilities in America are in poor condition. Unsatisfactory school facilities have a negative impact on teaching and learning. The purpose of this correlational study was to identify the relationship between high school science teachers' perceptions of the school science environment (instructional equipment, demonstration equipment, and physical facilities) and ninth grade students' attitudes about science through their expressed enjoyment of science, importance of time spent on science, and boredom with science. A sample of 11,523 cases was extracted, after a process of data mining, from a databank of over 24,000 nationally representative ninth graders located throughout the United States. The instrument used to survey these students was part of the High School Longitudinal Study of 2009 (HSLS:2009). The research design was multiple linear regression. The results showed a significant relationship between the science classroom conditions and students' attitudes. Demonstration equipment and physical facilities were the best predictors of effects on students' attitudes. Conclusions based on this study and recommendations for future research are made.
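    The multiple linear regression design described above amounts to fitting an attitude score against the three facility predictors. The sketch below shows the form of such a fit on synthetic data; the variable names, data, and coefficients are invented and are not the HSLS:2009 variables or results.

    ```python
    # Illustrative multiple linear regression (synthetic data, not HSLS:2009).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    instructional = rng.normal(size=n)   # stand-ins for the three facility predictors
    demonstration = rng.normal(size=n)
    physical      = rng.normal(size=n)
    attitude = 0.1 * instructional + 0.3 * demonstration + 0.4 * physical + rng.normal(size=n)

    X = np.column_stack([np.ones(n), instructional, demonstration, physical])
    coef, *_ = np.linalg.lstsq(X, attitude, rcond=None)
    print("intercept and slopes:", np.round(coef, 3))
    ```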

  15. Astronomy and astrophysics for the 1980's. Volume 1 - Report of the Astronomy Survey Committee. Volume 2 - Reports of the Panels

    NASA Astrophysics Data System (ADS)

    Recommended priorities for astronomy and astrophysics in the 1980s are considered along with the frontiers of astrophysics, taking into account large-scale structure in the universe, the evolution of galaxies, violent events, the formation of stars and planets, solar and stellar activity, astronomy and the forces of nature, and planets, life, and intelligence. Approved, continuing, and previously recommended programs are related to the Space Telescope and the associated Space Telescope Science Institute, second-generation instrumentation for the Space Telescope, the Gamma Ray Observatory, facilities for the detection of solar neutrinos, and the Shuttle Infrared Telescope Facility. Attention is given to the prerequisites for new research initiatives, new programs, programs for study and development, high-energy astrophysics, radio astronomy, theoretical and laboratory astrophysics, data processing and computational facilities, organization and education, and ultraviolet, optical, and infrared astronomy.

  16. White paper: A plan for cooperation between NASA and DARPA to establish a center for advanced architectures

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.

    1986-01-01

    Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.

  17. Science Facilities Bibliography.

    ERIC Educational Resources Information Center

    National Science Foundation, Washington, DC.

    A bibliographic collection on science buildings and facilities is cited with many different reference sources for those concerned with the design, planning, and layout of science facilities. References are given covering a broad scope of information on--(1) physical plant planning, (2) management and safety, (3) building type studies, (4) design…

  18. Preparation for microgravity - The role of the Microgravity Material Science Laboratory

    NASA Technical Reports Server (NTRS)

    Johnston, J. Christopher; Rosenthal, Bruce N.; Meyer, Maryjo B.; Glasgow, Thomas K.

    1988-01-01

    Experiments at the NASA Lewis Research Center's Microgravity Material Science Laboratory using physical and mathematical models to delineate the effects of gravity on processes of scientific and commercial interest are discussed. Where possible, transparent model systems are used to visually track convection, settling, crystal growth, phase separation, agglomeration, vapor transport, diffusive flow, and polymer reactions. Materials studied include metals, alloys, salts, glasses, ceramics, and polymers. Specific technologies discussed include the General Purpose furnace used in the study of metals and crystal growth, the isothermal dendrite growth apparatus, the electromagnetic levitator/instrumented drop tube, the high temperature directional solidification furnace, the ceramics and polymer laboratories and the center's computing facilities.

  19. WFIRST: Microlensing Analysis Data Challenge

    NASA Astrophysics Data System (ADS)

    Street, Rachel; WFIRST Microlensing Science Investigation Team

    2018-01-01

    WFIRST will produce thousands of high-cadence, high-photometric-precision lightcurves of microlensing events, from which a wealth of planetary and stellar systems will be discovered. However, the analysis of such lightcurves has historically been very time consuming and expensive in both labor and computing facilities. This poses a potential bottleneck to deriving the full science potential of the WFIRST mission. To address this problem, the WFIRST Microlensing Science Investigation Team is designing a series of data challenges to stimulate research addressing outstanding problems of microlensing analysis. These range from the classification and modeling of triple-lens events to methods to efficiently yet thoroughly search a high-dimensional parameter space for the best-fitting models.
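    For context, the simplest model an analysis code must fit to such lightcurves is the point-source point-lens (PSPL) magnification curve sketched below; the parameter values are arbitrary and serve only to illustrate the shape of the fitting problem, not any challenge dataset.

    ```python
    # Point-source point-lens (PSPL) microlensing magnification; parameters are arbitrary examples.
    import numpy as np

    def pspl_magnification(t, t0, u0, tE):
        """Paczynski magnification A(u), with u(t) = sqrt(u0^2 + ((t - t0)/tE)^2)."""
        u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
        return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

    t = np.linspace(-40.0, 40.0, 201)                    # days from an arbitrary reference epoch
    A = pspl_magnification(t, t0=0.0, u0=0.1, tE=20.0)   # impact parameter 0.1, Einstein time 20 d
    print(f"peak magnification ~ {A.max():.1f}")
    ```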

  20. Brief Survey of TSC Computing Facilities

    DOT National Transportation Integrated Search

    1972-05-01

    The Transportation Systems Center (TSC) has four, essentially separate, in-house computing facilities. We shall call them the Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...

  1. Science and Technology Facility | Photovoltaic Research | NREL

    Science.gov Websites

    Describes NREL's Science and Technology Facility, which supports solar cell and thin-film PV research, including front- and back-contact schemes for advanced thin-film PV solar cells with metal contact materials, carried out in the facility's development laboratory.

  2. Family and Consumer Sciences: A Facility Planning and Design Guide for School Systems.

    ERIC Educational Resources Information Center

    Maryland State Dept. of Education, Baltimore.

    This document presents design concepts and considerations for planning and developing middle and high school family and consumer sciences education facilities. It includes discussions on family and consumer sciences education trends and the facility planning process. Design concepts explore multipurpose laboratories and spaces for food/nutrition…

  3. DOE Network 2025: Network Research Problems and Challenges for DOE Scientists. Workshop Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    2016-02-01

    The growing investments in large science instruments and supercomputers by the US Department of Energy (DOE) hold enormous promise for accelerating the scientific discovery process. They facilitate unprecedented collaborations of geographically dispersed teams of scientists that use these resources. These collaborations critically depend on the production, sharing, moving, and management of, as well as interactive access to, large, complex data sets at sites dispersed across the country and around the globe. In particular, they call for significant enhancements in network capacities to sustain large data volumes and, equally important, the capabilities to collaboratively access the data across computing, storage, and instrument facilities by science users and automated scripts and systems. Improvements in network backbone capacities of several orders of magnitude are essential to meet these challenges, in particular, to support exascale initiatives. Yet, raw network speed represents only a part of the solution. Indeed, the speed must be matched by network and transport layer protocols and higher layer tools that scale in ways that aggregate, compose, and integrate the disparate subsystems into a complete science ecosystem. Just as important, agile monitoring and management services need to be developed to operate the network at peak performance levels. Finally, these solutions must be made an integral part of the production facilities by using sound approaches to develop, deploy, diagnose, operate, and maintain them over the science infrastructure.
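    The capacity argument can be made concrete with a rough transfer-time estimate; the dataset size, link speeds, and protocol efficiency below are assumed values for illustration only.

    ```python
    # Back-of-the-envelope transfer-time estimate for large science datasets (assumed values).
    def transfer_hours(data_bytes, link_gbps, efficiency=0.8):
        """Hours to move data_bytes over a link_gbps link at the given end-to-end efficiency."""
        seconds = (data_bytes * 8) / (link_gbps * 1e9 * efficiency)
        return seconds / 3600

    petabyte = 1e15
    for gbps in (10, 100, 400):
        print(f"1 PB over {gbps:>3} Gb/s: ~{transfer_hours(petabyte, gbps):.1f} hours")
    ```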

  4. Nuclear Science Symposium, 31st and Symposium on Nuclear Power Systems, 16th, Orlando, FL, October 31-November 2, 1984, Proceedings

    NASA Technical Reports Server (NTRS)

    Biggerstaff, J. A. (Editor)

    1985-01-01

    Topics related to physics instrumentation are discussed, taking into account cryostat and electronic development associated with multidetector spectrometer systems, the influence of materials and counting-rate effects on He-3 neutron spectrometry, a data acquisition system for time-resolved muscle experiments, and a sensitive null detector for precise measurements of integral linearity. Other subjects explored are concerned with space instrumentation, computer applications, detectors, instrumentation for high energy physics, instrumentation for nuclear medicine, environmental monitoring and health physics instrumentation, nuclear safeguards and reactor instrumentation, and a 1984 symposium on nuclear power systems. Attention is given to the application of multiprocessors to scientific problems, a large-scale computer facility for computational aerodynamics, a single-board 32-bit computer for the Fastbus, the integration of detector arrays and readout electronics on a single chip, and three-dimensional Monte Carlo simulation of the electron avalanche in a proportional counter.

  5. Closely Spaced Independent Parallel Runway Simulation.

    DTIC Science & Technology

    1984-10-01

    facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central... Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition...in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where

  6. Materials Science Research Rack-1 (MSRR-1)

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This computer-generated image depicts the Materials Science Research Rack-1 (MSRR-1) being developed by NASA's Marshall Space Flight Center and the European Space Agency (ESA) for placement in the Destiny laboratory module aboard the International Space Station. The rack is part of the planned Materials Science Research Facility (MSRF) and is expected to include two furnace module inserts, a Quench Module Insert (being developed by NASA's Marshall Space Flight Center) to study directional solidification in rapidly cooled alloys and a Diffusion Module Insert (being developed by the European Space Agency) to study crystal growth, and a transparent furnace (being developed by NASA's Space Product Development program). Multi-user equipment in the rack is being developed under the auspices of NASA's Office of Biological and Physical Research (OBPR) and ESA. Key elements are labeled in other images (0101754, 0101829, 0101830, and TBD).

  7. Materials Science Research Rack-1 (MSRR-1)

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This computer-generated image depicts the Materials Science Research Rack-1 (MSRR-1) being developed by NASA's Marshall Space Flight Center and the European Space Agency (ESA) for placement in the Destiny laboratory module aboard the International Space Station. The rack is part of the planned Materials Science Research Facility (MSRF) and is expected to include two furnace module inserts, a Quench Module Insert (being developed by NASA's Marshall Space Flight Center) to study directional solidification in rapidly cooled alloys and a Diffusion Module Insert (being developed by the European Space Agency) to study crystal growth, and a transparent furnace (being developed by NASA's Space Product Development program). Multi-user equipment in the rack is being developed under the auspices of NASA's Office of Biological and Physical Research (OBPR) and ESA. A larger image is available without labels (No. 0101755).

  8. Materials Science Research Rack-1 (MSRR-1)

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This computer-generated image depicts the Materials Science Research Rack-1 (MSRR-1) being developed by NASA's Marshall Space Flight Center and the European Space Agency (ESA) for placement in the Destiny laboratory module aboard the International Space Station. The rack is part of the planned Materials Science Research Facility (MSRF) and is expected to include two furnace module inserts, a Quench Module Insert (being developed by NASA's Marshall Space Flight Center) to study directional solidification in rapidly cooled alloys and a Diffusion Module Insert (being developed by the European Space Agency) to study crystal growth, and a transparent furnace (being developed by NASA's Space Product Development program). Multi-user equipment in the rack is being developed under the auspices of NASA's Office of Biological and Physical Research (OBPR) and ESA. Key elements are labeled in other images (0101754, 0101830, and TBD).

  9. Materials Science Research Rack-1 (MSRR-1)

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This computer-generated image depicts the Materials Science Research Rack-1 (MSRR-1) being developed by NASA's Marshall Space Flight Center and the European Space Agency (ESA) for placement in the Destiny laboratory module aboard the International Space Station. The rack is part of the planned Materials Science Research Facility (MSRF) and is expected to include two furnace module inserts, a Quench Module Insert (being developed by NASA's Marshall Space Flight Center) to study directional solidification in rapidly cooled alloys and a Diffusion Module Insert (being developed by the European Space Agency) to study crystal growth, and a transparent furnace (being developed by NASA's Space Product Development program). Multi-user equipment in the rack is being developed under the auspices of NASA's Office of Biological and Physical Research (OBPR) and ESA. Key elements are labeled in other images (0101754, 0101829, 0101830).

  10. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  11. The I.P. Stanback Museum and Planetarium: Where Civil Rights and Arts Encounter Science and Humanities

    NASA Astrophysics Data System (ADS)

    Mayo, Elizabeth A.; Zisholtz, E. N.

    2009-01-01

    The I.P. Stanback Museum and Planetarium ("The Stanback") is an embodiment of South Carolina State University's commitment to community service, enhancing the appreciation of both the Arts and Sciences in a single facility. As the only facility of its kind on the campus of a Historically Black College and University, the Museum's programs include aesthetic appreciation, historical and didactic information, and scientific and technological presentations that encourage the development of critical thinking and creative skills for its student and adult constituencies. The Planetarium at the Stanback is a multi-faceted facility that provides a unique learning opportunity for students, faculty and staff at South Carolina State University, K-12 students in the surrounding community, and members of the community at large. With the ability to accommodate up to 82 visitors, the planetarium is a wonderful educational resource. It features a Minolta IIB Planetarium Star Projector that projects 4000 stars onto the 40-foot domed ceiling and can simulate the evening sky for any time, date, and place. The planetarium is slated to be completely upgraded with plans to feature computer automation and full-dome video capabilities. Support for this work was provided by the NSF PAARE program to South Carolina State University under award AST-0750814.

  12. Mars mission science operations facilities design

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Wales, Roxana; Powell, Mark W.; Backes, Paul G.; Steinke, Robert C.

    2002-01-01

    A variety of designs for Mars rover and lander science operations centers are discussed in this paper, beginning with a brief description of the Pathfinder science operations facility and its strengths and limitations. Particular attention is then paid to lessons learned in the design and use of operations facilities for a series of mission-like field tests of the FIDO prototype Mars rover. These lessons are then applied to a proposed science operations facilities design for the 2003 Mars Exploration Rover (MER) mission. Issues discussed include equipment selection, facilities layout, collaborative interfaces, scalability, and dual-purpose environments. The paper concludes with a discussion of advanced concepts for future mission operations centers, including collaborative immersive interfaces and distributed operations. This paper's intended audience includes operations facility and situation room designers and the users of these environments.

  13. ASCR/HEP Exascale Requirements Review Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Roser, Robert; Gerber, Richard

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude -- and in some cases greater -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

  14. ASCR/HEP Exascale Requirements Review Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; et al.

    2016-03-30

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude -- and in some cases greater -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

  15. Science and Technology Facilities

    ERIC Educational Resources Information Center

    Moonen, Jean-Marie; Buono, Nicolas; Handfield, Suzanne

    2004-01-01

    These four articles relate to science and technology infrastructure for secondary and tertiary institutions. The first article presents a view on approaches to teaching science in school and illustrates ideal science facilities for secondary education. The second piece reports on work underway to improve the Science Complex at the "Universite…

  16. Middle school children's game playing preferences: Case studies of children's experiences playing and critiquing science-related educational games

    NASA Astrophysics Data System (ADS)

    Joseph, Dolly Rebecca Doran

    The playing of computer games is one of the most popular non-school activities of children, particularly boys, and is often the entry point to greater facility with and use of other computer applications. Children are learning skills as they play, but what they learn often does not generalize beyond application to that and other similar games. Nevertheless, games have the potential to develop in students the knowledge and skills described by national and state educational standards. This study focuses upon middle-school aged children, and how they react to and respond to computer games designed for entertainment and educational purposes, within the context of science learning. Through qualitative, case study methodology, the game play, evaluation, and modification experiences of four diverse middle-school-aged students in summer camps are analyzed. The inquiry focused on determining the attributes of computer games that appeal to middle school students, the aspects of science that appeal to middle school children, and ultimately, how science games might be designed to appeal to middle school children. Qualitative data analysis led to the development of a method for describing players' activity modes during game play, rather than the conventional methods that describe game characteristics. These activity modes are used to describe the game design preferences of the participants. Recommendations are also made in the areas of functional, aesthetic, and character design and for the design of educational games. Middle school students may find the topical areas of forensics, medicine, and the environment to be of most interest; designing games in and across these topic areas has the potential for encouraging voluntary science-related play. Finally, when including children in game evaluation and game design activities, results suggest the value of providing multiple types of activities in order to encourage the full participation of all children.

  17. Laboratory Directed Research and Development Annual Report for 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Pamela J.

    This report documents progress made on all LDRD-funded projects during fiscal year 2009. As a US Department of Energy (DOE) Office of Science (SC) national laboratory, Pacific Northwest National Laboratory (PNNL) has an enduring mission to bring molecular and environmental sciences and engineering strengths to bear on DOE missions and national needs. Their vision is to be recognized worldwide and valued nationally for leadership in accelerating the discovery and deployment of solutions to challenges in energy, national security, and the environment. To achieve this mission and vision, they provide distinctive, world-leading science and technology in: (1) the design and scalable synthesis of materials and chemicals; (2) climate change science and emissions management; (3) efficient and secure electricity management from generation to end use; and (4) signature discovery and exploitation for threat detection and reduction. PNNL leadership also extends to operating EMSL: the Environmental Molecular Sciences Laboratory, a national scientific user facility dedicated to providing integrated experimental and computational resources for discovery and technological innovation in the environmental molecular sciences.

  18. Data Base Management Systems Panel Workshop: Executive summary

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Data base management systems (DBMS) for space acquired and associated data are discussed. The full range of DBMS needs is covered including acquiring, managing, storing, archiving, accessing and dissemination of data for an application. Existing bottlenecks in DBMS operations, expected developments in the field of remote sensing, communications, and computer science are discussed, and an overview of existing conditions and expected problems is presented. The requirements for a proposed spatial information system and characteristics of a comprehensive browse facility for earth observations applications are included.

  19. Site Environmental Report for 2010, Volumes 1 & 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baskin, David; Bauters, Tim; Borglin, Ned

    2011-09-01

    LBNL is a multiprogram scientific facility operated by the UC for the DOE. LBNL’s research is directed toward the physical, biological, environmental, and computational sciences, in order to deliver scientific knowledge and discoveries pertinent to DOE’s missions. This annual Site Environmental Report covers activities conducted in CY 2010. The format and content of this report satisfy the requirements of DOE Order 231.1A, Environment, Safety, and Health Reporting, and the operating contract between UC and DOE.

  20. First principles statistical mechanics of alloys and magnetism

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Khan, Suffian N.; Li, Ying Wai

    Modern high performance computing resources are enabling the exploration of the statistical physics of phase spaces with increasing size and higher fidelity of the Hamiltonian of the systems. For selected systems, this now allows the combination of density functional based first-principles calculations with classical Monte Carlo methods for parameter-free, predictive thermodynamics of materials. We combine our locally self-consistent real-space multiple scattering method for solving the Kohn-Sham equation with Wang-Landau Monte Carlo calculations (WL-LSMS). In the past we have applied this method to the calculation of Curie temperatures in magnetic materials. Here we will present direct calculations of the chemical order-disorder transitions in alloys. We present our calculated transition temperature for the chemical ordering in CuZn and the temperature dependence of the short-range order parameter and specific heat. Finally, we will present the extension of the WL-LSMS method to magnetic alloys, thus allowing the investigation of the interplay of magnetism, structure, and chemical order in ferrous alloys. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.
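    The Wang-Landau flat-histogram idea at the core of WL-LSMS can be sketched on a toy Hamiltonian. The example below estimates the density of states of a tiny 2D Ising lattice rather than a first-principles energy; the lattice size, sweep count, and stopping tolerance are arbitrary choices, and the histogram-flatness check used in production codes is reduced here to a fixed schedule.

    ```python
    # Wang-Landau sketch on a 4x4 Ising model (illustrative stand-in for the LSMS Hamiltonian).
    import numpy as np

    rng = np.random.default_rng(1)
    L = 4
    N = L * L
    spins = rng.choice([-1, 1], size=(L, L))

    def energy(s):
        # Each nearest-neighbor bond counted once via periodic rolls.
        return -np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

    energies = np.arange(-2 * N, 2 * N + 1, 4)      # possible energy levels
    ln_g = {e: 0.0 for e in energies}               # running estimate of ln g(E)
    E = energy(spins)
    ln_f = 1.0                                      # ln of the modification factor

    while ln_f > 1e-4:
        for _ in range(20000):                      # one stage of random single-spin flips
            i, j = rng.integers(L, size=2)
            dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                    + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            E_new = E + dE
            # Accept with probability min(1, g(E)/g(E_new)) to flatten the energy histogram.
            if rng.random() < np.exp(min(0.0, ln_g[E] - ln_g[E_new])):
                spins[i, j] *= -1
                E = E_new
            ln_g[E] += ln_f
        ln_f /= 2                                   # halve ln f (i.e. f -> sqrt(f)) each stage

    print("ln g(E=0) - ln g(ground state) =", ln_g[0] - ln_g[-2 * N])
    ```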

  1. Neutron Radiography and Computed Tomography at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raine, Dudley A. III; Hubbard, Camden R.; Whaley, Paul M.

    1997-12-31

    The capability to perform neutron radiography and computed tomography is being developed at Oak Ridge National Laboratory. The facility will be located at the High Flux Isotope Reactor (HFIR), which has the highest steady state neutron flux of any reactor in the world. The Monte Carlo N-Particle transport code (MCNP), versions 4A and 4B, has been used extensively in the design phase of the facility to predict and optimize the operating characteristics, and to ensure the safety of personnel working in and around the blockhouse. Neutrons are quite penetrating in most engineering materials and can be useful to detect internal flaws and features. Hydrogen atoms, such as in a hydrocarbon fuel, lubricant or a metal hydride, are relatively opaque to neutron transmission. Thus, neutron based tomography or radiography is ideal to image their presence. The source flux also provides unparalleled flexibility for future upgrades, including real time radiography where dynamic processes can be observed. A novel tomography detector has been designed using optical fibers and digital technology to provide a large dynamic range for reconstructions. Film radiography is also available for high resolution imaging applications. This paper summarizes the results of the design phase of this facility and the potential benefits to science and industry.
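    The contrast mechanism described above follows the exponential attenuation law I/I0 = exp(-Sigma * t). The short sketch below compares transmission through a few common materials; the macroscopic cross-section values are approximate, illustrative numbers, not measurements from the HFIR facility.

    ```python
    # Illustrative neutron transmission, I/I0 = exp(-Sigma * t), with approximate
    # thermal-neutron macroscopic cross-sections (values are illustrative only).
    import math

    sigma_per_cm = {"aluminum": 0.10, "steel": 1.2, "polyethylene": 3.5}

    def transmission(material, thickness_cm):
        return math.exp(-sigma_per_cm[material] * thickness_cm)

    for material in sigma_per_cm:
        print(f"{material:>12}: T(1 cm) = {transmission(material, 1.0):.3f}")
    ```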

  2. The challenges of developing computational physics: the case of South Africa

    NASA Astrophysics Data System (ADS)

    Salagaram, T.; Chetty, N.

    2013-08-01

    Most modern scientific research problems are complex and interdisciplinary in nature. It is impossible to study such problems in detail without the use of computation in addition to theory and experiment. Although it is widely agreed that students should be introduced to computational methods at the undergraduate level, it remains a challenge to do this in a full traditional undergraduate curriculum. In this paper, we report on a survey that we conducted of undergraduate physics curricula in South Africa to determine the content and the approach taken in the teaching of computational physics. We also considered the pedagogy of computational physics at the postgraduate and research levels at various South African universities, research facilities and institutions. We conclude that the state of computational physics training in South Africa, especially at the undergraduate teaching level, is generally weak and needs to be given more attention at all universities. Failure to do so will negatively affect the country's capacity to grow its endeavours in the computational sciences, with adverse consequences for research, commerce, and industry.

  3. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    PubMed

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  4. Management, Analysis, and Visualization of Experimental and Observational Data -- The Convergence of Data and Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; Greenwald, Martin; Kleese van Dam, Kersten

    Scientific user facilities---particle accelerators, telescopes, colliders, supercomputers, light sources, sequencing facilities, and more---operated by the U.S. Department of Energy (DOE) Office of Science (SC) generate ever increasing volumes of data at unprecedented rates from experiments, observations, and simulations. At the same time there is a growing community of experimentalists that require real-time data analysis feedback, to enable them to steer their complex experimental instruments to optimized scientific outcomes and new discoveries. Recent efforts in DOE-SC have focused on articulating the data-centric challenges and opportunities facing these science communities. Key challenges include difficulties coping with data size, rate, and complexity in the context of both real-time and post-experiment data analysis and interpretation. Solutions will require algorithmic and mathematical advances, as well as hardware and software infrastructures that adequately support data-intensive scientific workloads. This paper presents the summary findings of a workshop held by DOE-SC in September 2015, convened to identify the major challenges and the research that is needed to meet those challenges.

  5. Institutional Computing Executive Group Review of Multi-programmatic & Institutional Computing, Fiscal Year 2005 and 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langer, S; Rotman, D; Schwegler, E

    The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflects the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.

  6. Microgravity Science Glovebox (MSG)

    NASA Technical Reports Server (NTRS)

    1998-01-01

    The Microgravity Science Glovebox is a facility for performing microgravity research in the areas of materials, combustion, fluids and biotechnology science. The facility occupies a full ISPR, consisting of: the ISPR rack and infrastructure for the rack, the glovebox core facility, data handling, rack stowage, outfitting equipment, and a video subsystem. MSG core facility provides the experiment developers a chamber with air filtering and recycling, up to two levels of containment, an airlock for transfer of payload equipment to/from the main volume, interface resources for the payload inside the core facility, resources inside the airlock, and storage drawers for MSG support equipment and consumables.

  7. Microgravity Science Glovebox (MSG) Space Science's Past, Present, and Future on the International Space Station (ISS)

    NASA Technical Reports Server (NTRS)

    Spivey, Reggie A.; Spearing, Scott F.; Jordan, Lee P.; McDaniel S. Greg

    2012-01-01

    The Microgravity Science Glovebox (MSG) is a double rack facility designed for microgravity investigation handling aboard the International Space Station (ISS). The unique design of the facility allows it to accommodate science and technology investigations in a "workbench" type environment. The MSG facility provides an enclosed working area for investigation manipulation and observation on the ISS, with two levels of containment via physical barrier, negative pressure, and air filtration. The MSG team and facilities provide quick access to space for exploratory and National Lab type investigations to gain an understanding of the role of gravity in the associated physics research areas. The MSG is a very versatile and capable research facility on the ISS. The Microgravity Science Glovebox (MSG) on the International Space Station (ISS) has been used for a large body of research in material science, heat transfer, crystal growth, life sciences, smoke detection, combustion, plant growth, human health, and technology demonstration. MSG is an ideal platform for gravity-dependent phenomena related research. Moreover, the MSG provides engineers and scientists a platform for research in an environment similar to the one that spacecraft and crew members will actually experience during space travel and exploration. The MSG facility is ideally suited to provide quick, relatively inexpensive access to space for National Lab type investigations.

  8. The Terra Data Fusion Project: An Update

    NASA Astrophysics Data System (ADS)

    Di Girolamo, L.; Bansal, S.; Butler, M.; Fu, D.; Gao, Y.; Lee, H. J.; Liu, Y.; Lo, Y. L.; Raila, D.; Turner, K.; Towns, J.; Wang, S. W.; Yang, K.; Zhao, G.

    2017-12-01

    Terra is the flagship of NASA's Earth Observing System. Launched in 1999, Terra's five instruments continue to gather data that enable scientists to address fundamental Earth science questions. By design, the strength of the Terra mission has always been rooted in its five instruments and the ability to fuse the instrument data together for obtaining greater quality of information for Earth Science compared to individual instruments alone. As the data volume grows and the central Earth Science questions move towards problems requiring decadal-scale data records, the need for data fusion and the ability for scientists to perform large-scale analytics with long records have never been greater. The challenge is particularly acute for Terra, given its growing volume of data (> 1 petabyte), the storage of different instrument data at different archive centers, the different file formats and projection systems employed for different instrument data, and the inadequate cyberinfrastructure for scientists to access and process whole-mission fusion data (including Level 1 data). Sharing newly derived Terra products with the rest of the world also poses challenges. As such, the Terra Data Fusion Project aims to resolve two long-standing problems: 1) How do we efficiently generate and deliver Terra data fusion products? 2) How do we facilitate the use of Terra data fusion products by the community in generating new products and knowledge through national computing facilities, and disseminate these new products and knowledge through national data sharing services? Here, we will provide an update on significant progress made in addressing these problems by working with NASA and leveraging national facilities managed by the National Center for Supercomputing Applications (NCSA). The problems that we faced in deriving and delivering Terra L1B2 basic, reprojected and cloud-element fusion products, such as data transfer, data fusion, processing on different computer architectures, science, and sharing, will be presented with quantitative specifics. Results from several science-specific drivers for Terra fusion products will also be presented. We demonstrate that the Terra Data Fusion Project itself provides an excellent use-case for the community addressing Big Data and cyberinfrastructure problems.

  9. Apollo experience report: Real-time auxiliary computing facility development

    NASA Technical Reports Server (NTRS)

    Allday, C. E.

    1972-01-01

    The Apollo real time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.

  10. Laboratory Directed Research and Development Program FY 2008 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    editor, Todd C Hansen

    2009-02-23

    The Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab or LBNL) is a multi-program national research facility operated by the University of California for the Department of Energy (DOE). As an integral element of DOE's National Laboratory System, Berkeley Lab supports DOE's missions in fundamental science, energy resources, and environmental quality. Berkeley Lab programs advance four distinct goals for DOE and the nation: (1) To perform leading multidisciplinary research in the computing sciences, physical sciences, energy sciences, biosciences, and general sciences in a manner that ensures employee and public safety and protection of the environment. (2) To develop and operate unique national experimental facilities for qualified investigators. (3) To educate and train future generations of scientists and engineers to promote national science and education goals. (4) To transfer knowledge and technological innovations and to foster productive relationships among Berkeley Lab's research programs, universities, and industry in order to promote national economic competitiveness. Berkeley Lab's research and the Laboratory Directed Research and Development (LDRD) program support DOE's Strategic Themes that are codified in DOE's 2006 Strategic Plan (DOE/CF-0010), with a primary focus on Scientific Discovery and Innovation. For that strategic theme, the Fiscal Year (FY) 2008 LDRD projects support each one of the three goals through multiple strategies described in the plan. In addition, LDRD efforts support the four goals of Energy Security, the two goals of Environmental Responsibility, and Nuclear Security (unclassified fundamental research that supports stockpile safety and nonproliferation programs). The LDRD program supports Office of Science strategic plans, including the 20-year Scientific Facilities Plan and the Office of Science Strategic Plan. The research also supports the strategic directions periodically under consideration and review by the Office of Science Program Offices, such as LDRD projects germane to new research facility concepts and new fundamental science directions. The Berkeley Lab LDRD program also plays an important role in leveraging DOE capabilities for national needs. The fundamental scientific research and development conducted in the program advances the skills and technologies of importance to our Work For Others (WFO) sponsors. Among many directions, these include a broad range of health-related science and technology of interest to the National Institutes of Health, breast cancer and accelerator research supported by the Department of Defense, detector technologies that should be useful to the Department of Homeland Security, and particle detection that will be valuable to the Environmental Protection Agency. The Berkeley Lab Laboratory Directed Research and Development Program FY2008 report is compiled from annual reports submitted by principal investigators following the close of the fiscal year. This report describes the supported projects and summarizes their accomplishments. It constitutes a part of the LDRD program planning and documentation process that includes an annual planning cycle, project selection, implementation, and review.

  11. Neutron flux characterization of californium-252 Neutron Research Facility at the University of Texas - Pan American by nuclear analytical technique

    NASA Astrophysics Data System (ADS)

    Wahid, Kareem; Sanchez, Patrick; Hannan, Mohammad

    2014-03-01

    In the field of nuclear science, neutron flux is an intrinsic property of nuclear reaction facilities that is the basis for experimental irradiation calculations and analysis. In the Rio Grande Valley (Texas), the UTPA Neutron Research Facility (NRF) is currently the only neutron facility available for experimental research purposes. The facility comprises a 20-microgram californium-252 neutron source surrounded by a shielding cascade containing different irradiation cavities. Thermal and fast neutron flux values for the UTPA NRF have yet to be fully investigated and may be of particular interest to biomedical studies in low neutron dose applications. Though a variety of techniques exist for the characterization of neutron flux, neutron activation analysis (NAA) of metal and nonmetal foils is a commonly utilized experimental method because of its detection sensitivity and availability. The aim of our current investigation is to employ foil activation to determine neutron flux values for the UTPA NRF for further research purposes. Neutron spectrum unfolding of the acquired experimental data via specialized software, and subsequent comparison with computational models for consistency, lends confidence to the results.

  12. Development and Use of a Virtual NMR Facility

    NASA Astrophysics Data System (ADS)

    Keating, Kelly A.; Myers, James D.; Pelton, Jeffrey G.; Bair, Raymond A.; Wemmer, David E.; Ellis, Paul D.

    2000-03-01

    We have developed a "virtual NMR facility" (VNMRF) to enhance access to the NMR spectrometers in Pacific Northwest National Laboratory's Environmental Molecular Sciences Laboratory (EMSL). We use the term virtual facility to describe a real NMR facility made accessible via the Internet. The VNMRF combines secure remote operation of the EMSL's NMR spectrometers over the Internet with real-time videoconferencing, remotely controlled laboratory cameras, real-time computer display sharing, a Web-based electronic laboratory notebook, and other capabilities. Remote VNMRF users can see and converse with EMSL researchers, directly and securely control the EMSL spectrometers, and collaboratively analyze results. A customized Electronic Laboratory Notebook allows interactive Web-based access to group notes, experimental parameters, proposed molecular structures, and other aspects of a research project. This paper describes our experience developing a VNMRF and details the specific capabilities available through the EMSL VNMRF. We show how the VNMRF has evolved during a test project and present an evaluation of its impact in the EMSL and its potential as a model for other scientific facilities. All Collaboratory software used in the VNMRF is freely available from http://www.emsl.pnl.gov:2080/docs/collab.

  13. IYA Outreach Plans for Appalachian State University's Observatories

    NASA Astrophysics Data System (ADS)

    Caton, Daniel B.; Pollock, J. T.; Saken, J. M.

    2009-01-01

    Appalachian State University will provide a variety of observing opportunities for the public during the International Year of Astronomy. These will be focused at both the campus GoTo Telescope Facility used by Introductory Astronomy students and the research facilities at our Dark Sky Observatory. The campus facility is composed of a rooftop deck with a roll-off roof housing fifteen Celestron C11 telescopes. During astronomy lab class meetings these telescopes are used either in situ or remotely by computer control from the adjacent classroom. For the IYA we will host the public for regular observing sessions at these telescopes. The research facility features a 32-inch DFM Engineering telescope with its dome attached to the Cline Visitor Center. The Visitor Center is still under construction and we anticipate its completion for a spring opening during IYA. The CVC will provide areas for educational outreach displays and a view of the telescope control room. Visitors will view celestial objects directly at the eyepiece. We are grateful for the support of the National Science Foundation, through grant number DUE-0536287, which provided instrumentation for the GoTo facility, and to J. Donald Cline for support of the Visitor Center.

  14. Configuration and Management of a Cluster Computing Facility in Undergraduate Student Computer Laboratories

    ERIC Educational Resources Information Center

    Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.

    2006-01-01

    Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…

  15. Literature Related to Planning, Design and Construction of Science Facilities.

    ERIC Educational Resources Information Center

    National Science Foundation, Washington, DC.

    A list of the articles and papers in the science facilities collection of the Architectural Services Staff is presented. It has been prepared to serve as a bibliography that may be useful to persons searching for data on the design of science facilities, and as a means of informing such persons of the material available for reference in the…

  16. Microgravity Science Glovebox (MSG) Space Sciences's Past, Present, and Future on the International Space Station (ISS)

    NASA Technical Reports Server (NTRS)

    Spivey, Reggie A.; Jordan, Lee P.

    2012-01-01

    The Microgravity Science Glovebox (MSG) is a double-rack facility designed for microgravity investigation handling aboard the International Space Station (ISS). The unique design of the facility allows it to accommodate science and technology investigations in a "workbench" type environment. The MSG facility provides an enclosed working area for investigation manipulation and observation on the ISS, with two levels of containment provided via a physical barrier, negative pressure, and air filtration. The MSG team and facilities provide quick access to space for exploratory and National Lab type investigations to gain an understanding of the role of gravity in the associated physics research areas.

  17. The JASMIN Cloud: specialised and hybrid to meet the needs of the Environmental Sciences Community

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Lawrence, Bryan; Churchill, Jonathan; Pritchard, Matt

    2014-05-01

    Cloud computing provides enormous opportunities for the research community. The large public cloud providers provide near-limitless scaling capability. However, adapting Cloud to scientific workloads is not without its problems. The commodity nature of the public cloud infrastructure can be at odds with the specialist requirements of the research community. Issues such as trust, ownership of data, WAN bandwidth and costing models make additional barriers to more widespread adoption. Alongside the application of public cloud for scientific applications, a number of private cloud initiatives are underway in the research community of which the JASMIN Cloud is one example. Here, cloud service models are being effectively super-imposed over more established services such as data centres, compute cluster facilities and Grids. These have the potential to deliver the specialist infrastructure needed for the science community coupled with the benefits of a Cloud service model. The JASMIN facility based at the Rutherford Appleton Laboratory was established in 2012 to support the data analysis requirements of the climate and Earth Observation community. In its first year of operation, the 5PB of available storage capacity was filled and the hosted compute capability used extensively. JASMIN has modelled the concept of a centralised large-volume data analysis facility. Key characteristics have enabled success: peta-scale fast disk connected via low latency networks to compute resources and the use of virtualisation for effective management of the resources for a range of users. A second phase is now underway funded through NERC's (Natural Environment Research Council) Big Data initiative. This will see significant expansion to the resources available with a doubling of disk-based storage to 12PB and an increase of compute capacity by a factor of ten to over 3000 processing cores. This expansion is accompanied by a broadening in the scope for JASMIN, as a service available to the entire UK environmental science community. Experience with the first phase demonstrated the range of user needs. A trade-off is needed between access privileges to resources, flexibility of use and security. This has influenced the form and types of service under development for the new phase. JASMIN will deploy a specialised private cloud organised into "Managed" and "Unmanaged" components. In the Managed Cloud, users have direct access to the storage and compute resources for optimal performance but for reasons of security, via a more restrictive PaaS (Platform-as-a-Service) interface. The Unmanaged Cloud is deployed in an isolated part of the network but co-located with the rest of the infrastructure. This enables greater liberty to tenants - full IaaS (Infrastructure-as-a-Service) capability to provision customised infrastructure - whilst at the same time protecting more sensitive parts of the system from direct access using these elevated privileges. The private cloud will be augmented with cloud-bursting capability so that it can exploit the resources available from public clouds, making it effectively a hybrid solution. A single interface will overlay the functionality of both the private cloud and external interfaces to public cloud providers giving users the flexibility to migrate resources between infrastructures as requirements dictate.

  18. Know Your Discipline: Teaching the Philosophy of Computer Science

    ERIC Educational Resources Information Center

    Tedre, Matti

    2007-01-01

    The diversity and interdisciplinarity of computer science and the multiplicity of its uses in other sciences make it hard to define computer science and to prescribe how computer science should be carried out. The diversity of computer science also causes friction between computer scientists from different branches. Computer science curricula, as…

  19. Astronomic Telescope Facility: Preliminary systems definition study report. Volume 2: Technical description

    NASA Technical Reports Server (NTRS)

    Sobeck, Charlie (Editor)

    1987-01-01

    The Astrometric Telescope Facility (ATF) is to be an Earth-orbiting facility designed specifically to measure the change in relative position of stars. The primary science investigation for the facility will be the search for planets and planetary systems outside the solar system. In addition, the facility will support astrophysics investigations dealing with the location or motions of stars. The science objectives and facility capabilities for astrophysics investigations are discussed.

  20. Opening Remarks: SciDAC 2007

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2007-09-01

    Good morning. Welcome to Boston, the home of the Red Sox, Celtics and Bruins, baked beans, tea parties, Robert Parker, and SciDAC 2007. A year ago I stood before you to share the legacy of the first SciDAC program and identify the challenges that we must address on the road to petascale computing—a road E. E. Cummings described as `. . . never traveled, gladly beyond any experience.' Today, I want to explore the preparations for the rapidly approaching extreme scale (X-scale) generation. These preparations are the first step propelling us along the road of burgeoning scientific discovery enabled by the application of X-scale computing. We look to petascale computing and beyond to open up a world of discovery that cuts across scientific fields and leads us to a greater understanding of not only our world, but our universe. As part of the President's American Competitiveness Initiative, the ASCR Office has been preparing a ten-year vision for computing. As part of this planning, LBNL, together with ORNL and ANL, hosted three town hall meetings on Simulation and Modeling at the Exascale for Energy, Ecological Sustainability and Global Security (E3). The proposed E3 initiative is organized around four programmatic themes: Engaging our top scientists, engineers, computer scientists and applied mathematicians; investing in pioneering large-scale science; developing scalable analysis algorithms and storage architectures to accelerate discovery; and accelerating the build-out and future development of the DOE open computing facilities. It is clear that we have only just started down the path to extreme scale computing. Plan to attend Thursday's session on the out-briefing and discussion of these meetings. The road to the petascale has been at best rocky. In FY07, the continuing resolution provided 12% less money for Advanced Scientific Computing than the President, the Senate, or the House had proposed. As a consequence, many of you had to absorb a no-cost extension for your SciDAC work. I am pleased that the President's FY08 budget restores the funding for SciDAC. Quoting from the Advanced Scientific Computing Research description in the House Energy and Water Development Appropriations Bill for FY08, "Perhaps no other area of research at the Department is so critical to sustaining U.S. leadership in science and technology, revolutionizing the way science is done and improving research productivity." As a society we need to revolutionize our approaches to energy, environmental and global security challenges. As we go forward along the road to the X-scale generation, the use of computation will continue to be a critical tool along with theory and experiment in understanding the behavior of the fundamental components of nature as well as for fundamental discovery and exploration of the behavior of complex systems. The foundation to overcome these societal challenges will build from the experiences and knowledge gained as you, members of our SciDAC research teams, work together to attack problems at the tera- and peta-scale. If SciDAC is viewed as an experiment for revolutionizing scientific methodology, then a strategic goal of the ASCR program must be to broaden the intellectual base prepared to address the challenges of the new X-scale generation of computing. We must focus our computational science experiences gained over the past five years on the opportunities introduced with extreme scale computing. Our facilities are on a path to provide the resources needed to undertake the first part of our journey.
Using the newly upgraded 119 teraflop Cray XT system at the Leadership Computing Facility, SciDAC research teams have in three days performed a 100-year study of the time evolution of the atmospheric CO2 concentration originating from the land surface. The simulation of the El Nino/Southern Oscillation that was part of this study has been characterized as `the most impressive new result in ten years'. SciDAC teams also gained new insight into the behavior of superheated ionic gas in the ITER reactor as a result of an AORSA run on 22,500 processors that achieved over 87 trillion calculations per second (87 teraflops), which is 74% of the system's theoretical peak. Tomorrow, Argonne and IBM will announce that the first IBM Blue Gene/P, a 100 teraflop system, will be shipped to the Argonne Leadership Computing Facility later this fiscal year. By the end of FY2007 ASCR high performance and leadership computing resources will include the 114 teraflop IBM Blue Gene/P, a 102 teraflop Cray XT4 at NERSC, and a 119 teraflop Cray XT system at Oak Ridge. Before ringing in the New Year, Oak Ridge will upgrade to 250 teraflops with the replacement of the dual core processors with quad core processors, Argonne will upgrade to between 250 and 500 teraflops, and next year a petascale Cray Baker system is scheduled for delivery at Oak Ridge. The multidisciplinary teams in our SciDAC Centers for Enabling Technologies and our SciDAC Institutes must continue to work with our Scientific Application teams to overcome the barriers that prevent effective use of these new systems. These challenges include: the need for new algorithms as well as operating system and runtime software and tools which scale to parallel systems composed of hundreds of thousands of processors; program development environments and tools which scale effectively and provide ease of use for developers and scientific end users; and visualization and data management systems that support moving, storing, analyzing, manipulating and visualizing multi-petabytes of scientific data and objects. The SciDAC Centers, located primarily at our DOE national laboratories, will take the lead in ensuring that critical computer science and applied mathematics issues are addressed in a timely and comprehensive fashion and in addressing issues associated with the research software lifecycle. In contrast, the SciDAC Institutes, which are university-led centers of excellence, will have more flexibility to pursue new research topics through a range of research collaborations. The Institutes will also work to broaden the intellectual and researcher base—conducting short courses and summer schools to take advantage of new high performance computing capabilities. The SciDAC Outreach Center at Lawrence Berkeley National Laboratory complements the outreach efforts of the SciDAC Institutes. The Outreach Center is our clearinghouse for SciDAC activities and resources and will communicate with the high performance computing community in part to understand their needs for workshops, summer schools and institutes. SciDAC is not ASCR's only effort to broaden the computational science community needed to meet the challenges of the new X-scale generation. I hope that you were able to attend the Computational Science Graduate Fellowship poster session last night. ASCR developed the fellowship in 1991 to meet the nation's growing need for scientists and technology professionals with advanced computer skills. CSGF, now jointly funded by ASCR and NNSA, is more than a traditional academic fellowship.
It has provided more than 200 of the best and brightest graduate students with guidance, support and community in preparing them as computational scientists. Today CSGF alumni are bringing their diverse top-level skills and knowledge to research teams at DOE laboratories and in industries such as Procter and Gamble, Lockheed Martin and Intel. At universities they are working to train the next generation of computational scientists. To build on this success, we intend to develop a wholly new Early Career Principal Investigator (ECPI) program. Our objective is to stimulate academic research in scientific areas within ASCR's purview, especially among faculty in the early stages of their academic careers. Last February, we lost Ken Kennedy, one of the leading lights of our community. As we move forward into the extreme computing generation, his vision and insight will be greatly missed. In memory of Ken Kennedy, we shall designate the ECPI grants to beginning faculty in Computer Science as the Ken Kennedy Fellowship. Watch the ASCR website for more information about ECPI and other early career programs in the computational sciences. We look to you, our scientists, researchers, and visionaries, to take X-scale computing and use it to explode scientific discovery in your fields. We at SciDAC will work to ensure that this tool is the sharpest and most precise and efficient instrument to carve away the unknown and reveal the most exciting secrets and stimulating scientific discoveries of our time. The partnership between research and computing is the marriage that will spur greater discovery, and as Spenser said to Susan in Robert Parker's novel, `Sudden Mischief', `We stick together long enough, and we may get as smart as hell'. Michael Strayer

  1. Science and technology in the stockpile stewardship program, S & TR reprints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storm, E

    This document reports on these topics: Computer Simulations in Support of National Security; Enhanced Surveillance of Aging Weapons; A New Precision Cutting Tool: The Femtosecond Laser; Superlasers as a Tool of Stockpile Stewardship; Nova Laser Experiments and Stockpile Stewardship; Transforming Explosive Art into Science; Better Flash Radiography Using the FXR; Preserving Nuclear Weapons Information; Site 300's New Contained Firing Facility; The Linear Electric Motor: Instability at 1,000 g's; A Powerful New Tool to Detect Clandestine Nuclear Tests; High Explosives in Stockpile Surveillance Indicate Constancy; Addressing a Cold War Legacy with a New Way to Produce TATB; Jumpin' Jupiter! Metallic Hydrogen; Keeping the Nuclear Stockpile Safe, Secure, and Reliable; The Multibeam Fabry-Perot Velocimeter: Efficient Measurements of High Velocities; Theory and Modeling in Material Science; The Diamond Anvil Cell; Gamma-Ray Imaging Spectrometry; X-Ray Lasers and High-Density Plasma

  2. Microgravity Particle Research on the Space Station

    NASA Technical Reports Server (NTRS)

    Squyres, Steven W. (Editor); Mckay, Christopher P. (Editor); Schwartz, Deborah E. (Editor)

    1987-01-01

    Science questions that could be addressed by a Space Station Microgravity Particle Research Facility for studying small suspended particles were discussed. Characteristics of such a facility were determined. Disciplines covered include astrophysics and the solar nebula, planetary science, atmospheric science, exobiology and life science, and physics and chemistry.

  3. ISCR FY2005 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, D E; McGraw, J R

    2006-02-02

    Large-scale scientific computation and all of the disciplines that support and help validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of simulation as a fundamental tool of scientific and engineering research is underscored in the President's Information Technology Advisory Committee (PITAC) June 2005 finding that ''computational science has become critical to scientific leadership, economic competitiveness, and national security''. LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed, most notably the molecular dynamics simulation that sustained more than 100 Teraflop/s and won the 2005 Gordon Bell Prize. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use in an efficient manner. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to the core missions of LLNL than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In FY 2005, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for both brief and extended visits with the aim of encouraging long-term academic research agendas that address LLNL research priorities. Through these collaborations, ideas and software flow in both directions, and LLNL cultivates its future workforce. The Institute strives to be LLNL's ''eyes and ears'' in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the ''hands and feet'' that carry those advances into the Laboratory and incorporate them into practice. ISCR research participants are integrated into LLNL's Computing Applications and Research (CAR) Department, especially into its Center for Applied Scientific Computing (CASC). In turn, these organizations address computational challenges arising throughout the rest of the Laboratory. Administratively, the ISCR flourishes under LLNL's University Relations Program (URP). Together with the other four institutes of the URP, the ISCR navigates a course that allows LLNL to benefit from academic exchanges while preserving national security. While it is difficult to operate an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and worth the continued effort.
The pages of this annual report summarize the activities of the faculty members, postdoctoral researchers, students, and guests from industry and other laboratories who participated in LLNL's computational mission under the auspices of the ISCR during FY 2005.

  4. Development and applications of nondestructive evaluation at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Whitaker, Ann F.

    1990-01-01

    A brief description of facility design and equipment, facility usage, and typical investigations are presented for the following: Surface Inspection Facility; Advanced Computer Tomography Inspection Station (ACTIS); NDE Data Evaluation Facility; Thermographic Test Development Facility; Radiographic Test Facility; Realtime Radiographic Test Facility; Eddy Current Research Facility; Acoustic Emission Monitoring System; Advanced Ultrasonic Test Station (AUTS); Ultrasonic Test Facility; and Computer Controlled Scanning (CONSCAN) System.

  5. The European Plate Observing System (EPOS) Services for Solid Earth Science

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Atakan, Kuvvet; Pedersen, Helle; Consortium, Epos

    2016-04-01

    The European Plate Observing System (EPOS) aims to create a pan-European infrastructure for solid Earth science to support a safe and sustainable society. The main vision of the European Plate Observing System (EPOS) is to address the three basic challenges in Earth Sciences: (i) unravelling the Earth's deformational processes which are part of the Earth system evolution in time, (ii) understanding the geo-hazards and their implications to society, and (iii) contributing to the safe and sustainable use of geo-resources. The mission of EPOS is to monitor and understand the dynamic and complex Earth system by relying on new e-science opportunities and integrating diverse and advanced Research Infrastructures in Europe for solid Earth Science. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunami as well as the processes driving tectonics and Earth's surface dynamics. EPOS will improve our ability to better manage the use of the subsurface of the Earth. Through integration of data, models and facilities EPOS will allow the Earth Science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources as well as Earth sciences applications to the environment and to human welfare. EPOS has now started its Implementation Phase (EPOS-IP). One of the main challenges during the implementation phase is the integration of multidisciplinary data into a single e-infrastructure. Multidisciplinary data are organized and governed by the Thematic Core Services (TCS) and are driven by various scientific communities encompassing a wide spectrum of Earth science disciplines. These include Data, Data-products, Services and Software (DDSS), from seismology, near fault observatories, geodetic observations, volcano observations, satellite observations, geomagnetic observations, as well as data from various anthropogenic hazard episodes, geological information and modelling. In addition, transnational access to multi-scale laboratories and geo-energy test-beds for low-carbon energy will be provided. TCS DDSS will be integrated into Integrated Core Services (ICS), a platform that will ensure their interoperability and access to these services by the scientific community as well as other users within the society. This requires dedicated tasks for interactions with the various TCS-WPs, as well as the various distributed ICS (ICS-Ds), such as High Performance Computing (HPC) facilities, large scale data storage facilities, complex processing and visualization tools etc. Computational Earth Science (CES) services are identified as a transversal activity and are planned to be harmonized and provided within the ICS. The EPOS Thematic Services will rely in part on strong and sustainable participation by national organisations and international consortia. While this distributed architecture will help to ensure pan-European involvement in EPOS, it also raises specific challenges: ensuring similar granularity of services, compatibility of technical solutions, homogeneous legal agreements and sustainable financial engagement from the partner institutions and organisations. EPOS is taking action to address all of these issues during 2016-2017, after which the services will enter a final validation phase by the EPOS Board of Governmental Representatives.

  6. Fine grained event processing on HPCs with the ATLAS Yoda system

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; Van Gemmeren, Peter; Wenaus, Torre

    2015-12-01

    High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
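    A minimal illustration of the master-client, pull-style event dispatch described above is sketched below in Python with mpi4py. It is not the ATLAS Yoda code; the tags, chunk size, and the stand-in processing and output-streaming functions are assumptions made only to show how workers request fine-grained event ranges until the event list is exhausted.

        # Illustrative MPI master-client event dispatcher in the spirit of Yoda
        # (not the actual ATLAS implementation). Event ranges are handed out on
        # demand so that all worker ranks stay busy until the event list drains.
        from mpi4py import MPI

        TAG_RESULT, TAG_WORK, TAG_STOP = 1, 2, 3

        def process_events(event_range):
            """Stand-in for the real per-event processing payload."""
            return [e * e for e in event_range]   # hypothetical per-event result

        def stream_to_object_store(results):
            """Placeholder for streaming fine-grained outputs to an object store."""
            pass

        def master(comm, n_events, chunk=10):
            status = MPI.Status()
            next_event, active = 0, comm.Get_size() - 1
            while active > 0:
                # A worker announces readiness by sending its latest (possibly empty) results.
                results = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_RESULT, status=status)
                worker = status.Get_source()
                stream_to_object_store(results)
                if next_event < n_events:
                    work = range(next_event, min(next_event + chunk, n_events))
                    comm.send(work, dest=worker, tag=TAG_WORK)
                    next_event += chunk
                else:
                    comm.send(None, dest=worker, tag=TAG_STOP)
                    active -= 1

        def worker(comm):
            comm.send([], dest=0, tag=TAG_RESULT)      # initial "ready" message
            status = MPI.Status()
            while True:
                work = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
                if status.Get_tag() == TAG_STOP:
                    break
                comm.send(process_events(work), dest=0, tag=TAG_RESULT)

        if __name__ == "__main__":
            comm = MPI.COMM_WORLD
            if comm.Get_rank() == 0:
                master(comm, n_events=1000)
            else:
                worker(comm)

    Because workers pull work rather than receive a static partition, a straggling node simply requests fewer ranges, which is the kind of scheduling flexibility without check-pointing that the abstract refers to.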

  7. Nuclear Science in the Undergraduate Curriculum: The New Nuclear Science Facility at San Jose State University.

    ERIC Educational Resources Information Center

    Ling, A. Campbell

    1979-01-01

    The following aspects of the radiochemistry program at San Jose State University in California are described: the undergraduate program in radiation chemistry, the new nuclear science facility, and academic programs in nuclear science for students not attending San Jose State University. (BT)

  8. International Space Station -- Fluids and Combustion Facility

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The Fluids and Combustion Facility (FCF) is a modular, multi-user facility to accommodate microgravity science experiments on board Destiny, the U.S. Laboratory Module for the International Space Station (ISS). The FCF will be a permanent facility aboard the ISS, and will be capable of accommodating up to ten science investigations per year. It will support the NASA Science and Technology Research Plans for the International Space Station (ISS), which require sustained systematic research of the effects of reduced gravity in the areas of fluid physics and combustion science. From left to right are the Combustion Integrated Rack, the Shared Rack, and the Fluids Integrated Rack. The FCF is being developed by the Microgravity Science Division (MSD) at the NASA Glenn Research Center. (Photo Credit: NASA/Marshall Space Flight Center)

  9. Kennedy Space Center Launch and Landing Support

    NASA Technical Reports Server (NTRS)

    Wahlberg, Jennifer

    2010-01-01

    The presentation describes Kennedy Space Center (KSC) payload processing, facilities and capabilities, and research development and life science experience. Topics include launch site processing, payload processing, key launch site processing roles, leveraging KSC experience, Space Station Processing Facility and capabilities, Baseline Data Collection Facility, Space Life Sciences Laboratory and capabilities, research payload development, International Space Station research flight hardware, KSC flight payload history, and KSC life science expertise.

  10. A framework for multi-stakeholder decision-making and ...

    EPA Pesticide Factsheets

    We propose a decision-making framework to compute compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives. In our setting, we shape the stakeholder dissatisfaction distribution by solving a conditional-value-at-risk (CVaR) minimization problem. The CVaR problem is parameterized by a probability level that shapes the tail of the dissatisfaction distribution. The proposed approach allows us to compute a family of compromise solutions and generalizes multi-stakeholder settings previously proposed in the literature that minimize average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework that involve complex decision-making processes. We demonstrate the developments using a biowaste facility location case study in which we seek to balance stakeholder priorities on transportation, safety, water quality, and capital costs. This manuscript describes the methodology of a new decision-making framework that computes compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives as needed for the SHC Decision Science and Support Tools project. A biowaste facility location is employed as the case study.
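    For orientation, one standard way to write the CVaR minimization the abstract describes is the sample-average Rockafellar-Uryasev form below; the notation (dissatisfaction functions d_s, probability level alpha) is illustrative and not necessarily that of the paper.

        % Sample-average Rockafellar-Uryasev form of the CVaR minimization
        % d_s(x): dissatisfaction of stakeholder s under decision x, s = 1..S
        % alpha in [0,1): probability level shaping the tail of the distribution
        \begin{equation*}
          \min_{x \in X,\; t \in \mathbb{R}} \;
          t + \frac{1}{1-\alpha} \cdot \frac{1}{S} \sum_{s=1}^{S} \max\bigl(0,\, d_s(x) - t\bigr)
        \end{equation*}

    In this form, letting alpha tend to 0 recovers minimization of the average dissatisfaction, while alpha approaching 1 approaches the worst-case (minimax) problem, consistent with the abstract's statement that the framework generalizes both settings.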

  11. The European Plate Observing System (EPOS): Integrating Thematic Services for Solid Earth Science

    NASA Astrophysics Data System (ADS)

    Atakan, Kuvvet; Bailo, Daniele; Consortium, Epos

    2016-04-01

    The mission of EPOS is to monitor and understand the dynamic and complex Earth system by relying on new e-science opportunities and integrating diverse and advanced Research Infrastructures in Europe for solid Earth Science. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunami as well as the processes driving tectonics and Earth's surface dynamics. Through integration of data, models and facilities EPOS will allow the Earth Science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources as well as Earth sciences applications to the environment and to human welfare. EPOS, during its Implementation Phase (EPOS-IP), will integrate multidisciplinary data into a single e-infrastructure. Multidisciplinary data are organized and governed by the Thematic Core Services (TCS) and are driven by various scientific communities encompassing a wide spectrum of Earth science disciplines. These include Data, Data-products, Services and Software (DDSS), from seismology, near fault observatories, geodetic observations, volcano observations, satellite observations, geomagnetic observations, as well as data from various anthropogenic hazard episodes, geological information and modelling. In addition, transnational access to multi-scale laboratories and geo-energy test-beds for low-carbon energy will be provided. TCS DDSS will be integrated into Integrated Core Services (ICS), a platform that will ensure their interoperability and access to these services by the scientific community as well as other users within the society. This requires dedicated tasks for interactions with the various TCS-WPs, as well as the various distributed ICS (ICS-Ds), such as High Performance Computing (HPC) facilities, large scale data storage facilities, complex processing and visualization tools etc. Computational Earth Science (CES) services are identified as a transversal activity and are planned to be harmonized and provided within the ICS. Currently, a comprehensive requirements and use-case elicitation process has been started through interactions with the ten different Thematic Core Service work packages. The results of this will be used to harmonize the DDSS elements and prepare for interoperability across the various disciplines. For this purpose a dedicated workshop is planned where the representatives of all the TCS communities will jointly discuss and agree upon the harmonization process. The technical integration of the DDSS elements to a metadata structure adopting CERIF (Common European Research Information Format) standards will start after the harmonization process is completed. The varying levels of maturity in the handling and availability of TCS-specific DDSS elements among the different TCS groups are one of the most challenging aspects of this integration. For this reason, a roadmap for integration is being prepared in which the most mature DDSS elements will be implemented during the next two years after a community-driven testing and validation process. Integration of the remaining DDSS elements will be a continuously evolving process in the coming years.

  12. A survey of advancements in nucleic acid-based logic gates and computing for applications in biotechnology and biomedicine.

    PubMed

    Wu, Cuichen; Wan, Shuo; Hou, Weijia; Zhang, Liqin; Xu, Jiehua; Cui, Cheng; Wang, Yanyue; Hu, Jun; Tan, Weihong

    2015-03-04

    Nucleic acid-based logic devices were first introduced in 1994. Since then, science has seen the emergence of new logic systems for mimicking mathematical functions, diagnosing disease and even imitating biological systems. The unique features of nucleic acids, such as facile and high-throughput synthesis, Watson-Crick complementary base pairing, and predictable structures, together with the aid of programming design, have led to the widespread applications of nucleic acids (NA) for logic gates and computing in biotechnology and biomedicine. In this feature article, the development of in vitro NA logic systems will be discussed, as well as the expansion of such systems using various input molecules for potential cellular, or even in vivo, applications.

  13. A Survey of Advancements in Nucleic Acid-based Logic Gates and Computing for Applications in Biotechnology and biomedicine

    PubMed Central

    Wu, Cuichen; Wan, Shuo; Hou, Weijia; Zhang, Liqin; Xu, Jiehua; Cui, Cheng; Wang, Yanyue; Hu, Jun

    2015-01-01

    Nucleic acid-based logic devices were first introduced in 1994. Since then, science has seen the emergence of new logic systems for mimicking mathematical functions, diagnosing disease and even imitating biological systems. The unique features of nucleic acids, such as facile and high-throughput synthesis, Watson-Crick complementary base pairing, and predictable structures, together with the aid of programming design, have led to the widespread applications of nucleic acids (NA) for logic gating and computing in biotechnology and biomedicine. In this feature article, the development of in vitro NA logic systems will be discussed, as well as the expansion of such systems using various input molecules for potential cellular, or even in vivo, applications. PMID:25597946

  14. KSC-97PC1404

    NASA Image and Video Library

    1997-09-23

    Technicians at the SPACEHAB Payload Processing Facility in Cape Canaveral prepare a Russian replacement computer for stowage aboard the Space Shuttle Atlantis shortly before the scheduled launch of Mission STS-86, slated to be the seventh docking of the Space Shuttle with the Russian Space Station Mir. The last-minute cargo addition requested by the Russians will be mounted on the aft bulkhead of the SPACEHAB Double Module, which is being used as a pressurized cargo container for science/logistical equipment and supplies that will be exchanged between Atlantis and the Mir. Using the Module Vertical Access Kit (MVAC), technicians will be lowered inside the module to install the computer for flight. Liftoff of STS-86 is scheduled Sept. 25 at 10:34 p.m. from Launch Pad 39A

  15. Supercomputing Sheds Light on the Dark Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Heitmann, Katrin

    2012-11-15

    At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named a finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.

  16. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics including Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects to mature the system for production use.
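    Illustrative only: the minimum end-to-end delay that such a mapping and scheduling step works against can be framed as the longest compute-plus-transfer path through the workflow DAG, as in the small Python sketch below. The task names and costs are hypothetical, and the calculation is a textbook critical-path bound rather than SWAMP's actual algorithm.

        # Critical-path (longest-path) bound on end-to-end delay for a workflow DAG.
        # Node weights are compute costs; edge weights are data-transfer costs (seconds).
        from functools import lru_cache

        workflow = {                      # task -> [(successor, transfer_cost), ...]
            "acquire":   [("filter", 2.0)],
            "filter":    [("simulate", 5.0), ("catalog", 1.0)],
            "simulate":  [("visualize", 8.0)],
            "catalog":   [("visualize", 0.5)],
            "visualize": [],
        }
        compute_cost = {"acquire": 10, "filter": 4, "simulate": 120,
                        "catalog": 2, "visualize": 15}

        @lru_cache(maxsize=None)
        def end_to_end_delay(task):
            """Longest compute+transfer path from `task` to any sink."""
            downstream = [cost + end_to_end_delay(nxt) for nxt, cost in workflow[task]]
            return compute_cost[task] + (max(downstream) if downstream else 0)

        print(end_to_end_delay("acquire"))   # 164.0 for the hypothetical costs above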

  17. GET21: Geoinformatics Training and Education for the 21st Century Geoscience Workforce

    NASA Astrophysics Data System (ADS)

    Baru, C.; Allison, L.; Fox, P.; Keane, C.; Keller, R.; Richard, S.

    2012-04-01

    The integration of advanced information technologies (referred to as cyberinfrastructure) into scientific research and education creates a synergistic situation. On the one hand, science begins to move at the speed of information technology, with science applications having to move rapidly to keep apace with the latest innovations in hardware and software. On the other hand, information technology moves at the pace of science, requiring rapid prototyping and rapid development of software and systems to serve the immediate needs of the application. The 21st century geoscience workforce must be adept at both sides of this equation to be able to make the best use of the available cyber-tools for their science and education endeavors. To reach different segments of the broad geosciences community, an education program in geoinformatics must be multi-faceted, ranging from areas dealing with modeling, computational science, and high performance computing, to those dealing with data collection, data science, and data-intensive computing. Based on our experience in geoinformatics and data science education, we propose a multi-pronged approach with a number of different components, including summer institutes typically aimed at graduate students, postdocs and researchers; graduate and undergraduate curriculum development in geoinformatics; development of online course materials to facilitate asynchronous learning, especially for geoscience professionals in the field; provision of internship at geoinformatics-related facilities for graduate students, so that they can observe and participate in geoinformatics "in action"; creation of online communities and networks to facilitate planned as well as serendipitous collaborations and for linking users with experts in the different areas of geoscience and geoinformatics. We will describe some of our experiences and the lessons learned over the years from the Cyberinfrastructure Summer Institute for Geoscientists (CSIG), which is a 1-week institute that has been held each summer (August) at the San Diego Supercomputer Center, University of California, San Diego, since 2005. We will also discuss these opportunities for GET21 and geoinformatics education in the context of the newly launched EarthCube initiative at the US National Science Foundation.

  18. M4AST - A Tool for Asteroid Modelling

    NASA Astrophysics Data System (ADS)

    Birlan, Mirel; Popescu, Marcel; Irimiea, Lucian; Binzel, Richard

    2016-10-01

    M4AST (Modelling for asteroids) is an online tool devoted to the analysis and interpretation of reflection spectra of asteroids in the visible and near-infrared spectral intervals. It consists of a spectral database of individual objects and a set of analysis routines that address scientific aspects such as taxonomy, curve matching with laboratory spectra, space weathering models, and mineralogical diagnosis. Spectral data were obtained using ground-based facilities; part of these data are compiled from the literature [1]. The database is composed of permanent and temporary files. Each permanent file contains a header and two or three columns (wavelength, spectral reflectance, and the error on spectral reflectance). Temporary files can be uploaded anonymously and are purged to respect the proprietary nature of the submitted data. The computing routines are organized to accomplish several scientific objectives: visualize spectra, compute the asteroid taxonomic class, compare an asteroid spectrum with similar spectra of meteorites, and compute mineralogical parameters. A facility for using the Virtual Observatory protocols was also developed. A new version of the service was released in June 2016. This new release of M4AST contains a database and facilities to model more than 6,000 spectra of asteroids. A new web interface was designed, providing new functionalities in a user-friendly environment. A bridge system for accessing and exploiting the SMASS-MIT database (http://smass.mit.edu) allows the treatment and analysis of these data within the M4AST environment. Reference: [1] M. Popescu, M. Birlan, and D.A. Nedelcu, "Modeling of asteroids: M4AST," Astronomy & Astrophysics 544, A130, 2012.
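    A rough sketch of the kind of curve matching the service performs is given below in Python. The file layout (wavelength, reflectance, optional error columns) follows the description above, but the chi-square metric, the 1.25-micron normalization wavelength, and the default 5% error are illustrative assumptions rather than M4AST's actual routines.

        # Illustrative asteroid-meteorite spectral curve matching (not the M4AST code).
        import numpy as np

        def load_spectrum(path):
            """Read a plain-text spectrum: wavelength, reflectance[, error] columns."""
            data = np.loadtxt(path)
            wave, refl = data[:, 0], data[:, 1]
            err = data[:, 2] if data.shape[1] > 2 else np.full_like(refl, 0.05)
            return wave, refl, err

        def chi_square(asteroid_file, meteorite_file, norm_wave=1.25):
            """Chi-square-like distance between an asteroid and a laboratory spectrum."""
            wa, ra, ea = load_spectrum(asteroid_file)
            wm, rm, _ = load_spectrum(meteorite_file)
            rm = np.interp(wa, wm, rm)                 # resample onto the asteroid grid
            ra_n = ra / np.interp(norm_wave, wa, ra)   # normalize both spectra at norm_wave
            rm_n = rm / np.interp(norm_wave, wa, rm)
            return float(np.mean(((ra_n - rm_n) / ea) ** 2))

        def best_matches(asteroid_file, meteorite_files, n=5):
            """Rank laboratory spectra by distance to the asteroid spectrum."""
            return sorted((chi_square(asteroid_file, m), m) for m in meteorite_files)[:n]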

  19. Materials Science Research Hardware for Application on the International Space Station: an Overview of Typical Hardware Requirements and Features

    NASA Technical Reports Server (NTRS)

    Schaefer, D. A.; Cobb, S.; Fiske, M. R.; Srinivas, R.

    2000-01-01

    NASA's Marshall Space Flight Center (MSFC) is the lead center for Materials Science Microgravity Research. The Materials Science Research Facility (MSRF) is a key development effort underway at MSFC. The MSRF will be the primary facility for microgravity materials science research on board the International Space Station (ISS) and will implement the NASA Materials Science Microgravity Research Program. It will operate in the U.S. Laboratory Module and support U.S. Microgravity Materials Science Investigations. This facility is being designed to maintain the momentum of the U.S. role in microgravity materials science and support NASA's Human Exploration and Development of Space (HEDS) Enterprise goals and objectives for Materials Science. The MSRF as currently envisioned will consist of three Materials Science Research Racks (MSRR), which will be deployed to the International Space Station (ISS) in phases. Each rack is being designed to accommodate various Experiment Modules, which comprise processing facilities for peer-selected Materials Science experiments. Phased deployment will enable early opportunities for the U.S. and International Partners, and support the timely incorporation of technology updates to the Experiment Modules and sensor devices.

  20. Science Facilities Design Guidelines.

    ERIC Educational Resources Information Center

    Maryland State Dept. of Education, Baltimore.

    These guidelines, presented in five chapters, propose a framework to support the planning, designing, constructing, and renovating of school science facilities. Some program issues to be considered in the articulation of a science program include environmental concerns, interdisciplinary approaches, space flexibility, and electronic…

  1. Active Oxygen Vacancy Site for Methanol Synthesis from CO2 Hydrogenation on In2O3(110): A DFT Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Jingyun; Liu, Changjun; Mei, Donghai

    2013-06-03

    Methanol synthesis from CO2 hydrogenation on the defective In2O3(110) surface with surface oxygen vacancies has been investigated using periodic density functional theory calculations. The relative stabilities of six possible surface oxygen vacancies numbered from Ov1 to Ov6 on the perfect In2O3(110) surface were examined. The calculated oxygen vacancy formation energies show that the D1 surface with the Ov1 defective site is the most thermodynamically favorable while the D4 surface with the Ov4 defective site is the least stable. Two different methanol synthesis routes from CO2 hydrogenation over both D1 and D4 surfaces were studied and the D4 surface was found to be more favorable for CO2 activation and hydrogenation. On the D4 surface, one of the O atoms of the CO2 molecule fills in the Ov4 site upon adsorption. Hydrogenation of CO2 to HCOO on the D4 surface is both thermodynamically and kinetically favorable. Further hydrogenation of HCOO involves both forming the C-H bond and breaking the C-O bond, resulting in H2CO and hydroxyl. The HCOO hydrogenation is slightly endothermic with an activation barrier of 0.57 eV. A high barrier of 1.14 eV for the hydrogenation of H2CO to H3CO indicates that this step is the rate-limiting step in the methanol synthesis on the defective In2O3(110) surface. We gratefully acknowledge the support from the National Natural Science Foundation of China (#20990223) and from the US Department of Energy, Basic Energy Science program (DE-FG02-05ER46231). D. Mei was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. The computations were performed in part using the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), which is a U.S. Department of Energy national scientific user facility located at Pacific Northwest National Laboratory in Richland, Washington. PNNL is a multiprogram national laboratory operated for DOE by Battelle.
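    For readers unfamiliar with the quantity, oxygen-vacancy formation energies of the kind compared above are conventionally computed from total energies as in the expression below, using gas-phase O2 as the oxygen reservoir; this is shown only for orientation, and the paper's exact reference state and convention are not restated here.

        % Conventional DFT oxygen-vacancy formation energy (O2 reservoir)
        \begin{equation*}
          E_{\mathrm{f}}[\mathrm{O_v}] =
          E_{\mathrm{tot}}\bigl[\text{defective In}_2\text{O}_3(110)\bigr]
          + \tfrac{1}{2}\,E_{\mathrm{tot}}[\mathrm{O_2}]
          - E_{\mathrm{tot}}\bigl[\text{perfect In}_2\text{O}_3(110)\bigr]
        \end{equation*}

    A lower formation energy corresponds to a more easily created (more thermodynamically favorable) vacancy site, which is the sense in which the Ov1 and Ov4 sites are compared above.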

  2. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    Rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and universities’ clusters scattered over a large area aim to unite their resources for future productive work, while at the same time providing an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and University clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and a reshaping of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.

  3. Site Characterization Report (Building 202). Volume 2. Appendices A-H.

    DTIC Science & Technology

    1996-04-01

    Bionetics, Groundwater and Wells; Environmental Science and Engineering, Inc., Installation Assessment of ERADCOM Activities; Environmental Science and Engineering, Inc., Plan for the Assessment of Contamination at Woodbridge Research Facility; Environmental Science and Engineering, Inc., Remedial Action Plan for the Woodbridge Research Facility PCB Disposal Site; Environmental Science and Engineering, Inc., Remedial Investigation and ...

  4. Space infrared telescope facility project

    NASA Technical Reports Server (NTRS)

    Cruikshank, Dale P.

    1988-01-01

    The functions undertaken during this reporting period were: to inform the planetary science community of the progress and status of the Space Infrared Telescope Facility (SIRTF) Project; to solicit input from the planetary science community on the needs and requirements of planetary science in the use of SIRTF at such time as it becomes an operational facility; and to prepare a white paper on the use of SIRTF for solar system studies.

  5. Integrated Computational Materials Engineering for Magnesium in Automotive Body Applications

    NASA Astrophysics Data System (ADS)

    Allison, John E.; Liu, Baicheng; Boyle, Kevin P.; Hector, Lou; McCune, Robert

    This paper provides an overview and progress report for an international collaborative project which aims to develop an ICME infrastructure for magnesium for use in automotive body applications. Quantitative processing-microstructure-property relationships are being developed for extruded Mg alloys, sheet-formed Mg alloys and high-pressure die-cast Mg alloys. These relationships are captured in computational models which are then linked with manufacturing process simulation and used to provide constitutive models for component performance analysis. The long-term goal is to capture this information in efficient computational models and in a web-centered knowledge base. The work is being conducted at leading universities, national labs and industrial research facilities in the US, China and Canada. This project is sponsored by the U.S. Department of Energy, the U.S. Automotive Materials Partnership (USAMP), the Chinese Ministry of Science and Technology (MOST) and Natural Resources Canada (NRCan).

  6. NASA Astrophysics Data System (ADS)

    Knosp, B.; Neely, S.; Zimdars, P.; Mills, B.; Vance, N.

    2007-12-01

    The Microwave Limb Sounder (MLS) Science Computing Facility (SCF) stores over 50 terabytes of data, has over 240 computer processing hosts, and serves 64 users from around the world. These resources are spread over three primary geographical locations - the Jet Propulsion Laboratory (JPL), Raytheon RIS, and the New Mexico Institute of Mining and Technology (NMT). A need for a grid network system was identified and defined to solve the problem of users competing for finite, and increasingly scarce, MLS SCF computing resources. Using Sun's Grid Engine software, a grid network was successfully created in a development environment that connected the JPL and Raytheon sites, established master and slave hosts, and demonstrated that transfer queues for jobs can work among multiple clusters in the same grid network. This poster will first describe MLS SCF resources and the lessons that were learned in the design and development phase of this project. It will then discuss the test environment and plans for deployment by highlighting benchmarks and user experiences.
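
    Job submission to a Grid Engine cluster of this kind is commonly scripted through the DRMAA interface; the following sketch assumes the open-source drmaa Python bindings, and the command, arguments and queue name are hypothetical placeholders rather than the actual MLS SCF configuration:

      import drmaa

      # Submit one (hypothetical) retrieval job to a Grid Engine queue via DRMAA
      # and block until it finishes.
      with drmaa.Session() as session:
          jt = session.createJobTemplate()
          jt.remoteCommand = "/path/to/run_retrieval.sh"   # hypothetical script
          jt.args = ["--chunk", "42"]
          jt.nativeSpecification = "-q jpl.q"              # hypothetical queue name
          job_id = session.runJob(jt)
          info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
          print("job", job_id, "exited with status", info.exitStatus)
          session.deleteJobTemplate(jt)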

  7. The European HST Science Data Archive [and Data Management Facility (DMF)]

    NASA Technical Reports Server (NTRS)

    Pasian, F.; Pirenne, B.; Albrecht, R.; Russo, G.

    1993-01-01

    The paper describes the European HST Science Data Archive. Particular attention is given to the flow from the HST spacecraft to the Science Data Archive at the Space Telescope European Coordinating Facility (ST-ECF); the archiving system at the ST-ECF, including the hardware and software system structure; the operations at the ST-ECF and differences with the Data Management Facility; and the current developments. A diagram of the logical structure and data flow of the system managing the European HST Science Data Archive is included.

  8. White paper on science operations

    NASA Technical Reports Server (NTRS)

    Schreier, Ethan J.

    1991-01-01

    Major changes are taking place in the way astronomy gets done. There are continuing advances in observational capabilities across the frequency spectrum, involving both ground-based and space-based facilities. There is also very rapid evolution of relevant computing and data management technologies. However, although the new technologies are filtering into the astronomy community, and astronomers are looking at their computing needs in new ways, there is little coordination or coherent policy. Furthermore, although there is great awareness of the evolving technologies in the arena of operations, much of the existing operations infrastructure is ill-suited to take advantage of them. Astronomy, especially space astronomy, has often been at the cutting edge of computer use in data reduction and image analysis, but has been somewhat removed from advanced applications in operations, which have tended to be implemented by industry rather than by the end-user scientists. The purpose of this paper is threefold. First, we briefly review the background and general status of astronomy-related computing. Second, we make recommendations in three areas: data analysis; operations (directed primarily to NASA-related activities); and issues of management and policy, believing that these must be addressed to enable technological progress and to proceed through the next decade. Finally, we recommend specific NASA-related work as part of the Astrotech-21 plans, to enable better science operations for the Great Observatories and in the lunar outpost era.

  9. Phytozome Comparative Plant Genomics Portal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodstein, David; Batra, Sajeev; Carlson, Joseph

    2014-09-09

    The Dept. of Energy Joint Genome Institute is a genomics user facility supporting DOE mission science in the areas of Bioenergy, Carbon Cycling, and Biogeochemistry. The Plant Program at the JGI applies genomic, analytical, computational and informatics platforms and methods to: (1) understand and accelerate the improvement (domestication) of bioenergy crops; (2) characterize and moderate plant response to climate change; (3) use comparative genomics to identify constrained elements and infer gene function; (4) build high-quality genomic resource platforms of JGI Plant Flagship genomes for functional and experimental work; and (5) expand functional genomic resources for Plant Flagship genomes.

  10. KSC-07pd1239

    NASA Image and Video Library

    2007-05-17

    KENNEDY SPACE CENTER, FLA. -- In the Astrotech Space Operations facility, Orbital Sciences technicians install a computer chip on the Dawn spacecraft. The silicon chip, about the size of an American five-cent coin, holds the names of more than 360,000 space enthusiasts worldwide who signed up to participate in a virtual voyage to the asteroid belt. Dawn's mission is to explore two of the asteroid belt's most intriguing and dissimilar occupants: asteroid Vesta and the dwarf planet Ceres. Dawn is scheduled to launch June 30 from Launch Complex 17-B. Photo credit: NASA/Jim Grossmann

  11. Automation of internal library operations in academic health sciences libraries: a state of the art report.

    PubMed Central

    Grefsheim, S F; Larson, R H; Bader, S A; Matheson, N W

    1982-01-01

    A survey of automated records management in the United States and Canada was developed to identify existing on-line library systems and technical expertise. Follow-up interviews were conducted with ten libraries. Tables compare the features and availability of four mainframe and four minicomputer systems. Results showed: a trend toward vendor-supplied systems; little coordination of efforts among schools; current system developments generally on a university-wide basis; and the importance of the cooperation of campus computer facilities to the success of automation efforts. PMID:7066571

  12. Microbes to Biomes at Berkeley Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-10-28

    Microbes are the Earth's most abundant and diverse form of life. Berkeley Lab's Microbes to Biomes initiative -- which will take advantage of research expertise at the Joint Genome Institute, Advanced Light Source, Molecular Foundry, and the new computational science facility -- is designed to explore and reveal the interactions of microbes with one another and with their environment. Microbes power our planet's biogeochemical cycles, provide nutrients to our plants, purify our water, are integral to keeping the human body free of disease, and may hold the key to the Earth's future.

  13. Laboratory Directed Research and Development FY2010 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, K J

    2011-03-22

    A premier applied-science laboratory, Lawrence Livermore National Laboratory (LLNL) has at its core a primary national security mission - to ensure the safety, security, and reliability of the nation's nuclear weapons stockpile without nuclear testing, and to prevent and counter the spread and use of weapons of mass destruction: nuclear, chemical, and biological. The Laboratory uses the scientific and engineering expertise and facilities developed for its primary mission to pursue advanced technologies to meet other important national security needs - homeland defense, military operations, and missile defense, for example - that evolve in response to emerging threats. For broader national needs, LLNL executes programs in energy security, climate change and long-term energy needs, environmental assessment and management, bioscience and technology to improve human health, and breakthroughs in fundamental science and technology. With this multidisciplinary expertise, the Laboratory serves as a science and technology resource to the U.S. government and as a partner with industry and academia. This annual report discusses the following topics: (1) Advanced Sensors and Instrumentation; (2) Biological Sciences; (3) Chemistry; (4) Earth and Space Sciences; (5) Energy Supply and Use; (6) Engineering and Manufacturing Processes; (7) Materials Science and Technology; (8) Mathematics and Computing Science; (9) Nuclear Science and Engineering; and (10) Physics.

  14. Discover the Cosmos - Bringing Cutting Edge Science to Schools across Europe

    NASA Astrophysics Data System (ADS)

    Doran, Rosa

    2015-03-01

    The fast-growing number of science data repositories is opening enormous possibilities to scientists all over the world. The emergence of citizen science projects is engaging large numbers of citizens globally in science discovery. Astronomical research is now a possibility for anyone with a computer and some form of data access. This opens a very interesting and strategic possibility to engage large audiences in the making and understanding of science. From another perspective, it is natural to imagine that soon enough data mining will be an active part of the academic path of university or even secondary school students. The possibility is very exciting but the road is not very promising. Even in the most developed nations, where all schools are equipped with modern ICT facilities, the use of such possibilities is still rare. The Galileo Teacher Training Program (GTTP), a legacy of IYA2009, is participating in some of the most emblematic projects funded by the European Commission targeting modern tools, resources and methodologies for science teaching. One of these projects is Discover the Cosmos, which aims to address this issue by empowering educators with the necessary skills to embark on this innovative path: teaching science while doing science.

  15. YALINA facility a sub-critical Accelerator- Driven System (ADS) for nuclear energy research facility description and an overview of the research program (1997-2008).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Smith, D. L.; Nuclear Engineering Division

    2010-04-28

    The YALINA facility is a zero-power, sub-critical assembly driven by a conventional neutron generator. It was conceived, constructed, and put into operation at the Radiation Physics and Chemistry Problems Institute of the National Academy of Sciences of Belarus located in Minsk-Sosny, Belarus. This facility was conceived for the purpose of investigating the static and dynamic neutronics properties of accelerator-driven sub-critical systems, and to serve as a neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinide nuclei. This report provides a detailed description of the facility and documents the progress of research carried out there during a period of approximately a decade, from the time the facility was conceived and built until the end of 2008. During its history of development and operation to date (1997-2008), the YALINA facility has hosted several foreign groups that worked with the resident staff as collaborators. The participation of Argonne National Laboratory in the YALINA research programs commenced in 2005. For obvious reasons, special emphasis is placed in this report on the work at the YALINA facility that has involved Argonne's participation. Attention is given to the experimental program at the YALINA facility as well as to analytical investigations aimed at validating codes and computational procedures and at providing a better understanding of the physics and operational behavior of the YALINA facility in particular, and ADS systems in general, during the period 1997-2008.

  16. Principles for Integrating Mars Analog Science, Operations, and Technology Research

    NASA Technical Reports Server (NTRS)

    Clancey, William J.

    2003-01-01

    During the Apollo program, the scientific community and NASA used terrestrial analog sites for understanding planetary features and for training astronauts to be scientists. Human factors studies (Harrison, Clearwater, & McKay 1991; Stuster 1996) have focused on the effects of isolation in extreme environments. More recently, with the advent of wireless computing, we have prototyped advanced EVA technologies for navigation, scheduling, and science data logging (Clancey 2002b; Clancey et al., in press). Combining these interests in a single expedition enables tremendous synergy and authenticity, as pioneered by Pascal Lee's Haughton-Mars Project (Lee 2001; Clancey 2000a) and the Mars Society's research stations on a crater rim on Devon Island in the High Canadian Arctic (Clancey 2000b; 2001b) and in the Morrison Formation of southeast Utah (Clancey 2002a). Based on this experience, the following principles are proposed for conducting an integrated science, operations, and technology research program at analog sites: 1) Authentic work; 2) PI-based projects; 3) Unencumbered baseline studies; 4) Closed simulations; and 5) Observation and documentation. Following these principles, we have been integrating field science, operations research, and technology development at analog sites on Devon Island and in Utah over the past five years. Analytic methods include work practice simulation (Clancey 2002c; Sierhuis et al., 2000a;b), by which the interaction of human behavior, facilities, geography, tools, and procedures is formalized in computer models. These models are then converted into the runtime EVA system we call mobile agents (Clancey 2002b; Clancey et al., in press). Furthermore, we have found that the Apollo Lunar Surface Journal (Jones, 1999) provides a vast repository for understanding astronaut and CapCom interactions, serving as a baseline for Mars operations and quickly highlighting opportunities for computer automation (Clancey, in press).

  17. Twenty-Five Year Site Plan FY2013 - FY2037

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, William H.

    2012-07-12

    Los Alamos National Laboratory (the Laboratory) is the nation's premier national security science laboratory. Its mission is to develop and apply science and technology to ensure the safety, security, and reliability of the United States (U.S.) nuclear stockpile; reduce the threat of weapons of mass destruction, proliferation, and terrorism; and solve national problems in defense, energy, and the environment. The fiscal year (FY) 2013-2037 Twenty-Five Year Site Plan (TYSP) is a vital component for planning to meet the National Nuclear Security Administration (NNSA) commitment to ensure the U.S. has a safe, secure, and reliable nuclear deterrent. The Laboratory also uses the TYSP as an integrated planning tool to guide development of an efficient and responsive infrastructure that effectively supports the Laboratory's missions and workforce. Emphasizing the Laboratory's core capabilities, this TYSP reflects the Laboratory's role as a prominent contributor to NNSA missions through its programs and campaigns. The Laboratory is aligned with Nuclear Security Enterprise (NSE) modernization activities outlined in the NNSA Strategic Plan (May 2011), which include: (1) ensuring laboratory plutonium space effectively supports pit manufacturing and enterprise-wide special nuclear materials consolidation; (2) constructing the Chemistry and Metallurgy Research Replacement Nuclear Facility (CMRR-NF); (3) establishing shared user facilities to more cost-effectively manage high-value experimental, computational and production capabilities; and (4) modernizing enduring facilities while reducing the excess facility footprint. This TYSP is viewed by the Laboratory as a vital planning tool to develop an efficient and responsive infrastructure. Long-range facility and infrastructure development planning is critical to assure sustainment and modernization. Out-year re-investment is essential for sustaining existing facilities, and will be re-evaluated on an annual basis. At the same time, major modernization projects will require new line-item funding. This document is, in essence, a roadmap that defines a path forward for the Laboratory to modernize, streamline, consolidate, and sustain its infrastructure to meet its national security mission.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, Michel; Archer, Bill; Hendrickson, Bruce

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  19. An economic and financial exploratory

    NASA Astrophysics Data System (ADS)

    Cincotti, S.; Sornette, D.; Treleaven, P.; Battiston, S.; Caldarelli, G.; Hommes, C.; Kirman, A.

    2012-11-01

    This paper describes the vision of a European Exploratory for economics and finance using an interdisciplinary consortium of economists, natural scientists, computer scientists and engineers, who will combine their expertise to address the enormous challenges of the 21st century. This academic public facility is intended for economic modelling, investigating all aspects of risk and stability, improving financial technology, and evaluating proposed regulatory and taxation changes. The European Exploratory for economics and finance will be constituted as a network of infrastructure, observatories, data repositories, services and facilities and will foster the creation of a new cross-disciplinary research community of social scientists, complexity scientists and computing (ICT) scientists to collaborate in investigating major issues in economics and finance. It is also considered a cradle for training and collaboration with the private sector to spur spin-offs and job creation in Europe in the finance and economic sectors. The Exploratory will allow social scientists and regulators as well as policy makers and the private sector to conduct realistic investigations with real economic, financial and social data. The Exploratory will (i) continuously monitor and evaluate the status of the economies of countries in their various components, (ii) use, extend and develop a large variety of methods including data mining, process mining, computational and artificial intelligence and other computer science and complexity science techniques coupled with economic theory and econometrics, and (iii) provide the framework and infrastructure to perform what-if analysis, scenario evaluations and computational, laboratory, field and web experiments to inform decision makers and help develop innovative policy, market and regulation designs.

  20. Microgravity

    NASA Image and Video Library

    2001-06-05

    This computer-generated image depicts the Materials Science Research Rack-1 (MSRR-1) being developed by NASA's Marshall Space Flight Center and the European Space Agency (ESA) for placement in the Destiny laboratory module aboard the International Space Station. The rack is part of the planned Materials Science Research Facility (MSRF) and is expected to include two furnace module inserts, a Quench Module Insert (being developed by NASA's Marshall Space Flight Center) to study directional solidification in rapidly cooled alloys and a Diffusion Module Insert (being developed by the European Space Agency) to study crystal growth, and a transparent furnace (being developed by NASA's Space Product Development program). Multi-user equipment in the rack is being developed under the auspices of NASA's Office of Biological and Physical Research (OBPR) and ESA. Key elements are labeled in other images (0101754, 0101830, and TBD).

  1. Microgravity

    NASA Image and Video Library

    2001-06-05

    This computer-generated image depicts the Materials Science Research Rack-1 (MSRR-1) being developed by NASA's Marshall Space Flight Center and the European Space Agency (ESA) for placement in the Destiny laboratory module aboard the International Space Station. The rack is part of the planned Materials Science Research Facility (MSRF) and is expected to include two furnace module inserts, a Quench Module Insert (being developed by NASA's Marshall Space Flight Center) to study directional solidification in rapidly cooled alloys and a Diffusion Module Insert (being developed by the European Space Agency) to study crystal growth, and a transparent furnace (being developed by NASA's Space Product Development program). Multi-user equipment in the rack is being developed under the auspices of NASA's Office of Biological and Physical Research (OBPR) and ESA. Key elements are labeled in other images (0101754, 0101829, 0101830).

  2. Microgravity

    NASA Image and Video Library

    2001-06-05

    This computer-generated image depicts the Materials Science Research Rack-1 (MSRR-1) being developed by NASA's Marshall Space Flight Center and the European Space Agency (ESA) for placement in the Destiny laboratory module aboard the International Space Station. The rack is part of the planned Materials Science Research Facility (MSRF) and is expected to include two furnace module inserts, a Quench Module Insert (being developed by NASA's Marshall Space Flight Center) to study directional solidification in rapidly cooled alloys and a Diffusion Module Insert (being developed by the European Space Agency) to study crystal growth, and a transparent furnace (being developed by NASA's Space Product Development program). Multi-user equipment in the rack is being developed under the auspices of NASA's Office of Biological and Physical Research (OBPR) and ESA. A larger image is available without labels (No. 0101755).

  3. Atomistic Simulations of High-intensity XFEL Pulses on Diffractive Imaging of Nano-sized System Dynamics

    NASA Astrophysics Data System (ADS)

    Ho, Phay; Knight, Christopher; Bostedt, Christoph; Young, Linda; Tegze, Miklos; Faigel, Gyula

    2016-05-01

    We have developed a large-scale atomistic computational method based on a combined Monte Carlo and Molecular Dynamics (MC/MD) approach to simulate XFEL-induced radiation damage dynamics of complex materials. The MD algorithm is used to propagate the trajectories of electrons, ions and atoms forward in time, and the quantum nature of interactions with an XFEL pulse is accounted for by a MC method that calculates probabilities of electronic transitions. Our code has good scalability with MPI/OpenMP parallelization, and it has been run on Mira, a petascale system at the Argonne Leadership Computing Facility, with particle numbers >50 million. Using this code, we have examined the impact of high-intensity 8-keV XFEL pulses on the x-ray diffraction patterns of argon clusters. The obtained patterns show strong pulse parameter dependence, providing evidence of significant lattice rearrangement and diffuse scattering. Real-space electronic reconstruction was performed using phase retrieval methods. We found that the structure of the argon cluster can be recovered with atomic resolution even in the presence of considerable radiation damage. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division.
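
    A minimal serial sketch of the combined MC/MD idea described above (not the authors' MPI/OpenMP production code; the time step, rate and data layout are illustrative assumptions):

      import math
      import random

      def mc_md_step(particles, dt, ionization_rate):
          """One combined Monte Carlo / molecular dynamics step (toy model).

          particles: list of dicts with 'pos', 'vel', 'force', 'mass', 'charge'.
          ionization_rate: assumed per-atom photoionization rate during the pulse.
          """
          # --- MD part: velocity-Verlet-style propagation of classical trajectories ---
          for p in particles:
              p["vel"] = [v + 0.5 * dt * f / p["mass"] for v, f in zip(p["vel"], p["force"])]
              p["pos"] = [x + dt * v for x, v in zip(p["pos"], p["vel"])]
          # (forces would be recomputed here from Coulomb and short-range interactions)
          for p in particles:
              p["vel"] = [v + 0.5 * dt * f / p["mass"] for v, f in zip(p["vel"], p["force"])]

          # --- MC part: quantum transitions applied stochastically ---
          prob = 1.0 - math.exp(-ionization_rate * dt)   # P(event in dt) = 1 - exp(-R*dt)
          for p in particles:
              if random.random() < prob:
                  p["charge"] += 1   # ionization event; a free electron would be spawned here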

  4. Factors influencing exemplary science teachers' levels of computer use

    NASA Astrophysics Data System (ADS)

    Hakverdi, Meral

    This study examines exemplary science teachers' use of technology in science instruction, factors influencing their level of computer use, their level of knowledge/skills in using specific computer applications for science instruction, their use of computer-related applications/tools during their instruction, and their students' use of computer applications/tools in or for their science class. After a review of the relevant literature, certain variables were selected for analysis. These variables included personal self-efficacy in teaching with computers, outcome expectancy, pupil-control ideology, level of computer use, age, gender, teaching experience, personal computer use, professional computer use and science teachers' level of knowledge/skills in using specific computer applications for science instruction. The sample for this study includes middle and high school science teachers who received the Presidential Award for Excellence in Science Teaching Award (sponsored by the White House and the National Science Foundation) between the years 1997 and 2003 from all 50 states and U.S. territories. Award-winning science teachers were contacted about the survey via e-mail or letter with an enclosed return envelope. Of the 334 award-winning science teachers, usable responses were received from 92 science teachers, a response rate of 27.5%. Analysis of the survey responses indicated that exemplary science teachers have a variety of knowledge/skills in using computer-related applications/tools. The most commonly used computer applications/tools are information retrieval via the Internet, presentation tools, online communication, digital cameras, and data collection probes. Results of the study revealed that students' use of technology in their science classroom is highly correlated with the frequency of their science teachers' use of computer applications/tools. The results of the multiple regression analysis revealed that personal self-efficacy was related to the exemplary science teachers' level of computer use, suggesting that computer use depends on perceived ability to use computers. The teachers' use of computer-related applications/tools during class, and their personal self-efficacy, age, and gender, were highly related to their level of knowledge/skills in using specific computer applications for science instruction. The teachers' level of knowledge/skills in using specific computer applications for science instruction and their gender were related to their use of computer-related applications/tools during class and to the students' use of computer-related applications/tools in or for their science class. In conclusion, exemplary science teachers need assistance in learning and using computer-related applications/tools in their science classes.
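
    A brief sketch of the kind of multiple regression reported above, using ordinary least squares from statsmodels; the file and column names are hypothetical placeholders for the survey variables listed in the abstract:

      import pandas as pd
      import statsmodels.api as sm

      # Hypothetical survey data: one row per award-winning teacher.
      df = pd.read_csv("teacher_survey.csv")

      predictors = ["self_efficacy", "outcome_expectancy", "pupil_control_ideology",
                    "age", "teaching_experience", "knowledge_skills"]
      X = sm.add_constant(df[predictors])       # add an intercept term
      y = df["level_of_computer_use"]

      model = sm.OLS(y, X, missing="drop").fit()
      print(model.summary())                    # coefficients, p-values, R-squared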

  5. Acid/base equilibria in clusters and their role in proton exchange membranes: Computational insight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glezakou, Vanda A; Dupuis, Michel; Mundy, Christopher J

    2007-10-24

    We describe molecular orbital theory and ab initio molecular dynamics studies of the acid/base equilibria of clusters AH:(H2O)n ↔ A-:H+(H2O)n in the low-hydration regime (n = 1-4), where AH is a model of perfluorinated sulfonic acids, RSO3H (R = CF3CF2), encountered in polymeric electrolyte membranes of fuel cells. Free energy calculations on the neutral and ion-pair structures for n = 3 indicate that the two configurations are close in energy and are accessible in the fluctuation dynamics of proton transport. For n = 1, 2 the only relevant configuration is the neutral form. This was verified through ab initio metadynamics simulations. These findings suggest that bases are directly involved in proton transport at low hydration levels. In addition, the gas-phase proton affinity of the model sulfonic acid RSO3H was found to be comparable to the proton affinity of water. Thus, protonated acids can also play a role in proton transport under low hydration conditions and at high proton concentrations. This work was supported by the Division of Chemical Sciences, Office of Basic Energy Sciences, US Department of Energy (DOE) under Contract DE-AC05-76RL01830. Computations were performed on computers of the Molecular Interactions and Transformations (MI&T) group and the MSCF facility of EMSL, sponsored by US DOE and OBER and located at PNNL. This work benefited from resources of the National Energy Research Scientific Computing Center, supported by the Office of Science of the US DOE under Contract No. DE-AC03-76SF00098.

  6. Central Computational Facility CCF communications subsystem options

    NASA Technical Reports Server (NTRS)

    Hennigan, K. B.

    1979-01-01

    A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.

  7. Academic Computing Facilities and Services in Higher Education--A Survey.

    ERIC Educational Resources Information Center

    Warlick, Charles H.

    1986-01-01

    Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…

  8. Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meisner, Robert; McCoy, Michel; Archer, Bill

    2013-09-11

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC's business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools.

  9. Argonne National Laboratory Annual Report of Laboratory Directed Research and Development program activities FY 2011.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Office of the Director

    As a national laboratory, Argonne concentrates on scientific and technological challenges that can only be addressed through a sustained, interdisciplinary focus at a national scale. Argonne's eight major initiatives, as enumerated in its strategic plan, are Hard X-ray Sciences, Leadership Computing, Materials and Molecular Design and Discovery, Energy Storage, Alternative Energy and Efficiency, Nuclear Energy, Biological and Environmental Systems, and National Security. The purposes of Argonne's Laboratory Directed Research and Development (LDRD) Program are to encourage the development of novel technical concepts, enhance the Laboratory's research and development (R and D) capabilities, and pursue its strategic goals. Projects are selected from proposals for creative and innovative R and D studies that require advance exploration before they are considered to be sufficiently developed to obtain support through normal programmatic channels. Among the aims of the projects supported by the LDRD Program are the following: establishment of engineering proof of principle, assessment of design feasibility for prospective facilities, development of instrumentation or computational methods or systems, and discoveries in fundamental science and exploratory development.

  10. NAIF Toolkit - Extended

    NASA Technical Reports Server (NTRS)

    Acton, Charles H., Jr.; Bachman, Nathaniel J.; Semenov, Boris V.; Wright, Edward D.

    2010-01-01

    The Navigation Ancillary Information Facility (NAIF) at JPL, acting under the direction of NASA's Office of Space Science, has built a data system named SPICE (Spacecraft, Planet, Instrument, C-matrix, Events) to assist scientists in planning and interpreting scientific observations. SPICE provides geometric and some other ancillary information needed to recover the full value of science instrument data, including correlation of individual instrument data sets with data from other instruments on the same or other spacecraft. This data system is used to produce space mission observation geometry data sets known as SPICE kernels. It is also used to read SPICE kernels and to compute derived quantities such as positions, orientations, lighting angles, etc. The SPICE toolkit consists of a subroutine/function library, executable programs (both large applications and simple utilities that focus on kernel management), and simple examples of using SPICE toolkit subroutines. This software is very accurate, thoroughly tested, and portable to all computers. It is extremely stable and reusable on all missions. Since the previous version, three significant capabilities have been added: an Interactive Data Language (IDL) interface, a MATLAB interface, and a geometric event finder subsystem.
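
    The toolkit libraries themselves are provided in Fortran and C (with the IDL and MATLAB interfaces noted above); purely as an informal illustration of the kind of derived-quantity computation described here, the community spiceypy wrapper can be driven along these lines (the meta-kernel file name is a hypothetical placeholder):

      import spiceypy as spice

      # Load a meta-kernel listing the SPK/LSK/PCK kernels for a mission (hypothetical name).
      spice.furnsh("mission_meta.tm")

      # Convert a UTC epoch to ephemeris time, then compute the position of Mars relative
      # to Earth in the J2000 frame with light-time and stellar aberration correction.
      et = spice.str2et("2007-12-01T00:00:00")
      pos, light_time = spice.spkpos("MARS", et, "J2000", "LT+S", "EARTH")
      print("Mars position (km):", pos, " one-way light time (s):", light_time)

      spice.kclear()  # unload all kernels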

  11. SPOKES: An end-to-end simulation facility for spectroscopic cosmological surveys

    DOE PAGES

    Nord, B.; Amara, A.; Refregier, A.; ...

    2016-03-03

    The nature of dark matter, dark energy and large-scale gravity poses some of the most pressing questions in cosmology today. These fundamental questions require highly precise measurements, and a number of wide-field spectroscopic survey instruments are being designed to meet this requirement. A key component in these experiments is the development of a simulation tool to forecast science performance, define requirement flow-downs, optimize implementation, demonstrate feasibility, and prepare for exploitation. We present SPOKES (SPectrOscopic KEn Simulation), an end-to-end simulation facility for spectroscopic cosmological surveys designed to address this challenge. SPOKES is based on an integrated infrastructure, modular function organization, coherent data handling and fast data access. These key features allow reproducibility of pipeline runs, enable ease of use and provide flexibility to update functions within the pipeline. The cyclic nature of the pipeline offers the possibility to make the science output an efficient measure for design optimization and feasibility testing. We present the architecture, first science, and computational performance results of the simulation pipeline. The framework is general, but for the benchmark tests we use the Dark Energy Spectrometer (DESpec), one of the early concepts for the upcoming project, the Dark Energy Spectroscopic Instrument (DESI). Finally, we discuss how the SPOKES framework enables a rigorous process to optimize and exploit spectroscopic survey experiments in order to derive high-precision cosmological measurements optimally.
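
    A schematic of the "integrated infrastructure, modular function organization, coherent data handling" pattern described above, written as generic Python rather than actual SPOKES code (the module and quantity names are invented):

      from typing import Callable, Dict, List

      DataBundle = Dict[str, object]   # one shared, coherent data store for the pipeline

      def generate_galaxy_catalog(data: DataBundle) -> None:
          data["catalog"] = [{"z_true": 0.001 * i} for i in range(1000)]

      def select_targets(data: DataBundle) -> None:
          data["targets"] = [g for g in data["catalog"] if g["z_true"] < 0.9]

      def simulate_spectra_and_measure_z(data: DataBundle) -> None:
          # Stand-in for throughput, sky-subtraction and redshift-fitting modules.
          data["z_measured"] = [g["z_true"] for g in data["targets"]]

      def run_pipeline(stages: List[Callable[[DataBundle], None]]) -> DataBundle:
          data: DataBundle = {}
          for stage in stages:          # each stage is independently swappable and re-runnable
              stage(data)
          return data

      result = run_pipeline([generate_galaxy_catalog,
                             select_targets,
                             simulate_spectra_and_measure_z])
      print(len(result["z_measured"]), "simulated redshift measurements")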

  12. 75 FR 39664 - Grant of Authority For Subzone Status Materials Science Technology, Inc. (Specialty Elastomers...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-12

    ... Status Materials Science Technology, Inc. (Specialty Elastomers and Fire Retardant Chemicals) Conroe... specialty elastomer manufacturing and distribution facility of Materials Science Technology, Inc., located... and distribution of specialty elastomers and fire retardant chemicals at the facility of Materials...

  13. GeoBrain Computational Cyber-laboratory for Earth Science Studies

    NASA Astrophysics Data System (ADS)

    Deng, M.; di, L.

    2009-12-01

    Computational approaches (e.g., computer-based data visualization, analysis and modeling) are critical for conducting increasingly data-intensive Earth science (ES) studies to understand functions and changes of the Earth system. However, currently Earth scientists, educators, and students face two major barriers that prevent them from effectively using computational approaches in their learning, research and application activities. The two barriers are: 1) difficulties in finding, obtaining, and using multi-source ES data; and 2) lack of analytic functions and computing resources (e.g., analysis software, computing models, and high performance computing systems) to analyze the data. Taking advantage of recent advances in cyberinfrastructure, Web service, and geospatial interoperability technologies, GeoBrain, a project funded by NASA, has developed a prototype computational cyber-laboratory to effectively remove the two barriers. The cyber-laboratory makes ES data and computational resources at large organizations in distributed locations available to and easily usable by the Earth science community through 1) enabling seamless discovery, access and retrieval of distributed data, 2) federating and enhancing data discovery with a catalogue federation service and a semantically-augmented catalogue service, 3) customizing data access and retrieval at user request with interoperable, personalized, and on-demand data access and services, 4) automating or semi-automating multi-source geospatial data integration, 5) developing a large number of analytic functions as value-added, interoperable, and dynamically chainable geospatial Web services and deploying them in high-performance computing facilities, 6) enabling the online geospatial process modeling and execution, and 7) building a user-friendly extensible web portal for users to access the cyber-laboratory resources. Users can interactively discover the needed data and perform on-demand data analysis and modeling through the web portal. The GeoBrain cyber-laboratory provides solutions to meet common needs of ES research and education, such as distributed data access and analysis services, easy access to and use of ES data, and enhanced geoprocessing and geospatial modeling capability. It greatly facilitates ES research, education, and applications. The development of the cyber-laboratory provides insights, lessons-learned, and technology readiness to build more capable computing infrastructure for ES studies, which can meet wide-range needs of current and future generations of scientists, researchers, educators, and students for their formal or informal educational training, research projects, career development, and lifelong learning.
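
    GeoBrain exposes data through interoperable geospatial web services; as one common realization of that pattern (assumed here for illustration, with a hypothetical endpoint URL rather than an actual GeoBrain service address), an OGC Web Coverage Service can be queried from Python with OWSLib:

      from owslib.wcs import WebCoverageService

      # Hypothetical OGC WCS endpoint -- illustrative only.
      wcs = WebCoverageService("http://example.org/geobrain/wcs", version="1.0.0")
      print(list(wcs.contents))   # coverages (data layers) offered by the server

      # Retrieve a GeoTIFF subset of the first coverage for a bounding box over the
      # conterminous United States at 0.05-degree resolution.
      first_layer = list(wcs.contents)[0]
      response = wcs.getCoverage(identifier=first_layer,
                                 bbox=(-125.0, 25.0, -66.0, 50.0),
                                 crs="EPSG:4326",
                                 format="GeoTIFF",
                                 resx=0.05, resy=0.05)
      with open("subset.tif", "wb") as fh:
          fh.write(response.read())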

  14. Programmable multi-zone furnace for microgravity research

    NASA Technical Reports Server (NTRS)

    Rosenthal, Bruce N.; Krolikowski, Cathryn R.

    1991-01-01

    In order to provide new furnace technology to accommodate microgravity research studies and commercial applications in materials processing, research has been initiated on the development of the Programmable Multi-Zone Furnace (PMZF). The PMZF is a multi-user materials processing furnace facility composed of thirty or more heater elements, arranged in series on a muffle tube or in a stacked ring-type configuration and independently controlled by a computer. One aim of the PMZF project is to allow furnace thermal gradient profiles to be reconfigured in response to investigators' requests without physical modification of the hardware. The future location of the PMZF facility is discussed; the preliminary science survey results and preliminary conceptual designs for the PMZF are presented; and a review of multi-zone furnace technology is given.

  15. Life Sciences Centrifuge Facility assessment

    NASA Technical Reports Server (NTRS)

    Benson, Robert H.

    1994-01-01

    This report provides an assessment of the status of the Centrifuge Facility being developed by ARC for flight on the International Space Station Alpha. The assessment includes technical status, schedules, budgets, project management, and performance of the facility relative to science requirements, and identifies risks and issues that need to be considered in future development activities.

  16. Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas C.

    2013-03-01

    The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.

  17. Dehydration pathways of 1-propanol on HZSM-5 in the presence and absence of water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhi, Yuchun; Shi, Hui; Mu, Linyu

    The Brønsted acid-catalyzed gas-phase dehydration of 1-propanol (0.075-4 kPa) was studied on zeolite H-MFI (Si/Al = 26, containing minimal amounts of extraframework Al moieties) in the absence and presence of co-fed water (0-2.5 kPa) at 413-443 K. It is shown that propene can be formed from monomeric and dimeric adsorbed 1-propanol. The stronger adsorption of 1-propanol relative to water indicates that the reduced dehydration rates in the presence of water are not a consequence of competitive adsorption between 1-propanol and water. Instead, the deleterious effect is related to the different extents of stabilization of adsorbed intermediates and the relevant elimination/substitution transition states by water. Water stabilizes the adsorbed 1-propanol monomer significantly more than the elimination transition state, leading to a higher activation barrier and a greater entropy gain for the rate-limiting step, which eventually leads to propene. In a similar manner, an excess of 1-propanol stabilizes the adsorbed state of 1-propanol more than the elimination transition state. In comparison with the monomer-mediated pathway, the adsorbed dimer and the relevant transition states for propene and ether formation are similarly, though less effectively, stabilized by intrazeolite water molecules. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences, and was performed in part using the Molecular Sciences Computing Facility (MSCF) in the William R. Wiley Environmental Molecular Sciences Laboratory, a DOE national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at the Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle for DOE.
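
    The competing enthalpy and entropy effects invoked above can be made explicit with the standard transition-state (Eyring) form of the rate constant (a textbook relation, not a formula quoted from this work):

      \[ k \;=\; \frac{k_{\mathrm{B}}T}{h}\,\exp\!\left(\frac{\Delta S^{\ddagger}}{R}\right)\exp\!\left(-\frac{\Delta H^{\ddagger}}{RT}\right), \]

    so the water-induced increase in the activation enthalpy lowers the elimination rate, while the accompanying gain in activation entropy only partially offsets it.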

  18. Science minister unveils reforms to facilities council

    NASA Astrophysics Data System (ADS)

    Banks, Michael

    2010-04-01

    The UK's science minister Lord Drayson has announced a series of measures to prevent the Science and Technology Facilities Council (STFC) from being dogged by further financial crises. They include a plan for the STFC's budget for large facilities, such as the Diamond synchrotron and the ISIS neutron-scattering lab, to be allocated and managed separately from its budget for grants. Drayson was forced to review the STFC after the council announced last December that the UK would have to pull out of 25 international science projects because of a £40m shortfall in funding.

  19. STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Geoffrey; Jha, Shantenu; Ramakrishnan, Lavanya

    The Department of Energy (DOE) Office of Science (SC) facilities, including accelerators, light sources and neutron sources, and sensors that study the environment and the atmosphere, are producing streaming data that need to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016) was conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops, held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on Department of Energy (DOE) applications, computational and experimental facilities, as well as software systems. Thus, the role of "streaming and steering" as a critical mode of connecting the experimental and computing facilities was pervasive through the workshop. Given the overlap in interests and challenges with industry, the workshop had significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions, identified in this report, show an important opportunity for building a competitive research and development program around streaming data. These findings and recommendations are consistent with the vision outlined in the NRC Frontiers of Data report and the National Strategic Computing Initiative (NSCI) [1, 2]. The discussions from the workshop are captured as topic areas covered in this report's sections. The report discusses four research directions driven by current and future application requirements, reflecting the areas identified as important by STREAM2016. These include (i) Algorithms, (ii) Programming Models, Languages and Runtime Systems, (iii) Human-in-the-loop and Steering in Scientific Workflows, and (iv) Facilities.

  20. Microgravity

    NASA Image and Video Library

    1998-09-30

    The Electrostatic Levitator (ESL) Facility established at Marshall Space Flight Center (MSFC) supports NASA's Microgravity Materials Science Research Program. NASA materials science investigations include ground-based, flight definition and flight projects. Flight definition projects, with demanding science concept review schedules, receive highest priority for scheduling experiment time in the Electrostatic Levitator (ESL) Facility.

  1. Specialized computer architectures for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general-purpose computers, a cost that is high in terms of both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  2. KSC-2011-6731

    NASA Image and Video Library

    2011-07-14

    CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, the trailer transporting the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission arrives at the RTG storage facility (RTGF). The MMRTG is returning to the RTGF following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder

  3. KSC-2011-6732

    NASA Image and Video Library

    2011-07-14

    CAPE CANAVERAL, Fla. -- At the RTG storage facility (RTGF) at NASA's Kennedy Space Center in Florida, preparations are under way to offload the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission from the MMRTG trailer. The MMRTG is returning to the RTGF following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder

  4. KSC-2011-6744

    NASA Image and Video Library

    2011-07-14

    CAPE CANAVERAL, Fla. -- The multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission is uncovered in the high bay of the RTG storage facility (RTGF) at NASA's Kennedy Space Center in Florida. The MMRTG was returned to the RTGF following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder

  5. KSC-2011-6730

    NASA Image and Video Library

    2011-07-14

    CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission, secured inside the MMRTG trailer, makes its way between the Payload Hazardous Servicing Facility (PHSF) and the RTG storage facility. The MMRTG is being moved following a fit check on MSL's Curiosity rover in the PHSF. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder

  6. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    ERIC Educational Resources Information Center

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer-assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing CAL facilities. This is primarily because most current CAL facilities are not designed to be friendly to visually impaired users. People with visual impairment also do not normally have access to…

  7. The Pisgah Astronomical Research Institute

    NASA Astrophysics Data System (ADS)

    Cline, J. Donald; Castelaz, M.

    2009-01-01

    Pisgah Astronomical Research Institute (PARI) is a not-for-profit foundation located at a former NASA tracking station in the Pisgah National Forest in western North Carolina. PARI is celebrating its 10th year. During its ten years, PARI has developed and implemented innovative science education programs. The science education programs are hands-on and experimentally based, mixing disciplines in astronomy, computer science, earth and atmospheric science, engineering, and multimedia. The basic tools for the educational programs include a 4.6-m radio telescope accessible via the Internet, a StarLab planetarium, the Astronomical Photographic Data Archive (APDA), a distributed computing online environment to classify stars called SCOPE, and remotely accessible optical telescopes. The 200-acre PARI campus has a 4.6-m, a 12-m and two 26-m radio telescopes, optical solar telescopes, a Polaris monitoring telescope, 0.4-m and 0.35-m optical research telescopes, and earth and atmospheric science instruments. PARI is also the home of APDA, a repository for astronomical photographic plate collections which will eventually be digitized and made available online. PARI has collaborated with visiting scientists who have developed their research with PARI telescopes and lab facilities. Current experiments include: the Dedicated Interferometer for Rapid Variability (Dennison et al. 2007, Astronomical and Astrophysical Transactions, 26, 557); the Plate Boundary Observatory operated by UNAVCO; the Clemson University Fabry-Perot interferometers (Meriwether 2008, Journal of Geophysical Research, submitted) measuring high-velocity winds and temperatures in the thermosphere; and the Western Carolina University - PARI variable star program. The current status of the education and research programs and instruments will be presented, and development plans will be reviewed. Development plans include the greening of PARI with the installation of solar panels to power the optical telescopes, a new distance learning center, and enhancements to the atmospheric and earth science suite of instrumentation.

  8. Nuclear-Recoil Differential Cross Sections for the Two Photon Double Ionization of Helium

    NASA Astrophysics Data System (ADS)

    Abdel Naby, Shahin; Ciappina, M. F.; Lee, T. G.; Pindzola, M. S.; Colgan, J.

    2013-05-01

    In support of the reaction microscope measurements at the free-electron laser facility in Hamburg (FLASH), we use the time-dependent close-coupling (TDCC) method to calculate fully differential nuclear-recoil cross sections for the two-photon double ionization of He at a photon energy of 44 eV. The total cross section for the double ionization is in good agreement with previous calculations. The nuclear-recoil distribution is in good agreement with the experimental measurements. In contrast to the single-photon double ionization, the maximum nuclear-recoil triple differential cross section is obtained at small nuclear momenta. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California and the National Institute for Computational Sciences in Knoxville, Tennessee.

  9. Physics through the 1990s: Scientific interfaces and technological applications

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The volume examines the scientific interfaces and technological applications of physics. Twelve areas are dealt with: biological physics-biophysics, the brain, and theoretical biology; the physics-chemistry interface-instrumentation, surfaces, neutron and synchrotron radiation, polymers, organic electronic materials; materials science; geophysics-tectonics, the atmosphere and oceans, planets, drilling and seismic exploration, and remote sensing; computational physics-complex systems and applications in basic research; mathematics-field theory and chaos; microelectronics-integrated circuits, miniaturization, future trends; optical information technologies-fiber optics and photonics; instrumentation; physics applications to energy needs and the environment; national security-devices, weapons, and arms control; medical physics-radiology, ultrasonics, NMR, and photonics. An executive summary and many chapters contain recommendations regarding funding, education, industry participation, small-group university research and large facility programs, government agency programs, and computer database needs.

  10. A Look at the Impact of High-End Computing Technologies on NASA Missions

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart

    2012-01-01

    From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state-of-the-art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulation and analyses critical to designing safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission's discovery of new planets now capturing the world's imagination.

  11. Local Aqueous Solvation Structure Around Ca2+ During Ca2+–Cl– Pair Formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baer, Marcel D.; Mundy, Christopher J.

    2016-03-03

    The molecular details of single-ion solvation around Ca2+ and ion pairing of Ca2+-Cl- are investigated using ab initio molecular dynamics. The use of empirical dispersion corrections to the BLYP functional is investigated by comparison to experimentally available extended X-ray absorption fine structure (EXAFS) measurements, which probe the first solvation shell in great detail. Besides finding differences in the free energy for both ion pairing and the coordination number of ion solvation between the quantum and classical descriptions of interaction, important differences were found between dispersion-corrected and uncorrected density functional theory (DFT). Specifically, we show significantly different free-energy landscapes for both the coordination number of Ca2+ and its ion pairing with Cl- depending on the DFT simulation protocol. Our findings produce a self-consistent treatment of the short-range solvent response to the ion and the intermediate- to long-range collective response of the electrostatics of the ion-ion interaction to produce a detailed picture of ion pairing that is consistent with experiment. MDB is supported by the MS3 (Materials Synthesis and Simulation Across Scales) Initiative at Pacific Northwest National Laboratory. It was conducted under the Laboratory Directed Research and Development Program at PNNL, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy. CJM acknowledges support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Additional computing resources were generously allocated by PNNL's Institutional Computing program. The authors thank Prof. Tom Beck for discussions regarding QCT, and Drs. Greg Schenter and Shawn Kathmann for insightful comments.

  12. Big Data over a 100G network at Fermilab

    DOE PAGES

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo; ...

    2014-06-11

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community, a pioneer in Big Data, has long relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years as the data-taking rate of the major LHC experiments reached tens of petabytes per year. At Fermilab, this has regularly resulted in peaks of data movement on the wide area network (WAN) in and out of the laboratory of about 30 Gbit/s, and on the local area network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever-increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. This work presents the new R&D facility and the continuation of the evaluation program.
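
    As a rough, back-of-the-envelope illustration of the configuration question raised above (how to fill a 100 Gbit/s link with parallel transfer streams), the short Python sketch below estimates the number of concurrent streams required for a given per-stream rate. The per-stream rates and the efficiency factor are illustrative assumptions, not measurements from the HTDP evaluations.

        import math

        def streams_needed(link_gbps: float, per_stream_gbps: float, efficiency: float = 0.9) -> int:
            """Estimate how many concurrent streams are needed to fill a link,
            allowing for protocol and host overhead via an assumed efficiency factor."""
            usable = link_gbps * efficiency
            return math.ceil(usable / per_stream_gbps)

        # Hypothetical per-stream rates for tools such as GridFTP or XrootD.
        for rate in (2.0, 5.0, 10.0):
            print(f"{rate} Gbit/s per stream -> {streams_needed(100.0, rate)} streams")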

  14. Reusable Rack Interface Controller Common Software for Various Science Research Racks on the International Space Station

    NASA Technical Reports Server (NTRS)

    Lu, George C.

    2003-01-01

    The purpose of the EXPRESS (Expedite the PRocessing of Experiments to Space Station) rack project is to provide a set of predefined interfaces for scientific payloads which allow rapid integration into a payload rack on the International Space Station (ISS). VxWorks was selected as the operating system for the rack and payload resource controller, primarily based on the proliferation of VME (Versa Module Eurocard) products. These products provide the flexibility needed for future hardware upgrades to meet ever-changing science research rack configuration requirements. On the International Space Station, there are multiple science research rack configurations, including: 1) Human Research Facility (HRF); 2) EXPRESS ARIS (Active Rack Isolation System); 3) WORF (Window Observational Research Facility); and 4) HHR (Habitat Holding Rack). The RIC (Rack Interface Controller) connects payloads to the ISS bus architecture for data transfer between the payload and ground control. The RIC is a general-purpose embedded computer which supports multiple communication protocols, including fiber optic communication buses, Ethernet buses, EIA-422, Mil-Std-1553 buses, SMPTE (Society of Motion Picture and Television Engineers)-170M video, and audio interfaces to payloads and the ISS. As a cost-saving and software reliability strategy, the Boeing Payload Software Organization developed reusable common software where appropriate. These reusable modules included a set of low-level driver software interfaces to 1553B, RS232, RS422, Ethernet buses, HRDL (High Rate Data Link), video switch functionality, telemetry processing, and executive software hosted on the RIC computer. These drivers formed the basis for software development of the HRF, EXPRESS, EXPRESS ARIS, WORF, and HHR RIC executable modules. The reusable RIC common software has provided extensive benefits, including: 1) significant reduction in development flow time; 2) minimal rework and maintenance; 3) improved reliability; and 4) overall reduction in software life cycle cost. Due to the limited number of crew hours available on ISS for science research, operational efficiency is a critical customer concern. The current method of upgrading RIC software is a time-consuming process; thus, an improved methodology for uploading RIC software is currently under evaluation.
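
    As a conceptual illustration of the reuse strategy described above, the Python sketch below models a common driver interface that rack-specific builds (HRF, EXPRESS, WORF, HHR) could share. The class and method names are hypothetical and not taken from the Boeing flight software, which the abstract describes as VxWorks-hosted embedded code; this is only a minimal sketch of the idea of one interface with interchangeable bus implementations.

        from abc import ABC, abstractmethod

        class BusDriver(ABC):
            """Hypothetical common interface for low-level bus drivers
            (e.g., 1553B, RS-422, Ethernet) shared across rack software builds."""

            @abstractmethod
            def open(self) -> None: ...

            @abstractmethod
            def read(self, n: int) -> bytes: ...

            @abstractmethod
            def write(self, data: bytes) -> int: ...

        class LoopbackDriver(BusDriver):
            """Stand-in implementation used only to make the sketch runnable."""

            def __init__(self) -> None:
                self._buffer = bytearray()

            def open(self) -> None:
                self._buffer.clear()

            def read(self, n: int) -> bytes:
                out = bytes(self._buffer[:n])
                del self._buffer[:n]
                return out

            def write(self, data: bytes) -> int:
                self._buffer.extend(data)
                return len(data)

        # A rack build would select concrete drivers at integration time.
        driver: BusDriver = LoopbackDriver()
        driver.open()
        driver.write(b"telemetry frame")
        print(driver.read(15))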

  15. KENNEDY SPACE CENTER, FLA. - The Space Life Sciences Lab (SLSL), formerly known as the Space Experiment Research and Processing Laboratory (SERPL), is nearing completion. The new lab is a state-of-the-art facility being built for ISS biotechnology research. Developed as a partnership between NASA-KSC and the State of Florida, NASA’s life sciences contractor will be the primary tenant of the facility, leasing space to conduct flight experiment processing and NASA-sponsored research. About 20 percent of the facility will be available for use by Florida’s university researchers through the Florida Space Research Institute.

    NASA Image and Video Library

    2003-09-10

    KENNEDY SPACE CENTER, FLA. - The Space Life Sciences Lab (SLSL), formerly known as the Space Experiment Research and Processing Laboratory (SERPL), is nearing completion. The new lab is a state-of-the-art facility being built for ISS biotechnology research. Developed as a partnership between NASA-KSC and the State of Florida, NASA’s life sciences contractor will be the primary tenant of the facility, leasing space to conduct flight experiment processing and NASA-sponsored research. About 20 percent of the facility will be available for use by Florida’s university researchers through the Florida Space Research Institute.

  16. Microgravity research in NASA ground-based facilities

    NASA Technical Reports Server (NTRS)

    Lekan, Jack

    1989-01-01

    An overview of reduced gravity research performed in NASA ground-based facilities sponsored by the Microgravity Science and Applications Program of the NASA Office of Space Science and Applications is presented. A brief description and summary of the operations and capabilities of each of these facilities along with an overview of the historical usage of them is included. The goals and program elements of the Microgravity Science and Applications programs are described and the specific programs that utilize the low gravity facilities are identified. Results from two particular investigations in combustion (flame spread over solid fuels) and fluid physics (gas-liquid flows at microgravity conditions) are presented.

  17. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011)

    NASA Astrophysics Data System (ADS)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro

    2012-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011), which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields, in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. Fourteen invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations, as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facilities Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA, and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for their enthusiastic participation in all its activities, which were, ultimately, the key factors in the success of the workshop. Further information on ACAT 2011 can be found at http://acat2011.cern.ch Dr Liliana Teodorescu, Brunel University. The PDF also contains details of the workshop's committees and sponsors.

  18. Reference earth orbital research and applications investigations (blue book). Volume 8: Life sciences

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The functional program element for the life sciences facilities to operate aboard manned space stations is presented. The life sciences investigations will consist of the following subjects: (1) medical research, (2) vertebrate research, (3) plant research, (4) cells and tissue research, (5) invertebrate research, (6) life support and protection, and (7) man-system integration. The equipment required to provide the desired functional capability for the research facilities is defined. The goals and objectives of each research facility are described.

  19. Gas and water recycling system for IOC vivarium experiments

    NASA Technical Reports Server (NTRS)

    Nitta, K.; Otsubo, K.

    1986-01-01

    Water and gas recycling units designed as one of the common experiment support systems for the life science experiment facilities used in the Japanese Experiment Module are discussed. These units will save transportation and operation costs for the life science experiments on the space station. They are also designed with interfaces simple enough that connection to other life science experiment facilities, such as the Research Animal Holding Facility developed by the Lockheed Missiles and Space Company, can be accomplished easily with minor modification.

  20. Space Life Sciences Lab

    NASA Image and Video Library

    2003-10-09

    The Space Life Sciences Lab (SLSL), formerly known as the Space Experiment Research and Processing Laboratory (SERPL), is a state-of-the-art facility built for ISS biotechnology research. Developed as a partnership between NASA-KSC and the State of Florida, NASA’s life sciences contractor is the primary tenant of the facility, leasing space to conduct flight experiment processing and NASA-sponsored research. About 20 percent of the facility will be available for use by Florida’s university researchers through the Florida Space Research Institute.

  1. Gender differences in the use of computers, programming, and peer interactions in computer science classrooms

    NASA Astrophysics Data System (ADS)

    Stoilescu, Dorian; Egodawatte, Gunawardena

    2010-12-01

    Research shows that female and male students in undergraduate computer science programs view computer culture differently. Female students are more interested in the use of computers than in programming, whereas male students see computer science mainly as a programming activity. The overall purpose of our research was not to find new definitions for computer science culture but to see how male and female students see themselves involved in computer science practices, how they see computer science as a successful career, and what they like and dislike about current computer science practices. The study took place in a mid-sized university in Ontario. Sixteen students and two instructors were interviewed to get their views. We found that male and female views differ on computer use, programming, and the pattern of student interactions. Female and male students did not have any major issues in using computers. In computer programming, female students were not as involved in computing activities, whereas male students were heavily involved. As for opinions about successful computer science professionals, both female and male students emphasized hard work, detail-oriented approaches, and enjoying playing with computers. The myth of the geek as a typical profile of successful computer science students was not found to be true.

  2. High-Performance Computing Data Center | Energy Systems Integration

    Science.gov Websites

    The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing…

  3. Flying a College on the Computer. The Use of the Computer in Planning Buildings.

    ERIC Educational Resources Information Center

    Saint Louis Community Coll., MO.

    Upon establishment of the St. Louis Junior College District, it was decided to make use of the computer simulation facilities of a nearby aerospace contractor to develop a master schedule for facility planning purposes. Projected enrollments and course offerings were programmed with idealized student-teacher ratios to project facility needs. In…

  4. Microgravity

    NASA Image and Video Library

    2000-01-31

    The Fluids and Combustion Facility (FCF) is a modular, multi-user facility to accommodate microgravity science experiments on board Destiny, the U.S. Laboratory Module for the International Space Station (ISS). The FCF will be a permanent facility aboard the ISS and will be capable of accommodating up to ten science investigations per year. It will support the NASA Science and Technology Research Plans for the International Space Station (ISS), which require sustained, systematic research of the effects of reduced gravity in the areas of fluid physics and combustion science. From left to right are the Combustion Integrated Rack, the Shared Rack, and the Fluids Integrated Rack. The FCF is being developed by the Microgravity Science Division (MSD) at the NASA Glenn Research Center. (Photo Credit: NASA/Marshall Space Flight Center)

  5. Computer Based Training: Field Deployable Trainer and Shared Virtual Reality

    NASA Technical Reports Server (NTRS)

    Mullen, Terence J.

    1997-01-01

    Astronaut training has traditionally been conducted at specific sites with specialized facilities. Because of its size and nature, the training equipment is generally not portable. Efforts are now under way to develop training tools that can be taken to remote locations, including into orbit. Two of these efforts are the Field Deployable Trainer and Shared Virtual Reality projects. Field Deployable Trainer: NASA used the recent shuttle mission by astronaut Shannon Lucid to the Russian space station Mir as an opportunity to develop and test a prototype of an on-orbit computer training system. A laptop computer with a customized user interface, a set of specially prepared CDs, and video tapes were taken to the Mir by Ms. Lucid. Based upon the feedback following the launch of the Lucid flight, our team prepared materials for the next Mir visitor. Astronaut John Blaha will fly on NASA/Mir Long Duration Mission 3, set to launch in mid-September. He will take with him a customized hard disk drive and a package of compact disks containing training videos, references and maps. The FDT team continues to explore and develop new and innovative ways to conduct offsite astronaut training using personal computers. Shared Virtual Reality Training: NASA's Space Flight Training Division has been investigating the use of virtual reality environments for astronaut training. Recent efforts have focused on activities requiring interaction by two or more people, called shared VR. Dr. Bowen Loftin, from the University of Houston, directs a virtual reality laboratory that conducts much of the NASA-sponsored research. I worked on a project involving the development of a virtual environment that can be used to train astronauts and others to operate a science unit called a Biological Technology Facility (BTF). Facilities like this will be used to house and control microgravity experiments on the space station. It is hoped that astronauts and instructors will ultimately be able to share common virtual environments and, using telephone links, conduct interactive training from separate locations.

  6. System analysis study of space platform and station accommodations for life sciences research facilities. Volume 2: Study results. Appendix D: Life sciences research facility requirements

    NASA Technical Reports Server (NTRS)

    Wiley, Lowell F.

    1985-01-01

    The purpose of this requirements document is to develop the foundation for concept development for the Life Sciences Research Facility (LSRF) on the Space Station. These requirements are developed from the perspective of a Space Station laboratory module outfitter. Science and mission requirements including those related to specimens are set forth. System requirements, including those for support, are detailed. Functional and design requirements are covered in the areas of structures, mechanisms, electrical power, thermal systems, data management system, life support, and habitability. Finally, interface requirements for the Command Module and Logistics Module are described.

  7. A hydrologic retention system and water quality monitoring program for a human decomposition research facility: concept and design.

    PubMed

    Wozniak, Jeffrey R; Thies, Monte L; Bytheway, Joan A; Lutterschmidt, William I

    2015-01-01

    Forensic taphonomy is an essential research field; however, the decomposition of human cadavers at forensic science facilities may lead to nutrient loading and the introduction of unique biological compounds to adjacent areas. The infrastructure of a water retention system may provide a mechanism for the biogeochemical processing and retention of nutrients and compounds, ensuring the control of runoff from forensic facilities. This work provides a proof of concept for a hydrologic retention system and an autonomous water quality monitoring program designed to mitigate runoff from The Southeast Texas Applied Forensic Science (STAFS) Facility. Water samples collected along a sample transect were analyzed for total phosphorus, total nitrogen, NO3-, NO2-, NH4, F-, and Cl-. Preliminary water quality analyses confirm the overall effectiveness of the water retention system. These results are discussed with relation to how this infrastructure can be expanded upon to monitor additional, more novel byproducts of forensic science research facilities. © 2014 American Academy of Forensic Sciences.

  8. Earth Science Education in Zimbabwe

    NASA Astrophysics Data System (ADS)

    Walsh, Kevin L.

    1999-05-01

    Zimbabwe is a mineral-rich country with a long history of Earth Science Education. The establishment of a University Geology Department in 1960 allowed the country to produce its own earth science graduates. These graduates are readily absorbed by the mining industry and few are without work. Demand for places at the University is high and entry standards reflect this. Students enter the University after GCE A levels in three science subjects and most go on to graduate. Degree programmes include B.Sc. General in Geology (plus another science), B.Sc. Honours in Geology and M.Sc. in Exploration Geology and in Geophysics. The undergraduate curriculum is broad-based and increasingly vocationally orientated. A well-equipped building caters for relatively large student numbers and also houses analytical facilities used for research and teaching. Computers are used in teaching from the first year onwards. Staff are on average poorly qualified compared to other universities, but there is an impressive research element. The Department has good links with many overseas universities and external funding agencies play a strong supporting role. That said, financial constraints remain the greatest barrier to future development, although increasing links with the mining industry may cushion this.

  9. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
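
    The additive model described above is simple enough to state directly. The following minimal Python sketch shows the calculation with hypothetical input values for one planning period; the abstract does not specify how each time-period-specific term is derived, so the numbers are purely illustrative.

        def future_facility_condition(maintenance_cost: float,
                                      modernization_factor: float,
                                      backlog_factor: float) -> float:
            """Additive model from the abstract: the future-condition estimate is
            the sum of the three time-period-specific terms."""
            return maintenance_cost + modernization_factor + backlog_factor

        # Hypothetical values for a single planning period (illustration only).
        estimate = future_facility_condition(maintenance_cost=1.2e6,
                                             modernization_factor=0.4e6,
                                             backlog_factor=0.9e6)
        print(f"Projected future facility condition: {estimate:,.0f}")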

  10. Career Resources

    Science.gov Websites


  11. New Hire

    Science.gov Websites


  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    van der Eide, Edwin F.; Yang, Ping; Walter, Eric D.

    Unlike the very labile, unobservable radical cations [{CpM(CO)3}2]•+ (M = W, Mo), derivatives [{CpM(CO)2(PMe3)}2]•+ are stable enough to be isolated and characterized. Experimental and theoretical studies show that the shortened M-M bonds are of order 1 1/2, and that they are not supported by bridging ligands. The unpaired electron is fully delocalized, with a spin density of ca. 45% on each metal atom. We thank the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Biosciences and Geosciences for support of this work. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. The EPR and computational studies were performed using EMSL, a national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at PNNL. We thank Dr. Charles Windisch for access to his UV-Vis-NIR spectrometer.

  13. The creation and early implementation of a high speed fiber optic network for a university health sciences center.

    PubMed Central

    Schueler, J. D.; Mitchell, J. A.; Forbes, S. M.; Neely, R. C.; Goodman, R. J.; Branson, D. K.

    1991-01-01

    In late 1989 the University of Missouri Health Sciences Center began the process of creating an extensive fiber optic network throughout its facilities, with the intent to provide networked computer access to anyone in the Center desiring such access, regardless of geographic location or organizational affiliation. A committee representing all disciplines within the Center produced and, in conjunction with independent consultants, approved a comprehensive design for the network. Installation of network backbone components commenced in the second half of 1990 and was completed in early 1991. As the network entered its initial phases of operation, the first realities of this important new resource began to manifest themselves as enhanced functional capacity in the Health Sciences Center. This paper describes the development of the network, with emphasis on its design criteria, installation, early operation, and management. Also included are discussions on its organizational impact and its evolving significance as a medical community resource. PMID:1807660

  14. Bridging the PSI Knowledge Gap: A Multi-Scale Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wirth, Brian D.

    2015-01-08

    Plasma-surface interactions (PSI) pose an immense scientific hurdle in magnetic confinement fusion, and our present understanding of PSI in confinement environments is highly inadequate; indeed, a recent Fusion Energy Sciences Advisory Committee report found that 4 of the top 5 fusion knowledge gaps were related to PSI. The time is appropriate to develop a concentrated and synergistic science effort that would expand, exploit and integrate the wealth of laboratory ion-beam and plasma research, as well as exciting new computational tools, towards the goal of bridging the PSI knowledge gap. This effort would broadly advance plasma and material sciences, while providing critical knowledge towards progress in fusion PSI. This project involves the development of a Science Center focused on a new approach to PSI science, one that exploits access to state-of-the-art PSI experiments and modeling as well as to confinement devices. The organizing principle is to develop synergistic experimental and modeling tools that treat the truly coupled multi-scale aspect of the PSI issues in confinement devices. This is motivated by the simple observation that while typical lab experiments and models allow independent manipulation of controlling variables, the confinement PSI environment is essentially self-determined with few outside controls. This means that processes that may be treated independently in laboratory experiments, because they involve vastly different physical and time scales, will now affect one another in the confinement environment. Also, lab experiments cannot simultaneously match all exposure conditions found in confinement devices, typically forcing a linear extrapolation of lab results. At the same time, programmatic limitations prevent confinement experiments alone from answering many key PSI questions. The resolution to this problem is to usefully exploit access to PSI science in lab devices, while retooling our thinking from a linear and de-coupled extrapolation to a multi-scale, coupled approach. The PSI Plasma Center consisted of three equal co-centers: one located at the MIT Plasma Science and Fusion Center, one at the UC San Diego Center for Energy Research, and one at the UC Berkeley Department of Nuclear Engineering, which moved to the University of Tennessee, Knoxville (UTK) with Professor Brian Wirth in July 2010. The Center had three co-directors: Prof. Dennis Whyte led the MIT co-center, the UCSD co-center was led by Dr. Russell Doerner, and Prof. Brian Wirth led the UCB/UTK center. The directors have extensive experience in PSI and materials research, and have been internationally recognized in the magnetic fusion, materials and plasma research fields. The co-centers feature keystone PSI experimental and modeling facilities dedicated to PSI science: the DIONISOS/CLASS facility at MIT, the PISCES facility at UCSD, and the state-of-the-art numerical modeling capabilities at UCB/UTK. A collaborative partner in the center is Sandia National Laboratory at Livermore (SNL/CA), which has extensive capabilities with low-energy ion beams and surface diagnostics, as well as supporting plasma facilities, including the Tritium Plasma Experiment, all of which significantly augment the Center. Interpretive, continuum material models are available through SNL/CA, UCSD and MIT. The participating institutions of MIT, UCSD, UCB/UTK, SNL/CA and LLNL brought a formidable array of experimental tools and personnel abilities into the PSI Plasma Center.
Our work has focused on modeling activities associated with plasma-surface interactions involved in the effects of He and H plasma bombardment on tungsten surfaces. This involved computational materials modeling of the surface evolution during plasma bombardment using molecular dynamics. The principal outcomes of the research efforts within the combined experimental-modeling PSI center are to provide a knowledge base of the mechanisms of surface degradation and of the influence of the surface on plasma conditions.

  15. The ASCI Network for SC 2000: Gigabyte Per Second Networking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PRATT, THOMAS J.; NAEGLE, JOHN H.; MARTINEZ JR., LUIS G.

    2001-11-01

    This document highlights the DISCOM Distance Computing and Communication team's activities at the SC 2000 supercomputing conference in Dallas, Texas. The conference is sponsored by the IEEE and ACM. Sandia's participation in the conference has now spanned a decade; for the last five years Sandia National Laboratories, Los Alamos National Laboratory and Lawrence Livermore National Laboratory have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) Program rubric to demonstrate ASCI's emerging capabilities in computational science and their combined expertise in high-performance computing. DISCOM2 uses this forum to demonstrate and focus communication and networking developments within the program. At SC 2000, DISCOM demonstrated an infrastructure that included a pre-standard implementation of 10 Gigabit Ethernet, the first gigabyte-per-second IP network data transfer application, and VPN technology that enabled a remote Distributed Resource Management tools demonstration. Additionally, a national OC-48 POS network was constructed to support applications running between the show floor and home facilities. This network created the opportunity to test PSE's Parallel File Transfer Protocol (PFTP) across a network with speed and distances similar to the then-proposed DISCOM WAN. SCinet at SC 2000 showcased wireless networking, and the networking team had the opportunity to explore this emerging technology while on the booth. The team also supported the production networking needs of the convention exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support DISCOM's overall strategies in high-performance computing networking.

  16. Fermilab | Science | Particle Accelerators | Advanced Superconducting Test

    Science.gov Websites

    The Fermilab Accelerator Science and Technology (FAST) Facility is America's only test bed for cutting-edge, record high-intensity particle beams, a particle beam research facility based on superconducting radio-frequency technology.

  17. The Mars Science Laboratory Touchdown Test Facility

    NASA Technical Reports Server (NTRS)

    White, Christopher; Frankovich, John; Yates, Phillip; Wells Jr, George H.; Losey, Robert

    2009-01-01

    In the Touchdown Test Program for the Mars Science Laboratory (MSL) mission, a facility was developed to use a full-scale rover vehicle and an overhead winch system to replicate the Skycrane landing event.

  18. Exploring the Relationships between Self-Efficacy and Preference for Teacher Authority among Computer Science Majors

    ERIC Educational Resources Information Center

    Lin, Che-Li; Liang, Jyh-Chong; Su, Yi-Ching; Tsai, Chin-Chung

    2013-01-01

    Teacher-centered instruction has been widely adopted in college computer science classrooms and has some benefits in training computer science undergraduates. Meanwhile, student-centered contexts have been advocated to promote computer science education. How computer science learners respond to or prefer the two types of teacher authority,…

  19. Preliminary design study for an atmospheric science facility

    NASA Technical Reports Server (NTRS)

    Hutchison, R.

    1972-01-01

    The activities and results of the Atmospheric Science Facility preliminary design study are reported. The objectives of the study were to define the scientific goals, to determine the range of experiment types, and to develop the preliminary instrument design requirements for a reusable, general purpose, optical research facility for investigating the earth's atmosphere from a space shuttle orbital vehicle.

  20. Making of the NSTX Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. Neumeyer; M. Ono; S.M. Kaye

    1999-11-01

    The NSTX (National Spherical Torus Experiment) facility located at Princeton Plasma Physics Laboratory is the newest national fusion science experimental facility for the restructured US Fusion Energy Science Program. The NSTX project was approved in FY 97 as the first proof-of-principle national fusion facility dedicated to the spherical torus research. On Feb. 15, 1999, the first plasma was achieved 10 weeks ahead of schedule. The project was completed on budget and with an outstanding safety record. This paper gives an overview of the NSTX facility construction and the initial plasma operations.

  1. Influence of computational fluid dynamics on experimental aerospace facilities: A fifteen year projection

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and the more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft will be computable at a unit cost three orders of magnitude lower than presently possible. Over the same period, improvements in ground test facilities will progress through the application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.

  2. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, P.; /Fermilab; Cary, J.

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization for software development and applications accounts for the natural domain areas (beam dynamics, electromagnetics, and advanced acceleration), and all areas depend on the enabling technologies activities, such as solvers and component technology, to deliver the desired performance and integrated simulation environment. The ComPASS applications focus on computationally challenging problems important for the design or performance optimization of all major HEP, NP, and BES accelerator facilities. With the cost and complexity of particle accelerators rising, the use of computation to optimize their designs and find improved operating regimes becomes essential, potentially leading to significant cost savings with modest investment.

  3. Human Exploration Ethnography of the Haughton-Mars Project, 1998-1999

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Swanson, Keith (Technical Monitor)

    1999-01-01

    During the past two field seasons, July 1998 and 1999, we have conducted research on the field practices of scientists and engineers at Haughton Crater on Devon Island in the Canadian Arctic, with the objective of determining how people will live and work on Mars. This broad investigation of field life and work practice, part of the Haughton-Mars Project led by Pascal Lee, spans social and cognitive anthropology, psychology, and computer science. Our approach involves systematic observation and description of activities, places, and concepts, constituting an ethnography of field science at Haughton. Our focus is on human behaviors-what people do, where, when, with whom, and why. By locating behavior in time and place-in contrast with a purely functional or "task oriented" description of work-we find patterns constituting the choreography of interaction between people, their habitat, and their tools. As such, we view the exploration process in terms of a total system comprising a social organization, facilities, terrain/climate, personal identities, artifacts, and computer tools. Because we are computer scientists seeking to develop new kinds of tools for living and working on Mars, we focus on the existing representational tools (such as documents and measuring devices), learning and improvisation (such as use of the internet or informal assistance), and prototype computational systems brought to the field.

  4. EOSDIS - Its role in the EOS program and its importance to the scientific community. [Data and Information System

    NASA Technical Reports Server (NTRS)

    Price, Robert D.; Pedelty, Kathleen S.; Ardanuy, Philip E.; Hobish, Mitchell K.

    1993-01-01

    In order to manage the global data sets required to understand the earth as a system, the EOS Data and Information System (EOSDIS) will collect and store satellite, aircraft, and in situ measurements and their resultant data products, and will distribute the data conveniently. EOSDIS will also provide product generation and science computing facilities to support the development, processing, and validation of standard EOS science data products. The overall architecture of EOSDIS, and how the Distributed Active Archive Centers fit into that structure, are shown. EOSDIS will enable users to query data bases nationally, make use of keywords and other mnemonic identifiers, and see graphic images of subsets of available data prior to ordering full (or selected pieces of) data sets for use in their 'home' environment.
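
    To make the keyword-query capability concrete, the following minimal Python sketch searches a small in-memory catalog by keyword. The catalog structure, dataset names, and keywords are invented for illustration and do not reflect actual EOSDIS interfaces or holdings.

        # Hypothetical in-memory catalog; real EOSDIS queries go through the DAACs.
        catalog = [
            {"id": "DATASET_A", "keywords": {"radiance", "infrared", "calibrated"}},
            {"id": "DATASET_B", "keywords": {"temperature", "humidity", "sounder"}},
        ]

        def search(query_terms):
            """Return the IDs of catalog entries whose keywords intersect the query."""
            terms = {t.lower() for t in query_terms}
            return [entry["id"] for entry in catalog if entry["keywords"] & terms]

        print(search(["Radiance", "temperature"]))  # -> ['DATASET_A', 'DATASET_B']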

  5. Science on the International Space Station: Stepping Stones for Exploration

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.

    2007-01-01

    This viewgraph presentation reviews the state of science research on the International Space Station (ISS). The shuttle and other missions that have delivered science research facilities to the ISS are shown. The research facilities provided by both NASA and partner organizations that are available for use, as well as future facilities, are reviewed. The science that has already been completed is discussed. The research supports the Vision for Space Exploration in Human Life Sciences, Biological Sciences, Materials Science, Fluids Science, Combustion Science, and all other sciences. The ISS focus for NASA involves: astronaut health and countermeasure development to protect crews from the space environment during long-duration voyages; testing research and technology developments for future exploration missions; and developing and validating operational procedures for long-duration space missions. The ISS Medical Project (ISSMP) addresses both space systems and human systems. The ISSMP has been developed to maximize the utilization of the ISS to obtain solutions to the human health and performance problems and the associated mission risks of exploration-class missions. This includes a complete programmatic review with medical operations (space medicine/flight surgeons) to identify: (1) the evidence base on risks, and (2) gap analysis.

  6. Academic computer science and gender: A naturalistic study investigating the causes of attrition

    NASA Astrophysics Data System (ADS)

    Declue, Timothy Hall

    Far fewer women than men take computer science classes in high school, enroll in computer science programs in college, or complete advanced degrees in computer science. The computer science pipeline begins to shrink for women even before they enter college, but it is at the college level that the "brain drain" is most evident numerically, especially in the first class taken by most computer science majors, called "Computer Science 1" or CS-I. The result, for both academia and industry, is a pronounced technological gender disparity in academic and industrial computer science. The study revealed the existence of several factors influencing success in CS-I. First, and most clearly, the effect of attribution processes seemed to be quite strong. These processes tend to work against success for females and in favor of success for males. Likewise, evidence was discovered which strengthens theories related to prior experience and the perception that computer science has a culture which is hostile to females. Two unanticipated themes emerged, related to the motivation and persistence of successful computer science majors. The findings did not support the belief that females have greater logistical problems in computer science than males, or that females tend to have a different programming style than males which adversely affects their ability to succeed in CS-I.

  7. Comparison of perceptions among rural versus nonrural secondary science teachers: A multistate survey

    NASA Astrophysics Data System (ADS)

    Baird, William E.; Preston Prather, J.; Finson, Kevin D.; Oliver, J. Steve

    A 100-item survey was distributed to science teachers in eight states to determine characteristics of teachers, schools, programs, and perceived needs. Results from 1258 secondary science teachers indicate that they perceive the following to be among their greatest needs: (1) to motivate students to want to learn science; (2) to discover sources of free and inexpensive science materials; (3) to learn more about how to use computers to deliver and manage instruction; (4) to find and use materials about science careers; and (5) to improve problem solving skills among their students. Based on whether teachers classified themselves as nonrural or rural, rural teachers do not perceive as much need for help with multicultural issues in the classroom or maintaining student discipline as their nonrural peers. Rural teachers report using the following classroom activities less often than nonrural teachers: cooperative learning groups, hands-on laboratory activities, individualized strategies, and inquiry teaching. More rural than nonrural teachers report problems with too many class preparations per day, a lack of career role models in the community, and lack of colleagues with whom to discuss problems. Among all secondary science teachers, the most pronounced problems reported by teachers were (in rank order): (1) insufficient student problem-solving skills; (2) insufficient funds for supplies; (3) poor student reading ability; (4) lack of student interest in science: and (5) inadequate laboratory facilities.

  8. Executive control systems in the engineering design environment

    NASA Technical Reports Server (NTRS)

    Hurst, P. W.; Pratt, T. W.

    1985-01-01

    Executive Control Systems (ECSs) are software structures for the unification of various engineering design application programs into comprehensive systems with a central user interface (uniform access) method and a data management facility. Attention is given to the most significant findings of a research program that examined 24 ECSs used in government and industry engineering design environments to integrate CAD/CAE application programs. Characterizations are given for the systems' major architectural components and the alternative design approaches considered in their development. Attention is also given to ECS development prospects in the areas of interdisciplinary usage, standardization, knowledge utilization, and computer science technology transfer.

  9. KSC-07pd1241

    NASA Image and Video Library

    2007-05-17

    KENNEDY SPACE CENTER, FLA. -- In the Astrotech Space Operations facility, Orbital Science technicians verify that a computer chip is securely bonded to a side brace on the Dawn spacecraft. The silicon chip holds the names of more than 360,000 space enthusiasts worldwide who signed up to participate in a virtual voyage to the asteroid belt and is about the size of an American five-cent coin. Dawn's mission is to explore two of the asteroid belt's most intriguing and dissimilar occupants: asteroid Vesta and the dwarf planet Ceres. Dawn is scheduled to launch June 30 from Launch Complex 17-B. Photo credit: NASA/George Shelton

  10. Microbes to Biomes at Berkeley Lab

    ScienceCinema

    None

    2018-06-21

    Microbes are the Earth's most abundant and diverse form of life. Berkeley Lab's Microbes to Biomes initiative -- which will take advantage of research expertise at the Joint Genome Institute, Advanced Light Source, Molecular Foundry, and the new computational science facility -- is designed to explore and reveal the interactions of microbes with one another and with their environment. Microbes power our planet’s biogeochemical cycles, provide nutrients to our plants, purify our water, help keep the human body free of disease, and may hold the key to the Earth’s future.

  11. SPHERES Facility

    NASA Technical Reports Server (NTRS)

    Martinez, Andres; Benavides, Jose Victor; Ormsby, Steve L.; GuarnerosLuna, Ali

    2014-01-01

    Synchronized Position Hold, Engage, Reorient, Experimental Satellites (SPHERES) are bowling-ball sized satellites that provide a test bed for development and research into multi-body formation flying, multi-spacecraft control algorithms, and free-flying physical and material science investigations. Up to three self-contained free-flying satellites can fly within the cabin of the International Space Station (ISS), performing flight formations, testing control algorithms, or serving as a platform for investigations requiring this unique free-flying test environment. Each satellite is a self-contained unit with power, propulsion, computers, and navigation equipment, and it provides physical and electrical connections (via standardized expansion ports) for Principal Investigator (PI)-provided hardware and sensors.

  12. UK to train 100 PhD students in data science

    NASA Astrophysics Data System (ADS)

    Allen, Michael

    2017-12-01

    A new PhD programme to develop techniques to handle the vast amounts of data being generated by experiments and facilities has been launched by the UK's Science and Technology Facilities Council (STFC).

  13. A New Direction for the NASA Materials Science Research Using the International Space Station

    NASA Technical Reports Server (NTRS)

    Schlagheck, Ronald A.; Stinson, Thomas N. (Technical Monitor)

    2002-01-01

    In 2001 NASA created a fifth Strategic Enterprise, the Office of Biological and Physical Research (OBPR), to bring together physics, chemistry, biology, and engineering to foster interdisciplinary research. The Materials Science Program is one of five Microgravity Research disciplines within this new Enterprise's Division of Physical Sciences Research. The Materials Science Program will participate within this new enterprise structure in order to facilitate effective use of ISS facilities, target scientific and technology questions, and transfer results for Earth benefits. The Materials Science research will use a low-gravity environment for flight and ground-based research in crystallization, fundamental processing, properties characterization, and biomaterials in order to obtain a fundamental understanding of various phenomena, their effects, and their relationships to the structures, processing, and properties of materials. Completion of the International Space Station's (ISS) first major assembly during the past year provides new opportunities for on-orbit research and scientific utilization. The Enterprise has recently completed an assessment of the science prioritization from which the future materials science ISS-type payloads will be implemented. Science accommodations will support a variety of Materials Science payload hardware both in the US and international partner modules with emphasis on early use of Express Rack and Glovebox facilities. This paper addresses the current scope of the flight and ground investigator program. These investigators will use the various capabilities of the ISS lab facilities to achieve their research objectives. The type of research and classification of materials being studied will be addressed. This includes the recent emphasis being placed on radiation shielding, nanomaterials, propulsion materials, and biomaterials research. The Materials Science Program will pursue a new, interdisciplinary approach, which contributes to Human Space Flight Exploration research. The Materials Science Research Facility (MSRF) and other related American and International experiment modules will serve as the foundation for the flight research environment. A summary will explain the concept for materials science research processing capabilities aboard the ISS along with the various ground facilities necessary to support the program.

  14. Digitization of multistep organic synthesis in reactionware for on-demand pharmaceuticals.

    PubMed

    Kitson, Philip J; Marie, Guillaume; Francoia, Jean-Patrick; Zalesskiy, Sergey S; Sigerson, Ralph C; Mathieson, Jennifer S; Cronin, Leroy

    2018-01-19

    Chemical manufacturing is often done at large facilities that require a sizable capital investment and then produce key compounds for a finite period. We present an approach to the manufacturing of fine chemicals and pharmaceuticals in a self-contained plastic reactionware device. The device was designed and constructed by using a chemical to computer-automated design (ChemCAD) approach that enables the translation of traditional bench-scale synthesis into a platform-independent digital code. This in turn guides production of a three-dimensional printed device that encloses the entire synthetic route internally via simple operations. We demonstrate the approach for the γ-aminobutyric acid receptor agonist, (±)-baclofen, establishing a concept that paves the way for the local manufacture of drugs outside of specialist facilities. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  15. Simulations of Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Kuranz, Carolyn; Manuel, Mario; Keiter, Paul; Drake, R. P.

    2014-10-01

    Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including Kelvin-Helmholtz, Rayleigh-Taylor, imploding bubble, and interacting jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via Grant DEFC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0001840, and by the National Laser User Facility Program, Grant Number DE-NA0000850.

  16. [Organization of clinical research: in general and visceral surgery].

    PubMed

    Schneider, M; Werner, J; Weitz, J; Büchler, M W

    2010-04-01

    The structural organization of research facilities within a surgical university center should aim at strengthening the department's research output and likewise provide opportunities for the scientific education of academic surgeons. We suggest a model in which several independent research groups within a surgical department engage in research projects covering various aspects of surgically relevant basic, translational or clinical research. In order to enhance the translational aspects of surgical research, a permanent link needs to be established between the department's scientific research projects and its chief interests in clinical patient care. Importantly, a focus needs to be placed on obtaining evidence-based data to judge the efficacy of novel diagnostic and treatment concepts. Integration of modern technologies from the fields of physics, computer science and molecular medicine into surgical research necessitates cooperation with external research facilities, which can be strengthened by coordinated support programs offered by research funding institutions.

  17. Computer-Game Construction: A Gender-Neutral Attractor to Computing Science

    ERIC Educational Resources Information Center

    Carbonaro, Mike; Szafron, Duane; Cutumisu, Maria; Schaeffer, Jonathan

    2010-01-01

    Enrollment in Computing Science university programs is at a dangerously low level. A major reason for this is the general lack of interest in Computing Science by females. In this paper, we discuss our experience with using a computer game construction environment as a vehicle to encourage female participation in Computing Science. Experiments…

  18. Centre for Research Infrastructure of Polish GNSS Data - response and possible contribution to EPOS

    NASA Astrophysics Data System (ADS)

    Araszkiewicz, Andrzej; Rohm, Witold; Bosy, Jaroslaw; Szolucha, Marcin; Kaplon, Jan; Kroszczynski, Krzysztof

    2017-04-01

    In the frame of the first call under Action 4.2 (Development of modern research infrastructure of the science sector) in the Smart Growth Operational Programme 2014-2020, the "EPOS-PL" project was launched in late 2016. The following institutes are responsible for the implementation of this project: Institute of Geophysics, Polish Academy of Sciences (Project Leader); Academic Computer Centre Cyfronet, AGH University of Science and Technology; Central Mining Institute; the Institute of Geodesy and Cartography; Wrocław University of Environmental and Life Sciences; and the Military University of Technology. In addition, resources constituting the entrepreneur's own contribution will come from the Polish Mining Group. The EPOS-PL Research Infrastructure will integrate both existing and newly built National Research Infrastructures (Theme Centres for Research Infrastructures), which, under the premise of the EPOS programme, are financed exclusively from national funds. In addition, an e-science platform will be developed. The Centre for Research Infrastructure of GNSS Data (CIBDG - Task 5) will be built based on the experience and facilities of two institutions: the Military University of Technology and the Wrocław University of Environmental and Life Sciences. The project includes the construction of the National GNSS Repository with data QC procedures and the adaptation of two Regional GNSS Analysis Centres for rapid and long-term geodynamical monitoring.

  19. EUDAT: A New Cross-Disciplinary Data Infrastructure For Science

    NASA Astrophysics Data System (ADS)

    Lecarpentier, Damien; Michelini, Alberto; Wittenburg, Peter

    2013-04-01

    In recent years significant investments have been made by the European Commission and European member states to create a pan-European e-Infrastructure supporting multiple research communities. As a result, a European e-Infrastructure ecosystem is currently taking shape, with communication networks, distributed grids and HPC facilities providing European researchers from all fields with state-of-the-art instruments and services that support the deployment of new research facilities on a pan-European level. However, the accelerated proliferation of data - newly available from powerful new scientific instruments, simulations and the digitization of existing resources - has created a new impetus for increasing efforts and investments in order to tackle the specific challenges of data management, and to ensure a coherent approach to research data access and preservation. EUDAT is a pan-European initiative that started in October 2011 and aims to help overcome these challenges by laying out the foundations of a Collaborative Data Infrastructure (CDI) in which centres offering community-specific support services to their users could rely on a set of common data services shared between different research communities. Although research communities from different disciplines have different ambitions and approaches - particularly with respect to data organization and content - they also share many basic service requirements. This commonality makes it possible for EUDAT to establish common data services, designed to support multiple research communities, as part of this CDI. During the first year, EUDAT has been reviewing the approaches and requirements of a first subset of communities from linguistics (CLARIN), solid earth sciences (EPOS), climate sciences (ENES), environmental sciences (LIFEWATCH), and biological and medical sciences (VPH), and shortlisted four generic services to be deployed as shared services on the EUDAT infrastructure. These services are data replication from site to site, data staging to compute facilities, metadata, and easy storage. A number of enabling services such as distributed authentication and authorization, persistent identifiers, hosting of services, workspaces and centre registry were also discussed. The services being designed in EUDAT will thus be of interest to a broad range of communities that lack their own robust data infrastructures, or that are simply looking for additional storage and/or computing capacities to better access, use, re-use, and preserve their data. The first pilots were completed in 2012, and a pre-production-ready operational infrastructure, comprising five sites (RZG, CINECA, SARA, CSC, FZJ) and offering 480 TB of online storage and 4 PB of near-line (tape) storage, was established, initially serving four user communities (ENES, EPOS, CLARIN, VPH). These services shall be available to all communities in a production environment by 2014. Although EUDAT has initially focused on a subset of research communities, it aims to engage with other communities interested in adapting their solutions or contributing to the design of the infrastructure. Discussions with other research communities - belonging to the fields of environmental sciences, biomedical science, physics, social sciences and humanities - have already begun and are following a pattern similar to the one we adopted with the initial communities.
The next step will consist of integrating representatives from these communities into the existing pilots and task forces so as to include them in the process of designing the services and, ultimately, shaping the future CDI.

  20. HILL: The High-Intensity Laser Laboratory Core Team's Reply to Questions from the NNSA Experimental Facilities Panel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albright, B J

    2012-08-02

Question 1 - The type of physics regimes that HILL can access for weapons studies is quite interesting. The question that arises for the proposal team is what priority does this type of experimental data have versus data that can be obtained with NIF, and Z. How does HILL rank in priority compared to MARIE 1.0 in terms of the experimental data it will provide? We reiterate that isochoric heating experiments to be conducted with HILL are complementary to the high energy density physics experiments at NIF and Z and uniquely access states of matter that neither other facility can access. It is our belief that HILL will enable several important questions, e.g., as related to mix morphology, radiation transfer from corrugated surfaces, and equations of state, to be run to ground through carefully diagnosed, 'unit-physics' experiments. Such experiments will substantially improve confidence in our computer models and provide a rigorous science basis for certification. Question 2 - A secondary question relates to the interests of LLNL and SNL in the physics that HILL can address. This should be spelled out clearly. I would like to see the other labs be part of the discussion regarding how important this capability would be if built. Both sister Labs have a keen interest in the physics enabled by high-intensity, high-energy lasers, as evinced by the Z Petawatt and NIF ARC upgrades to their signature facilities. LANL scientists have teamed with scientists from both Laboratories in high-intensity laser 'first experiments' envisioned for HILL and we fully intend to continue these profitable discussions going forward. In the preparation of the HILL proposal, feedback was solicited from the broader HEDP and weapons science communities. The consensus view was that HILL filled a critical gap and that there was a need for a facility like HILL to address outstanding questions in weapons science. It was recognized that co-location of HILL with a facility such as MaRIE 1.0, Z, NIF, or Omega may offer additional advantages and we would expect these to be explored and evaluated during the CD process. Question 3 - A laser/optics experts group should review this proposal to ensure the level of R&D is reasonable to provide a sufficient chance of success (>50%). In the preparation of the HILL proposal, we sent our proposal and cost estimates to laser designers/scientists across the complex. Though risks were identified with our design, the prevailing view of those we engaged was that the risks were appropriately represented by the TRL levels assigned and that the enabling R&D planned in our proposal was adequate for risk mitigation. Question 4 - More data and peer review is needed from its sister facilities around the world. It is our specific intent to conduct both scientific and technical workshops with the user community if the High Intensity Science field is further encouraged as part of the NNSA Roadmap. Question 5 - Does HILL have to be co-located with MARIE 1.0? Is that feasible from the point of view of TA-53 real estate? Multiple siting options were considered for HILL, including co-location with MaRIE 1.0 (the most cost-effective and flexible option), as well as in a separate, stand-alone building and in a retro-fitted existing building. The cost estimate included these contingencies and candidate locations for HILL in TA-53 were identified. There is actually significant space at TA-53 on the hill in the northeast end of the mesa.
Question 6 - What would be the impact on the weapons program if this facility were NOT built? An inability to elucidate aspects of weapons science in the dense plasma regime and validate computer models for same. This would lead to reduced confidence in the computer tools used for certification. Question 7 - Will HILL allow some of the x-ray vulnerability studies proposed by SPARC? If so, what does Sandia's vulnerability group think of this method versus SPARC? It is possible that some of the scope envisioned for SPARC could be achieved on HILL, although it is likely that the energy produced at HILL would not come close to the requirements. We would welcome these discussions with our SNL colleagues. Question 8 - The committee had the opinion that present laser facilities could better be modified to meet this mission need. HILL satisfies a mission need for rapid isochoric heating of materials into conditions relevant to boost with quantitative control of the variables. This is accomplished through particle generation and acceleration mechanisms that require ultra-short (sub-100 femtosecond, we estimate actually sub-30 femtosecond) laser pulses. To generate such very short pulses, high bandwidth is required in the laser system. However, such bandwidth is not possible with current high-energy glass laser systems, so new lasers must be built to meet this requirement.

  1. Microgravity science and applications: Apparatus and facilities

    NASA Technical Reports Server (NTRS)

    1989-01-01

    NASA support apparatus and facilities for microgravity research are summarized in fact sheets. The facilities are ground-based simulation environments for short-term experiments, and the shuttle orbiter environment for long duration experiments. The 17 items of the microgravitational experimental apparatus are described. Electronic materials, alloys, biotechnology, fluid dynamics and transport phenomena, glasses and ceramics, and combustion science are among the topics covered.

  2. Proposed BISOL Facility - a Conceptual Design

    NASA Astrophysics Data System (ADS)

    Ye, Yanlin

    2018-05-01

    In China, a new large-scale nuclear-science research facility, namely the "Beijing Isotope-Separation-On-Line neutron-rich beam facility (BISOL)", has been proposed and reviewed by the governmental committees. This facility aims at both basic science and application goals, and is based on a double-driver concept. On the basic science side, the radioactive ion beams produced from the ISOL device, driven by a research reactor or by an intense deuteron-beam accelerator, will be used to study new physics and technologies at the limits of nuclear stability in the medium-mass region. On the applications side, the facility will be devoted to materials research associated with nuclear energy systems, typically using the intense neutron beams produced by the deuteron-accelerator driver. The initial design will be outlined in this report.

  3. Conceptual design and programmatics studies of space station accommodations for Life Sciences Research Facilities (LSRF)

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Conceptual designs and programmatics of the space station accommodations for the Life Sciences Research Facilities (LSRF) are presented. The animal ECLSS system for the LSRF provides temperature-humidity control, air circulation, and life support functions for experimental subjects. Three ECLSS configurations were studied. All configurations presented satisfy the science requirements for: animal holding facilities with bioisolation; facilities interchangeable to hold rodents, small primates, and plants; metabolic cages interchangeable with standard holding cages; holding facilities adaptable to restrained large primates and rodent breeding/nesting cages; volume for the specified instruments; enclosed germ-free workbench for manipulation of animals and chemical procedures; freezers for specimen storage until return; and centrifuge to maintain animals and plants at fractional g to 1 g or more, with potential for accommodating humans for short time intervals.

  4. Central Computer Science Concepts to Research-Based Teacher Training in Computer Science: An Experimental Study

    ERIC Educational Resources Information Center

    Zendler, Andreas; Klaudt, Dieter

    2012-01-01

    The significance of computer science for economics and society is undisputed. In particular, computer science is acknowledged to play a key role in schools (e.g., by opening multiple career paths). The provision of effective computer science education in schools is dependent on teachers who are able to properly represent the discipline and whose…

  5. Instrument Systems Analysis and Verification Facility (ISAVF) users guide

    NASA Technical Reports Server (NTRS)

    Davis, J. F.; Thomason, J. O.; Wolfgang, J. L.

    1985-01-01

    The ISAVF facility is primarily an interconnected system of computers, special purpose real time hardware, and associated generalized software systems, which will permit the Instrument System Analysts, Design Engineers and Instrument Scientists to perform trade-off studies, specification development, instrument modeling, and verification of instrument hardware performance. It is not the intent of the ISAVF to duplicate or replace existing special purpose facilities such as the Code 710 Optical Laboratories or the Code 750 Test and Evaluation facilities. The ISAVF will provide data acquisition and control services for these facilities, as needed, using remote computer stations attached to the main ISAVF computers via dedicated communication lines.

  6. National resource for computation in chemistry, phase I: evaluation and recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-05-01

The National Resource for Computation in Chemistry (NRCC) was inaugurated at the Lawrence Berkeley Laboratory (LBL) in October 1977, with joint funding by the Department of Energy (DOE) and the National Science Foundation (NSF). The chief activities of the NRCC include: assembling a staff of eight postdoctoral computational chemists, establishing an office complex at LBL, purchasing a midi-computer and graphics display system, administering grants of computer time, conducting nine workshops in selected areas of computational chemistry, compiling a library of computer programs with adaptations and improvements, initiating a software distribution system, providing user assistance and consultation on request. This report presents assessments and recommendations of an Ad Hoc Review Committee appointed by the DOE and NSF in January 1980. The recommendations are that NRCC should: (1) not fund grants for computing time or research but leave that to the relevant agencies, (2) continue the Workshop Program in a mode similar to Phase I, (3) abandon in-house program development and establish instead a competitive external postdoctoral program in chemistry software development administered by the Policy Board and Director, and (4) not attempt a software distribution system (leaving that function to the QCPE). Furthermore, (5) DOE should continue to make its computational facilities available to outside users (at normal cost rates) and should find some way to allow the chemical community to gain occasional access to a CRAY-level computer.

  7. Health sciences libraries in Kuwait: a study of their resources, facilities, and services

    PubMed Central

    Al-Ansari, Husain A.; Al-Enezi, Sana

    2001-01-01

    The purpose of this study was to examine the current status of health sciences libraries in Kuwait in terms of their staff, collections, facilities, use of information technology, information services, and cooperation. Seventeen libraries participated in the study. Results show that the majority of health sciences libraries were established during the 1980s. Their collections are relatively small. The majority of their staff is nonprofessional. The majority of libraries provide only basic information services. Cooperation among libraries is limited. Survey results also indicate that a significant number of health sciences libraries are not automated. Some recommendations for the improvement of existing resources, facilities, and services are made. PMID:11465688

  8. NASA AMES Remote Operations Center for 2001

    NASA Technical Reports Server (NTRS)

    Sims, M.; Marshall, J.; Cox, S.; Galal, K.

    1999-01-01

    There is a Memorandum of Agreement between NASA Ames, JPL, West Virginia University, and the University of Arizona that led to funding for the MECA microscope and to the establishment of an Ames facility for science analysis of microscopic and other data. Data sharing and analysis will be governed by agreement among the Mars Environmental Compatibility Assessment (MECA), Robotic Arm Camera (RAC), and other PIs. This facility is intended to complement other analysis efforts, with one objective being to test the latest information technologies in support of actual mission science operations. Additionally, it will be used as a laboratory for the exploration of collaborative science activities. With the goal of enhancing the science return for both Human Exploration and Development of Space (HEDS) and Astrobiology, we shall utilize various tools such as superresolution and the Virtual Environment Vehicle Interface (VEVI) virtual reality visualization tools. In this presentation we will describe the current planning for this facility.

  9. UTILIZATION OF COMPUTER FACILITIES IN THE MATHEMATICS AND BUSINESS CURRICULUM IN A LARGE SUBURBAN HIGH SCHOOL.

    ERIC Educational Resources Information Center

    RENO, MARTIN; AND OTHERS

    A study was undertaken to explore in a qualitative way the possible utilization of computer and data processing methods in high school education. Objectives were--(1) to establish a working relationship with a computer facility so that able students and their teachers would have access to the facilities, (2) to develop a unit for the utilization…

  10. Parallel computation and the Basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1992-12-16

A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
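
    The abstract above describes an architecture rather than giving code; the following is a minimal, hypothetical Python sketch of the master-and-slaves pattern with domain decomposition and message passing that it refers to. It is not PROTOPAR and does not use Basis or PVM; the multiprocessing pipes simply stand in for PVM-style message passing, and the sum-of-squares work is a placeholder for real per-subdomain physics.

    import multiprocessing as mp
    import numpy as np

    def slave(conn):
        # Message passing: receive a subdomain from the master, do local work,
        # and send the partial result back. The work here is a stand-in for
        # whatever a real science package would compute on its subdomain.
        subdomain = conn.recv()
        conn.send(float(np.sum(subdomain ** 2)))
        conn.close()

    def master(domain, n_slaves=4):
        # Domain decomposition: split the global array into subdomains, farm
        # one out to each slave process, then gather the partial results.
        pieces = np.array_split(domain, n_slaves)
        pipes, procs = [], []
        for piece in pieces:
            parent_conn, child_conn = mp.Pipe()
            proc = mp.Process(target=slave, args=(child_conn,))
            proc.start()
            parent_conn.send(piece)
            pipes.append(parent_conn)
            procs.append(proc)
        total = sum(conn.recv() for conn in pipes)
        for proc in procs:
            proc.join()
        return total

    if __name__ == "__main__":
        data = np.arange(1000, dtype=float)
        print(master(data))   # sum of squares of 0..999, computed by the slaves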

  11. Parallel computation and the basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1993-05-01

A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communications costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.

  12. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779

  13. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    PubMed

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  14. Facilities | Integrated Energy Solutions | NREL

    Science.gov Websites

    High-Performance Computing Data Center: high-performance computing facilities at NREL support the strategies needed to optimize our entire energy system.

  15. The planned Alaska SAR Facility - An overview

    NASA Technical Reports Server (NTRS)

    Carsey, Frank; Weeks, Wilford

    1987-01-01

    The Alaska SAR Facility (ASF) is described in an overview fashion. The facility consists of three major components, a Receiving Ground System, a SAR Processing System and an Analysis and Archiving System; the ASF Program also has a Science Working Team and the requisite management and operations systems. The ASF is now an approved and fully funded activity; detailed requirements and science background are presented for the facility to be implemented for data from the European ERS-1, the Japanese ERS-1 and Radarsat.

  16. Experience with a UNIX based batch computing facility for H1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.

    1994-12-31

    A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.

  17. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ...; (Formerly FDA-2007D-0393)] Guidance for Industry: Blood Establishment Computer System Validation in the User... Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April 2013. The... document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's...

  18. Space plasma branch at NRL

    NASA Astrophysics Data System (ADS)

    The Naval Research Laboratory (Washington, D.C.) formed the Space Plasma Branch within its Plasma Physics Division on July 1. Vithal Patel, former Program Director of Magnetospheric Physics, National Science Foundation, also joined NRL on the same date as Associate Superintendent of the Plasma Physics Division. Barret Ripin is head of the newly organized branch. The Space Plasma branch will do basic and applied space plasma research using a multidisciplinary approach. It consolidates traditional rocket and satellite space experiments, space plasma theory and computation, with laboratory space-related experiments. About 40 research scientists, postdoctoral fellows, engineers, and technicians are divided among its five sections. The Theory and Computation sections are led by Joseph Huba and Joel Fedder, the Space Experiments section is led by Paul Rodriguez, and the Pharos Laser Facility and Laser Experiments sections are headed by Charles Manka and Jacob Grun.

  19. KSC-97PC1406

    NASA Image and Video Library

    1997-09-23

    Boeing technicians, from right, John Pearce Jr., Mike Vawter and Rob Ferraro prepare a Russian replacement computer for stowage aboard the Space Shuttle Atlantis shortly before the scheduled launch of Mission STS-86, slated to be the seventh docking of the Space Shuttle with the Russian Space Station Mir. The preparations are being made at the SPACEHAB Payload Processing Facility in Cape Canaveral. The last-minute cargo addition requested by the Russians will be mounted on the aft bulkhead of the SPACEHAB Double Module, which is being used as a pressurized cargo container for science/logistical equipment and supplies that will be exchanged between Atlantis and the Mir. Using the Module Vertical Access Kit (MVAC), technicians will be lowered inside the module to install the computer for flight. Liftoff of STS-86 is scheduled Sept. 25 at 10:34 p.m. from Launch Pad 39A

  20. KSC-97PC1405

    NASA Image and Video Library

    1997-09-23

    Boeing technicians John Pearce Jr., at left, and Mike Vawter prepare a Russian replacement computer for stowage aboard the Space Shuttle Atlantis shortly before the scheduled launch of Mission STS-86, slated to be the seventh docking of the Space Shuttle with the Russian Space Station Mir. The preparations are being made at the SPACEHAB Payload Processing Facility in Cape Canaveral. The last-minute cargo addition requested by the Russians will be mounted on the aft bulkhead of the SPACEHAB Double Module, which is being used as a pressurized cargo container for science/logistical equipment and supplies that will be exchanged between Atlantis and the Mir. Using the Module Vertical Access Kit (MVAC), technicians will be lowered inside the module to install the computer for flight. Liftoff of STS-86 is scheduled Sept. 25 at 10:34 p.m. from Launch Pad 39A

  1. A new approach to the design of information systems for foodservice management in health care facilities.

    PubMed

    Matthews, M E; Norback, J P

    1984-06-01

    An organizational framework for integrating foodservice data into an information system for management decision making is presented. The framework involves the application to foodservice of principles developed by the disciplines of managerial economics and accounting, mathematics, computer science, and information systems. The first step is to conceptualize a foodservice system from an input-output perspective, in which inputs are units of resources available to managers and outputs are servings of menu items. Next, methods of full cost accounting, from the management accounting literature, are suggested as a mechanism for developing and assigning costs of using resources within a foodservice operation. Then matrix multiplication is used to illustrate types of information that matrix data structures could make available for management planning and control when combined with a conversational mode of computer programming.
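
    The matrix mechanics mentioned above are easy to make concrete. The following hypothetical Python sketch (all menu items, resources, and prices are invented for illustration, not taken from the paper) shows how a menu-item-by-resource usage matrix combined with a unit-cost vector yields per-serving costs, and how a production forecast then gives a planning total.

    import numpy as np

    # Rows: menu items; columns: resources used per serving
    # (flour in kg, cheese in kg, labor in hours). All values are invented.
    usage = np.array([
        [0.10, 0.05, 0.02],   # pizza slice
        [0.00, 0.03, 0.03],   # side salad
    ])

    unit_cost = np.array([0.80, 6.00, 15.00])   # $/kg, $/kg, $/hour

    cost_per_serving = usage @ unit_cost        # matrix-vector product -> [0.68, 0.63]
    forecast = np.array([300, 120])             # planned servings of each item

    print(cost_per_serving)
    print(forecast @ cost_per_serving)          # total planned resource cost: 279.6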

  2. Data-intensive science gateway for rock physicists and volcanologists.

    NASA Astrophysics Data System (ADS)

    Filgueira, Rosa; Atkinson, Malcom; Bell, Andrew; Main, Ian; Boon, Steve; Meredith, Philp; Kilburn, Christopher

    2014-05-01

    Scientists have always shared data and mathematical models of the phenomena they study. Rock physics and Volcanology, as well as other solid-Earth sciences, have increasingly used Internet communications and computational renditions of their models for this purpose over the last two decades. Here we consider how to organise rock physics and volcanology data to open up opportunities for sharing and comparing both data from experiments, observations and model runs, and analytic interpretations of these data. Our hypothesis is that if we facilitate productive information sharing across those communities by using a new science gateway, it will benefit the science. The proposed science gateway should take the first steps towards making existing research practices easier and facilitate new research. It will achieve this by supporting three major functions: 1) sharing data from laboratories and observatories, experimental facilities and models; 2) sharing models of rock fracture and methods for analysing experimental data; and 3) supporting recurrent operational tasks, such as data collection and model application in real time. We report initial work in two projects (NERC EFFORT and NERC CREEP-2) and experience with an early web-accessible prototype called the EFFORT gateway, where we are implementing such information sharing services for those projects. 1. Sharing data: In the EFFORT gateway, we are working on several facilities for sharing data: *Upload data: We have designed and developed a new adaptive data transfer Java tool called FAST (Flexible Automated Streaming Transfer) to upload experimental data and metadata periodically from laboratories to our repository. *Visualisation: As data are deposited in the repository, a visualisation of the accumulated data is made available for display in the Web portal. *Metadata and catalogues: The gateway uses a repository to hold all the data and a catalogue to hold all the corresponding metadata. 2. Sharing models and methods: The EFFORT gateway uses a repository to hold all of the models and a catalogue to hold the corresponding metadata. It provides several Web facilities for uploading, accessing and testing models. *Upload and store models: Through the gateway, researchers can upload as many models to the repository as they want. *Description of models: The gateway solicits and creates metadata for every model uploaded to store in the catalogue. *Search for models: Researchers can search the catalogue for models by using prepackaged SQL queries. *Access to models: Once a researcher has selected the model(s) to be used for analysing an experiment, they can be obtained from the gateway. *Services to test and run models: Once a researcher selects a model and the experimental data to which it should be applied, the gateway submits the corresponding computational job to a high-performance computing (HPC) resource, hiding technical details. Once a job is submitted to the HPC cluster, the results are displayed in the gateway in real time, catalogued and stored in the data repository, allowing further researcher-instigated operations to retrieve, inspect and aggregate results. *Services to write models: We have designed the VarPy library, an open-source toolbox that provides a Python framework for analysing volcanology and rock physics data. It provides several functions, which allow users to define their own workflows to develop models, analyses and visualizations. 3.
Recurrent Operations: We have started to introduce some recurrent operations: *Automated data upload: FAST provides a mechanism to automate the data upload. *Periodic activation of models: The EFFORT gateway allows researchers to run different models periodically against the experimental data that are being or have been uploaded.
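
    The catalogue-plus-repository pattern and the prepackaged query step described above can be illustrated with a small, purely hypothetical Python sketch. The EFFORT gateway's actual schema, query set, and HPC submission interface are not given in this abstract, so the table layout, column names, and the submit_job stub below are invented solely to show the shape of the workflow: query the metadata catalogue, pick a model, hand it to a scheduler.

    import sqlite3

    def build_catalogue(conn):
        # Toy metadata catalogue; a real gateway would have a far richer schema.
        conn.execute("CREATE TABLE models (name TEXT, author TEXT, phenomenon TEXT, path TEXT)")
        conn.executemany(
            "INSERT INTO models VALUES (?, ?, ?, ?)",
            [("rate_state_v1", "alice", "rock fracture", "/repo/models/rate_state_v1.py"),
             ("dome_growth_v2", "bob", "volcano deformation", "/repo/models/dome_growth_v2.py")])
        conn.commit()

    def find_models(conn, phenomenon):
        # A "prepackaged" query: list catalogued models matching a phenomenon.
        cur = conn.execute(
            "SELECT name, path FROM models WHERE phenomenon = ?", (phenomenon,))
        return cur.fetchall()

    def submit_job(model_path, dataset_id):
        # Stand-in for handing the selected model and dataset to an HPC scheduler.
        print(f"submitting {model_path} against dataset {dataset_id}")

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        build_catalogue(conn)
        for name, path in find_models(conn, "rock fracture"):
            submit_job(path, dataset_id="experiment_042")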

  3. A Financial Technology Entrepreneurship Program for Computer Science Students

    ERIC Educational Resources Information Center

    Lawler, James P.; Joseph, Anthony

    2011-01-01

    Education in entrepreneurship is becoming a critical area of curricula for computer science students. Few schools of computer science have a concentration in entrepreneurship in the computing curricula. The paper presents Technology Entrepreneurship in the curricula at a leading school of computer science and information systems, in which students…

  4. The HEP Software and Computing Knowledge Base

    NASA Astrophysics Data System (ADS)

    Wenaus, T.

    2017-10-01

    HEP software today is a rich and diverse domain in itself and exists within the mushrooming world of open source software. As HEP software developers and users we can be more productive and effective if our work and our choices are informed by a good knowledge of what others in our community have created or found useful. The HEP Software and Computing Knowledge Base, hepsoftware.org, was created to facilitate this by serving as a collection point and information exchange on software projects and products, services, training, computing facilities, and relating them to the projects, experiments, organizations and science domains that offer them or use them. It was created as a contribution to the HEP Software Foundation, for which a HEP S&C knowledge base was a much requested early deliverable. This contribution will motivate and describe the system, what it offers, its content and contributions both existing and needed, and its implementation (node.js based web service and javascript client app) which has emphasized ease of use for both users and contributors.

  5. Computer Science Teacher Professional Development in the United States: A Review of Studies Published between 2004 and 2014

    ERIC Educational Resources Information Center

    Menekse, Muhsin

    2015-01-01

    While there has been a remarkable interest to make computer science a core K-12 academic subject in the United States, there is a shortage of K-12 computer science teachers to successfully implement computer sciences courses in schools. In order to enhance computer science teacher capacity, training programs have been offered through teacher…

  6. Microgravity

    NASA Image and Video Library

    1998-05-01

    The Microgravity Science Glovebox is a facility for performing microgravity research in the areas of materials, combustion, fluids and biotechnology science. The facility occupies a full International Standard Payload Rack (ISPR), consisting of the ISPR rack and its infrastructure, the glovebox core facility, data handling, rack stowage, outfitting equipment, and a video subsystem. The MSG core facility provides experiment developers with a chamber featuring air filtering and recycling, up to two levels of containment, an airlock for transfer of payload equipment to/from the main volume, interface resources for the payload inside the core facility, resources inside the airlock, and storage drawers for MSG support equipment and consumables.

  7. Synergistic Effect of Nitrogen in Cobalt Nitride and Nitrogen-Doped Hollow Carbon Spheres for Oxygen Reduction Reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Xing; Liu, Lin; Jiang, Yu

The need for inexpensive and high-activity oxygen reduction reaction (ORR) electrocatalysts has attracted considerable research interest over the past years. Here we report a novel hybrid that contains cobalt nitride/nitrogen-rich hollow carbon spheres (CoxN/NHCS) as a high-performance catalyst for ORR. The CoxN nanoparticles were uniformly dispersed and confined in the hollow NHCS shell. The performance of the resulting CoxN/NHCS hybrid was comparable with that of a commercial Pt/C at the same catalyst loading toward ORR, but the mass activity of the former was 5.7 times better than that of the latter. The nitrogen in both CoxN and NHCS, especially CoxN, could weaken the adsorption of reaction intermediates (O and OOH), which follows the favourable reaction pathway on CoxN/NHCS according to the DFT-calculated Gibbs free energy diagrams. Our results demonstrated a new strategy for designing and developing inexpensive, non-precious metal electrocatalysts for next-generation fuels. The authors acknowledge the financial support from the National Basic Research Program (973 program, No. 2013CB733501) and the National Natural Science Foundation of China (No. 21306169, 21101137, 21136001, 21176221 and 91334013). Dr. D. Mei is supported by the US Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL). EMSL is a national scientific user facility located at Pacific Northwest National Laboratory (PNNL) and sponsored by DOE’s Office of Biological and Environmental Research.

  8. Reaction Rate Theory in Coordination Number Space: An Application to Ion Solvation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Santanu; Baer, Marcel D.; Mundy, Christopher J.

    2016-04-14

Understanding reaction mechanisms in many chemical and biological processes requires the application of rare event theories. In these theories, an effective choice of a reaction coordinate to describe a reaction pathway is essential. To this end, we study ion solvation in water using molecular dynamics simulations and explore the utility of coordination number (n = number of water molecules in the first solvation shell) as the reaction coordinate. Here we compute the potential of mean force (W(n)) using umbrella sampling, predicting multiple metastable n-states for both cations and anions. We find that with increasing ionic size, these states become more stable and structured for cations when compared to anions. We have extended transition state theory (TST) to calculate transition rates between n-states. TST overestimates the rate constant due to solvent-induced barrier recrossings that are not accounted for. We correct the TST rates by calculating transmission coefficients using the reactive flux method. This approach enables a new way of understanding rare events involving coordination complexes. We gratefully acknowledge Liem Dang and Panos Stinis for useful discussion. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. SR, CJM, and GKS were supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. MDB was supported by MS3 (Materials Synthesis and Simulation Across Scales) Initiative, a Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory (PNNL). PNNL is a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy.
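
    To make the correction step concrete, the generic one-dimensional rate expressions along a coordination-number coordinate are sketched below in LaTeX. These are the textbook forms for TST along a scalar coordinate and its reactive-flux correction, shown here only as an illustration; they are not necessarily the exact expressions used by the authors.

    % Generic 1-D TST rate along the coordination number n, with the
    % reactive-flux transmission coefficient correcting for recrossings.
    \begin{align}
      k_{\mathrm{TST}} &=
        \frac{\tfrac{1}{2}\,\langle |\dot{n}| \rangle\, e^{-\beta W(n^{\ddagger})}}
             {\int_{\mathrm{reactant}} e^{-\beta W(n)}\,\mathrm{d}n},
      &
      k &= \kappa\, k_{\mathrm{TST}}, \qquad 0 < \kappa \le 1.
    \end{align}

    Here W(n) is the potential of mean force from umbrella sampling, n^{\ddagger} marks the barrier top between two metastable n-states, \beta = 1/k_B T, and \kappa is the transmission coefficient from the reactive-flux method, which accounts for the solvent-induced barrier recrossings that plain TST ignores.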

  9. Designing a Virtual Research Facility to motivate Professional-Citizen Collaboration

    NASA Astrophysics Data System (ADS)

    Gay, Pamela

    In order to handle the onslaught of data spilling from telescopes on the Earth and on orbit, CosmoQuest has created a virtual research facility that allows the public to collaborate with science teams on projects that would otherwise lack the necessary human resources. This second-generation citizen science site goes beyond asking people to click on images, also engaging them in taking classes, attending virtual seminars, and participating in virtual star parties. These features were introduced to try to expand the diversity of motivations that bring people to the project and to keep them engaged over time - just as a research center seeks to bring a diversity of people together to work and learn over time. In creating the CosmoQuest Virtual Research Facility, we sought to answer the question, “What would happen if we provided the public with the same kinds of facilities scientists have, and invite them to be our collaborators?” It had already been observed that the public readily attends public science lectures, open houses at science facilities, and education programs such as star parties. It was hoped that by creating a central facility, we could build a community of people learning and doing science in a productive manner. In order to be successful, we needed to first create the facility, then test if people were coming both to learn and to do science, and finally to verify that people were doing legitimate science. During the past 18 months of operations, we have continued to work through each of these stages, as discussed in this talk. At this early date, progress is ongoing, and much research remains to be done, but all indications show that we are on our way to building a community of people learning and doing science. During 2013-2014, a series of studies looked at the motivations of CosmoQuest users, as well as their forms of site interactions. During this talk, we will review these results, as well as the demographics of our user population.

  10. Fundamental Science with Pulsed Power: Research Opportunities and User Meeting.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mattsson, Thomas Kjell Rene; Wootton, Alan James; Sinars, Daniel Brian

    The fifth Fundamental Science with Pulsed Power: Research Opportunities and User Meeting was held in Albuquerque, NM, July 20-23, 2014. The purpose of the workshop was to bring together leading scientists in four research areas with active fundamental science research at Sandia’s Z facility: Magnetized Liner Inertial Fusion (MagLIF), Planetary Science, Astrophysics, and Material Science. The workshop was focused on discussing opportunities for high-impact research using Sandia’s Z machine, a future 100 GPa class facility, and possible topics for growing the academic (off-Z-campus) science relevant to the Z Fundamental Science Program (ZFSP) and related projects in astrophysics, planetary science, MagLIF-relevant magnetized HED science, and materials science. The user meeting was for Z collaborative users to: a) hear about the Z accelerator facility status and plans, b) present the status of their research, and c) be provided with a venue to meet and work as groups. Following presentations by Mark Herrmann and Joel Lash on the fundamental science program on Z and the status of the Z facility were plenary sessions for the four research areas. The third day of the workshop was devoted to breakout sessions in the four research areas. The plenary and breakout sessions for the four areas were organized by Dan Sinars (MagLIF), Dylan Spaulding (Planetary Science), Don Winget and Jim Bailey (Astrophysics), and Thomas Mattsson (Material Science). Concluding the workshop was an outbrief session in which the leads presented a summary of the discussions in each working group to the full workshop. A summary of discussions and conclusions from each of the research areas follows, and the outbrief slides are included as appendices.

  11. The medical science DMZ: a network design pattern for data-intensive medical science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Dart, Eli; Barnett, William

    We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. High-end networking, packet-filter firewalls, network intrusion-detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. The exponentially increasing amounts of "omics" data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research "Big Data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.

  12. The medical science DMZ: a network design pattern for data-intensive medical science.

    PubMed

    Peisert, Sean; Dart, Eli; Barnett, William; Balas, Edward; Cuff, James; Grossman, Robert L; Berman, Ari; Shankar, Anurag; Tierney, Brian

    2017-10-06

    We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. High-end networking, packet-filter firewalls, network intrusion-detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. The exponentially increasing amounts of "omics" data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research "Big Data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
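
    The two records above describe the Medical Science DMZ only at the architectural level. As a rough, illustrative sketch of the kind of static, auditable policy a Science DMZ border router enforces for data transfer nodes (in place of a stateful, deep-inspection firewall), the Python fragment below checks a flow against a fixed allow-list; all addresses, port numbers, and names are hypothetical and are not taken from the paper.

      # Minimal sketch (not from the paper): static ACL-style filtering of the kind a
      # Science DMZ border router applies to data-transfer-node (DTN) traffic instead
      # of routing it through a stateful, packet-inspecting firewall. All addresses,
      # ports, and names below are hypothetical.
      import ipaddress

      # Registered collaborating DTN networks (hypothetical documentation prefixes).
      ALLOWED_REMOTE_PREFIXES = [ipaddress.ip_network("192.0.2.0/24"),
                                 ipaddress.ip_network("198.51.100.0/24")]
      LOCAL_DTN = ipaddress.ip_address("203.0.113.10")
      ALLOWED_PORTS = {2811, 443}  # e.g. GridFTP control, HTTPS-based transfer tools

      def permit(src_ip: str, dst_ip: str, dst_port: int) -> bool:
          """Return True if an incoming flow matches the static DMZ policy."""
          src = ipaddress.ip_address(src_ip)
          dst = ipaddress.ip_address(dst_ip)
          remote_ok = any(src in net for net in ALLOWED_REMOTE_PREFIXES)
          return remote_ok and dst == LOCAL_DTN and dst_port in ALLOWED_PORTS

      print(permit("192.0.2.55", "203.0.113.10", 2811))    # True: registered peer
      print(permit("203.0.113.99", "203.0.113.10", 2811))  # False: unknown source

    In a real deployment this logic lives in router access-control lists rather than host code; the point is only that the policy is static and auditable, which is what allows high-rate flows to bypass per-packet inspection while sensitive data remain restricted to known endpoints.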

  13. The hills are alive: Earth surface dynamics in the University of Arizona Landscape Evolution Observatory

    NASA Astrophysics Data System (ADS)

    DeLong, S.; Troch, P. A.; Barron-Gafford, G. A.; Huxman, T. E.; Pelletier, J. D.; Dontsova, K.; Niu, G.; Chorover, J.; Zeng, X.

    2012-12-01

    To meet the challenge of predicting landscape-scale changes in Earth system behavior, the University of Arizona has designed and constructed a new large-scale and community-oriented scientific facility - the Landscape Evolution Observatory (LEO). The primary scientific objectives are to quantify interactions among hydrologic partitioning, geochemical weathering, ecology, microbiology, atmospheric processes, and geomorphic change associated with incipient hillslope development. LEO consists of three identical, sloping, 333 m2 convergent landscapes inside a 5,000 m2 environmentally controlled facility. These engineered landscapes contain 1 meter of basaltic tephra ground to homogeneous loamy sand and contain a spatially dense sensor and sampler network capable of resolving meter-scale lateral heterogeneity and sub-meter scale vertical heterogeneity in moisture, energy and carbon states and fluxes. Each ~1000 metric ton landscape has load cells embedded into the structure to measure changes in total system mass with 0.05% full-scale repeatability (equivalent to less than 1 cm of precipitation), to facilitate better quantification of evapotranspiration. Each landscape has an engineered rain system that allows application of precipitation at rates between 3 and 45 mm/hr. These landscapes are being studied in replicate as "bare soil" for an initial period of several years. After this initial phase, heat- and drought-tolerant vascular plant communities will be introduced. Introduction of vascular plants is expected to change how water, carbon, and energy cycle through the landscapes, with potentially dramatic effects on co-evolution of the physical and biological systems. LEO also provides a physical comparison to computer models that are designed to predict interactions among hydrological, geochemical, atmospheric, ecological and geomorphic processes in changing climates. These computer models will be improved by comparing their predictions to physical measurements made in LEO. The main focus of our iterative modeling and measurement discovery cycle is to use rapid data assimilation to facilitate validation of newly coupled open-source Earth systems models. LEO will be a community resource for Earth system science research, education, and outreach. The LEO project operational philosophy includes 1) open and real-time availability of sensor network data, 2) a framework for community collaboration and facility access that includes integration of new or comparative measurement capabilities into existing facility cyberinfrastructure, 3) community-guided science planning, and 4) development of novel education and outreach programs. (Figure: artistic rendering of the University of Arizona Landscape Evolution Observatory.)
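
    As a quick check of the figures quoted above (a worked example, not part of the abstract): 0.05% of a roughly 1,000-metric-ton landscape is about 500 kg, which spread over a 333 m2 surface corresponds to roughly 1.5 mm of water, comfortably below the stated 1 cm. A minimal Python sketch of the conversion, using rounded values from the abstract:

      # Worked example (rounded values from the abstract): convert the load-cell
      # repeatability of a LEO landscape into an equivalent depth of precipitation.
      landscape_mass_kg = 1_000_000.0   # ~1000 metric tons per landscape
      surface_area_m2 = 333.0           # plan area of one convergent landscape
      repeatability_fraction = 0.0005   # 0.05% of full scale

      detectable_mass_kg = landscape_mass_kg * repeatability_fraction   # ~500 kg
      # 1 kg of water spread over 1 m^2 forms a layer 1 mm deep.
      equivalent_depth_mm = detectable_mass_kg / surface_area_m2        # ~1.5 mm

      print(f"Detectable mass change: {detectable_mass_kg:.0f} kg "
            f"~ {equivalent_depth_mm:.1f} mm of water (< 10 mm, as stated)")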

  14. The Australian Replacement Research Reactor

    NASA Astrophysics Data System (ADS)

    Kennedy, Shane; Robinson, Robert

    2004-03-01

    The 20-MW Australian Replacement Research Reactor represents possibly the greatest single research infrastructure investment in Australia's history. Construction of the facility has commenced, following award of the construction contract in July 2000 and the construction licence in April 2002. The project includes a large state-of-the-art liquid deuterium cold-neutron source and supermirror guides feeding a large modern guide hall, in which most of the instruments are placed. Alongside the guide hall, there is good provision of laboratory and office space, as well as space for support activities. While the facility has "space" for up to 18 instruments, the project has funding for an initial set of 8 instruments, which will be ready when the reactor is fully operational in July 2006. Instrument performance will be competitive with the best research-reactor facilities anywhere, and our goal is to be in the top 3 such facilities worldwide. Staff to lead the design effort and operate these instruments have been hired on the international market from leading overseas facilities, and from within Australia, and 7 out of 8 instruments have been specified and costed. At present the instrumentation project carries a 10% contingency. An extensive dialogue has taken place with the domestic user community and our international peers, via various means including a series of workshops over the last 2 years covering all 8 instruments, emerging areas of application like biology and the earth sciences, and computing infrastructure for the instruments.

  15. CosmoQuest: Training Educators and Engaging Classrooms in Citizen Science through a Virtual Research Facility

    NASA Astrophysics Data System (ADS)

    Buxner, Sanlyn; Bracey, Georgia; Summer, Theresa; Cobb, Whitney; Gay, Pamela L.; Finkelstein, Keely D.; Gurton, Suzanne; Felix-Strishock, Lisa; Kruse, Brian; Lebofsky, Larry A.; Jones, Andrea J.; Tweed, Ann; Graff, Paige; Runco, Susan; Noel-Storr, Jacob; CosmoQuest Team

    2016-10-01

    CosmoQuest is a Citizen Science Virtual Research Facility that engages scientists, educators, students, and the public in analyzing NASA images. Often, these types of citizen science activities target enthusiastic members of the public, and additionally engage students in K-12 and college classrooms. To support educational engagement, we are developing a pipeline in which formal and informal educators and facilitators use the virtual research facility to engage students in real image analysis that is framed to provide meaningful science learning. This work also contributes to the larger project to produce publishable results. Community scientists are being solicited to propose CosmoQuest Science Projects that take advantage of the virtual research facility's capabilities. Each CosmoQuest Science Project will result in formal education materials aligned with the Next Generation Science Standards, including the three dimensions of science learning: core ideas, crosscutting concepts, and science and engineering practices. Participating scientists will contribute to companion educational materials with support from the CosmoQuest staff of data specialists and education specialists. Educators will be trained through in-person and virtual workshops, and classrooms will have the opportunity not only to work with NASA data but also to interface with NASA scientists. Through this project, we are bringing together subject matter experts, classrooms, and informal science organizations to share the excitement of NASA SMD science with future citizen scientists. CosmoQuest is funded through individual donations, through NASA Cooperative Agreement NNX16AC68A, and through additional grants and contracts that are listed on our website, cosmoquest.org.

  16. Computer Science | Classification | College of Engineering & Applied

    Science.gov Websites

    EMS 1011; Adrian Dumitrescu, Ph.D., Professor, Computer Science, (414) 229-4265, Eng & Math @uwm.edu, Eng & Math Sciences 919; Hossein Hosseini, Ph.D., Professor, Computer Science, (414) 229-5184, hosseini@uwm.edu, Eng & Math Sciences 1091; Amol Mali, Ph.D., Associate Professor, Computer...

  17. Computers in Science Education: Can They Go Far Enough? Have We Gone Too Far?

    ERIC Educational Resources Information Center

    Schrock, John Richard

    1984-01-01

    Indicates that although computers may churn out creative research, science is still dependent on science education, and that science education consists of increasing human experience. Also considers uses and misuses of computers in the science classroom, examining Edgar Dale's "cone of experience" related to laboratory computer and "extended…

  18. Ammonia Oxidation by Abstraction of Three Hydrogen Atoms from a Mo–NH 3 Complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Papri; Heiden, Zachariah M.; Wiedner, Eric S.

    We report ammonia oxidation by homolytic cleavage of all three H atoms from a Mo-15NH3 complex using the 2,4,6-tri-tert-butylphenoxyl radical to afford a Mo-alkylimido (Mo=15NR) complex (R = 2,4,6-tri-t-butylcyclohexa-2,5-dien-1-one). Reductive cleavage of Mo=15NR generates a terminal Mo≡N nitride, and a [Mo-15NH]+ complex is formed by protonation. Computational analysis describes the energetic profile for the stepwise removal of three H atoms from the Mo-15NH3 complex and the formation of Mo=15NR. Acknowledgment. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR and mass spectrometry experiments were performed using EMSL, a national scientific user facility sponsored by the DOE’s Office of Biological and Environmental Research and located at PNNL. The authors thank Dr. Eric D. Walter and Dr. Rosalie Chu for assistance in performing EPR and mass spectrometry analyses, respectively. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.

  19. An Astrometric Facility For Planetary Detection On The Space Station

    NASA Astrophysics Data System (ADS)

    Nishioka, Kenji; Scargle, Jeffrey D.; Givens, John J.

    1987-09-01

    An Astrometric Telescope Facility (ATF) for planetary detection is being studied as a potential Space Station initial operating capability payload. The primary science objective of this mission is the detection and study of planetary systems around other stars. In addition, the facility will be capable of other astrometric measurements, such as stellar motions in other galaxies and highly precise direct measurement of stellar distances within the Milky Way Galaxy. This paper summarizes the results of a recently completed ATF preliminary systems definition study. Results of this study indicate that the preliminary concept for the facility is fully capable of meeting the science objectives without the development of any new technologies. This preliminary systems study started with the following basic assumptions: 1) the facility will be placed in orbit by a single Shuttle launch, 2) the Space Station will provide a coarse pointing system, electrical power, communications, assembly and checkout, maintenance and refurbishment services, and 3) the facility will be operated from a ground facility. With these assumptions and the science performance requirements, a preliminary "strawman" facility was designed. The strawman facility design, with a prime-focus telescope of 1.25-m aperture, an f-ratio of 13, and a single prime-focus instrument, was chosen to minimize random and systematic errors. Total facility mass is 5100 kg and overall dimensions are 1.85-m diam by 21.5-m long. A simple, straightforward operations approach has been developed for ATF. Real-time facility control is not normally required, but a near real-time ground monitoring capability for the facility and the science data stream is maintained on a full-time basis. Facility observational sequences are normally loaded once a week. In addition, the preliminary system is designed to be fail-safe and single-fault tolerant. Routine interactions by the Space Station crew with ATF will not be necessary, but onboard controls are provided for crew override as required for emergencies and maintenance.

  20. 2008 ALCF annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drugan, C.

    2009-12-07

    The word 'breakthrough' aptly describes the transformational science and milestones achieved at the Argonne Leadership Computing Facility (ALCF) throughout 2008. The number of research endeavors undertaken at the ALCF through the U.S. Department of Energy's (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program grew from 9 in 2007 to 20 in 2008. The allocation of computer time awarded to researchers on the Blue Gene/P also spiked significantly - from nearly 10 million processor hours in 2007 to 111 million in 2008. To support this research, we expanded the capabilities of Intrepid, an IBM Blue Gene/P system at the ALCF, to 557 teraflops (TF) for production use. Furthermore, we enabled breakthrough levels of productivity and capability in visualization and data analysis with Eureka, a powerful installation of NVIDIA Quadro Plex S4 external graphics processing units. Eureka delivered a quantum leap in visual compute density, providing more than 111 TF and more than 3.2 terabytes of RAM. On April 21, 2008, the dedication of the ALCF realized DOE's vision to bring the power of the Department's high performance computing to open scientific research. In June, the IBM Blue Gene/P supercomputer at the ALCF debuted as the world's fastest for open science and third fastest overall. There is no question that the science benefited from this growth and system improvement. Four research projects spearheaded by Argonne National Laboratory computer scientists and ALCF users were named to the list of top ten scientific accomplishments supported by DOE's Advanced Scientific Computing Research (ASCR) program. Three of the top ten projects used extensive grants of computing time on the ALCF's Blue Gene/P to model the molecular basis of Parkinson's disease, design proteins at atomic scale, and create enzymes. As the year came to a close, the ALCF was recognized with several prestigious awards at SC08 in November. We provided resources for Linear Scaling Divide-and-Conquer Electronic Structure Calculations for Thousand Atom Nanostructures, a collaborative effort between Argonne, Lawrence Berkeley National Laboratory, and Oak Ridge National Laboratory that received the ACM Gordon Bell Prize Special Award for Algorithmic Innovation. The ALCF also was named a winner in two of the four categories in the HPC Challenge best performance benchmark competition.

  1. Approach to sustainable e-Infrastructures - The case of the Latin American Grid

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Diacovo, Ramon; Brasileiro, Francisco; Carvalho, Diego; Dutra, Inês; Faerman, Marcio; Gavillet, Philippe; Hoeger, Herbert; Lopez Pourailly, Maria Jose; Marechal, Bernard; Garcia, Rafael Mayo; Neumann Ciuffo, Leandro; Ramos Pollan, Paul; Scardaci, Diego; Stanton, Michael

    2010-05-01

    The EELA (E-Infrastructure shared between Europe and Latin America) and EELA-2 (E-science grid facility for Europe and Latin America) projects, co-funded by the European Commission under FP6 and FP7, respectively, have been successful in building a high-capacity, production-quality, scalable Grid Facility for a wide spectrum of applications (e.g. Earth & Life Sciences, High Energy Physics, etc.) from several European and Latin American User Communities. This paper presents the 4-year experience of EELA and EELA-2 in: • Providing each Member Institution the unique opportunity to benefit from a huge distributed computing platform for its research activities, in particular through initiatives such as OurGrid, which proposes a so-called Opportunistic Grid Computing well adapted to small and medium Research Laboratories such as most of those of Latin America and Africa; • Developing a realistic strategy to ensure the long-term continuity of the e-Infrastructure in the Latin American continent, beyond the term of the EELA-2 project, in association with CLARA and collaborating with EGI. Previous interactions between EELA and African Grid members at events such as IST Africa'07, '08 and '09, the International Conference on Open Access'08 and EuroAfriCa-ICT'08, to which EELA and EELA-2 contributed, have shown that the e-Infrastructure situation in Africa compares well with the Latin American one. This means that African Grids are likely to face the same problems that EELA and EELA-2 experienced, especially in getting the necessary user and decision-maker support to create NGIs and, later, a possible continent-wide African Grid Initiative (AGI). The hope is that the EELA-2 endeavour towards sustainability as described in this presentation could help the progress of African Grids.

  2. Future Computer Requirements for Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.

  3. Conceptual design of the Space Station combustion module

    NASA Technical Reports Server (NTRS)

    Morilak, Daniel P.; Rohn, Dennis W.; Rhatigan, Jennifer L.

    1994-01-01

    The purpose of this paper is to describe the conceptual design of the Combustion Module for the International Space Station Alpha (ISSA). This module is part of the Space Station Fluids/Combustion Facility (SS FCF) under development at the NASA Lewis Research Center. The Fluids/Combustion Facility is one of several science facilities which are being developed to support microgravity science investigations in the US Laboratory Module of the ISSA. The SS FCF will support a multitude of fluids and combustion science investigations over the lifetime of the ISSA and return state-of-the-art science data in a timely and efficient manner to the scientific communities. This will be accomplished through modularization of hardware, with planned, periodic upgrades; modularization of like scientific investigations that make use of common facility functions; and through the use of orbital replacement units (ORU's) for incorporation of new technology and new functionality. The SS FCF is scheduled to become operational on-orbit in 1999. The Combustion Module is presently scheduled for launch to orbit and integration with the Fluids/Combustion Facility in 1999. The objectives of this paper are to describe the history of the Combustion Module concept, the types of combustion science investigations which will be accommodated by the module, the hardware design heritage, the hardware concept, and the hardware breadboarding efforts currently underway.

  4. Conceptual Design of the Space Station Fluids Module

    NASA Technical Reports Server (NTRS)

    Rohn, Dennis W.; Morilak, Daniel P.; Rhatigan, Jennifer L.; Peterson, Todd T.

    1994-01-01

    The purpose of this paper is to describe the conceptual design of the Fluids Module for the International Space Station Alpha (ISSA). This module is part of the Space Station Fluids/Combustion Facility (SS FCF) under development at the NASA Lewis Research Center. The Fluids/Combustion Facility is one of several science facilities which are being developed to support microgravity science investigations in the US Laboratory Module of the ISSA. The SS FCF will support a multitude of fluids and combustion science investigations over the lifetime of the ISSA and return state-of-the-art science data in a timely and efficient manner to the scientific communities. This will be accomplished through modularization of hardware, with planned, periodic upgrades; modularization of like scientific investigations that make use of common facility functions; and use of orbital replacement units (ORU's) for incorporation of new technology and new functionality. Portions of the SS FCF are scheduled to become operational on-orbit in 1999. The Fluids Module is presently scheduled for launch to orbit and integration with the Fluids/Combustion Facility in 2001. The objectives of this paper are to describe the history of the Fluids Module concept, the types of fluids science investigations which will be accommodated by the module, the hardware design heritage, the hardware concept, and the hardware breadboarding efforts currently underway.

  5. Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis and computer science

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period April, 1986 through September 30, 1986 is summarized.

  6. 78 FR 10180 - Annual Computational Science Symposium; Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-13

    ...] Annual Computational Science Symposium; Conference AGENCY: Food and Drug Administration, HHS. ACTION... Computational Science Symposium.'' The purpose of the conference is to help the broader community align and share experiences to advance computational science. At the conference, which will bring together FDA...

  7. Integration and use of Microgravity Research Facility: Lessons learned by the crystals by vapor transport experiment and Space Experiments Facility programs

    NASA Technical Reports Server (NTRS)

    Heizer, Barbara L.

    1992-01-01

    The Crystals by Vapor Transport Experiment (CVTE) and Space Experiments Facility (SEF) are materials processing facilities designed and built for use on the Space Shuttle mid deck. The CVTE was built as a commercial facility owned by the Boeing Company. The SEF was built under contract to the UAH Center for Commercial Development of Space (CCDS). Both facilities include up to three furnaces capable of reaching 850 C minimum, stand-alone electronics and software, and independent cooling control. In addition, the CVTE includes a dedicated stowage locker for cameras, a laptop computer, and other ancillary equipment. Both systems are designed to fly in a Middeck Accommodations Rack (MAR), though the SEF is currently being integrated into a Spacehab rack. The CVTE hardware includes two transparent furnaces capable of achieving temperatures in the 850 to 870 C range. The transparent feature allows scientists/astronauts to directly observe and affect crystal growth both on the ground and in space. Cameras mounted to the rack provide photodocumentation of the crystal growth. The basic design of the furnace allows for modification to accommodate techniques other than vapor crystal growth. Early in the CVTE program, the decision was made to assign a principal scientist to develop the experiment plan, affect the hardware/software design, run the ground and flight research effort, and interface with the scientific community. The principal scientist is responsible to the program manager and is a critical member of the engineering development team. As a result of this decision, the hardware/experiment requirements were established in such a way as to balance the engineering and science demands on the equipment. Program schedules for hardware development, experiment definition and material selection, and flight operations development and crew training (both ground support and astronauts) were all planned and carried out with the understanding that the success of the program science was as important as the hardware functionality. How the CVTE payload was designed and what it is capable of, the philosophy of including the scientists in design and operations decisions, and the lessons learned during the integration process are discussed.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hules, John

    This 1998 annual report from the National Energy Research Scientific Computing Center (NERSC) presents the year in review for the following categories: Computational Science; Computer Science and Applied Mathematics; and Systems and Services. Also presented are science highlights in the following categories: Basic Energy Sciences; Biological and Environmental Research; Fusion Energy Sciences; High Energy and Nuclear Physics; and Advanced Scientific Computing Research and Other Projects.

  9. Mechanistic insights into aqueous phase propanol dehydration in H-ZSM-5 zeolite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Donghai; Lercher, Johannes A.

    Aqueous phase dehydration of 1-propanol over H-ZSM-5 zeolite was investigated using density functional theory (DFT) calculations. The water molecules in the zeolite pores prefer to aggregate via the hydrogen bonding network and be protonated at the Brønsted acidic sites (BAS). Two typical configurations of water molecules, i.e., dispersed and clustered, were identified by ab initio molecular dynamics simulation of an H-ZSM-5 unit cell mimicking the aqueous phase, with 20 water molecules per unit cell. DFT-calculated Gibbs free energies suggest that the dimeric propanol-propanol complex, the propanol-water complex, and the trimeric propanol-propanol-water complex are formed at high propanol concentrations, which provide a kinetically feasible dehydration reaction channel from 1-propanol to propene. However, the calculation results also indicate that propanol dehydration via the unimolecular mechanism becomes kinetically discouraged due to the enhanced stability of the protonated dimeric propanol and the protonated water cluster acting as the BAS for the alcohol dehydration reaction. This work was supported by the US Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL). EMSL is a national scientific user facility located at Pacific Northwest National Laboratory (PNNL) and sponsored by DOE’s Office of Biological and Environmental Research.
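
    For reference (not part of the abstract), the overall reaction whose free-energy profile these calculations trace is the standard acid-catalyzed dehydration of 1-propanol; in LaTeX notation, with free energies understood to be those computed for the zeolite-hosted states:

      \mathrm{CH_3CH_2CH_2OH} \;\xrightarrow{\text{H-ZSM-5}}\; \mathrm{CH_3CH{=}CH_2} + \mathrm{H_2O},
      \qquad \Delta G_{\mathrm{rxn}} = G_{\mathrm{propene}} + G_{\mathrm{H_2O}} - G_{\mathrm{1\text{-}propanol}}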

  10. Facilities Management via Computer: Information at Your Fingertips.

    ERIC Educational Resources Information Center

    Hensey, Susan

    1996-01-01

    Computer-aided facilities management is a software program consisting of a relational database of facility information--such as occupancy, usage, student counts, etc.--attached to or merged with computerized floor plans. This program can integrate data with drawings, thereby allowing the development of "what if" scenarios. (MLF)

  11. KSC-2011-6745

    NASA Image and Video Library

    2011-07-14

    CAPE CANAVERAL, Fla. -- The multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission is positioned behind mobile plexiglass radiation shields in the high bay of the RTG storage facility (RTGF) at NASA's Kennedy Space Center in Florida. The MMRTG was returned to the RTGF following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). The generator will remain in the RTGF until it is moved to the pad for integration on the rover. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and to help determine whether the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder

  12. Communication network for decentralized remote tele-science during the Spacelab mission IML-2

    NASA Technical Reports Server (NTRS)

    Christ, Uwe; Schulz, Klaus-Juergen; Incollingo, Marco

    1994-01-01

    The ESA communication network for decentralized remote telescience during the Spacelab mission IML-2, called the Interconnection Ground Subnetwork (IGS), provided data, voice conferencing, video distribution/conferencing and high rate data services to 5 remote user centers in Europe. The combination of services allowed the experimenters to interact with their experiments as they would normally do from the Payload Operations Control Center (POCC) at MSFC. In addition, to enhance their science results, they were able to make use of reference facilities and computing resources in their home laboratory, which typically are not available in the POCC. Characteristics of the IML-2 communications implementation were its adaptation to different user needs, based on the modular service capabilities of IGS, and the optimization of connectivity costs. This was achieved by using a combination of traditional leased lines, satellite-based VSAT connectivity and N-ISDN according to the simulation and mission schedule for each remote site. The central management system of IGS minimizes staffing and the involvement of communications personnel at the remote sites. The successful operation of IGS for IML-2 as a precursor network for the Columbus Orbital Facility (COF) has proven the communications concept for supporting the operation of the COF decentralized scenario.

  13. KSC-2011-6684

    NASA Image and Video Library

    2011-07-12

    CAPE CANAVERAL, Fla. -- Workers dressed in clean room attire, known as bunny suits, transfer the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission on its holding base from the airlock of the Payload Hazardous Servicing Facility (PHSF) into the facility's high bay. In the high bay, the MMRTG will be temporarily installed on the MSL rover, Curiosity, for a fit check; it will be installed on the rover for launch at the pad. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. Curiosity, MSL's car-sized rover, has 10 science instruments designed to search for signs of life, including methane, and to help determine whether the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is planned for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Cory Huston

  14. Enduring Influence of Stereotypical Computer Science Role Models on Women's Academic Aspirations

    ERIC Educational Resources Information Center

    Cheryan, Sapna; Drury, Benjamin J.; Vichayapai, Marissa

    2013-01-01

    The current work examines whether a brief exposure to a computer science role model who fits stereotypes of computer scientists has a lasting influence on women's interest in the field. One-hundred undergraduate women who were not computer science majors met a female or male peer role model who embodied computer science stereotypes in appearance…

  15. A Web of Resources for Introductory Computer Science.

    ERIC Educational Resources Information Center

    Rebelsky, Samuel A.

    As the field of Computer Science has grown, the syllabus of the introductory Computer Science course has changed significantly. No longer is it a simple introduction to programming or a tutorial on computer concepts and applications. Rather, it has become a survey of the field of Computer Science, touching on a wide variety of topics from digital…

  16. Overview: Development of the National Ignition Facility and the Transition to a User Facility for the Ignition Campaign and High Energy Density Scientific Research

    DOE PAGES

    Moses, E. I.; Lindl, J. D.; Spaeth, M. L.; ...

    2017-03-23

    The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory has been operational since March 2009 and has been transitioning to a user facility supporting ignition science, high energy density stockpile science, national security applications, and fundamental science. The facility has achieved its design goal of 1.8 MJ and 500 TW of 3ω light on target, and has performed target experiments with 1.9 MJ at peak powers of 410 TW. The National Ignition Campaign (NIC), established by the U.S. National Nuclear Security Administration in 2005, was responsible for transitioning NIF from a construction project to a national user facility. Besides the operation and optimization of the use of the NIF laser, the NIC program was responsible for developing capabilities including target fabrication facilities; cryogenic layering capabilities; over 60 optical, X-ray, and nuclear diagnostic systems; experimental platforms; and a wide range of other NIF facility infrastructure. This study provides a summary of some of the key experimental results for NIF to date, an overview of the NIF facility capabilities, and the challenges that were met in achieving these capabilities. These topics are covered in more detail in the papers that follow.

  17. Overview: Development of the National Ignition Facility and the Transition to a User Facility for the Ignition Campaign and High Energy Density Scientific Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moses, E. I.; Lindl, J. D.; Spaeth, M. L.

    The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory has been operational since March 2009 and has been transitioning to a user facility supporting ignition science, high energy density stockpile science, national security applications, and fundamental science. The facility has achieved its design goal of 1.8 MJ and 500 TW of 3ω light on target, and has performed target experiments with 1.9 MJ at peak powers of 410 TW. The National Ignition Campaign (NIC), established by the U.S. National Nuclear Security Administration in 2005, was responsible for transitioning NIF from a construction project to a national user facility. Besides the operation and optimization of the use of the NIF laser, the NIC program was responsible for developing capabilities including target fabrication facilities; cryogenic layering capabilities; over 60 optical, X-ray, and nuclear diagnostic systems; experimental platforms; and a wide range of other NIF facility infrastructure. This study provides a summary of some of the key experimental results for NIF to date, an overview of the NIF facility capabilities, and the challenges that were met in achieving these capabilities. These topics are covered in more detail in the papers that follow.

  18. Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis and computer science

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period April l, 1988 through September 30, 1988.

  19. Summary of research in applied mathematics, numerical analysis and computer science at the Institute for Computer Applications in Science and Engineering

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis and computer science during the period October 1, 1983 through March 31, 1984 is summarized.

  20. Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis and computer science

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period October 1, 1986 through March 31, 1987 is summarized.

  1. High school computer science education paves the way for higher education: the Israeli case

    NASA Astrophysics Data System (ADS)

    Armoni, Michal; Gal-Ezer, Judith

    2014-07-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to computer science in high school and pursuing computing in higher education. We also examine the gender gap in the context of high school computer science education. We show that in Israel, students who took the high-level computer science matriculation exam were more likely to pursue computing in higher education. Regarding gender, we show that in Israel the overall difference between males and females who take computer science in high school is relatively small, and that a larger, though still not very large, difference exists only for the highest exam level. In addition, exposing females to high-level computer science in high school has more relative impact on their pursuing higher education in computing.

  2. Status report of the end-to-end ASKAP software system: towards early science operations

    NASA Astrophysics Data System (ADS)

    Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew

    2016-08-01

    The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope currently in the commissioning phase and located in the midwest region of Western Australia. It comprises 36 x 12 m diameter reflector antennas, each equipped with state-of-the-art, award-winning Phased Array Feed (PAF) technology. The PAFs provide a wide, 30 square degree field-of-view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas equipped with first-generation PAF technology (Mark I), named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, producing valuable science results along the way. Commissioning of ASKAP Array Release 1, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: the ingest nodes (a 16-node cluster), the fast temporary storage (a 1 PB Lustre file system), and the processing supercomputer (a 200 TFlop system). This HPC platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more "traditional", user-interactive mode. But this is about to change: integration and verification of the online ingest pipeline starts in early 2016, which is required to support the full 300 MHz bandwidth for Array Release 1, followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and is integrated into the end-to-end ASKAP data flow system. This paper describes the current status of the "end-to-end" data flow software system, from preparing observations to data acquisition, processing and archiving, and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities and the challenges ahead in preparation for the start of the Early Science program.
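
    As a rough sanity check of the data-rate figure quoted above (a worked example using only numbers from the abstract), 70 TB of correlated visibilities accumulated over an 8-hour observation corresponds to a sustained ingest rate on the order of 2.5 GB/s:

      # Worked example: sustained visibility ingest rate implied by the abstract's
      # figure of 70 TB of correlated visibilities per 8-hour ASKAP observation.
      # Decimal (SI) terabytes and gigabytes are assumed.
      observation_hours = 8
      visibility_volume_tb = 70.0

      seconds = observation_hours * 3600
      rate_gb_per_s = visibility_volume_tb * 1e12 / seconds / 1e9
      print(f"Sustained ingest rate: {rate_gb_per_s:.2f} GB/s "
            f"({rate_gb_per_s * 8:.0f} Gb/s)")   # ~2.43 GB/s, ~19 Gb/s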

  3. Materials sciences programs: Fiscal year 1994

    NASA Astrophysics Data System (ADS)

    1995-04-01

    The Division of Materials Sciences is located within the DOE in the Office of Basic Energy Sciences. The Division of Materials Sciences is responsible for basic research and research facilities in strategic materials science topics of critical importance to the mission of the Department and its Strategic Plan. Materials Science is an enabling technology. The performance parameters, economics, environmental acceptability and safety of all energy generation, conversion, transmission and conservation technologies are limited by the properties and behavior of materials. The Materials Sciences programs develop scientific understanding of the synergistic relationship amongst the synthesis, processing, structure, properties, behavior, performance and other characteristics of materials. Emphasis is placed on the development of the capability to discover technologically, economically, and environmentally desirable new materials and processes, and the instruments and national user facilities necessary for achieving such progress. Materials Sciences sub-fields include physical metallurgy, ceramics, polymers, solid state and condensed matter physics, materials chemistry, surface science and related disciplines where the emphasis is on the science of materials. This report includes program descriptions for 458 research programs including 216 at 14 DOE National Laboratories, 242 research grants (233 for universities), and 9 Small Business Innovation Research (SBIR) Grants. The report is divided into eight sections. Section A contains all Laboratory projects, Section B has all contract research projects, Section C has projects funded under the SBIR Program, Section D describes the Center of Excellence for the Synthesis and Processing of Advanced Materials and E has information on major user facilities. F contains descriptions of other user facilities; G, a summary of funding levels; and H, indices characterizing research projects.

  4. Materials sciences programs, fiscal year 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-04-01

    The Division of Materials Sciences is located within the DOE in the Office of Basic Energy Sciences. The Division of Materials Sciences is responsible for basic research and research facilities in strategic materials science topics of critical importance to the mission of the Department and its Strategic Plan. Materials Science is an enabling technology. The performance parameters, economics, environmental acceptability and safety of all energy generation, conversion, transmission and conservation technologies are limited by the properties and behavior of materials. The Materials Sciences programs develop scientific understanding of the synergistic relationship amongst the synthesis, processing, structure, properties, behavior, performance and other characteristics of materials. Emphasis is placed on the development of the capability to discover technologically, economically, and environmentally desirable new materials and processes, and the instruments and national user facilities necessary for achieving such progress. Materials Sciences sub-fields include physical metallurgy, ceramics, polymers, solid state and condensed matter physics, materials chemistry, surface science and related disciplines where the emphasis is on the science of materials. This report includes program descriptions for 458 research programs including 216 at 14 DOE National Laboratories, 242 research grants (233 for universities), and 9 Small Business Innovation Research (SBIR) Grants. The report is divided into eight sections. Section A contains all Laboratory projects, Section B has all contract research projects, Section C has projects funded under the SBIR Program, Section D describes the Center of Excellence for the Synthesis and Processing of Advanced Materials and E has information on major user facilities. F contains descriptions of other user facilities; G, a summary of funding levels; and H, indices characterizing research projects.

  5. Defining Computational Thinking for Mathematics and Science Classrooms

    NASA Astrophysics Data System (ADS)

    Weintrop, David; Beheshti, Elham; Horn, Michael; Orton, Kai; Jona, Kemi; Trouille, Laura; Wilensky, Uri

    2016-02-01

    Science and mathematics are becoming computational endeavors. This fact is reflected in the recently released Next Generation Science Standards and the decision to include "computational thinking" as a core scientific practice. With this addition, and the increased presence of computation in mathematics and scientific contexts, a new urgency has come to the challenge of defining computational thinking and providing a theoretical grounding for what form it should take in school science and mathematics classrooms. This paper presents a response to this challenge by proposing a definition of computational thinking for mathematics and science in the form of a taxonomy consisting of four main categories: data practices, modeling and simulation practices, computational problem solving practices, and systems thinking practices. In formulating this taxonomy, we draw on the existing computational thinking literature, interviews with mathematicians and scientists, and exemplary computational thinking instructional materials. This work was undertaken as part of a larger effort to infuse computational thinking into high school science and mathematics curricular materials. In this paper, we argue for the approach of embedding computational thinking in mathematics and science contexts, present the taxonomy, and discuss how we envision the taxonomy being used to bring current educational efforts in line with the increasingly computational nature of modern science and mathematics.

  6. Marine Science Initiative at South Carolina State College: An Investigation of the Biosensing Parameters Regulating Bacterial and Larval Attachment on Substrata

    DTIC Science & Technology

    1993-08-12

    State College would provide the educational facilities and SCWMRD would provide the initial research facilities. Research and teaching would be conducted...by utilizing the 52 ft. R/V ALila as a teaching platform for short cruises in Charleston Harbor. In addition, a marine science career day would be...held to expose students to careers in marine science. 3. To have appropriate SCWMRD scientists teach courses in topics related to marine science for

  7. Infrared Astrophysics in the SOFIA Era - An Overview

    NASA Astrophysics Data System (ADS)

    Yorke, Harold W.

    2018-06-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) provides the international astronomical community access to a broad range of instrumentation that covers wavelengths spanning the near to far infrared. The high spectral resolution of many of these instruments in several wavelength bands is unmatched by any existing or planned near-future facility. The far infrared polarization capabilities of one of its instruments, HAWC+, are also unique. Moreover, SOFIA allows for additional instrument augmentations, as new state-of-the-art photometric, spectrometric, and polarimetric capabilities have been added and are being further improved. The fact that SOFIA provides ample mass, power, and computing capabilities, as well as 4 K cooling, eases the constraints on future instrument design, technical readiness, and the instrument build to an extent not possible for space-borne missions. We will review SOFIA's current and planned future capabilities and highlight specific science areas in which the stratospheric observatory will be able to significantly advance Origins science topics.

  8. The trigger system for the external target experiment in the HIRFL cooling storage ring

    NASA Astrophysics Data System (ADS)

    Li, Min; Zhao, Lei; Liu, Jin-Xin; Lu, Yi-Ming; Liu, Shu-Bin; An, Qi

    2016-08-01

    A trigger system was designed for the external target experiment in the Cooling Storage Ring (CSR) of the Heavy Ion Research Facility in Lanzhou (HIRFL). Because the detectors are scattered over a large area, the trigger system is designed around a master-slave structure and a fiber-based serial data transmission technique. The trigger logic is organized hierarchically, and flexible reconfiguration of the trigger function is achieved either through command-register access or through on-line reconfiguration of the overall field-programmable gate array (FPGA) logic, controlled by remote computers. We also conducted tests to confirm the function of the trigger electronics, and the results indicate that the trigger system works well. Supported by the National Natural Science Foundation of China (11079003), the Knowledge Innovation Program of the Chinese Academy of Sciences (KJCX2-YW-N27), and the CAS Center for Excellence in Particle Physics (CCEPP).
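
    The abstract does not give the register map or the control protocol, but as a rough illustration of what "command register access... controlled by remote computers" can look like in practice, here is a minimal Python sketch. The register addresses, bit meanings, host/port, and the UDP packet format are all hypothetical and are not taken from the HIRFL-CSR design.

      # Hypothetical sketch of remote command-register access for a trigger master.
      # Register addresses, values, host/port, and packet format are invented for
      # illustration only; they do not describe the actual HIRFL-CSR electronics.
      import socket
      import struct

      TRIGGER_HOST, TRIGGER_PORT = "192.0.2.100", 5000    # hypothetical master board
      REG_TRIGGER_MODE = 0x0010                            # hypothetical address
      REG_COINCIDENCE_WINDOW = 0x0014                      # hypothetical address

      def write_register(addr: int, value: int) -> None:
          """Send a single register write as a fixed-format UDP datagram."""
          packet = struct.pack(">BHI", 0x01, addr, value)  # opcode, address, value
          with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
              sock.sendto(packet, (TRIGGER_HOST, TRIGGER_PORT))

      # Example: select a coincidence trigger mode and a window of 10 clock ticks.
      write_register(REG_TRIGGER_MODE, 0x2)
      write_register(REG_COINCIDENCE_WINDOW, 10)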

  9. Leveraging the national cyberinfrastructure for biomedical research.

    PubMed

    LeDuc, Richard; Vaughn, Matthew; Fonner, John M; Sullivan, Michael; Williams, James G; Blood, Philip D; Taylor, James; Barnett, William

    2014-01-01

    In the USA, the national cyberinfrastructure refers to a system of research supercomputer and other IT facilities and the high speed networks that connect them. These resources have been heavily leveraged by scientists in disciplines such as high energy physics, astronomy, and climatology, but until recently they have been little used by biomedical researchers. We suggest that many of the 'Big Data' challenges facing the medical informatics community can be efficiently handled using national-scale cyberinfrastructure. Resources such as the Extreme Science and Engineering Discovery Environment, the Open Science Grid, and Internet2 provide economical and proven infrastructures for Big Data challenges, but these resources can be difficult to approach. Specialized web portals, support centers, and virtual organizations can be constructed on these resources to meet defined computational challenges, specifically for genomics. We provide examples of how this has been done in basic biology as an illustration for the biomedical informatics community.

  10. Leveraging the national cyberinfrastructure for biomedical research

    PubMed Central

    LeDuc, Richard; Vaughn, Matthew; Fonner, John M; Sullivan, Michael; Williams, James G; Blood, Philip D; Taylor, James; Barnett, William

    2014-01-01

    In the USA, the national cyberinfrastructure refers to a system of research supercomputer and other IT facilities and the high speed networks that connect them. These resources have been heavily leveraged by scientists in disciplines such as high energy physics, astronomy, and climatology, but until recently they have been little used by biomedical researchers. We suggest that many of the ‘Big Data’ challenges facing the medical informatics community can be efficiently handled using national-scale cyberinfrastructure. Resources such as the Extreme Science and Engineering Discovery Environment, the Open Science Grid, and Internet2 provide economical and proven infrastructures for Big Data challenges, but these resources can be difficult to approach. Specialized web portals, support centers, and virtual organizations can be constructed on these resources to meet defined computational challenges, specifically for genomics. We provide examples of how this has been done in basic biology as an illustration for the biomedical informatics community. PMID:23964072

  11. Heliophysics Legacy Data Restoration

    NASA Astrophysics Data System (ADS)

    Candey, R. M.; Bell, E. V., II; Bilitza, D.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Grayzeck, E. J.; Harris, B. T.; Hills, H. K.; Johnson, R. C.; Kovalick, T. J.; Lal, N.; Leckner, H. A.; Liu, M. H.; McCaslin, P. W.; McGuire, R. E.; Papitashvili, N. E.; Rhodes, S. A.; Roberts, D. A.; Yurow, R. E.

    2016-12-01

    The Space Physics Data Facility (SPDF), in collaboration with the NASA Space Science Data Coordinated Archive (NSSDCA), is converting datasets from older NASA missions to online storage. Valuable science remains buried in these datasets and can be recovered by applying modern algorithms on computers with vastly more storage and processing power than were available when the data were originally collected, and by analyzing the data in conjunction with other datasets and models. The data were also not readily accessible while archived on 7- and 9-track tapes, microfilm, microfiche, and other media. Although many datasets have now been moved online in formats that are readily analyzed, others still require some deciphering to puzzle out the data values and their scientific meaning. An ongoing effort is converting the datasets to the modern Common Data Format (CDF) and adding metadata for use in browse and analysis tools such as CDAWeb.
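
    The conversion step can be sketched briefly. The example below uses the spacepy.pycdf library (one common way to write CDF files from Python; the record itself does not name a tool), and the variable names, attributes, and values are illustrative only, not taken from an actual restored dataset.

      # Minimal sketch of writing a restored legacy dataset to CDF with
      # the kind of metadata CDAWeb-style browse tools rely on.  Requires
      # the NASA CDF library; all names and values here are placeholders.
      import datetime
      from spacepy import pycdf

      epochs = [datetime.datetime(1975, 1, 1) + datetime.timedelta(hours=h)
                for h in range(3)]
      flux = [1.2e3, 1.4e3, 1.1e3]  # placeholder measurements

      with pycdf.CDF("restored_dataset.cdf", "") as cdf:  # "" => create new file
          cdf.attrs["Project"] = "Legacy restoration demo"
          cdf.attrs["Descriptor"] = "Illustrative example only"

          cdf["Epoch"] = epochs
          cdf["Flux"] = flux
          cdf["Flux"].attrs["UNITS"] = "counts/s"
          cdf["Flux"].attrs["DEPEND_0"] = "Epoch"
          cdf["Flux"].attrs["FILLVAL"] = -1.0e31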

  12. Satellite remote sensing for hydrology and water management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, E.C.; Power, C.H.; Micallef, A.

    Interest in satellite remote sensing is fast moving away from pure science and individual case studies towards truly operational applications. At the same time, the micro-computer revolution is ensuring that data reception and processing facilities need no longer be the preserve of a small number of global centers, but can be commonplace installations in smaller countries and even in local regional agency offices or laboratories. As remote sensing matures and its applications proliferate, a new type of treatment is required to ensure both that decision makers, managers, and engineers with problems to solve are informed of today's opportunities and that scientists are provided with integrated overviews of the ever-growing need for their services. This book addresses these needs, focusing uniquely on the area bounded by satellite remote sensing, pure and applied hydrological sciences, and a specific world region, namely the Mediterranean basin.

  13. EUROPLANET-RI modelling service for the planetary science community: European Modelling and Data Analysis Facility (EMDAF)

    NASA Astrophysics Data System (ADS)

    Khodachenko, Maxim; Miller, Steven; Stoeckler, Robert; Topf, Florian

    2010-05-01

    Computational modelling and observational data analysis are two major aspects of modern scientific research, and both are under extensive development and see wide application. Many of the scientific goals of planetary space missions require robust models of planetary objects and environments, as well as efficient data analysis algorithms, to predict conditions for mission planning and to interpret the experimental data. Europe has great strength in these areas, but it is insufficiently coordinated; individual groups, models, techniques, and algorithms need to be coupled and integrated. The existing level of scientific cooperation and the technical capabilities for operative communication allow considerable progress in the development of a distributed international Research Infrastructure (RI) built on the computational modelling and data analysis centers that already exist in Europe, providing the scientific community with dedicated services in the fields of their computational and data analysis expertise. These services will emerge as a product of the collaborative communication and joint research efforts of the numerical and data analysis experts together with planetary scientists. The major goal of EUROPLANET-RI / EMDAF is to make computational models and data analysis algorithms associated with particular national RIs and teams, as well as their outputs, more readily available to their potential user community and more tailored to scientific user requirements, without compromising front-line specialized research on model and data analysis algorithm development and software implementation. This objective will be met through four key subdivisions/tasks of EMDAF: 1) an Interactive Catalogue of Planetary Models; 2) a Distributed Planetary Modelling Laboratory; 3) a Distributed Data Analysis Laboratory; and 4) enabling Models and Routines for High Performance Computing Grids. Using the advantages of coordinated operation and efficient communication between the involved computational modelling, research, and data analysis expert teams and their related research infrastructures, EMDAF will provide a 1) flexible, 2) scientific-user-oriented, 3) continuously developing and rapidly upgrading computational and data analysis service to support and intensify European planetary research. Initially, EMDAF will create a set of demonstrators and operational tests of this service in key areas of European planetary science, aiming at the following objectives: (a) development and implementation of tools for remote interactive communication between planetary scientists and computing experts (including related RIs); (b) development of standard routine packages and user-friendly interfaces for operation of the existing numerical codes and data analysis algorithms by specialized planetary scientists; (c) development of a prototype of numerical modelling services "on demand" for space missions and planetary researchers; (d) development of a prototype of data analysis services "on demand" for space missions and planetary researchers; (e) development of a prototype of coordinated, interconnected simulations of planetary phenomena and objects (global multi-model simulators); (f) provision of demonstrators of the coordinated use of high-performance computing facilities (supercomputer networks), in cooperation with the European HPC grid DEISA.
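
    As a purely illustrative sketch of the kind of interactive catalogue lookup envisaged in task 1, the snippet below registers a few hypothetical model descriptions and queries them by target body and physical domain; the data structure and entries are assumptions for illustration, not an EMDAF interface.

      # Hypothetical "interactive catalogue of planetary models": a small
      # registry of model descriptions queried by target body and domain.
      from dataclasses import dataclass
      from typing import List

      @dataclass
      class ModelRecord:
          name: str
          target: str   # e.g. "Mars", "Jupiter"
          domain: str   # e.g. "magnetosphere", "ionosphere"
          contact: str  # responsible research infrastructure / team

      CATALOGUE = [
          ModelRecord("GlobalMHD-A", "Jupiter", "magnetosphere", "team-a@example.org"),
          ModelRecord("IonoModel-B", "Mars", "ionosphere", "team-b@example.org"),
      ]

      def find_models(target: str, domain: str) -> List[ModelRecord]:
          """Return catalogue entries matching the requested body and domain."""
          return [m for m in CATALOGUE if m.target == target and m.domain == domain]

      print(find_models("Mars", "ionosphere"))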

  14. Nicholas Brunhart-Lupo | NREL

    Science.gov Websites

    Nicholas Brunhart-Lupo, Computational Science, Nicholas.Brunhart-Lupo@nrel.gov. Education: Ph.D., Computer Science, Colorado School of Mines; M.S., Computer Science, University of Queensland; B.S., Computer Science, Colorado School of Mines.

  15. The Need for Computer Science

    ERIC Educational Resources Information Center

    Margolis, Jane; Goode, Joanna; Bernier, David

    2011-01-01

    Broadening computer science learning to include more students is a crucial item on the United States' education agenda, these authors say. Although policymakers advocate more computer science expertise, computer science offerings in high schools are few--and actually shrinking. In addition, poorly resourced schools with a high percentage of…

  16. Summary of research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis and computer science

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period October 1, 1988 through March 31, 1989 is summarized.

  17. 2014 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  18. 2015 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  19. Towards a unified picture of the water self-ions at the air-water interface: a density functional theory perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baer, Marcel D.; Kuo, I-F W.; Tobias, Douglas J.

    2014-07-17

    The propensities of the water self-ions, H3O+ and OH-, for the air-water interface have implications for interfacial acid-base chemistry. Despite numerous experimental and computational studies, no consensus has been reached on whether H3O+ and/or OH- prefer to be at the water surface or in the bulk. Here we report a molecular dynamics simulation study of the bulk vs. interfacial behavior of H3O+ and OH- that employs forces derived from density functional theory with a generalized gradient approximation exchange-correlation functional (specifically, BLYP) and empirical dispersion corrections. We computed the potential of mean force (PMF) for H3O+ as a function of the position of the ion in a 215-molecule water slab. The PMF is flat, suggesting that H3O+ has equal propensity for the air-water interface and the bulk. We compare the PMF for H3O+ to our previously computed PMF for OH- adsorption, which contains a shallow minimum at the interface, and we explore how differences in the solvation of each ion at the interface vs. the bulk are connected with interfacial propensity. We find that the solvation shell of H3O+ is only slightly dependent on its position in the water slab, while OH- partially desolvates as it approaches the interface, and we examine how this difference in solvation behavior is manifested in the electronic structure and chemistry of the two ions. DJT was supported by National Science Foundation grant CHE-0909227. CJM was supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is operated for the Department of Energy by Battelle. The potential of mean force calculations required resources of the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. The remaining simulations and analysis used resources of the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. MDB is grateful for the support of the Linus Pauling Distinguished Postdoctoral Fellowship Program at PNNL.
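
    The quantity at the center of this record, a position-resolved potential of mean force, follows the relation W(z) = -kB*T*ln P(z), where P(z) is the probability of finding the ion at depth z in the slab. The sketch below applies that relation to synthetic positions; in practice such PMFs are obtained from biased (e.g., umbrella) sampling of the DFT-based trajectories rather than from a direct histogram.

      # Sketch of W(z) = -kB*T*ln P(z) applied to a synthetic set of ion
      # depths; stands in for the DFT-based MD sampling used in the study.
      import numpy as np

      KB = 0.0019872041  # Boltzmann constant in kcal/(mol K)
      T = 300.0          # temperature in K

      rng = np.random.default_rng(0)
      z_samples = rng.normal(loc=0.0, scale=3.0, size=100_000)  # fake depths (Angstrom)

      counts, edges = np.histogram(z_samples, bins=60, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])

      mask = counts > 0
      pmf = -KB * T * np.log(counts[mask])
      pmf -= pmf.min()  # shift so the PMF minimum is zero

      for z, w in list(zip(centers[mask], pmf))[:5]:
          print(f"z = {z:6.2f} A   W(z) = {w:6.3f} kcal/mol")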

  20. Alliance for Computational Science Collaboration HBCU Partnership at Fisk University. Final Report 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, W. E.

    2004-08-16

    Computational science plays a major role in research and development in mathematics, science, engineering, and biomedical disciplines. The Alliance for Computational Science Collaboration (ACSC) has the goal of training African-American and other minority scientists in the computational science field for eventual employment with the Department of Energy (DOE). The involvement of Historically Black Colleges and Universities (HBCUs) in the Alliance provides avenues for producing future DOE African-American scientists. Fisk University has been participating in this program through grants from the DOE, which supported computational science activities at the university. The research areas included energy-related projects, distributed computing, visualization of scientific systems, and biomedical computing. Students' involvement in computational science research included undergraduate summer research at Oak Ridge National Laboratory, on-campus research involving undergraduates, participation of undergraduates and faculty members in workshops, and mentoring of students. These activities enhanced research and education in computational science, thereby adding to Fisk University's spectrum of research and educational capabilities. Among the successes of the computational science activities is the acceptance of three undergraduate students to graduate schools with full scholarships beginning fall 2002 (one in a master's degree program and two in doctoral degree programs).
