DOE Office of Scientific and Technical Information (OSTI.GOV)
Drewmark Communications; Sartor, Dale; Wilson, Mark
2010-07-01
High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.
High-Performance Computing Data Center | Energy Systems Integration
Facility | NREL High-Performance Computing Data Center High-Performance Computing Data Center The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine -the largest high-performance computing system in the world exclusively dedicated to advancing
Facilities | Integrated Energy Solutions | NREL
strategies needed to optimize our entire energy system. A photo of the high-performance computer at NREL . High-Performance Computing Data Center High-performance computing facilities at NREL provide high-speed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hewett, R.
1997-12-31
This paper describes the strategy and computer processing system that NREL, the Virginia Department of Mines, Minerals and Energy (DMME) and the state energy office, are developing for computing solar attractiveness scores for state agencies and the individual facilities or buildings within each agency. In the case of an agency, solar attractiveness is a measure of that agency`s having a significant number of facilities for which solar has the potential to be promising. In the case of a facility, solar attractiveness is a measure of its potential for being good, economically viable candidate for a solar waste heating system. Virginiamore » State agencies are charged with reducing fossil energy and electricity use and expense. DMME is responsible for working with them to achieve the goals and for managing the state`s energy consumption and cost monitoring program. This is done using the Fast Accounting System for Energy Reporting (FASER) computerized energy accounting and tracking system and database. Agencies report energy use and expenses (by individual facility and energy type) to DMME quarterly. DMME is also responsible for providing technical and other assistance services to agencies and facilities interested in investigating use of solar. Since Virginia has approximately 80 agencies operating over 8,000 energy-consuming facilities and since DMME`s resources are limited, it is interested in being able to determine: (1) on which agencies to focus; (2) specific facilities on which to focus within each high-priority agency; and (3) irrespective of agency, which facilities are the most promising potential candidates for solar. The computer processing system described in this paper computes numerical solar attractiveness scores for the state`s agencies and the individual facilities using the energy use and cost data in the FASER system database and the state`s and NREL`s experience in implementing, testing and evaluating solar water heating systems in commercial and government facilities.« less
Energy consumption and load profiling at major airports. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kennedy, J.
1998-12-01
This report describes the results of energy audits at three major US airports. These studies developed load profiles and quantified energy usage at these airports while identifying procedures and electrotechnologies that could reduce their power consumption. The major power consumers at the airports studied included central plants, runway and taxiway lighting, fuel farms, terminals, people mover systems, and hangar facilities. Several major findings emerged during the study. The amount of energy efficient equipment installed at an airport is directly related to the age of the facility. Newer facilities had more energy efficient equipment while older facilities had much of themore » original electric and natural gas equipment still in operation. As redesign, remodeling, and/or replacement projects proceed, responsible design engineers are selecting more energy efficient equipment to replace original devices. The use of computer-controlled energy management systems varies. At airports, the primary purpose of these systems is to monitor and control the lighting and environmental air conditioning and heating of the facility. Of the facilities studied, one used computer management extensively, one used it only marginally, and one had no computer controlled management devices. At all of the facilities studied, natural gas is used to provide heat and hot water. Natural gas consumption is at its highest in the months of November, December, January, and February. The Central Plant contains most of the inductive load at an airport and is also a major contributor to power consumption inefficiency. Power factor correction equipment was used at one facility but was not installed at the other two facilities due to high power factor and/or lack of need.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Hack, James; Riley, Katherine
The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today’s world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR’s mission is to discover, develop, andmore » deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today’s tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices,1 and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurie, Carol
2017-02-01
This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Office of Energy Efficiency and Renewable Energy
This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.
High-Performance Computing Data Center Efficiency Dashboard | Computational
recovery water (ERW) loop Heat exchanger for energy recovery Thermosyphon Heat exchanger between ERW loop and cooling tower loop Evaporative cooling towers Learn more about our energy-efficient facility
Berkeley Lab - Materials Sciences Division
Computational Study of Excited-State Phenomena in Energy Materials Center for X-ray Optics MSD Facilities Ion and Materials Physics Scattering and Instrumentation Science Centers Center for Computational Study of Sciences Centers Center for Computational Study of Excited-State Phenomena in Energy Materials Center for X
NASA Technical Reports Server (NTRS)
1983-01-01
An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With improvements in CFD and more powerful scientific computers projected over this period it is expected to have the capability to compute the flow over a complete aircraft at a unit cost three orders of magnitude lower than presently possible. Over the same period improvements in ground test facilities will progress by application of computational techniques including CFD to data acquisition, facility operational efficiency, and simulation of the light envelope; however, no dramatic change in unit cost is expected as greater efficiency will be countered by higher energy and labor costs.
10 CFR 1703.112 - Computation of time.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Computation of time. 1703.112 Section 1703.112 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD PUBLIC INFORMATION AND REQUESTS § 1703.112 Computation of time. In... until the end of the next working day. Whenever a person has the right or is required to take some...
10 CFR 1703.112 - Computation of time.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Computation of time. 1703.112 Section 1703.112 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD PUBLIC INFORMATION AND REQUESTS § 1703.112 Computation of time. In... until the end of the next working day. Whenever a person has the right or is required to take some...
10 CFR 1703.112 - Computation of time.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Computation of time. 1703.112 Section 1703.112 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD PUBLIC INFORMATION AND REQUESTS § 1703.112 Computation of time. In... until the end of the next working day. Whenever a person has the right or is required to take some...
10 CFR 1703.112 - Computation of time.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Computation of time. 1703.112 Section 1703.112 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD PUBLIC INFORMATION AND REQUESTS § 1703.112 Computation of time. In... until the end of the next working day. Whenever a person has the right or is required to take some...
10 CFR 1703.112 - Computation of time.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Computation of time. 1703.112 Section 1703.112 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD PUBLIC INFORMATION AND REQUESTS § 1703.112 Computation of time. In... until the end of the next working day. Whenever a person has the right or is required to take some...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; Gerber, Richard
The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greatermore » — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.« less
Berkeley Lab - Materials Sciences Division
Computational Study of Excited-State Phenomena in Energy Materials Center for X-ray Optics MSD Facilities Ion Excited-State Phenomena in Energy Materials Center for X-ray Optics MSD Facilities Ion Beam Analysis Behavior of Lithium Metal across a Rigid Block Copolymer Electrolyte Membrane. Journal of the
Energy Systems Integration Facility (ESIF): Golden, CO - Energy Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheppy, Michael; VanGeet, Otto; Pless, Shanti
2015-03-01
At NREL's Energy Systems Integration Facility (ESIF) in Golden, Colo., scientists and engineers work to overcome challenges related to how the nation generates, delivers and uses energy by modernizing the interplay between energy sources, infrastructure, and data. Test facilities include a megawatt-scale ac electric grid, photovoltaic simulators and a load bank. Additionally, a high performance computing data center (HPCDC) is dedicated to advancing renewable energy and energy efficient technologies. A key design strategy is to use waste heat from the HPCDC to heat parts of the building. The ESIF boasts an annual EUI of 168.3 kBtu/ft2. This article describes themore » building's procurement, design and first year of performance.« less
NASA Astrophysics Data System (ADS)
Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.
2009-07-01
The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility produced data. Trends indicate that this will continue to be the case for yet some time. Thus users face a quandary for how to manage today's data complexity and size as these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing thereby providing users access to resources they need [2]. Portal based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next tier cross-instrument-cross facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve better diagnoses [3] - similarly, data fusion across BES facilities will lead to new scientific discoveries.
High-Performance Computing and Visualization | Energy Systems Integration
Facility | NREL High-Performance Computing and Visualization High-Performance Computing and Visualization High-performance computing (HPC) and visualization at NREL propel technology innovation as a . Capabilities High-Performance Computing NREL is home to Peregrine-the largest high-performance computing system
LBNL Computational ResearchTheory Facility Groundbreaking - Full Press Conference. Feb 1st, 2012
Yelick, Kathy
2018-01-24
Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.
LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yelick, Kathy
2012-02-02
Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.
LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012
Yelick, Kathy
2017-12-09
Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.
@berkeley.edu 510-642-1220 Research profile » A U.S. Department of Energy National Laboratory Operated by the Computational Study of Excited-State Phenomena in Energy Materials Center for X-ray Optics MSD Facilities Ion Investigators Division Staff Facilities and Centers Staff Jobs Safety Personnel Resources Committees In Case of
Reducing cooling energy consumption in data centres and critical facilities
NASA Astrophysics Data System (ADS)
Cross, Gareth
Given the rise of our everyday reliance on computers in all walks of life, from checking the train times to paying our credit card bills online, the need for computational power is ever increasing. Other than the ever-increasing performance of home Personal Computers (PC's) this reliance has given rise to a new phenomenon in the last 10 years ago. The data centre. Data centres contain vast arrays of IT cabinets loaded with servers that perform millions of computational equations every second. It is these data centres that allow us to continue with our reliance on the internet and the PC. As more and more data centres become necessary due to the increase in computing processing power required for the everyday activities we all take for granted so the energy consumed by these data centres rises. Not only are more and more data centres being constructed daily, but operators are also looking at ways to squeeze more processing from their existing data centres. This in turn leads to greater heat outputs and therefore requires more cooling. Cooling data centres requires a sizeable energy input, indeed to many megawatts per data centre site. Given the large amounts of money dependant on the successful operation of data centres, in particular for data centres operated by financial institutions, the onus is predominantly on ensuring the data centres operate with no technical glitches rather than in an energy conscious fashion. This report aims to investigate the ways and means of reducing energy consumption within data centres without compromising the technology the data centres are designed to house. As well as discussing the individual merits of the technologies and their implementation technical calculations will be undertaken where necessary to determine the levels of energy saving, if any, from each proposal. To enable comparison between each proposal any design calculations within this report will be undertaken against a notional data facility. This data facility will nominally be considered to require 1000 kW. Refer to Section 2.1 'Outline of Notional data Facility for Calculation Purposes' for details of the design conditions and constraints of the energy consumption calculations.
Facilities | Computational Science | NREL
technology innovation by providing scientists and engineers the ability to tackle energy challenges that scientists and engineers to take full advantage of advanced computing hardware and software resources
Sea/Lake Water Air Conditioning at Naval Facilities.
1980-05-01
ECONOMICS AT TWO FACILITIES ......... ................... 2 Facilities ........... .......................... 2 Computer Models...of an operational test at Naval Security Group Activity (NSGA) Winter Harbor, Me., and the economics of Navywide application. In FY76 an assessment of... economics of Navywide application of sea/lake water AC indicated that cost and energy savings at the sites of some Naval facilities are possible, depending
ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean; Potok, Thomas E.; Jones, Todd
At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to +20 year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues includedmore » research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.« less
ESIF 2016: Modernizing Our Grid and Energy System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Becelaere, Kimberly
This 2016 annual report highlights work conducted at the Energy Systems Integration Facility (ESIF) in FY 2016, including grid modernization, high-performance computing and visualization, and INTEGRATE projects.
The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diachin, L F; Garaizar, F X; Henson, V E
2009-10-12
In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT In the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE andmore » the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Choong-Seock; Greenwald, Martin; Riley, Katherine
The additional computing power offered by the planned exascale facilities could be transformational across the spectrum of plasma and fusion research — provided that the new architectures can be efficiently applied to our problem space. The collaboration that will be required to succeed should be viewed as an opportunity to identify and exploit cross-disciplinary synergies. To assess the opportunities and requirements as part of the development of an overall strategy for computing in the exascale era, the Exascale Requirements Review meeting of the Fusion Energy Sciences (FES) community was convened January 27–29, 2016, with participation from a broad range ofmore » fusion and plasma scientists, specialists in applied mathematics and computer science, and representatives from the U.S. Department of Energy (DOE) and its major computing facilities. This report is a summary of that meeting and the preparatory activities for it and includes a wealth of detail to support the findings. Technical opportunities, requirements, and challenges are detailed in this report (and in the recent report on the Workshop on Integrated Simulation). Science applications are described, along with mathematical and computational enabling technologies. Also see http://exascaleage.org/fes/ for more information.« less
Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments
Zapater, Marina; Sanchez, Cesar; Ayala, Jose L.; Moya, Jose M.; Risco-Martín, José L.
2012-01-01
Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications imply the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSNs infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time. PMID:23112621
Ubiquitous green computing techniques for high demand applications in Smart environments.
Zapater, Marina; Sanchez, Cesar; Ayala, Jose L; Moya, Jose M; Risco-Martín, José L
2012-01-01
Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications imply the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSNs infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
Laboratories | Energy Systems Integration Facility | NREL
laboratories to be safely divided into multiple test stand locations (or "capability hubs") to enable Fabrication Laboratory Energy Systems High-Pressure Test Laboratory Energy Systems Integration Laboratory Energy Systems Sensor Laboratory Fuel Cell Development and Test Laboratory High-Performance Computing
Argonne's Magellan Cloud Computing Research Project
Beckman, Pete
2017-12-11
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html
Argonne's Magellan Cloud Computing Research Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, Pete
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html
Computational Science at the Argonne Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Romero, Nichols
2014-03-01
The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.
NASA Technical Reports Server (NTRS)
Plankey, B.
1981-01-01
A computer program called ECPVER (Energy Consumption Program - Verification) was developed to simulate all energy loads for any number of buildings. The program computes simulated daily, monthly, and yearly energy consumption which can be compared with actual meter readings for the same time period. Such comparison can lead to validation of the model under a variety of conditions, which allows it to be used to predict future energy saving due to energy conservation measures. Predicted energy saving can then be compared with actual saving to verify the effectiveness of those energy conservation changes. This verification procedure is planned to be an important advancement in the Deep Space Network Energy Project, which seeks to reduce energy cost and consumption at all DSN Deep Space Stations.
Berkeley Lab - Materials Sciences Division
Computational Study of Excited-State Phenomena in Energy Materials Center for X-ray Optics MSD Facilities Ion Facilities and Centers Staff Center for X-ray Optics Patrick Naulleau Director 510-486-4529 2-432 PNaulleau
Identification of cost effective energy conservation measures
NASA Technical Reports Server (NTRS)
Bierenbaum, H. S.; Boggs, W. H.
1978-01-01
In addition to a successful program of readily implemented conservation actions for reducing building energy consumption at Kennedy Space Center, recent detailed analyses have identified further substantial savings for buildings representative of technical facilities designed when energy costs were low. The techniques employed for determination of these energy savings consisted of facility configuration analysis, power and lighting measurements, detailed computer simulations and simulation verifications. Use of these methods resulted in identification of projected energy savings as large as $330,000 a year (approximately two year break-even period) in a single building. Application of these techniques to other commercial buildings is discussed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, C.; Nabelssi, B.; Roglans-Ribas, J.
1995-04-01
This report contains the Appendices for the Analysis of Accident Sequences and Source Terms at Waste Treatment and Storage Facilities for Waste Generated by the U.S. Department of Energy Waste Management Operations. The main report documents the methodology, computational framework, and results of facility accident analyses performed as a part of the U.S. Department of Energy (DOE) Waste Management Programmatic Environmental Impact Statement (WM PEIS). The accident sequences potentially important to human health risk are specified, their frequencies are assessed, and the resultant radiological and chemical source terms are evaluated. A personal computer-based computational framework and database have been developedmore » that provide these results as input to the WM PEIS for calculation of human health risk impacts. This report summarizes the accident analyses and aggregates the key results for each of the waste streams. Source terms are estimated and results are presented for each of the major DOE sites and facilities by WM PEIS alternative for each waste stream. The appendices identify the potential atmospheric release of each toxic chemical or radionuclide for each accident scenario studied. They also provide discussion of specific accident analysis data and guidance used or consulted in this report.« less
High-Performance Computing Data Center Waste Heat Reuse | Computational
control room With heat exchangers, heat energy in the energy recovery water (ERW) loop becomes available to heat the facility's process hot water (PHW) loop. Once heated, the PHW loop supplies: Active loop in the courtyard of the ESIF's main entrance District heating loop: If additional heat is needed
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized bothmore » local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.« less
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized bothmore » local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, K.L.
This document has been developed to provide guidance in the interchange of electronic CAD data with Martin Marietta Energy Systems, Inc., Oak Ridge, Tennessee. It is not meant to be as comprehensive as the existing standards and specifications, but to provide a minimum set of practices that will enhance the success of the CAD data exchange. It is now a Department of Energy (DOE) Oak Ridge Field Office requirement that Architect-Engineering (A-E) firms prepare all new drawings using a Computer Aided Design (CAD) system that is compatible with the Facility Manager`s (FM) CAD system. For Oak Ridge facilities, the CADmore » system used for facility design by the FM, Martin Marietta Energy Systems, Inc., is Intregraph. The format for interchange of CAD data for Oak Ridge facilities will be the Intergraph MicroStation/IGDS format.« less
The grand challenge of managing the petascale facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aiken, R. J.; Mathematics and Computer Science
2007-02-28
This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, wemore » should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darrow, Ken; Hedman, Bruce
Data centers represent a rapidly growing and very energy intensive activity in commercial, educational, and government facilities. In the last five years the growth of this sector was the electric power equivalent to seven new coal-fired power plants. Data centers consume 1.5% of the total power in the U.S. Growth over the next five to ten years is expected to require a similar increase in power generation. This energy consumption is concentrated in buildings that are 10-40 times more energy intensive than a typical office building. The sheer size of the market, the concentrated energy consumption per facility, and themore » tendency of facilities to cluster in 'high-tech' centers all contribute to a potential power infrastructure crisis for the industry. Meeting the energy needs of data centers is a moving target. Computing power is advancing rapidly, which reduces the energy requirements for data centers. A lot of work is going into improving the computing power of servers and other processing equipment. However, this increase in computing power is increasing the power densities of this equipment. While fewer pieces of equipment may be needed to meet a given data processing load, the energy density of a facility designed to house this higher efficiency equipment will be as high as or higher than it is today. In other words, while the data center of the future may have the IT power of ten data centers of today, it is also going to have higher power requirements and higher power densities. This report analyzes the opportunities for CHP technologies to assist primary power in making the data center more cost-effective and energy efficient. Broader application of CHP will lower the demand for electricity from central stations and reduce the pressure on electric transmission and distribution infrastructure. This report is organized into the following sections: (1) Data Center Market Segmentation--the description of the overall size of the market, the size and types of facilities involved, and the geographic distribution. (2) Data Center Energy Use Trends--a discussion of energy use and expected energy growth and the typical energy consumption and uses in data centers. (3) CHP Applicability--Potential configurations, CHP case studies, applicable equipment, heat recovery opportunities (cooling), cost and performance benchmarks, and power reliability benefits (4) CHP Drivers and Hurdles--evaluation of user benefits, social benefits, market structural issues and attitudes toward CHP, and regulatory hurdles. (5) CHP Paths to Market--Discussion of technical needs, education, strategic partnerships needed to promote CHP in the IT community.« less
Computational study of radiation doses at UNLV accelerator facility
NASA Astrophysics Data System (ADS)
Hodges, Matthew; Barzilov, Alexander; Chen, Yi-Tung; Lowe, Daniel
2017-09-01
A Varian K15 electron linear accelerator (linac) has been considered for installation at University of Nevada, Las Vegas (UNLV). Before experiments can be performed, it is necessary to evaluate the photon and neutron spectra as generated by the linac, as well as the resulting dose rates within the accelerator facility. A computational study using MCNPX was performed to characterize the source terms for the bremsstrahlung converter. The 15 MeV electron beam available in the linac is above the photoneutron threshold energy for several materials in the linac assembly, and as a result, neutrons must be accounted for. The angular and energy distributions for bremsstrahlung flux generated by the interaction of the 15 MeV electron beam with the linac target were determined. This source term was used in conjunction with the K15 collimators to determine the dose rates within the facility.
Our Story | Materials Research Laboratory at UCSB: an NSF MRSEC
this site Materials Research Laboratory at UCSB: an NSF MRSEC logo Materials Research Laboratory at & Workshops Visitor Info Research IRG-1: Magnetic Intermetallic Mesostructures IRG 2: Polymeric Seminars Publications MRL Calendar Facilities Computing Energy Research Facility Microscopy &
Computational Accelerator Physics. Proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisognano, J.J.; Mondelli, A.A.
1997-04-01
The sixty two papers appearing in this volume were presented at CAP96, the Computational Accelerator Physics Conference held in Williamsburg, Virginia from September 24{minus}27,1996. Science Applications International Corporation (SAIC) and the Thomas Jefferson National Accelerator Facility (Jefferson lab) jointly hosted CAP96, with financial support from the U.S. department of Energy`s Office of Energy Research and the Office of Naval reasearch. Topics ranged from descriptions of specific codes to advanced computing techniques and numerical methods. Update talks were presented on nearly all of the accelerator community`s major electromagnetic and particle tracking codes. Among all papers, thirty of them are abstracted formore » the Energy Science and Technology database.(AIP)« less
Autonomous Electrothermal Facility for Oil Recovery Intensification Fed by Wind Driven Power Unit
NASA Astrophysics Data System (ADS)
Belsky, Aleksey A.; Dobush, Vasiliy S.
2017-10-01
This paper describes the structure of autonomous facility fed by wind driven power unit for intensification of viscous and heavy crude oil recovery by means of heat impact on productive strata. Computer based service simulation of this facility was performed. Operational energy characteristics were obtained for various operational modes of facility. The optimal resistance of heating element of the downhole heater was determined for maximum operating efficiency of wind power unit.
Scientific Computing Strategic Plan for the Idaho National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiting, Eric Todd
Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing numbermore » of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities, will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.« less
Numerical simulation of long-duration blast wave evolution in confined facilities
NASA Astrophysics Data System (ADS)
Togashi, F.; Baum, J. D.; Mestreau, E.; Löhner, R.; Sunshine, D.
2010-10-01
The objective of this research effort was to investigate the quasi-steady flow field produced by explosives in confined facilities. In this effort we modeled tests in which a high explosive (HE) cylindrical charge was hung in the center of a room and detonated. The HEs used for the tests were C-4 and AFX 757. While C-4 is just slightly under-oxidized and is typically modeled as an ideal explosive, AFX 757 includes a significant percentage of aluminum particles, so long-time afterburning and energy release must be considered. The Lawrence Livermore National Laboratory (LLNL)-produced thermo-chemical equilibrium algorithm, “Cheetah”, was used to estimate the remaining burnable detonation products. From these remaining species, the afterburning energy was computed and added to the flow field. Computations of the detonation and afterburn of two HEs in the confined multi-room facility were performed. The results demonstrate excellent agreement with available experimental data in terms of blast wave time of arrival, peak shock amplitude, reverberation, and total impulse (and hence, total energy release, via either the detonation or afterburn processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharrati, Hedi; Agrebi, Amel; Karaoui, Mohamed-Karim
2007-04-15
X-ray buildup factors of lead in broad beam geometry for energies from 15 to 150 keV are determined using the general purpose Monte Carlo N-particle radiation transport computer code (MCNP4C). The obtained buildup factors data are fitted to a modified three parameter Archer et al. model for ease in calculating the broad beam transmission with computer at any tube potentials/filters combinations in diagnostic energies range. An example for their use to compute the broad beam transmission at 70, 100, 120, and 140 kVp is given. The calculated broad beam transmission is compared to data derived from literature, presenting good agreement.more » Therefore, the combination of the buildup factors data as determined and a mathematical model to generate x-ray spectra provide a computationally based solution to broad beam transmission for lead barriers in shielding x-ray facilities.« less
Partners | Energy Systems Integration Facility | NREL
Renewable Electricity to Grid Integration Evaluation of New Technology IGBT Industry Asetek High Performance Energy Commission High Performance Computing & Visualization Real-Time Data Collection for Institute/Schneider Electric Renewable Electricity to Grid Integration End-to-End Communication and Control
NASA Astrophysics Data System (ADS)
Eisenbach, Markus
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5PFlop/s and a speedup of 8.6 compared to the CPU only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
High-Performance Computing Data Center Cooling System Energy Efficiency |
approaches involve a cooling distribution unit (CDU) (2), which interfaces with the facility cooling loop and to the energy recovery water (ERW) loop (5), which is a closed-loop system. There are three heat rejection options for this IT load: When possible, heat energy from the energy recovery loop is transferred
Proceedings of the nineteenth LAMPF Users Group meeting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradbury, J.N.
1986-02-01
Separate abstracts were prepared for eight invited talks on various aspects of nuclear and particle physics as well as status reports on LAMPF and discussions of upgrade options. Also included in these proceedings are the minutes of the working groups for: energetic pion channel and spectrometer; high resolution spectrometer; high energy pion channel; neutron facilities; low-energy pion work; nucleon physics laboratory; stopped muon physics; solid state physics and material science; nuclear chemistry; and computing facilities. Recent LAMPF proposals are also briefly summarized. (LEW)
How Data Becomes Physics: Inside the RACF
Ernst, Michael; Rind, Ofer; Rajagopalan, Srini; Lauret, Jerome; Pinkenburg, Chris
2018-06-22
The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energyâs (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF serve these scientists to turn petabytes of raw data into physics discoveries.
BigData and computing challenges in high energy and nuclear physics
NASA Astrophysics Data System (ADS)
Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.
2017-06-01
In this contribution we discuss the various aspects of the computing resource needs experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. This will evolve in the future when moving from LHC to HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success and the inclusion of new super-computing facilities, cloud computing and volunteering computing for the future is a big challenge, which we are successfully mastering with a considerable contribution from many super-computing centres around the world, academic and commercial cloud providers. We also discuss R&D computing projects started recently in National Research Center ``Kurchatov Institute''
Space technology test facilities at the NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Gross, Anthony R.; Rodrigues, Annette T.
1990-01-01
The major space research and technology test facilities at the NASA Ames Research Center are divided into five categories: General Purpose, Life Support, Computer-Based Simulation, High Energy, and the Space Exploraton Test Facilities. The paper discusses selected facilities within each of the five categories and discusses some of the major programs in which these facilities have been involved. Special attention is given to the 20-G Man-Rated Centrifuge, the Human Research Facility, the Plant Crop Growth Facility, the Numerical Aerodynamic Simulation Facility, the Arc-Jet Complex and Hypersonic Test Facility, the Infrared Detector and Cryogenic Test Facility, and the Mars Wind Tunnel. Each facility is described along with its objectives, test parameter ranges, and major current programs and applications.
Code of Federal Regulations, 2012 CFR
2012-07-01
... energy source will be based on the amount of thermal energy that would otherwise be used by the direct use facility in place of the geothermal resource. That amount of thermal energy (in Btu) displaced by... frequency of computing and accumulating the amount of thermal energy displaced will be determined and...
Code of Federal Regulations, 2013 CFR
2013-07-01
... energy source will be based on the amount of thermal energy that would otherwise be used by the direct use facility in place of the geothermal resource. That amount of thermal energy (in Btu) displaced by... frequency of computing and accumulating the amount of thermal energy displaced will be determined and...
Code of Federal Regulations, 2014 CFR
2014-07-01
... energy source will be based on the amount of thermal energy that would otherwise be used by the direct use facility in place of the geothermal resource. That amount of thermal energy (in Btu) displaced by... frequency of computing and accumulating the amount of thermal energy displaced will be determined and...
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Stueber, Thomas J.; Norris, Mary Jo
1998-01-01
A Monte Carlo computational model has been developed which simulates atomic oxygen attack of protected polymers at defect sites in the protective coatings. The parameters defining how atomic oxygen interacts with polymers and protective coatings as well as the scattering processes which occur have been optimized to replicate experimental results observed from protected polyimide Kapton on the Long Duration Exposure Facility (LDEF) mission. Computational prediction of atomic oxygen undercutting at defect sites in protective coatings for various arrival energies was investigated. The atomic oxygen undercutting energy dependence predictions enable one to predict mass loss that would occur in low Earth orbit, based on lower energy ground laboratory atomic oxygen beam systems. Results of computational model prediction of undercut cavity size as a function of energy and defect size will be presented to provide insight into expected in-space mass loss of protected polymers with protective coating defects based on lower energy ground laboratory testing.
X-ray ptychography, fluorescence microscopy combo sheds new light on trace
Research Divisions Computing, Environment and Life Sciences BIOBiosciences CPSComputational Science DSLData CEESCenter for Electrochemical Energy Science CTRCenter for Transportation Research CRIChain Reaction Navigation Toggle Search Energy Environment National Security User Facilities Science Work with Us About
Farmers' Opinions about Third-Wave Technologies.
ERIC Educational Resources Information Center
Lasley, Paul; Bultena, Gordon
The opinions of 1,585 Iowa farmers about 8 emergent agricultural technologies (energy production from feed grains and oils; energy production from livestock waste; genetic engineering research on plants, livestock, and humans; robotics for on-farm use; confinement livestock facilities; and personal computers for farm families) were found to be…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Allcock, William; Beggio, Chris
2014-10-17
U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at themore » DOE national laboratories. The report contains findings from that review.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sofrata, H.; Khoshaim, B.; Megahed, M.
1980-12-01
In this paper a computer package for the design and optimization of the simple Li-Br absorption air conditioning system, operated by solar energy, is developed in order to study its performance. This was necessary, as a first step, before carrying out any computations regarding the dual system (1-3). The computer package has the facilities of examining any parameter which may control the system; namely generator, evaporator, condenser, absorber temperatures and pumping factor. The output may be tabulated and also fed to the graph plotter. The flow chart of the programme is explained in an easy way and a typical examplemore » is included.« less
NPBalsara@lbl.gov 510-642-8973 Research profile » A U.S. Department of Energy National Laboratory Operated by the University of California UC logo Questions & Comments * Privacy & Security Notice Computational Study of Excited-State Phenomena in Energy Materials Center for X-ray Optics MSD Facilities Ion
California, Berkeley tingxu@berkeley.edu 510-642-1632 Research profile » A U.S. Department of Energy National Laboratory Operated by the University of California UC logo Questions & Comments * Privacy Computational Study of Excited-State Phenomena in Energy Materials Center for X-ray Optics MSD Facilities Ion
ERIC Educational Resources Information Center
Association of Physical Plant Administrators of Universities and Colleges, Washington, DC.
The intent of this seminar presentation was to demonstrate that with proper care in selecting and managing energy analysis programs, or in choosing commercial services to accomplish the same purposes, universities and colleges may derive significant benefits from efficient and economical use and management of their facilities. The workbook begins…
Sheppy, Michael; Beach, A.; Pless, Shanti
2016-08-09
Modern buildings are complex energy systems that must be controlled for energy efficiency. The Research Support Facility (RSF) at the National Renewable Energy Laboratory (NREL) has hundreds of controllers -- computers that communicate with the building's various control systems -- to control the building based on tens of thousands of variables and sensor points. These control strategies were designed for the RSF's systems to efficiently support research activities. Many events that affect energy use cannot be reliably predicted, but certain decisions (such as control strategies) must be made ahead of time. NREL researchers modeled the RSF systems to predict how they might perform. They then monitor these systems to understand how they are actually performing and reacting to the dynamic conditions of weather, occupancy, and maintenance.
Prediction and characterization of application power use in a high-performance computing environment
Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...
2017-02-27
Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
Improved Planning and Programming for Energy Efficient New Army Facilities
1988-10-01
setpoints to occupant comfort must be considered carefully. Cutting off the HVAC system to the bedrooms during the day produced only small savings...functions of a building and minimizing the energy usage through optimization . It includes thermostats, time switches, programmable con- trollers...microprocessor systems, computers, and sensing devices that are linked with control and power components to manage energy use. This system optimizes load
The NASA Energy Conservation Program
NASA Technical Reports Server (NTRS)
Gaffney, G. P.
1977-01-01
Large energy-intensive research and test equipment at NASA installations is identified, and methods for reducing energy consumption outlined. However, some of the research facilities are involved in developing more efficient, fuel-conserving aircraft, and tradeoffs between immediate and long-term conservation may be necessary. Major programs for conservation include: computer-based systems to automatically monitor and control utility consumption; a steam-producing solid waste incinerator; and a computer-based cost analysis technique to engineer more efficient heating and cooling of buildings. Alternate energy sources in operation or under evaluation include: solar collectors; electric vehicles; and ultrasonically emulsified fuel to attain higher combustion efficiency. Management support, cooperative participation by employees, and effective reporting systems for conservation programs, are also discussed.
NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2014-09-01
NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC datamore » center.« less
Computers, Electronics, and Appliances Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumedmore » by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available below for each sector. Data is presented in two levels of detail. The first page provides a high- level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principle energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.« less
Microgrids | Energy Systems Integration Facility | NREL
Manager, Marine Corps Air Station (MCAS) Miramar Network Simulator-in-the-Loop Testing OMNeT++: simulates a network and links with real computers and virtual hosts. Power Hardware-in-the-Loop Simulation
High-Performance Computing Data Center Power Usage Effectiveness |
Power Usage Effectiveness When the Energy Systems Integration Facility (ESIF) was conceived, NREL set an , ventilation, and air conditioning (HVAC), which captures fan walls, fan coils that support the data center
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1994-01-01
Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (Aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high altitude rocket-plume, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp model (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows on interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.
NASA Technical Reports Server (NTRS)
Perkins, Hugh Douglas
2010-01-01
In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.
The Future is Hera! Analyzing Astronomical Over the Internet
NASA Technical Reports Server (NTRS)
Valencic, L. A.; Chai, P.; Pence, W.; Shafer, R.; Snowden, S.
2008-01-01
Hera is the data processing facility provided by the High Energy Astrophysics Science Archive Research Center (HEASARC) at the NASA Goddard Space Flight Center for analyzing astronomical data. Hera provides all the pre-installed software packages, local disk space, and computing resources need to do general processing of FITS format data files residing on the users local computer, and to do research using the publicly available data from the High ENergy Astrophysics Division. Qualified students, educators and researchers may freely use the Hera services over the internet of research and educational purposes.
NASA Technical Reports Server (NTRS)
Komatsu, G. K.; Stellen, J. M., Jr.
1976-01-01
Measurements have been made of the high energy thrust ions, (Group I), high angle/high energy ions (Group II), and high angle/low energy ions (Group IV) of a mercury electron bombardment thruster in the angular divergence range from 0 deg to greater than 90 deg. The measurements have been made as a function of thrust ion current, propellant utilization efficiency, bombardment discharge voltage, screen and accelerator grid potential (accel-decel ratio) and neutralizer keeper potential. The shape of the Group IV (charge exchange) ion plume has remained essentially fixed within the range of variation of the engine operation parameters. The magnitude of the charge exchange ion flux scales with thrust ion current, for good propellant utilization conditions. For fixed thrust ion current, charge exchange ion flux increases for diminishing propellant utilization efficiency. Facility effects influence experimental accuracies within the range of propellant utilization efficiency used in the experiments. The flux of high angle/high energy Group II ions is significantly diminished by the use of minimum decel voltages on the accelerator grid. A computer model of charge exchange ion production and motion has been developed. The program allows computation of charge exchange ion volume production rate, total production rate, and charge exchange ion trajectories for "genuine" and "facilities effects" particles. In the computed flux deposition patterns, the Group I and Group IV ion plumes exhibit a counter motion.
Matched Index of Refraction Flow Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mcllroy, Hugh
What's 27 feet long, 10 feet tall and full of mineral oil (3000 gallons' worth)? If you said INL's Matched Index of Refraction facility, give yourself a gold star. Scientists use computers to model the inner workings of nuclear reactors, and MIR helps validate those models. INL's Hugh McIlroy explains in this video. You can learn more about INL energy research at the lab's facebook site http://www.facebook.com/idahonationallaboratory.
Matched Index of Refraction Flow Facility
Mcllroy, Hugh
2018-01-08
What's 27 feet long, 10 feet tall and full of mineral oil (3000 gallons' worth)? If you said INL's Matched Index of Refraction facility, give yourself a gold star. Scientists use computers to model the inner workings of nuclear reactors, and MIR helps validate those models. INL's Hugh McIlroy explains in this video. You can learn more about INL energy research at the lab's facebook site http://www.facebook.com/idahonationallaboratory.
Williams, W E
1987-01-01
The maturing of technologies in computer capabilities, particularly direct digital signals, has provided an exciting variety of new communication and facility control opportunities. These include telecommunications, energy management systems, security systems, office automation systems, local area networks, and video conferencing. New applications are developing continuously. The so-called "intelligent" or "smart" building concept evolves from the development of this advanced technology in building environments. Automation has had a dramatic effect on facility planning. For decades, communications were limited to the telephone, the typewritten message, and copy machines. The office itself and its functions had been essentially unchanged for decades. Office automation systems began to surface during the energy crisis and, although their newer technology was timely, they were, for the most part, designed separately from other new building systems. For example, most mainframe computer systems were originally stand-alone, as were word processing installations. In the last five years, the advances in distributive systems, networking, and personal computer capabilities have provided opportunities to make such dramatic improvements in productivity that the Selectric typewriter has gone from being the most advanced piece of office equipment to nearly total obsolescence.
Peter J. Daugherty; Jeremy S. Fried
2007-01-01
Landscape-scale fuel treatments for forest fire hazard reduction potentially produce large quantities of material suitable for biomass energy production. The analytic framework FIA BioSum addresses this situation by developing detailed data on forest conditions and production under alternative fuel treatment prescriptions, and computes haul costs to alternative sites...
Assessment of the MHD capability in the ATHENA code using data from the ALEX facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, P.A.
1989-03-01
The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility.
Electric Power Research Institute | Energy Systems Integration Facility |
-10 megawatts of aggregated generation capacity. A photo of four men looking at something one man is pointing to on a desk while another man sits at the desk typing on a computer. EPRI and Schneider Electric
DOT National Transportation Integrated Search
1998-08-01
The United States Department of Transportation, John A. Volpe National Transportation Systems : Center (Volpe Center), Acoustics Facility, in support of the Federal Aviation Administrations : Office of Environment and Energy (AEE), has recently co...
Researchers Mine Information from Next-Generation Subsurface Flow Simulations
Gedenk, Eric D.
2015-12-01
A research team based at Virginia Tech University leveraged computing resources at the US Department of Energy's (DOE's) Oak Ridge National Laboratory to explore subsurface multiphase flow phenomena that can't be experimentally observed. Using the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility, the team took Micro-CT images of subsurface geologic systems and created two-phase flow simulations. The team's model development has implications for computational research pertaining to carbon sequestration, oil recovery, and contaminant transport.
Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nugent, Peter E.; Simonson, J. Michael
2011-10-24
This report is based on the Department of Energy (DOE) Workshop on “Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery” that was held at the Bethesda Marriott in Maryland on October 24-25, 2011. The workshop brought together leading researchers from the Basic Energy Sciences (BES) facilities and Advanced Scientific Computing Research (ASCR). The workshop was co-sponsored by these two Offices to identify opportunities and needs for data analysis, ownership, storage, mining, provenance and data transfer at light sources, neutron sources, microscopy centers and other facilities. Their charge was to identify current and anticipated issues inmore » the acquisition, analysis, communication and storage of experimental data that could impact the progress of scientific discovery, ascertain what knowledge, methods and tools are needed to mitigate present and projected shortcomings and to create the foundation for information exchanges and collaboration between ASCR and BES supported researchers and facilities. The workshop was organized in the context of the impending data tsunami that will be produced by DOE’s BES facilities. Current facilities, like SLAC National Accelerator Laboratory’s Linac Coherent Light Source, can produce up to 18 terabytes (TB) per day, while upgraded detectors at Lawrence Berkeley National Laboratory’s Advanced Light Source will generate ~10TB per hour. The expectation is that these rates will increase by over an order of magnitude in the coming decade. The urgency to develop new strategies and methods in order to stay ahead of this deluge and extract the most science from these facilities was recognized by all. The four focus areas addressed in this workshop were: Workflow Management - Experiment to Science: Identifying and managing the data path from experiment to publication. Theory and Algorithms: Recognizing the need for new tools for computation at scale, supporting large data sets and realistic theoretical models. Visualization and Analysis: Supporting near-real-time feedback for experiment optimization and new ways to extract and communicate critical information from large data sets. Data Processing and Management: Outlining needs in computational and communication approaches and infrastructure needed to handle unprecedented data volume and information content. It should be noted that almost all participants recognized that there were unlikely to be any turn-key solutions available due to the unique, diverse nature of the BES community, where research at adjacent beamlines at a given light source facility often span everything from biology to materials science to chemistry using scattering, imaging and/or spectroscopy. However, it was also noted that advances supported by other programs in data research, methodologies, and tool development could be implemented on reasonable time scales with modest effort. Adapting available standard file formats, robust workflows, and in-situ analysis tools for user facility needs could pay long-term dividends. Workshop participants assessed current requirements as well as future challenges and made the following recommendations in order to achieve the ultimate goal of enabling transformative science in current and future BES facilities: Theory and analysis components should be integrated seamlessly within experimental workflow. Develop new algorithms for data analysis based on common data formats and toolsets. Move analysis closer to experiment. Move the analysis closer to the experiment to enable real-time (in-situ) streaming capabilities, live visualization of the experiment and an increase of the overall experimental efficiency. Match data management access and capabilities with advancements in detectors and sources. Remove bottlenecks, provide interoperability across different facilities/beamlines and apply forefront mathematical techniques to more efficiently extract science from the experiments. This workshop report examines and reviews the status of several BES facilities and highlights the successes and shortcomings of the current data and communication pathways for scientific discovery. It then ascertains what methods and tools are needed to mitigate present and projected data bottlenecks to science over the next 10 years. The goal of this report is to create the foundation for information exchanges and collaborations among ASCR and BES supported researchers, the BES scientific user facilities, and ASCR computing and networking facilities. To jumpstart these activities, there was a strong desire to see a joint effort between ASCR and BES along the lines of the highly successful Scientific Discovery through Advanced Computing (SciDAC) program in which integrated teams of engineers, scientists and computer scientists were engaged to tackle a complete end-to-end workflow solution at one or more beamlines, to ascertain what challenges will need to be addressed in order to handle future increases in data« less
Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron
Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron
Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Donghai
2013-05-20
Molecular adsorption of formate and carboxyl on the stoichiometric CeO2(111) and CeO2(110) surfaces was studied using periodic density functional theory (DFT+U) calculations. Two distinguishable adsorption modes (strong and weak) of formate are identified. The bidentate configuration is more stable than the monodentate adsorption configuration. Both formate and carboxyl bind at the more open CeO2(110) surface are stronger. The calculated vibrational frequencies of two adsorbed species are consistent with experimental measurements. Finally, the effects of U parameters on the adsorption of formate and carboxyl over both CeO2 surfaces were investigated. We found that the geometrical configurations of two adsorbed species aremore » not affected by using different U parameters (U=0, 5, and 7). However, the calculated adsorption energy of carboxyl pronouncedly increases with the U value while the adsorption energy of formate only slightly changes (<0.2 eV). The Bader charge analysis shows the opposite charge transfer occurs for formate and carboxyl adsorption where the adsorbed formate is negatively charge whiled the adsorbed carboxyl is positively charged. Interestingly, with the increasing U parameter, the amount of charge is also increased. This work was supported by the Laboratory Directed Research and Development (LDRD) project of the Pacific Northwest National Laboratory (PNNL) and by a Cooperative Research and Development Agreement (CRADA) with General Motors. The computations were performed using the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), which is a U.S. Department of Energy national scientific user facility located at PNNL in Richland, Washington. Part of the computing time was also granted by the National Energy Research Scientific Computing Center (NERSC)« less
Simulation Enabled Safeguards Assessment Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert Bean; Trond Bjornard; Thomas Larson
2007-09-01
It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements inmore » functionality. Drag and drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.« less
An optimization model for energy generation and distribution in a dynamic facility
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1981-01-01
An analytical model is described using linear programming for the optimum generation and distribution of energy demands among competing energy resources and different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several essential decisions for better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employ the decomposition principle for large-size matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economic computer use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broders, M.A.; Ruppel, F.R.
1993-05-01
Under the provisions of Interagency Agreement DOE 1938-B090-A1 between the US Department of Energy (DOE) and the US Army Europe (USAREUR), Martin Marietta Energy Systems, Inc., is providing technical assistance to USAREUR in the areas of computer science, information engineering, energy studies, and engineering and systems development. One of the initial projects authorized under this interagency agreement is the evaluation of utility and energy monitoring and control systems (UEMCSs) installed at selected US Army installations in Europe. This report is an evaluation of the overall energy-conservation effectiveness and use of the UEMCS at the 409th Base Support Battalion located inmore » Grafenwoehr, Germany. The 409th Base Support Battalion is a large USAREUR military training facility that comprises a large training area, leased housing, the main post area, and the camp areas that include Camps Aachen, Algier, Normandy, Cheb, and Kasserine. All of these facilities are consumers of electrical and thermal energy. However, only buildings and facilities in the main post area and Camps Aachen, Algier, and Normandy are under the control of the UEMCS. The focus of this evaluation report is on these specific areas. Recommendations to further increase energy and cost savings and to improve operation of the UEMCS are proposed.« less
High-Performance Computing User Facility | Computational Science | NREL
User Facility High-Performance Computing User Facility The High-Performance Computing User Facility technologies. Photo of the Peregrine supercomputer The High Performance Computing (HPC) User Facility provides Gyrfalcon Mass Storage System. Access Our HPC User Facility Learn more about these systems and how to access
The Energy Efficiency Potential of Cloud-Based Software: A U.S. Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masanet, Eric; Shehabi, Arman; Liang, Jiaqi
The energy use of data centers is a topic that has received much attention, given that data centers currently account for 1-2% of global electricity use. However, cloud computing holds great potential to reduce data center energy demand moving forward, due to both large reductions in total servers through consolidation and large increases in facility efficiencies compared to traditional local data centers. However, analyzing the net energy implications of shifts to the cloud can be very difficult, because data center services can affect many different components of society’s economic and energy systems.
NASA Astrophysics Data System (ADS)
Parker, Tehri Davenport
1997-09-01
This study designed, implemented, and evaluated an environmental education hypermedia program for use in a residential environmental education facility. The purpose of the study was to ascertain whether a hypermedia program could increase student knowledge and positive attitudes toward the environment and environmental education. A student/computer interface, based on the theory of social cognition, was developed to direct student interactions with the computer. A quasi-experimental research design was used. Students were randomly assigned to either the experimental or control group. The experimental group used the hypermedia program to learn about the topic of energy. The control group received the same conceptual information from a teacher/naturalist. An Environmental Awareness Quiz was administered to measure differences in the students' cognitive understanding of energy issues. Students participated in one on one interviews to discuss their attitudes toward the lesson and the overall environmental education experience. Additionally, members of the experimental group were tape recorded while they used the hypermedia program. These tapes were analyzed to identify aspects of the hypermedia program that promoted student learning. The findings of this study suggest that computers, and hypermedia programs, can be integrated into residential environmental education facilities, and can assist environmental educators in meeting their goals for students. The study found that the hypermedia program was as effective as the teacher/naturalist for teaching about environmental education material. Students who used the computer reported more positive attitudes toward the lesson on energy, and thought that they had learned more than the control group. Students in the control group stated that they did not learn as much as the computer group. The majority of students had positive attitudes toward the inclusion of computers in the camp setting, and stated that they were a good way to learn about environmental education material. This study also identified lack of social skills as a barrier to social cognition among mixed gender groups using the computer program.
Yahoo! Compute Coop (YCC). A Next-Generation Passive Cooling Design for Data Centers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robison, AD; Page, Christina; Lytle, Bob
The purpose of the Yahoo! Compute Coop (YCC) project is to research, design, build and implement a greenfield "efficient data factory" and to specifically demonstrate that the YCC concept is feasible for large facilities housing tens of thousands of heat-producing computing servers. The project scope for the Yahoo! Compute Coop technology includes: - Analyzing and implementing ways in which to drastically decrease energy consumption and waste output. - Analyzing the laws of thermodynamics and implementing naturally occurring environmental effects in order to maximize the "free-cooling" for large data center facilities. "Free cooling" is the direct usage of outside air tomore » cool the servers vs. traditional "mechanical cooling" which is supplied by chillers or other Dx units. - Redesigning and simplifying building materials and methods. - Shortening and simplifying build-to-operate schedules while at the same time reducing initial build and operating costs. Selected for its favorable climate, the greenfield project site is located in Lockport, NY. Construction on the 9.0 MW critical load data center facility began in May 2009, with the fully operational facility deployed in September 2010. The relatively low initial build cost, compatibility with current server and network models, and the efficient use of power and water are all key features that make it a highly compatible and globally implementable design innovation for the data center industry. Yahoo! Compute Coop technology is designed to achieve 99.98% uptime availability. This integrated building design allows for free cooling 99% of the year via the building's unique shape and orientation, as well as server physical configuration.« less
Challenges in scaling NLO generators to leadership computers
NASA Astrophysics Data System (ADS)
Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.
2017-10-01
Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even using all 16-cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.
Workflow Management Systems for Molecular Dynamics on Leadership Computers
NASA Astrophysics Data System (ADS)
Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu
Molecular Dynamics (MD) simulations play an important role in a range of disciplines from Material Science to Biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly science problems require the successful execution of ''many'' MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.
Stockpile Stewardship: How We Ensure the Nuclear Deterrent Without Testing
None
2018-01-16
In the 1990s, the U.S. nuclear weapons program shifted emphasis from developing new designs to dismantling thousands of existing weapons and maintaining a much smaller enduring stockpile. The United States ceased underground nuclear testing, and the Department of Energy created the Stockpile Stewardship Program to maintain the safety, security, and reliability of the U.S. nuclear deterrent without full-scale testing. This video gives a behind the scenes look at a set of unique capabilities at Lawrence Livermore that are indispensable to the Stockpile Stewardship Program: high performance computing, the Superblock category II nuclear facility, the JASPER a two stage gas gun, the High Explosive Applications Facility (HEAF), the National Ignition Facility (NIF), and the Site 300 contained firing facility.
Expanding Your Laboratory by Accessing Collaboratory Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoyt, David W.; Burton, Sarah D.; Peterson, Michael R.
2004-03-01
The Environmental Molecular Sciences Laboratory (EMSL) in Richland, Washington, is the home of a research facility setup by the United States Department of Energy (DOE). The facility is atypical because it houses over 100 cutting-edge research systems for the use of researchers all over the United States and the world. Access to the lab is requested through a peer-review proposal process and the scientists who use the facility are generally referred to as ‘users’. There are six main research facilities housed in EMSL, all of which host visiting researchers. Several of these facilities also participate in the EMSL Collaboratory, amore » remote access capability supported by EMSL operations funds. Of these, the High-Field Magnetic Resonance Facility (HFMRF) and Molecular Science Computing Facility (MSCF) have a significant number of their users performing remote work. The HFMRF in EMSL currently houses 12 NMR spectrometers that range in magnet field strength from 7.05T to 21.1T. Staff associated with the NMR facility offers scientific expertise in the areas of structural biology, solid-state materials/catalyst characterization, and magnetic resonance imaging (MRI) techniques. The way in which the HFMRF operates, with a high level of dedication to remote operation across the full suite of High-Field NMR spectrometers, has earned it the name “Virtual NMR Facility”. This review will focus on the operational aspects of remote research done in the High-Field Magnetic Resonance Facility and the computer tools that make remote experiments possible.« less
Activation calculations for trapped protons below 200 MeV: Appendix
NASA Technical Reports Server (NTRS)
Laird, C. E.
1991-01-01
Tables are given displaying of the results of the activation calculations of metal samples and other material aboard the Long Duration Exposure Facility-1 (LDEF-1) and Spacelab-2 with the computer program, PTRAP4. The computer printouts give the reaction, the reactant product, the proton reaction cross sections as a function of the energy of the incident protons, and the activation as a function of distance into the sample from the exposed surface.
Cloud@Home: A New Enhanced Computing Paradigm
NASA Astrophysics Data System (ADS)
Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco
Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing, ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)) Internet Computing ("…a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)) Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" Davis, Parikh, & Weihl, 2004) and Green computing (a new frontier of Ethical computing1 starting from the assumption that in next future energy costs will be related to the environment pollution).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, P.A.
1988-10-28
The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility. 13 refs., 4more » figs., 2 tabs.« less
The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.
2014-02-01
A proton therapy test facility with a beam current lower than 10 nA in average, and an energy up to 150 MeV, is planned to be sited at the Frascati ENEA Research Center, in Italy. The accelerator is composed of a sequence of linear sections. The first one is a commercial 7 MeV proton linac, from which the beam is injected in a SCDTL (Side Coupled Drift Tube Linac) structure reaching the energy of 52 MeV. Then a conventional CCL (coupled Cavity Linac) with side coupling cavities completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur to protons with energy below 20 MeV, with a consequent low production of neutrons and secondary radiation. From the radiation protection point of view the source of radiation for this facility is then almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented into radiation transport computer codes based on the Monte Carlo method. The scope is the assessment of the radiation field around the main source for supporting the safety analysis. For the assessment independent researchers used two different Monte Carlo computer codes named FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended) respectively. Both are general purpose tools for calculations of particle transport and interactions with matter, covering an extended range of applications including proton beam analysis. Nevertheless each one utilizes its own nuclear cross section libraries and uses specific physics models for particle types and energies. The models implemented into the codes are described and the results are presented. The differences between the two calculations are reported and discussed pointing out disadvantages and advantages of each code in the specific application.
Hera: High Energy Astronomical Data Analysis via the Internet
NASA Astrophysics Data System (ADS)
Valencic, Lynne A.; Chai, P.; Pence, W.; Snowden, S.
2011-09-01
The HEASARC at NASA Goddard Space Flight Center has developed Hera, a data processing facility for analyzing high energy astronomical data over the internet. Hera provides all the software packages, disk space, and computing resources needed to do general processing of and advanced research on publicly available data from High Energy Astrophysics missions. The data and data products are kept on a server at GSFC and can be downloaded to a user's local machine. This service is provided for free to students, educators, and researchers for educational and research purposes.
Basic energy sciences: Summary of accomplishments
NASA Astrophysics Data System (ADS)
1990-05-01
For more than four decades, the Department of Energy, including its predecessor agencies, has supported a program of basic research in nuclear- and energy related sciences, known as Basic Energy Sciences. The purpose of the program is to explore fundamental phenomena, create scientific knowledge, and provide unique user facilities necessary for conducting basic research. Its technical interests span the range of scientific disciplines: physical and biological sciences, geological sciences, engineering, mathematics, and computer sciences. Its products and facilities are essential to technology development in many of the more applied areas of the Department's energy, science, and national defense missions. The accomplishments of Basic Energy Sciences research are numerous and significant. Not only have they contributed to Departmental missions, but have aided significantly the development of technologies which now serve modern society daily in business, industry, science, and medicine. In a series of stories, this report highlights 22 accomplishments, selected because of their particularly noteworthy contributions to modern society. A full accounting of all the accomplishments would be voluminous. Detailed documentation of the research results can be found in many thousands of articles published in peer-reviewed technical literature.
Basic Energy Sciences: Summary of Accomplishments
DOE R&D Accomplishments Database
1990-05-01
For more than four decades, the Department of Energy, including its predecessor agencies, has supported a program of basic research in nuclear- and energy-related sciences, known as Basic Energy Sciences. The purpose of the program is to explore fundamental phenomena, create scientific knowledge, and provide unique user'' facilities necessary for conducting basic research. Its technical interests span the range of scientific disciplines: physical and biological sciences, geological sciences, engineering, mathematics, and computer sciences. Its products and facilities are essential to technology development in many of the more applied areas of the Department's energy, science, and national defense missions. The accomplishments of Basic Energy Sciences research are numerous and significant. Not only have they contributed to Departmental missions, but have aided significantly the development of technologies which now serve modern society daily in business, industry, science, and medicine. In a series of stories, this report highlights 22 accomplishments, selected because of their particularly noteworthy contributions to modern society. A full accounting of all the accomplishments would be voluminous. Detailed documentation of the research results can be found in many thousands of articles published in peer-reviewed technical literature.
Oklahoma Center for High Energy Physics (OCHEP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, S; Strauss, M J; Snow, J
2012-02-29
The DOE EPSCoR implementation grant, with the support from the State of Oklahoma and from the three universities, Oklahoma State University, University of Oklahoma and Langston University, resulted in establishing of the Oklahoma Center for High Energy Physics (OCHEP) in 2004. Currently, OCHEP continues to flourish as a vibrant hub for research in experimental and theoretical particle physics and an educational center in the State of Oklahoma. All goals of the original proposal were successfully accomplished. These include foun- dation of a new experimental particle physics group at OSU, the establishment of a Tier 2 computing facility for the Largemore » Hadron Collider (LHC) and Tevatron data analysis at OU and organization of a vital particle physics research center in Oklahoma based on resources of the three universities. OSU has hired two tenure-track faculty members with initial support from the grant funds. Now both positions are supported through OSU budget. This new HEP Experimental Group at OSU has established itself as a full member of the Fermilab D0 Collaboration and LHC ATLAS Experiment and has secured external funds from the DOE and the NSF. These funds currently support 2 graduate students, 1 postdoctoral fellow, and 1 part-time engineer. The grant initiated creation of a Tier 2 computing facility at OU as part of the Southwest Tier 2 facility, and a permanent Research Scientist was hired at OU to maintain and run the facility. Permanent support for this position has now been provided through the OU university budget. OCHEP represents a successful model of cooperation of several universities, providing the establishment of critical mass of manpower, computing and hardware resources. This led to increasing Oklahoma's impact in all areas of HEP, theory, experiment, and computation. The Center personnel are involved in cutting edge research in experimental, theoretical, and computational aspects of High Energy Physics with the research areas ranging from the search for new phenomena at the Fermilab Tevatron and the CERN Large Hadron Collider to theoretical modeling, computer simulation, detector development and testing, and physics analysis. OCHEP faculty members participating on the D0 collaboration at the Fermilab Tevatron and on the ATLAS collaboration at the CERN LHC have made major impact on the Standard Model (SM) Higgs boson search, top quark studies, B physics studies, and measurements of Quantum Chromodynamics (QCD) phenomena. The OCHEP Grid computing facility consists of a large computer cluster which is playing a major role in data analysis and Monte Carlo productions for both the D0 and ATLAS experiments. Theoretical efforts are devoted to new ideas in Higgs bosons physics, extra dimensions, neutrino masses and oscillations, Grand Unified Theories, supersymmetric models, dark matter, and nonperturbative quantum field theory. Theory members are making major contributions to the understanding of phenomena being explored at the Tevatron and the LHC. They have proposed new models for Higgs bosons, and have suggested new signals for extra dimensions, and for the search of supersymmetric particles. During the seven year period when OCHEP was partially funded through the DOE EPSCoR implementation grant, OCHEP members published over 500 refereed journal articles and made over 200 invited presentations at major conferences. The Center is also involved in education and outreach activities by offering summer research programs for high school teachers and college students, and organizing summer workshops for high school teachers, sometimes coordinating with the Quarknet programs at OSU and OU. The details of the Center can be found in http://ochep.phy.okstate.edu.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharya, Papri; Prokopchuk, Demyan E.; Mock, Michael T.
2017-03-01
This review examines the synthesis and acid reactivity of transition metal dinitrogen complexes bearing diphosphine ligands containing pendant amine groups in the second coordination sphere. This manuscript is a review of the work performed in the Center for Molecular Electrocatalysis. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR studies on Fe were performed using EMSL, a national scientific user facility sponsored by the DOE’s Office of Biological and Environmental Research and located atmore » PNNL. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.« less
On the Mechanism and Rate of Spontaneous Decomposition of Amino Acids
Alexandrova, Anastassia N.; Jorgensen, William L.
2011-01-01
Spontaneous decarboxylation of amino acids is among the slowest known reactions; it is much less facile than the cleavage of amide bonds in polypeptides. Establishment of the kinetics and mechanisms for this fundamental reaction is important for gauging the proficiency of enzymes. In the present study, multiple mechanisms for glycine decomposition in water are explored using QM/MM Monte Carlo simulations and free energy perturbation theory. Simple CO2 detachment emerges as the preferred pathway for decarboxylation; it is followed by water-assisted proton transfer to yield the products, CO2 and methylamine. The computed free energy of activation of 45 kcal/mol, and the resulting rate-constant of 1 × 10−21 s−1, can be compared with an extrapolated experimental rate constant of ~2 × 10−17 s−1 at 25 °C. The half-life for the reaction is more than 1 billion years. Furthermore, examination of deamination finds simple NH3-detachment yielding α-lactone to be the favored route, though it is less facile than decarboxylation by kcal/mol. Ab initio and DFT calculations with the CPCM hydration model were also carried out for the reactions; the computed free energies of activation for glycine decarboxylation agree with the QM/MM result, while deamination is predicted to be more favorable. QM/MM calculations were also performed for decarboxylation of alanine; the computed barrier is 2 kcal/mol higher than for glycine in qualitative accord with experiment. PMID:21995727
The Argonne Leadership Computing Facility 2010 annual report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drugan, C.
Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued tomore » provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale-class computers by a factor of a thousand. Pete Beckman, who served as the ALCF's Director for the past few years, has been named director of the newly created Exascale Technology and Computing Institute (ETCi). The institute will focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems. Just as Pete's leadership propelled the ALCF to great success, we know that that ETCi will benefit immensely from his expertise and experience. Without question, the future of supercomputing is certainly in good hands. I would like to thank Pete for all his effort over the past two years, during which he oversaw the establishing of ALCF2, the deployment of the Magellan project, increases in utilization, availability, and number of projects using ALCF1. He managed the rapid growth of ALCF staff and made the facility what it is today. All the staff and users are better for Pete's efforts.« less
Laboratory Directed Research and Development Program FY 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen
2007-03-08
The Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab or LBNL) is a multi-program national research facility operated by the University of California for the Department of Energy (DOE). As an integral element of DOE's National Laboratory System, Berkeley Lab supports DOE's missions in fundamental science, energy resources, and environmental quality. Berkeley Lab programs advance four distinct goals for DOE and the nation: (1) To perform leading multidisciplinary research in the computing sciences, physical sciences, energy sciences, biosciences, and general sciences in a manner that ensures employee and public safety and protection of the environment. (2) To develop and operatemore » unique national experimental facilities for qualified investigators. (3) To educate and train future generations of scientists and engineers to promote national science and education goals. (4) To transfer knowledge and technological innovations and to foster productive relationships among Berkeley Lab's research programs, universities, and industry in order to promote national economic competitiveness.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Bryan Scott; Gough, Sean T.
This report documents a validation of the MCNP6 Version 1.0 computer code on the high performance computing platform Moonlight, for operations at Los Alamos National Laboratory (LANL) that involve plutonium metals, oxides, and solutions. The validation is conducted using the ENDF/B-VII.1 continuous energy group cross section library at room temperature. The results are for use by nuclear criticality safety personnel in performing analysis and evaluation of various facility activities involving plutonium materials.
1991-12-30
series of investigations that have been carried out for the application of a packed bed (with encapsulated phase change material-PCM) as an energy storage...The condensing flow of a single vapor through a porous medium, on the other hand, received relatively little attention (Nilson and Montoya , 1980...analysis that does not seem to be feasible even %kith the most advanced computational facilities. The fundamentals of the application of this technique
Argonne Discovery Yields Self-Healing Diamond-Like Carbon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunningham, Greg; Jones, Katie Elyce
We report that large-scale reactive molecular dynamics simulations carried out on the US Department of Energy’s IBM Blue Gene/Q Mira supercomputer at the Argonne Leadership Computing Facility, along with experiments conducted by researchers in Argonne’s Energy Systems Division, enabled the design of a “self-healing” anti-wear coating that drastically reduces friction and related degradation in engines and moving machinery. Now, the computational work advanced for this purpose is being used to identify the friction-fighting potential of other catalysts.
Argonne Discovery Yields Self-Healing Diamond-Like Carbon
Cunningham, Greg; Jones, Katie Elyce
2016-10-27
We report that large-scale reactive molecular dynamics simulations carried out on the US Department of Energy’s IBM Blue Gene/Q Mira supercomputer at the Argonne Leadership Computing Facility, along with experiments conducted by researchers in Argonne’s Energy Systems Division, enabled the design of a “self-healing” anti-wear coating that drastically reduces friction and related degradation in engines and moving machinery. Now, the computational work advanced for this purpose is being used to identify the friction-fighting potential of other catalysts.
The Future is Hera: Analyzing Astronomical Data Over the Internet
NASA Astrophysics Data System (ADS)
Valencic, Lynne A.; Snowden, S.; Chai, P.; Shafer, R.
2009-01-01
Hera is the new data processing facility provided by the HEASARC at the NASA Goddard Space Flight Center for analyzing astronomical data. Hera provides all the preinstalled software packages, local disk space, and computing resources needed to do general processing of FITS format data files residing on the user's local computer, and to do advanced research using the publicly available data from High Energy Astrophysics missions. Qualified students, educators, and researchers may freely use the Hera services over the internet for research and educational purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, P.; Martin, D.; Drugan, C.
2010-11-23
This year the Argonne Leadership Computing Facility (ALCF) delivered nearly 900 million core hours of science. The research conducted at their leadership class facility touched our lives in both minute and massive ways - whether it was studying the catalytic properties of gold nanoparticles, predicting protein structures, or unearthing the secrets of exploding stars. The authors remained true to their vision to act as the forefront computational center in extending science frontiers by solving pressing problems for our nation. Our success in this endeavor was due mainly to the Department of Energy's (DOE) INCITE (Innovative and Novel Computational Impact onmore » Theory and Experiment) program. The program awards significant amounts of computing time to computationally intensive, unclassified research projects that can make high-impact scientific advances. This year, DOE allocated 400 million hours of time to 28 research projects at the ALCF. Scientists from around the world conducted the research, representing such esteemed institutions as the Princeton Plasma Physics Laboratory, National Institute of Standards and Technology, and European Center for Research and Advanced Training in Scientific Computation. Argonne also provided Director's Discretionary allocations for research challenges, addressing such issues as reducing aerodynamic noise, critical for next-generation 'green' energy systems. Intrepid - the ALCF's 557-teraflops IBM Blue/Gene P supercomputer - enabled astounding scientific solutions and discoveries. Intrepid went into full production five months ahead of schedule. As a result, the ALCF nearly doubled the days of production computing available to the DOE Office of Science, INCITE awardees, and Argonne projects. One of the fastest supercomputers in the world for open science, the energy-efficient system uses about one-third as much electricity as a machine of comparable size built with more conventional parts. In October 2009, President Barack Obama recognized the excellence of the entire Blue Gene series by awarding it to the National Medal of Technology and Innovation. Other noteworthy achievements included the ALCF's collaboration with the National Energy Research Scientific Computing Center (NERSC) to examine cloud computing as a potential new computing paradigm for scientists. Named Magellan, the DOE-funded initiative will explore which science application programming models work well within the cloud, as well as evaluate the challenges that come with this new paradigm. The ALCF obtained approval for its next-generation machine, a 10-petaflops system to be delivered in 2012. This system will allow us to resolve ever more pressing problems, even more expeditiously through breakthrough science in the years to come.« less
DOE/ NREL Build One of the World's Most Energy Efficient Office Spaces
Radocy, Rachel; Livingston, Brian; von Luhrte, Rich
2018-05-18
Technology â from sophisticated computer modeling to advanced windows that actually open â will help the newest building at the U.S. Department of Energy's (DOE) National Renewable Energy Laboratory (NREL) be one of the world's most energy efficient offices. Scheduled to open this summer, the 222,000 square-foot RSF will house more than 800 staff and an energy efficient information technology data center. Because 19 percent of the country's energy is used by commercial buildings, DOE plans to make this facility a showcase for energy efficiency. DOE hopes the design of the RSF will be replicated by the building industry and help reduce the nation's energy consumption by changing the way commercial buildings are designed and built.
Stockpile Stewardship: How We Ensure the Nuclear Deterrent Without Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2014-09-04
In the 1990s, the U.S. nuclear weapons program shifted emphasis from developing new designs to dismantling thousands of existing weapons and maintaining a much smaller enduring stockpile. The United States ceased underground nuclear testing, and the Department of Energy created the Stockpile Stewardship Program to maintain the safety, security, and reliability of the U.S. nuclear deterrent without full-scale testing. This video gives a behind the scenes look at a set of unique capabilities at Lawrence Livermore that are indispensable to the Stockpile Stewardship Program: high performance computing, the Superblock category II nuclear facility, the JASPER a two stage gas gun,more » the High Explosive Applications Facility (HEAF), the National Ignition Facility (NIF), and the Site 300 contained firing facility.« less
Bulk Enthalpy Calculations in the Arc Jet Facility at NASA ARC
NASA Technical Reports Server (NTRS)
Thompson, Corinna S.; Prabhu, Dinesh; Terrazas-Salinas, Imelda; Mach, Jeffrey J.
2011-01-01
The Arc Jet Facilities at NASA Ames Research Center generate test streams with enthalpies ranging from 5 MJ/kg to 25 MJ/kg. The present work describes a rigorous method, based on equilibrium thermodynamics, for calculating the bulk enthalpy of the flow produced in two of these facilities. The motivation for this work is to determine a dimensionally-correct formula for calculating the bulk enthalpy that is at least as accurate as the conventional formulas that are currently used. Unlike previous methods, the new method accounts for the amount of argon that is present in the flow. Comparisons are made with bulk enthalpies computed from an energy balance method. An analysis of primary facility operating parameters and their associated uncertainties is presented in order to further validate the enthalpy calculations reported herein.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlburg, Jill; Corones, James; Batchelor, Donald
Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range in temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality which makes it impossible to parallelize over time, makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individualmore » features of magnetically confined plasmas and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world’s energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general. This science-based predictive capability, which was cited in the FESAC integrated planning document (IPPA, 2000), represents a significant opportunity for the DOE Office of Science to further the understanding of fusion plasmas to a level unparalleled worldwide.« less
Computer-Aided Facilities Management Systems (CAFM).
ERIC Educational Resources Information Center
Cyros, Kreon L.
Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
NASA Astrophysics Data System (ADS)
Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.
2015-05-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(102) sites, O(105) cores, O(108) jobs per year, O(103) users, and ATLAS data volume is O(1017) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to setup and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
Pacific Northwest National Laboratory Annual Site Environmental Report for Calendar Year 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duncan, Joanne P.; Sackschewsky, Michael R.; Tilden, Harold T.
2014-09-30
Pacific Northwest National Laboratory (PNNL), one of the U.S. Department of Energy (DOE) Office of Science’s 10 national laboratories, provides innovative science and technology development in the areas of energy and the environment, fundamental and computational science, and national security. DOE’s Pacific Northwest Site Office (PNSO) is responsible for oversight of PNNL at its Campus in Richland, Washington, as well as its facilities in Sequim, Seattle, and North Bonneville, Washington, and Corvallis and Portland, Oregon.
STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Geoffrey; Jha, Shantenu; Ramakrishnan, Lavanya
The Department of Energy (DOE) Office of Science (SC) facilities including accelerators, light sources and neutron sources and sensors that study, the environment, and the atmosphere, are producing streaming data that needs to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series, STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016), weremore » conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on the Department of Energy (DOE) applications, computational and experimental facilities, as well software systems. Thus, the role of “streaming and steering” as a critical mode of connecting the experimental and computing facilities was pervasive through the workshop. Given the overlap in interests and challenges with industry, the workshop had significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions, identified in this report, show an important opportunity for building competitive research and development program around streaming data. These findings and recommendations are consistent with vision outlined in NRC Frontiers of Data and National Strategic Computing Initiative (NCSI) [1, 2]. The discussions from the workshop are captured as topic areas covered in this report's sections. The report discusses four research directions driven by current and future application requirements reflecting the areas identified as important by STREAM2016. These include (i) Algorithms, (ii) Programming Models, Languages and Runtime Systems (iii) Human-in-the-loop and Steering in Scientific Workflow and (iv) Facilities.« less
TomoBank: a tomographic data repository for computational x-ray science
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; ...
2018-02-08
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology made sub-second and multi-energy tomographic data collection possible [1], but also increased the demand to develop new reconstruction methods able to handle in-situ [2] and dynamic systems [3] that can be quickly incorporated in beamline production software [4]. The X-ray Tomography Datamore » Bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging dataset and their descriptors.« less
PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP'07)
NASA Astrophysics Data System (ADS)
Sobie, Randall; Tafirout, Reda; Thomson, Jana
2007-07-01
The 2007 International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held on 2-7 September 2007 in Victoria, British Columbia, Canada. CHEP is a major series of international conferences for physicists and computing professionals from the High Energy and Nuclear Physics community, Computer Science and Information Technology. The CHEP conference provides an international forum to exchange information on computing experience and needs for the community, and to review recent, ongoing, and future activities. The CHEP'07 conference had close to 500 attendees with a program that included plenary sessions of invited oral presentations, a number of parallel sessions comprising oral and poster presentations, and an industrial exhibition. Conference tracks covered topics in Online Computing, Event Processing, Software Components, Tools and Databases, Software Tools and Information Systems, Computing Facilities, Production Grids and Networking, Grid Middleware and Tools, Distributed Data Analysis and Information Management and Collaborative Tools. The conference included a successful whale-watching excursion involving over 200 participants and a banquet at the Royal British Columbia Museum. The next CHEP conference will be held in Prague in March 2009. We would like thank the sponsors of the conference and the staff at the TRIUMF Laboratory and the University of Victoria who made the CHEP'07 a success. Randall Sobie and Reda Tafirout CHEP'07 Conference Chairs
Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities
ERIC Educational Resources Information Center
Atkinson, John S.; Spenneman, Dirk H. R.; Cornforth, David
2005-01-01
Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratories facilities into a distributed high performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…
Design and test of a 10kW ORC supersonic turbine generator
NASA Astrophysics Data System (ADS)
Seume, J. R.; Peters, M.; Kunte, H.
2017-03-01
Manufactures are searching for possibilities to increase the efficiency of combustion engines by using the remaining energy of the exhaust gas. One possibility to recover some of this thermal energy is an organic Rankine cycle (ORC). For such an ORC running with ethanol, the aerothermodynamic design and test of a supersonic axial, single stage impulse turbine generator unit is described. The blade design as well as the regulation by variable partial admission is shown. Additionally the mechanical design of the directly coupled turbine generator unit including the aerodynamic sealing and the test facility is presented. Finally the results of CFD-based computations are compared to the experimental measurements. The comparison shows a remarkably good agreement between the numerical computations and the test data.
NREL, Hewlett-Packard Developed Ultra-Efficient, High-Performance Computing
and allows the heat captured from the supercomputer to provide all the heating needs for the Energy Systems Integration Facility. And there's even enough heat left over to melt snow outside on sidewalks during the winter. During the summer, the unused heat can be rejected via cooling towers. R&D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aronson, A.L.; Gordon, D.M.
IN APRIL 1996, THE UNITED STATES (US) ADDED THE PORTSMOUTH GASEOUS DIFFUSION PLANT TO THE LIST OF FACILITIES ELIGIBLE FOR THE APPLICATION OF INTERNATIONAL ATOMIC ENERGY AGENCY (IAEA) SAFEGUARDS. AT THAT TIME, THE US PROPOSED THAT THE IAEA CARRY OUT A ''VERIFICATION EXPERIMENT'' AT THE PLANT WITH RESPECT TO DOOWNBLENDING OF ABOUT 13 METRIC TONS OF HIGHLY ENRICHED URANIUM (HEU) IN THE FORM OF URANIUM HEXAFLUROIDE (UF6). DURING THE PERIOD DECEMBER 1997 THROUGH JULY 1998, THE IAEA CARRIED OUT THE REQUESTED VERIFICATION EXPERIMENT. THE VERIFICATION APPROACH USED FOR THIS EXPERIMENT INCLUDED, AMONG OTHER MEASURES, THE ENTRY OF PROCESS-OPERATIONAL DATA BYmore » THE FACILITY OPERATOR ON A NEAR-REAL-TIME BASIS INTO A ''MAILBOX'' COMPUTER LOCATED WITHIN A TAMPER-INDICATING ENCLOSURE SEALED BY THE IAEA.« less
Nuclear Computational Low Energy Initiative (NUCLEI)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, Sanjay K.
This is the final report for University of Washington for the NUCLEI SciDAC-3. The NUCLEI -project, as defined by the scope of work, will develop, implement and run codes for large-scale computations of many topics in low-energy nuclear physics. Physics to be studied include the properties of nuclei and nuclear decays, nuclear structure and reactions, and the properties of nuclear matter. The computational techniques to be used include Quantum Monte Carlo, Configuration Interaction, Coupled Cluster, and Density Functional methods. The research program will emphasize areas of high interest to current and possible future DOE nuclear physics facilities, including ATLAS andmore » FRIB (nuclear structure and reactions, and nuclear astrophysics), TJNAF (neutron distributions in nuclei, few body systems, and electroweak processes), NIF (thermonuclear reactions), MAJORANA and FNPB (neutrino-less double-beta decay and physics beyond the Standard Model), and LANSCE (fission studies).« less
The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case
NASA Astrophysics Data System (ADS)
Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.
2017-10-01
The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdown, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Recontres de Moriond. In March 2016, the NOvA experiment has also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the local allocated resources and utilizing the local AWS s3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper describes the Fermilab HEPCloud Facility and the challenges overcome for the CMS and NOvA communities.
NASA Astrophysics Data System (ADS)
Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.
2015-12-01
The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run2). The need for simulation, data processing and analysis would overwhelm the expected capacity of grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge the integration of the opportunistic resources into LHC computing model is highly important. The Tier-1 facility at Kurchatov Institute (NRC-KI) in Moscow is a part of WLCG and it will process, simulate and store up to 10% of total data obtained from ALICE, ATLAS and LHCb experiments. In addition Kurchatov Institute has supercomputers with peak performance 0.12 PFLOPS. The delegation of even a fraction of supercomputing resources to the LHC Computing will notably increase total capacity. In 2014 the development a portal combining a Tier-1 and a supercomputer in Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences like biology with genome sequencing analysis; astrophysics with cosmic rays analysis, antimatter and dark matter search, etc.
Techbelt Energy Innovation Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marie, Hazel; Nestic, Dave; Hripko, Michael
This project consisted of three main components 1) The primary goal of the project was to renovate and upgrade an existing commercial building to the highest possible environmentally sustainable level for the purpose of creating an energy incubator. This initiative was part of the Infrastructure Technologies Program, through which a sustainable energy demonstration facility was to be created and used as a research and community outreach base for sustainable energy product and process incubation; 2) In addition, fundamental energy related research on wind energy was performed; a shrouded wind turbine on the Youngstown State University campus was commissioned; and educationalmore » initiatives were implemented; and 3) The project also included an education and outreach component to inform and educate the public in sustainable energy production and career opportunities. Youngstown State University and the Tech Belt Energy Innovation Center (TBEIC) renovated a 37,000 square foot urban building which is now being used as a research and development hub for the region’s energy technology innovation industry. The building houses basic research facilities and business development in an incubator format. In addition, the TBEIC performs community outreach and education initiatives in advanced and sustainable energy. The building is linked to a back warehouse which will eventually be used as a build-out for energy laboratory facilities. The projects research component investigated shrouded wind turbines, and specifically the “Windcube” which was renamed the “Wind Sphere” during the course of the project. There was a specific focus on the development in the theory of shrouded wind turbines. The goal of this work was to increase the potential efficiency of wind turbines by improving the lift and drag characteristics. The work included computational modeling, scale models and full-sized design and construction of a test turbine. The full-sized turbine was built on the YSU campus as a grid-tie system that supplies the YSU research facility. Electrical power meters and weather monitors were installed to record the power generated and aid in continued study. In addition, an education/outreach component to help elicit creative engineering and design from amongst area students, faculty, entrepreneurs, and small business in the energy related fields was performed.« less
Brief Survey of TSC Computing Facilities
DOT National Transportation Integrated Search
1972-05-01
The Transportation Systems Center (TSC) has four, essentially separate, in-house computing facilities. We shall call them Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...
Recent Accomplishments and Future Directions in US Fusion Safety & Environmental Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
David A. Petti; Brad J. Merrill; Phillip Sharpe
2006-07-01
The US fusion program has long recognized that the safety and environmental (S&E) potential of fusion can be attained by prudent materials selection, judicious design choices, and integration of safety requirements into the design of the facility. To achieve this goal, S&E research is focused on understanding the behavior of the largest sources of radioactive and hazardous materials in a fusion facility, understanding how energy sources in a fusion facility could mobilize those materials, developing integrated state of the art S&E computer codes and risk tools for safety assessment, and evaluating S&E issues associated with current fusion designs. In thismore » paper, recent accomplishments are reviewed and future directions outlined.« less
SARS: Safeguards Accounting and Reporting Software
NASA Astrophysics Data System (ADS)
Mohammedi, B.; Saadi, S.; Ait-Mohamed, S.
In order to satisfy the requirements of the SSAC (State System for Accounting and Control of nuclear materials), for recording and reporting objectives; this computer program comes to bridge the gape between nuclear facilities operators and national inspection verifying records and delivering reports. The SARS maintains and generates at-facility safeguards accounting records and generates International Atomic Energy Agency (IAEA) safeguards reports based on accounting data input by the user at any nuclear facility. A database structure is built and BORLAND DELPHI programming language has been used. The software is designed to be user-friendly, to make extensive and flexible management of menus and graphs. SARS functions include basic physical inventory tacking, transaction histories and reporting. Access controls are made by different passwords.
The Magellan Final Report on Cloud Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
,; Coghlan, Susan; Yelick, Katherine
The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid- range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing from performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computingmore » Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.« less
Closely Spaced Independent Parallel Runway Simulation.
1984-10-01
facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central... Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition...in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where
Science & Technology Review June 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poyneer, L A
2012-04-20
This month's issue has the following articles: (1) A New Era in Climate System Analysis - Commentary by William H. Goldstein; (2) Seeking Clues to Climate Change - By comparing past climate records with results from computer simulations, Livermore scientists can better understand why Earth's climate has changed and how it might change in the future; (3) Finding and Fixing a Supercomputer's Faults - Livermore experts have developed innovative methods to detect hardware faults in supercomputers and help applications recover from errors that do occur; (4) Targeting Ignition - Enhancements to the cryogenic targets for National Ignition Facility experiments aremore » furthering work to achieve fusion ignition with energy gain; (5) Neural Implants Come of Age - A new generation of fully implantable, biocompatible neural prosthetics offers hope to patients with neurological impairment; and (6) Incubator Busy Growing Energy Technologies - Six collaborations with industrial partners are using the Laboratory's high-performance computing resources to find solutions to urgent energy-related problems.« less
ASCR Cybersecurity for Scientific Computing Integrity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piesert, Sean
The Department of Energy (DOE) has the responsibility to address the energy, environmental, and nuclear security challenges that face our nation. Much of DOE’s enterprise involves distributed, collaborative teams; a signi¬cant fraction involves “open science,” which depends on multi-institutional, often international collaborations that must access or share signi¬cant amounts of information between institutions and over networks around the world. The mission of the Office of Science is the delivery of scienti¬c discoveries and major scienti¬c tools to transform our understanding of nature and to advance the energy, economic, and national security of the United States. The ability of DOE tomore » execute its responsibilities depends critically on its ability to assure the integrity and availability of scienti¬c facilities and computer systems, and of the scienti¬c, engineering, and operational software and data that support its mission.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laros, James H.; Grant, Ryan; Levenhagen, Michael J.
Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
Model documentation renewable fuels module of the National Energy Modeling System
NASA Astrophysics Data System (ADS)
1995-06-01
This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1995 Annual Energy Outlook (AEO95) forecasts. The report catalogs and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. The RFM consists of six analytical submodules that represent each of the major renewable energy resources -- wood, municipal solid waste (MSW), solar energy, wind energy, geothermal energy, and alcohol fuels. The RFM also reads in hydroelectric facility capacities and capacity factors from a data file for use by the NEMS Electricity Market Module (EMM). The purpose of the RFM is to define the technological, cost, and resource size characteristics of renewable energy technologies. These characteristics are used to compute a levelized cost to be competed against other similarly derived costs from other energy sources and technologies. The competition of these energy sources over the NEMS time horizon determines the market penetration of these renewable energy technologies. The characteristics include available energy capacity, capital costs, fixed operating costs, variable operating costs, capacity factor, heat rate, construction lead time, and fuel product price.
Distributed computing testbed for a remote experimental environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butner, D.N.; Casper, T.A.; Howard, B.C.
1995-09-18
Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a ``Collaboratory.`` The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on themore » DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation`s Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility.« less
Scofield, Patricia A.; Smith, Linda Lenell; Johnson, David N.
2017-07-01
The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y–12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations onmore » Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package—1988 computer model files. As a result, this database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.« less
Scofield, Patricia A; Smith, Linda L; Johnson, David N
2017-07-01
The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y-12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package-1988 computer model files. This database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scofield, Patricia A.; Smith, Linda Lenell; Johnson, David N.
The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y–12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations onmore » Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package—1988 computer model files. As a result, this database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.« less
Multiscale Computation. Needs and Opportunities for BER Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheibe, Timothy D.; Smith, Jeremy C.
2015-01-01
The Environmental Molecular Sciences Laboratory (EMSL), a scientific user facility managed by Pacific Northwest National Laboratory for the U.S. Department of Energy, Office of Biological and Environmental Research (BER), conducted a one-day workshop on August 26, 2014 on the topic of “Multiscale Computation: Needs and Opportunities for BER Science.” Twenty invited participants, from various computational disciplines within the BER program research areas, were charged with the following objectives; Identify BER-relevant models and their potential cross-scale linkages that could be exploited to better connect molecular-scale research to BER research at larger scales and; Identify critical science directions that will motivate EMSLmore » decisions regarding future computational (hardware and software) architectures.« less
NASA Astrophysics Data System (ADS)
Sato, Tatsuhiko; Satoh, Daiki; Endo, Akira; Shigyo, Nobuhiro; Watanabe, Fusao; Sakurai, Hiroki; Arai, Yoichi
2011-05-01
A dose and spectrum monitoring system applicable to neutrons, photons and muons over wide ranges of energy, designated as DARWIN, has been developed for radiological protection in high-energy accelerator facilities. DARWIN consists of a phoswitch-type scintillation detector, a data-acquisition (DAQ) module for digital waveform analysis, and a personal computer equipped with a graphical-user-interface (GUI) program for controlling the system. The system was recently upgraded by introducing an original DAQ module based on a field programmable gate array, FPGA, and also by adding a function for estimating neutron and photon spectra based on an unfolding technique without requiring any specific scientific background of the user. The performance of the upgraded DARWIN was examined in various radiation fields, including an operational field in J-PARC. The experiments revealed that the dose rates and spectra measured by the upgraded DARWIN are quite reasonable, even in radiation fields with peak structures in terms of both spectrum and time variation. These results clearly demonstrate the usefulness of DARWIN for improving radiation safety in high-energy accelerator facilities.
NASA Astrophysics Data System (ADS)
Garcia-Santiago, C. A.; Del Ser, J.; Upton, C.; Quilligan, F.; Gil-Lopez, S.; Salcedo-Sanz, S.
2015-11-01
When seeking near-optimal solutions for complex scheduling problems, meta-heuristics demonstrate good performance with affordable computational effort. This has resulted in a gravitation towards these approaches when researching industrial use-cases such as energy-efficient production planning. However, much of the previous research makes assumptions about softer constraints that affect planning strategies and about how human planners interact with the algorithm in a live production environment. This article describes a job-shop problem that focuses on minimizing energy consumption across a production facility of shared resources. The application scenario is based on real facilities made available by the Irish Center for Manufacturing Research. The formulated problem is tackled via harmony search heuristics with random keys encoding. Simulation results are compared to a genetic algorithm, a simulated annealing approach and a first-come-first-served scheduling. The superior performance obtained by the proposed scheduler paves the way towards its practical implementation over industrial production chains.
Modeling Laboratory Astrophysics Experiments using the CRASH code
NASA Astrophysics Data System (ADS)
Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kruanz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team
2013-10-01
The understanding of high energy density systems can be advanced by laboratory astrophysics experiments. Computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze including: Radiative shocks experiments, Kelvin-Helmholtz experiments, Rayleigh-Taylor experiments, plasma sheet, and interacting jets experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DEFC52- 08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.
Reducing power usage on demand
NASA Astrophysics Data System (ADS)
Corbett, G.; Dewhurst, A.
2016-10-01
The Science and Technology Facilities Council (STFC) datacentre provides large- scale High Performance Computing facilities for the scientific community. It currently consumes approximately 1.5MW and this has risen by 25% in the past two years. STFC has been investigating leveraging preemption in the Tier 1 batch farm to save power. HEP experiments are increasing using jobs that can be killed to take advantage of opportunistic CPU resources or novel cost models such as Amazon's spot pricing. Additionally, schemes from energy providers are available that offer financial incentives to reduce power consumption at peak times. Under normal operating conditions, 3% of the batch farm capacity is wasted due to draining machines. By using preempt-able jobs, nodes can be rapidly made available to run multicore jobs without this wasted resource. The use of preempt-able jobs has been extended so that at peak times machines can be hibernated quickly to save energy. This paper describes the implementation of the above and demonstrates that STFC could in future take advantage of such energy saving schemes.
Data Crosscutting Requirements Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleese van Dam, Kerstin; Shoshani, Arie; Plata, Charity
2013-04-01
In April 2013, a diverse group of researchers from the U.S. Department of Energy (DOE) scientific community assembled to assess data requirements associated with DOE-sponsored scientific facilities and large-scale experiments. Participants in the review included facilities staff, program managers, and scientific experts from the offices of Basic Energy Sciences, Biological and Environmental Research, High Energy Physics, and Advanced Scientific Computing Research. As part of the meeting, review participants discussed key issues associated with three distinct aspects of the data challenge: 1) processing, 2) management, and 3) analysis. These discussions identified commonalities and differences among the needs of varied scientific communities.more » They also helped to articulate gaps between current approaches and future needs, as well as the research advances that will be required to close these gaps. Moreover, the review provided a rare opportunity for experts from across the Office of Science to learn about their collective expertise, challenges, and opportunities. The "Data Crosscutting Requirements Review" generated specific findings and recommendations for addressing large-scale data crosscutting requirements.« less
Laboratory directed research and development program FY 1999
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Todd; Levy, Karin
2000-03-08
The Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab or LBNL) is a multi-program national research facility operated by the University of California for the Department of Energy (DOE). As an integral element of DOE's National Laboratory System, Berkeley Lab supports DOE's missions in fundamental science, energy resources, and environmental quality. Berkeley Lab programs advance four distinct goals for DOE and the nation: (1) To perform leading multidisciplinary research in the computing sciences, physical sciences, energy sciences, biosciences, and general sciences in a manner that ensures employee and public safety and protection of the environment. (2) To develop and operatemore » unique national experimental facilities for qualified investigators. (3) To educate and train future generations of scientists and engineers to promote national science and education goals. (4) To transfer knowledge and technological innovations and to foster productive relationships among Berkeley Lab's research programs, universities, and industry in order to promote national economic competitiveness. This is the annual report on Laboratory Directed Research and Development (LDRD) program for FY99.« less
Energy System Integration Facility Secure Data Center | Energy Systems
Integration Facility | NREL Energy System Integration Facility Secure Data Center Energy System Integration Facility Secure Data Center The Energy Systems Integration Facility's Secure Data Center provides
The Practical Obstacles of Data Transfer: Why researchers still love scp
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T
The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world s fastest supercomputers. Unfortu- nately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or anal- ysis, or back to their home institution, else face the dreaded data purge that most HPC centers employ to keep utiliza- tion of large parallel filesystems low to manage performance and capacity. At HPC centers, data transfermore » is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Net- work (ESnet) recently launched its fifth generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in prac- tice, researchers often under-utilize the network and resort to painfully-slow single stream transfer methods such as scp to avoid the complexity of using multiple stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods pro- vided at several DOE supported computing facilities, includ- ing both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggested opti- mizations, and discuss the obstacles the tools must overcome to receive wide-spread adoption over scp.« less
Energy Systems Integration Facility Control Room | Energy Systems
Integration Facility | NREL Energy Systems Integration Facility Control Room Energy Systems Integration Facility Control Room The Energy Systems Integration Facility control room allows system engineers as the monitoring point for the facility's integrated safety and control systems. Photo of employees
Pan Am gets big savings at no cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanz, D.
Pan American World Airways' contract with an energy management control systems distributor enabled the company's terminal and maintenance facilities at JFK airport in New York to shift from housekeeping to major savings without additional cost. Energy savings from a pneumatic control system were split almost equally between Pan Am and Thomas S. Brown Associates (TSBA) Inc., and further savings are expected from a planned computer-controlled system. A full-time energy manager, able to give top priority to energy-consumption problems, was considered crucial to the program's success. Early efforts in light-level reduction and equipment scheduling required extensive persuasion and policing, but successfulmore » energy savings allowed the manager to progress to the more-extensive plants with TSBA.« less
PNNL streamlines energy-guzzling computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, Mary T.; Marquez, Andres
In a room the size of a garage, two rows of six-foot-tall racks holding supercomputer hard drives sit back-to-back. Thin tubes and wires snake off the hard drives, slithering into the corners. Stepping between the rows, a rush of heat whips around you -- the air from fans blowing off processing heat. But walk farther in, between the next racks of hard drives, and the temperature drops noticeably. These drives are being cooled by a non-conducting liquid that runs right over the hardworking processors. The liquid carries the heat away in tubes, saving the air a few degrees. This ismore » the Energy Smart Data Center at Pacific Northwest National Laboratory. The bigger, faster, and meatier supercomputers get, the more energy they consume. PNNL's Andres Marquez has developed this test bed to learn how to train the behemoths in energy efficiency. The work will help supercomputers perform better as well. Processors have to keep cool or suffer from "thermal throttling," says Marquez. "That's the performance threshold where the computer is too hot to run well. That threshold is an industry secret." The center at EMSL, DOE's national scientific user facility at PNNL, harbors several ways of experimenting with energy usage. For example, the room's air conditioning is isolated from the rest of EMSL -- pipes running beneath the floor carry temperature-controlled water through heat exchangers to cooling towers outside. "We can test whether it's more energy efficient to cool directly on the processing chips or out in the water tower," says Marquez. The hard drives feed energy and temperature data to a network server running specially designed software that controls and monitors the data center. To test the center’s limits, the team runs the processors flat out – not only on carefully controlled test programs in the Energy Smart computers, but also on real world software from other EMSL research, such as regional weather forecasting models. Marquez's group is also developing "power aware computing", where the computer programs themselves perform calculations more energy efficiently. Maybe once computers get smart about energy, they'll have tips for their users.« less
Pressure profiles of the BRing based on the simulation used in the CSRm
NASA Astrophysics Data System (ADS)
Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.
2017-07-01
HIAF-BRing, a new multipurpose accelerator facility of the High Intensity heavy-ion Accelerator Facility project, requires an extremely high vacuum lower than 10-11 mbar to fulfill the requirements of radioactive beam physics and high energy density physics. To achieve the required process pressure, the bench-marked codes of VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. In order to ensure the accuracy of the implementation of VAKTRAK, the computational results are verified by measured pressure data and compared with a new simulation code BOLIDE on the current synchrotron CSRm. Since the verification of VAKTRAK has been done, the pressure profiles of the BRing are calculated with different parameters such as conductance, out-gassing rates and pumping speeds. According to the computational results, the optimal parameters are selected to achieve the required pressure for the BRing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology made sub-second and multi-energy tomographic data collection possible [1], but also increased the demand to develop new reconstruction methods able to handle in-situ [2] and dynamic systems [3] that can be quickly incorporated in beamline production software [4]. The X-ray Tomography Datamore » Bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging dataset and their descriptors.« less
Analysis of Application Power and Schedule Composition in a High Performance Computing Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elmore, Ryan; Gruchalla, Kenny; Phillips, Caleb
As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, and analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as wellmore » as chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage this data to understand the practical limits on predicting key power use metrics at the time of submission.« less
JAERI instrumented spool piece performance in two-phase flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colson, J.B.; Gilbert, J.V.
1979-01-01
Instrumented spool pieces to be installed in horizontal piping on the Cylindrical Core Test Facility (CCTF) at the Japanese Atomic Energy Institute (JAERI) have been designed and tested. The instrumented spool pieces will provide measurements from which mass flow rates can be computed. The primary instruments included in the spool pieces are a full-flow turbine, a full-flow perforated drag plate, and a low energy three-beam photon densitometer. Secondary instruments are provided to measured absolute pressure, fluid temperature, and differential pressure across the full-flow perforated drag plate.
Achieving production-level use of HEP software at the Argonne Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.
2015-12-01
HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten petaFLOPs supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters, and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.
TomoBank: a tomographic data repository for computational x-ray science
NASA Astrophysics Data System (ADS)
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; Joost Batenburg, K.; Ludwig, Wolfgang; Mancini, Lucia; Marone, Federica; Mokso, Rajmund; Pelt, Daniël M.; Sijbers, Jan; Rivers, Mark
2018-03-01
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology have made sub-second and multi-energy tomographic data collection possible (Gibbs et al 2015 Sci. Rep. 5 11824), but have also increased the demand to develop new reconstruction methods able to handle in situ (Pelt and Batenburg 2013 IEEE Trans. Image Process. 22 5238-51) and dynamic systems (Mohan et al 2015 IEEE Trans. Comput. Imaging 1 96-111) that can be quickly incorporated in beamline production software (Gürsoy et al 2014 J. Synchrotron Radiat. 21 1188-93). The x-ray tomography data bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
High pressure, energy, and impulse loading of the wall in a 1-GJ Laboratory Microfusion Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrach, R.J.
1989-07-24
A proposed Laboratory Microfusion Facility (LMF) must be able to withstand repeated, low-repetition-rate fusion explosions at the 1-GJ (one-quarter ton) yield level. The energy release will occur at the center of a chamber only a few meters in radius, subjecting the interior or first wall to severe levels of temperature, pressure, and impulse. We show by theory and computation that the wall loading can be ameliorated by interposing a spherical shell of low-Z material between the fuel and the wall. This sacrificial shield converts the source energy components that are most damaging to the wall (soft x-rays and fast ions)more » to more benign plasma kinetic energy from the vaporized shield, and stretches the time duration over which this energy is delivered to the wall from nanoseconds to microseconds. Numerical calculations emphasize thin, volleyball-sized plastic shields, and much thicker ones of frozen nitrogen. Wall shielding criteria of small (or no) amount of surface ablation, low impulse and pressure loading, minimal shrapnel danger, small expense, and convenience in handling all favor the thin plastic shields. 7 refs., 4 figs.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husler, R.O.; Weir, T.J.
1991-01-01
An enhanced maintenance program is being established to characterize and monitor cables, components, and process response at the Savannah River Site, Defense Waste Processing Facility. This facility was designed and constructed to immobilize the radioactive waste currently stored in underground storage tanks and is expected to begin operation in 1993. The plant is initiating the program to baseline and monitor instrument and control (I C) and electrical equipment, remote process equipment, embedded instrument and control cables, and in-cell jumper cables used in the facility. This program is based on the electronic characterization and diagnostic (ECAD) system which was modified tomore » include process response analysis and to meet rigid Department of Energy equipment requirements. The system consists of computer-automated, state-of-the-art electronics. The data that are gathered are stored in a computerized database for analysis, trending, and troubleshooting. It is anticipated that the data which are gathered and trended will aid in life extension for the facility.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husler, R.O.; Weir, T.J.
1991-12-31
An enhanced maintenance program is being established to characterize and monitor cables, components, and process response at the Savannah River Site, Defense Waste Processing Facility. This facility was designed and constructed to immobilize the radioactive waste currently stored in underground storage tanks and is expected to begin operation in 1993. The plant is initiating the program to baseline and monitor instrument and control (I&C) and electrical equipment, remote process equipment, embedded instrument and control cables, and in-cell jumper cables used in the facility. This program is based on the electronic characterization and diagnostic (ECAD) system which was modified to includemore » process response analysis and to meet rigid Department of Energy equipment requirements. The system consists of computer-automated, state-of-the-art electronics. The data that are gathered are stored in a computerized database for analysis, trending, and troubleshooting. It is anticipated that the data which are gathered and trended will aid in life extension for the facility.« less
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
Klimentov, A.; Buncic, P.; De, K.; ...
2015-05-22
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Managementmore » System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10 2) sites, O(10 5) cores, O(10 8) jobs per year, O(10 3) users, and ATLAS data volume is O(10 17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to setup and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.« less
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klimentov, A.; Buncic, P.; De, K.
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Managementmore » System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10 2) sites, O(10 5) cores, O(10 8) jobs per year, O(10 3) users, and ATLAS data volume is O(10 17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to setup and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.« less
Apollo experience report: Real-time auxiliary computing facility development
NASA Technical Reports Server (NTRS)
Allday, C. E.
1972-01-01
The Apollo real time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2017-01-01
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.
A Novel Sensor Platform Matching the Improved Version of IPMVP Option C for Measuring Energy Savings
Tseng, Yen-Chieh; Lee, Da-Sheng; Lin, Cheng-Fang; Chang, Ching-Yuan
2013-01-01
It is easy to measure energy consumption with a power meter. However, energy savings cannot be directly computed by the powers measured using existing power meter technologies, since the power consumption only reflects parts of the real energy flows. The International Performance Measurement and Verification Protocol (IPMVP) was proposed by the Efficiency Valuation Organization (EVO) to quantify energy savings using four different methodologies of A, B, C and D. Although energy savings can be estimated following the IPMVP, there are limitations on its practical implementation. Moreover, the data processing methods of the four IPMVP alternatives use multiple sensors (thermometer, hygrometer, Occupant information) and power meter readings to simulate all facilities, in order to determine an energy usage benchmark and the energy savings. This study proposes a simple sensor platform to measure energy savings. Using usually the Electronic Product Code (EPC) global standard, an architecture framework for an information system is constructed that integrates sensors data, power meter readings and occupancy conditions. The proposed sensor platform is used to monitor a building with a newly built vertical garden system (VGS). A VGS shields solar radiation and saves on energy that would be expended on air-conditioning. With this platform, the amount of energy saved in the whole facility is measured and reported in real-time. The data are compared with those obtained from detailed measurement and verification (M&V) processes. The discrepancy is less than 1.565%. Using measurements from the proposed sensor platform, the energy savings for the entire facility are quantified, with a resolution of ±1.2%. The VGS gives an 8.483% daily electricity saving for the building. Thus, the results show that the simple sensor platform proposed by this study is more widely applicable than the four complicated IPMVP alternatives and the VGS is an effective tool in reducing the carbon footprint of a building. PMID:23698273
ERIC Educational Resources Information Center
Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.
2006-01-01
Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…
Energy Systems Integration Facility Videos | Energy Systems Integration
Facility | NREL Energy Systems Integration Facility Videos Energy Systems Integration Facility Integration Facility NREL + SolarCity: Maximizing Solar Power on Electrical Grids Redefining What's Possible for Renewable Energy: Grid Integration Robot-Powered Reliability Testing at NREL's ESIF Microgrid
Kharrati, Hedi; Agrebi, Amel; Karoui, Mohamed Karim
2012-10-01
A simulation of buildup factors for ordinary concrete, steel, lead, plate glass, lead glass, and gypsum wallboard in broad beam geometry for photons energies from 10 keV to 150 keV at 5 keV intervals is presented. Monte Carlo N-particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. An example concretizing the use of the obtained buildup factors data in computing the broad beam transmission for tube potentials at 70, 100, 120, and 140 kVp is given. The half value layer, the tenth value layer, and the equilibrium tenth value layer are calculated from the broad beam transmission for these tube potentials. The obtained values compared with those calculated from the published data show the ability of these data to predict shielding transmission curves. Therefore, the buildup factors data can be combined with primary, scatter, and leakage x-ray spectra to provide a computationally based solution to broad beam transmission for barriers in shielding x-ray facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharrati, Hedi; Agrebi, Amel; Karoui, Mohamed Karim
2012-10-15
Purpose: A simulation of buildup factors for ordinary concrete, steel, lead, plate glass, lead glass, and gypsum wallboard in broad beam geometry for photons energies from 10 keV to 150 keV at 5 keV intervals is presented. Methods: Monte Carlo N-particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. Results: An example concretizing the use of the obtained buildup factors data in computing the broad beam transmission for tube potentials at 70, 100, 120, and 140 kVp is given. The half value layer, the tenth value layer, and the equilibrium tenthmore » value layer are calculated from the broad beam transmission for these tube potentials. Conclusions: The obtained values compared with those calculated from the published data show the ability of these data to predict shielding transmission curves. Therefore, the buildup factors data can be combined with primary, scatter, and leakage x-ray spectra to provide a computationally based solution to broad beam transmission for barriers in shielding x-ray facilities.« less
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capabilitymore » of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collections of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.« less
On the Reaction Mechanism of Acetaldehyde Decomposition on Mo(110)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Donghai; Karim, Ayman M.; Wang, Yong
2012-02-16
The strong Mo-O bond strength provides promising reactivity of Mo-based catalysts for the deoxygenation of biomass-derived oxygenates. Combining the novel dimer saddle point searching method with periodic spin-polarized density functional theory calculations, we investigated the reaction pathways of a acetaldehyde decomposition on the clean Mo(110) surface. Two reaction pathways were identified, a selective deoxygenation and a nonselective fragmentation pathways. We found that acetaldehyde preferentially adsorbs at the pseudo 3-fold hollow site in the η2(C,O) configuration on Mo(110). Among four possible bond (β-C-H, γ-C-H, C-O and C-C) cleavages, the initial decomposition of the adsorbed acetaldehyde produces either ethylidene via the C-Omore » bond scission or acetyl via the β-C-H bond scission while the C-C and the γ-C-H bond cleavages of acetaldehyde leading to the formation of methyl (and formyl) and formylmethyl are unlikely. Further dehydrogenations of ethylidene into either ethylidyne or vinyl are competing and very facile with low activation barriers of 0.24 and 0.31 eV, respectively. Concurrently, the formed acetyl would deoxygenate into ethylidyne via the C-O cleavage rather than breaking the C-C or the C-H bonds. The selective deoxygenation of acetaldehyde forming ethylene is inhibited by relatively weaker hydrogenation capability of the Mo(110) surface. Instead, the nonselective pathway via vinyl and vinylidene dehydrogenations to ethynyl as the final hydrocarbon fragment is kinetically favorable. On the other hand, the strong interaction between ethylene and the Mo(110) surface also leads to ethylene decomposition instead of desorption into the gas phase. This work was financially supported by the National Advanced Biofuels Consortium (NABC). Computing time was granted by a user project (emsl42292) at the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL). This work was financially supported by the National Advanced Biofuels Consortium (NABC). Computing time was granted by a user project (emsl42292) at the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL). The EMSL is a U.S. Department of Energy (DOE) national scientific user facility located at Pacific Northwest National Laboratory (PNNL) and supported by the DOE Office of Biological and Environmental Research. Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy.« less
Development and applications of nondestructive evaluation at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
Whitaker, Ann F.
1990-01-01
A brief description of facility design and equipment, facility usage, and typical investigations are presented for the following: Surface Inspection Facility; Advanced Computer Tomography Inspection Station (ACTIS); NDE Data Evaluation Facility; Thermographic Test Development Facility; Radiographic Test Facility; Realtime Radiographic Test Facility; Eddy Current Research Facility; Acoustic Emission Monitoring System; Advanced Ultrasonic Test Station (AUTS); Ultrasonic Test Facility; and Computer Controlled Scanning (CONSCAN) System.
Honey Lake Power Facility under construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1988-12-01
Geothermal energy and wood waste are primary energy sources for the 30 megawatt, net, Honey Lake Power Facility, a cogeneration power plant. The facility 60% completed in January 1989, will use 1,300 tons per day of fuel obtained from selective forest thinnings and from logging residue combined with mill wastes. The power plant will be the largest industrial facility to use some of Lassen County's geothermal resources. The facility will produce 236 million kilowatt-hours of electricity annually. The plant consists of a wood-fired traveling grate furnace with a utility-type high pressure boiler. Fluids from a geothermal well will pass throughmore » a heat exchange to preheat boiler feedwater. Used geothermal fluid will be disposed of in an injection well. Steam will be converted to electrical power through a 35.5-megawatt turbine generator and transmitted 22 miles to Susanville over company-owned and maintained transmission lines. The plant includes pollution control for particulate removal, ammonia injection for removal of nitrogen oxides, and computer-controlled combustion systems to control carbon monoxide and hydrocarbons. The highly automated wood yard consists of systems to remove metal, handle oversized material, receive up to six truck loads of wood products per hour, and continuously deliver 58 tons per hour of fuel through redundant systems to ensure maximum on-line performance. The plant is scheduled to become operational in mid-1989.« less
ORNL Sustainable Campus Initiative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halford, Christopher K
2012-01-01
The research conducted at Oak Ridge National Laboratory (ORNL) spans many disciplines and has the potential for far-reaching impact in many areas of everyday life. ORNL researchers and operations staff work on projects in areas as diverse as nuclear power generation, transportation, materials science, computing, and building technologies. As the U.S. Department of Energy s (DOE) largest science and energy research facility, ORNL seeks to establish partnerships with industry in the development of innovative new technologies. The primary focus of this current research deals with developing technologies which improve or maintain the quality of life for humans while reducing themore » overall impact on the environment. In its interactions with industry, ORNL serves as both a facility for sustainable research, as well as a representative of DOE to the private sector. For these reasons it is important that the everyday operations of the Laboratory reflect a dedication to the concepts of stewardship and sustainability.« less
Simulations of Laboratory Astrophysics Experiments using the CRASH code
NASA Astrophysics Data System (ADS)
Trantham, Matthew; Kuranz, Carolyn; Manuel, Mario; Keiter, Paul; Drake, R. P.
2014-10-01
Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze including: Kelvin-Helmholtz, Rayleigh-Taylor, imploding bubbles, and interacting jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via Grant DEFC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0001840, and by the National Laser User Facility Program, Grant Number DE-NA0000850.
Testing activities at the National Battery Test Laboratory
NASA Astrophysics Data System (ADS)
Hornstra, F.; Deluca, W. H.; Mulcahey, T. P.
The National Battery Test Laboratory (NBTL) is an Argonne National Laboratory facility for testing, evaluating, and studying advanced electric storage batteries. The facility tests batteries developed under Department of Energy programs and from private industry. These include batteries intended for future electric vehicle (EV) propulsion, electric utility load leveling (LL), and solar energy storage. Since becoming operational, the NBTL has evaluated well over 1400 cells (generally in the form of three- to six-cell modules, but up to 140-cell batteries) of various technologies. Performance characterization assessments are conducted under a series of charge/discharge cycles with constant current, constant power, peak power, and computer simulated dynamic load profile conditions. Flexible charging algorithms are provided to accommodate the specific needs of each battery under test. Special studies are conducted to explore and optimize charge procedures, to investigate the impact of unique load demands on battery performance, and to analyze the thermal management requirements of battery systems.
A preliminary design study for a cosmic X-ray spectrometer
NASA Technical Reports Server (NTRS)
1972-01-01
The results are described of theoretical and experimental investigations aimed at the development of a curved crystal cosmic X-ray spectrometer to be used at the focal plane of the large orbiting X-ray telescope on the third High Energy Astronomical Observatory. The effort was concentrated on the development of spectrometer concepts and their evaluation by theoretical analysis, computer simulation, and laboratory testing with breadboard arrangements of crystals and detectors. In addition, a computer-controlled facility for precision testing and evaluation of crystals in air and vacuum was constructed. A summary of research objectives and results is included.
Experience in using commercial clouds in CMS
NASA Astrophysics Data System (ADS)
Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration
2017-10-01
Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resource provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.
Experience in using commercial clouds in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauerdick, L.; Bockelman, B.; Dykstra, D.
Historically high energy physics computing has been performed on large purposebuilt computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is amore » growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resource provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoidn, Oliver R.; Seidler, Gerald T., E-mail: seidler@uw.edu
We have integrated mass-produced commercial complementary metal-oxide-semiconductor (CMOS) image sensors and off-the-shelf single-board computers into an x-ray camera platform optimized for acquisition of x-ray spectra and radiographs at energies of 2–6 keV. The CMOS sensor and single-board computer are complemented by custom mounting and interface hardware that can be easily acquired from rapid prototyping services. For single-pixel detection events, i.e., events where the deposited energy from one photon is substantially localized in a single pixel, we establish ∼20% quantum efficiency at 2.6 keV with ∼190 eV resolution and a 100 kHz maximum detection rate. The detector platform’s useful intrinsic energymore » resolution, 5-μm pixel size, ease of use, and obvious potential for parallelization make it a promising candidate for many applications at synchrotron facilities, in laser-heating plasma physics studies, and in laboratory-based x-ray spectrometry.« less
The HEPCloud Facility: elastic computing for High Energy Physics – The NOvA Use Case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuess, S.; Garzoglio, G.; Holzman, B.
The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdown, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a commonmore » interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 25 percent increase - in preparation for the Recontres de Moriond. In March 2016, the NOvA experiment has also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the local allocated resources and utilizing the local AWS s3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper describes the Fermilab HEPCloud Facility and the challenges overcome for the CMS and NOvA communities.« less
The Ames Power Monitoring System
NASA Technical Reports Server (NTRS)
Osetinsky, Leonid; Wang, David
2003-01-01
The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and specialpurpose software that collects and stores electrical power data by various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously, through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low power factor penalties, and to use historical system data to identify opportunities for additional energy savings. The APMS also provides power engineers and electricians with the information they need to plan modifications in advance and perform day-to-day maintenance of the ARC electric-power distribution system.
Dehydration of 1-octadecanol over H-BEA: A combined experimental and computational study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Wenji; Liu, Yuanshuai; Barath, Eszter
Liquid phase dehydration of 1-octdecanol, which is intermediately formed during the hydrodeoxygenation of microalgae oil, has been explored in a combined experimental and computational study. The alkyl chain of C18 alcohol interacts with acid sites during diffusion inside the zeolite pores, resulting in an inefficient utilization of the Brønsted acid sites for samples with high acid site concentrations. The parallel intra- and inter- molecular dehydration pathways having different activation energies pass through alternative reaction intermediates. Formation of surface-bound alkoxide species is the rate-limiting step during intramolecular dehydration, whereas intermolecular dehydration proceeds via a bulky dimer intermediate. Octadecene is the primarymore » dehydration product over H-BEA at 533 K. Despite of the main contribution of Brønsted acid sites towards both dehydration pathways, Lewis acid sites are also active in the formation of dioctadecyl ether. The intramolecular dehydration to octadecene and cleavage of the intermediately formed ether, however, require strong BAS. L. Wang, D. Mei and J. A. Lercher, acknowledge the partial support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC). EMSL is a national scientific user facility located at Pacific Northwest National Laboratory (PNNL) and sponsored by DOE’s Office of Biological and Environmental Research.« less
Take a Tour of Our Facility | Energy Systems Integration Facility | NREL
Take a Tour of Our Facility Take a Tour of Our Facility The Energy Systems Integration Facility Optical Characterization Laboratory System Performance Laboratory Power Systems Integration Laboratory Control Room Energy Storage Laboratory Outdoor Testing Areas Outdoor Testing Areas Energy Systems
New techniques in neutron data measurements above 30 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lisowski, P.W.; Haight, R.C.
1991-01-01
Recent developments in experimental facilities have enabled new techniques for measurements of neutron interactions above 30 MeV. Foremost is the development of both monoenergetic and continuous neutron sources using accelerators in the medium energy region between 100 and 800 MeV. Measurements of the reaction products have been advanced by the continuous improvement in detector systems, electronics and computers. Corresponding developments in particle transport codes and in the theory of nuclear reactions at these energies have allowed more precise design of neutron sources, experimental shielding and detector response. As a result of these improvements, many new measurements are possible and themore » data base in this energy range is expanding quickly.« less
Measurement of formation cross-section of 99Mo from the 98Mo(n,γ) and 100Mo(n,2n) reactions.
Badwar, Sylvia; Ghosh, Reetuparna; Lawriniang, Bioletty M; Vansola, Vibha; Sheela, Y S; Naik, Haladhara; Naik, Yeshwant; Suryanarayana, Saraswatula V; Jyrwa, Betylda; Ganesan, Srinivasan
2017-11-01
The formation cross-section of medical isotope 99 Mo from the 98 Mo(n,γ) reaction at the neutron energy of 0.025eV and from the 100 Mo(n,2n) reaction at the neutron energies of 11.9 and 15.75MeV have been determined by using activation and off-line γ-ray spectrometric technique. The thermal neutron energy of 0.025eV was used from the reactor critical facility at BARC, Mumbai, whereas the average neutron energies of 11.9 and 15.75MeV were generated using 7 Li(p,n) reaction in the Pelletron facility at TIFR, Mumbai. The experimentally determined cross-sections were compared with the evaluated nuclear data libraries of ENDF/B-VII.1, CENDL-3.1, JENDL-4.0 and JEFF-3.2 and are found to be in close agreement. The 100 Mo(n,2n) 99 Mo reaction cross-sections were also calculated theoretically by using TALYS-1.8 and EMPIRE-3.2 computer codes and compared with the experimental data. Copyright © 2017 Elsevier Ltd. All rights reserved.
Self-similarity in high Atwood number Rayleigh-Taylor experiments
NASA Astrophysics Data System (ADS)
Mikhaeil, Mark; Suchandra, Prasoon; Pathikonda, Gokul; Ranjan, Devesh
2017-11-01
Self-similarity is a critical concept in turbulent and mixing flows. In the Rayleigh-Taylor instability, theory and simulations have shown that the flow exhibits properties of self-similarity as the mixing Reynolds number exceeds 20000 and the flow enters the turbulent regime. Here, we present results from the first large Atwood number (0.7) Rayleigh-Taylor experimental campaign for mixing Reynolds number beyond 20000 in an effort to characterize the self-similar nature of the instability. Experiments are performed in a statistically steady gas tunnel facility, allowing for the evaluation of turbulence statistics. A visualization diagnostic is used to study the evolution of the mixing width as the instability grows. This allows for computation of the instability growth rate. For the first time in such a facility, stereoscopic particle image velocimetry is used to resolve three-component velocity information in a plane. Velocity means, fluctuations, and correlations are considered as well as their appropriate scaling. Probability density functions of velocity fields, energy spectra, and higher-order statistics are also presented. The energy budget of the flow is described, including the ratio of the kinetic energy to the released potential energy. This work was supported by the DOE-NNSA SSAA Grant DE-NA0002922.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.
1987-03-01
This document reviews research accomplishments achieved by the staff of the Center for Engineering Systems Advanced Research (CESAR) during the fiscal years 1984 through 1987. The manuscript also describes future CESAR objectives for the 1988-1991 planning horizon, and beyond. As much as possible, the basic research goals are derived from perceived Department of Energy (DOE) needs for increased safety, productivity, and competitiveness in the United States energy producing and consuming facilities. Research areas covered include the HERMIES-II Robot, autonomous robot navigation, hypercube computers, machine vision, and manipulators.
INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorensek, M.; Hamm, L.; Garcia, H.
2011-07-18
Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come frommore » many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.« less
Advanced human-machine interface for collaborative building control
Zheng, Xianjun S.; Song, Zhen; Chen, Yanzi; Zhang, Shaopeng; Lu, Yan
2015-08-11
A system for collaborative energy management and control in a building, including an energy management controller, one or more occupant HMIs that supports two-way communication between building occupants and a facility manager, and between building occupants and the energy management controller, and a facility manager HMI that supports two-way communication between the facility manager and the building occupants, and between the facility manager and the energy management controller, in which the occupant HMI allows building occupants to provide temperature preferences to the facility manager and the energy management controller, and the facility manager HMI allows the facility manager to configure an energy policy for the building as a set of rules and to view occupants' aggregated temperature preferences, and the energy management controller determines an optimum temperature range that resolves conflicting occupant temperature preferences and occupant temperature preferences that conflict with the facility manager's energy policy for the building.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the World- wide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application andmore » the performance that was achieved.« less
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; ...
2016-09-29
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. Finally, this paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application andmore » the performance that was achieved.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. Finally, this paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application andmore » the performance that was achieved.« less
18 CFR 292.205 - Criteria for qualifying cogeneration facilities.
Code of Federal Regulations, 2010 CFR
2010-04-01
... standard. For any topping-cycle cogeneration facility, the useful thermal energy output of the facility... thermal energy output, during the 12-month period beginning with the date the facility first produces... total energy input of natural gas and oil to the facility; or (B) If the useful thermal energy output is...
A radiation and energy budget algorithm for forest canopies
NASA Astrophysics Data System (ADS)
Tunick, A.
2006-01-01
Previously, it was shown that a one-dimensional, physics-based (conservation-law) computer model can provide a useful mathematical representation of the wind flow, temperatures, and turbulence inside and above a uniform forest stand. A key element of this calculation was a radiation and energy budget algorithm (implemented to predict the heat source). However, to keep the earlier publication brief, a full description of the radiation and energy budget algorithm was not given. Hence, this paper presents our equation set for calculating the incoming total radiation at the canopy top as well as the transmission, reflection, absorption, and emission of the solar flux through a forest stand. In addition, example model output is presented from three interesting numerical experiments, which were conducted to simulate the canopy microclimate for a forest stand that borders the Blossom Point Field Test Facility (located near La Plata, Maryland along the Potomac River). It is anticipated that the current numerical study will be useful to researchers and experimental planners who will be collecting acoustic and meteorological data at the Blossom Point Facility in the near future.
A geophysical shock and air blast simulator at the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, K. B.; Brown, C. G.; May, M. J.
2014-09-15
The energy partitioning energy coupling experiments at the National Ignition Facility (NIF) have been designed to measure simultaneously the coupling of energy from a laser-driven target into both ground shock and air blast overpressure to nearby media. The source target for the experiment is positioned at a known height above the ground-surface simulant and is heated by four beams from the NIF. The resulting target energy density and specific energy are equal to those of a low-yield nuclear device. The ground-shock stress waves and atmospheric overpressure waveforms that result in our test system are hydrodynamically scaled analogs of full-scale seismicmore » and air blast phenomena. This report summarizes the development of the platform, the simulations, and calculations that underpin the physics measurements that are being made, and finally the data that were measured. Agreement between the data and simulation of the order of a factor of two to three is seen for air blast quantities such as peak overpressure. Historical underground test data for seismic phenomena measured sensor displacements; we measure the stresses generated in our ground-surrogate medium. We find factors-of-a-few agreement between our measured peak stresses and predictions with modern geophysical computer codes.« less
A geophysical shock and air blast simulator at the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, K. B.; Brown, C. G.; May, M. J.
2014-09-01
The energy partitioning energy coupling experiments at the National Ignition Facility (NIF) have been designed to measure simultaneously the coupling of energy from a laser-driven target into both ground shock and air blast overpressure to nearby media. The source target for the experiment is positioned at a known height above the ground-surface simulant and is heated by four beams from the NIF. The resulting target energy density and specific energy are equal to those of a low-yield nuclear device. The ground-shock stress waves and atmospheric overpressure waveforms that result in our test system are hydrodynamically scaled analogs of full-scale seismicmore » and air blast phenomena. This report summarizes the development of the platform, the simulations, and calculations that underpin the physics measurements that are being made, and finally the data that were measured. Agreement between the data and simulation of the order of a factor of two to three is seen for air blast quantities such as peak overpressure. Historical underground test data for seismic phenomena measured sensor displacements; we measure the stresses generated in our ground-surrogate medium. We find factors-of-a-few agreement between our measured peak stresses and predictions with modern geophysical computer codes.« less
Central Computational Facility CCF communications subsystem options
NASA Technical Reports Server (NTRS)
Hennigan, K. B.
1979-01-01
A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.
Academic Computing Facilities and Services in Higher Education--A Survey.
ERIC Educational Resources Information Center
Warlick, Charles H.
1986-01-01
Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…
Natural phenomena hazards design and evaluation criteria for Department of Energy Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-01-01
The Department of Energy (DOE) has issued an Order 420.1 which establishes policy for its facilities in the event of natural phenomena hazards (NPH) along with associated NPH mitigation requirements. This DOE Standard gives design and evaluation criteria for NPH effects as guidance for implementing the NPH mitigation requirements of DOE Order 420.1 and the associated implementation Guides. These are intended to be consistent design and evaluation criteria for protection against natural phenomena hazards at DOE sites throughout the United States. The goal of these criteria is to assure that DOE facilities can withstand the effects of natural phenomena suchmore » as earthquakes, extreme winds, tornadoes, and flooding. These criteria apply to the design of new facilities and the evaluation of existing facilities. They may also be used for modification and upgrading of existing facilities as appropriate. The design and evaluation criteria presented herein control the level of conservatism introduced in the design/evaluation process such that earthquake, wind, and flood hazards are treated on a consistent basis. These criteria also employ a graded approach to ensure that the level of conservatism and rigor in design/evaluation is appropriate for facility characteristics such as importance, hazards to people on and off site, and threat to the environment. For each natural phenomena hazard covered, these criteria consist of the following: Performance Categories and target performance goals as specified in the DOE Order 420.1 NPH Implementation Guide, and DOE-STD-1 021; specified probability levels from which natural phenomena hazard loading on structures, equipment, and systems is developed; and design and evaluation procedures to evaluate response to NPH loads and criteria to assess whether or not computed response is permissible.« less
National Synchrotron Light Source annual report 1991
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulbert, S.L.; Lazarz, N.M.
1992-04-01
This report discusses the following research conducted at NSLS: atomic and molecular science; energy dispersive diffraction; lithography, microscopy and tomography; nuclear physics; UV photoemission and surface science; x-ray absorption spectroscopy; x-ray scattering and crystallography; x-ray topography; workshop on surface structure; workshop on electronic and chemical phenomena at surfaces; workshop on imaging; UV FEL machine reviews; VUV machine operations; VUV beamline operations; VUV storage ring parameters; x-ray machine operations; x-ray beamline operations; x-ray storage ring parameters; superconducting x-ray lithography source; SXLS storage ring parameters; the accelerator test facility; proposed UV-FEL user facility at the NSLS; global orbit feedback systems; and NSLSmore » computer system.« less
Hanford Site Composite Analysis Technical Approach Description: Integrated Computational Framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, K. J.
2017-09-14
The U.S. Department of Energy (DOE) in DOE O 435.1 Chg. 1, Radioactive Waste Management, requires the preparation and maintenance of a composite analysis (CA). The primary purpose of the CA is to provide a reasonable expectation that the primary public dose limit is not likely to be exceeded by multiple source terms that may significantly interact with plumes originating at a low-level waste disposal facility. The CA is used to facilitate planning and land use decisions that help assure disposal facility authorization will not result in long-term compliance problems; or, to determine management alternatives, corrective actions, or assessment needsmore » if potential problems are identified.« less
Large Scale Computing and Storage Requirements for High Energy Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard A.; Wasserman, Harvey
2010-11-24
The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. Themore » effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Klimentov, A
2016-01-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Managementmore » System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), MIRA supercomputer at Argonne Leadership Computing Facilities (ALCF), Supercomputer at the National Research Center Kurchatov Institute , IT4 in Ostrava and others). Current approach utilizes modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for ALICE and ATLAS experiments and it is in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.« less
Evaluation of radiological dispersion/consequence codes supporting DOE nuclear facility SARs
DOE Office of Scientific and Technical Information (OSTI.GOV)
O`Kula, K.R.; Paik, I.K.; Chung, D.Y.
1996-12-31
Since the early 1990s, the authorization basis documentation of many U.S. Department of Energy (DOE) nuclear facilities has been upgraded to comply with DOE orders and standards. In this process, many safety analyses have been revised. Unfortunately, there has been nonuniform application of software, and the most appropriate computer and engineering methodologies often are not applied. A DOE Accident Phenomenology and Consequence (APAC) Methodology Evaluation Program was originated at the request of DOE Defense Programs to evaluate the safety analysis methodologies used in nuclear facility authorization basis documentation and to define future cost-effective support and development initiatives. Six areas, includingmore » source term development (fire, spills, and explosion analysis), in-facility transport, and dispersion/ consequence analysis (chemical and radiological) are contained in the APAC program. The evaluation process, codes considered, key results, and recommendations for future model and software development of the Radiological Dispersion/Consequence Working Group are summarized in this paper.« less
Specialized computer architectures for computational aerodynamics
NASA Technical Reports Server (NTRS)
Stevenson, D. K.
1978-01-01
In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relative high cost of performing these computations on commercially available general purpose computers, a cost high with respect to dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheppy, M.; Lobato, C.; Van Geet, O.
2011-12-01
This publication detailing the design, implementation strategies, and continuous performance monitoring of NREL's Research Support Facility data center. Data centers are energy-intensive spaces that facilitate the transmission, receipt, processing, and storage of digital data. These spaces require redundancies in power and storage, as well as infrastructure, to cool computing equipment and manage the resulting waste heat (Tschudi, Xu, Sartor, and Stein, 2003). Data center spaces can consume more than 100 times the energy of standard office spaces (VanGeet 2011). The U.S. Environmental Protection Agency (EPA) reported that data centers used 61 billion kilowatt-hours (kWh) in 2006, which was 1.5% ofmore » the total electricity consumption in the U.S. (U.S. EPA, 2007). Worldwide, data centers now consume more energy annually than Sweden (New York Times, 2009). Given their high energy consumption and conventional operation practices, there is a potential for huge energy savings in data centers. The National Renewable Energy Laboratory (NREL) is world renowned for its commitment to green building construction. In June 2010, the laboratory finished construction of a 220,000-square-foot (ft{sup 2}), LEED Platinum, Research Support Facility (RSF), which included a 1,900-ft{sup 2} data center. The RSF will expand to 360,000 ft{sup 2} with the opening of an additional wing December, 2011. The project's request for proposals (RFP) set a whole-building demand-side energy use requirement of a nominal 35 kBtu/ft{sup 2} per year. On-site renewable energy generation will offset the annual energy consumption. To support the RSF's energy goals, NREL's new data center was designed to minimize its energy footprint without compromising service quality. Several implementation challenges emerged during the design, construction, and first 11 months of operation of the RSF data center. This document highlights these challenges and describes in detail how NREL successfully overcame them. The IT settings and strategies outlined in this document have been used to significantly reduce data center energy requirements in the RSF; however, these can also be used in existing buildings and retrofits.« less
Active Flow Control in an Aggressive Transonic Diffuser
NASA Astrophysics Data System (ADS)
Skinner, Ryan W.; Jansen, Kenneth E.
2017-11-01
A diffuser exchanges upstream kinetic energy for higher downstream static pressure by increasing duct cross-sectional area. The resulting stream-wise and span-wise pressure gradients promote extensive separation in many diffuser configurations. The present computational work evaluates active flow control strategies for separation control in an asymmetric, aggressive diffuser of rectangular cross-section at inlet Mach 0.7 and Re 2.19M. Corner suction is used to suppress secondary flows, and steady/unsteady tangential blowing controls separation on both the single ramped face and the opposite flat face. We explore results from both Spalart-Allmaras RANS and DDES turbulence modeling frameworks; the former is found to miss key physics of the flow control mechanisms. Simulated baseline, steady, and unsteady blowing performance is validated against experimental data. Funding was provided by Northrop Grumman Corporation, and this research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
NASA Technical Reports Server (NTRS)
Alley, C. O.; Rayner, J. D.; Steggerda, C. A.; Mullendore, J. V.; Small, L.; Wagner, S.
1983-01-01
A horizontal two-way time comparison link in air between the University of Maryland laser ranging and time transfer equipment at the Goddard Optical Research Facility (GORF) 1.2 m telescope and the Time Services Division of the U.S. Naval Observatory (USNO) was established. Flat mirrors of 25 cm and 30 cm diameter respectively were placed on top of the Washington Cathedral and on a water tower at the Beltsville Agricultural Research Center. Two optical corner reflectors at the USNO reflect the laser pulses back to the GORF. Light pulses of 100 ps duration and an energy of several hundred microjoules are sent at the rate of 10 pulses per second. The detection at the USNO is by means of an RCA C30902E avalanche photodiode and the timing is accomplished by an HP 5370A computing counter and an HP 1000 computer with respect to a 10 pps pulse train from the Master Clock.
ERIC Educational Resources Information Center
Siu, Kin Wai Michael; Lam, Mei Seung
2012-01-01
Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…
Energy and Educational Facilities: Costs and Conservation.
ERIC Educational Resources Information Center
Educational Facilities Labs., Inc., New York, NY.
An analysis of energy costs and conservation in educational facilities in the United States is presented in this report. Tables and text give dollar figures for energy expenditures in education since the first oil embargo. Energy conservation through facilities management and through facilities modification is stressed. Recommendations are…
15 CFR 923.13 - Energy facility planning process.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Energy facility planning process. 923... RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Uses Subject to Management § 923.13 Energy facility planning process. The management program must contain a planning process for energy facilities...
15 CFR 923.13 - Energy facility planning process.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 15 Commerce and Foreign Trade 3 2012-01-01 2012-01-01 false Energy facility planning process. 923... RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Uses Subject to Management § 923.13 Energy facility planning process. The management program must contain a planning process for energy facilities...
15 CFR 923.13 - Energy facility planning process.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 15 Commerce and Foreign Trade 3 2013-01-01 2013-01-01 false Energy facility planning process. 923... RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Uses Subject to Management § 923.13 Energy facility planning process. The management program must contain a planning process for energy facilities...
15 CFR 923.13 - Energy facility planning process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Energy facility planning process. 923... RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Uses Subject to Management § 923.13 Energy facility planning process. The management program must contain a planning process for energy facilities...
15 CFR 923.13 - Energy facility planning process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 15 Commerce and Foreign Trade 3 2014-01-01 2014-01-01 false Energy facility planning process. 923... RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Uses Subject to Management § 923.13 Energy facility planning process. The management program must contain a planning process for energy facilities...
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
NASA Astrophysics Data System (ADS)
Beck, A.; Frederiksen, J. T.; Dérouillat, J.
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacon, Charles; Bell, Greg; Canon, Shane
The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SCmore » organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.« less
ASCR/HEP Exascale Requirements Review Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; Gerber, Richard
This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June, 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude -- and in some cases greater -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability, of both facilities and researchers, tomore » store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.« less
ASCR/HEP Exascale Requirements Review Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; et al.
2016-03-30
This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June, 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude -- and in some cases greater -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability, of both facilities and researchers, tomore » store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.« less
2012-06-16
Engineers to help identify and develop energy and water conservation projects in the facilities for which they are responsible. DISCLAIMER: The...and water throughout their facility. To identify energy and water conservation measures (ECMs), an energy manager would generally start by performing...an Energy and Water Conservation Assessment, essentially a facility-level evaluation of the en- ergy and water consuming equipment and systems that
10 CFR 451.4 - What is a qualified renewable energy facility.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false What is a qualified renewable energy facility. 451.4 Section 451.4 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION RENEWABLE ENERGY PRODUCTION INCENTIVES § 451.4 What is a qualified renewable energy facility. In order to qualify for an incentive payment under...
10 CFR 451.4 - What is a qualified renewable energy facility.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false What is a qualified renewable energy facility. 451.4 Section 451.4 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION RENEWABLE ENERGY PRODUCTION INCENTIVES § 451.4 What is a qualified renewable energy facility. In order to qualify for an incentive payment under...
10 CFR 451.4 - What is a qualified renewable energy facility.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false What is a qualified renewable energy facility. 451.4 Section 451.4 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION RENEWABLE ENERGY PRODUCTION INCENTIVES § 451.4 What is a qualified renewable energy facility. In order to qualify for an incentive payment under...
10 CFR 451.4 - What is a qualified renewable energy facility.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false What is a qualified renewable energy facility. 451.4 Section 451.4 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION RENEWABLE ENERGY PRODUCTION INCENTIVES § 451.4 What is a qualified renewable energy facility. In order to qualify for an incentive payment under...
10 CFR 451.4 - What is a qualified renewable energy facility.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false What is a qualified renewable energy facility. 451.4 Section 451.4 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION RENEWABLE ENERGY PRODUCTION INCENTIVES § 451.4 What is a qualified renewable energy facility. In order to qualify for an incentive payment under...
CT Scanning and Geophysical Measurements of the Marcellus Formation from the Tippens 6HS Well
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crandall, Dustin; Paronish, Thomas; Brown, Sarah
The computed tomography (CT) facilities and the Multi-Sensor Core Logger (MSCL) at the National Energy Technology Laboratory (NETL) Morgantown, West Virginia site were used to characterize core of the Marcellus Shale from a vertical well drilled in Eastern Ohio. The core is from the Tippens 6HS Well in Monroe County, Ohio and is comprised primarily of the Marcellus Shale from depths of 5550 to 5663 ft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, G.C.; Stevens, P.R.; Rittenberg, A.
A compilation is presented of reaction data taken from experimental high energy physics journal articles, reports, preprints, theses, and other sources. Listings of all the data are given, and the data points are indexed by reaction and momentum, as well as by their source document. Much of the original compilation was done by others working in the field. The data presented also exist in the form of a computer-readable and searchable database; primitive access facilities for this database are available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munro, J.K. Jr.
1980-05-01
The advent of large, fast computers has opened the way to modeling more complex physical processes and to handling very large quantities of experimental data. The amount of information that can be processed in a short period of time is so great that use of graphical displays assumes greater importance as a means of displaying this information. Information from dynamical processes can be displayed conveniently by use of animated graphics. This guide presents the basic techniques for generating black and white animated graphics, with consideration of aesthetic, mechanical, and computational problems. The guide is intended for use by someone whomore » wants to make movies on the National Magnetic Fusion Energy Computing Center (NMFECC) CDC-7600. Problems encountered by a geographically remote user are given particular attention. Detailed information is given that will allow a remote user to do some file checking and diagnosis before giving graphics files to the system for processing into film in order to spot problems without having to wait for film to be delivered. Source listings of some useful software are given in appendices along with descriptions of how to use it. 3 figures, 5 tables.« less
18 CFR 292.204 - Criteria for qualifying small power production facilities.
Code of Federal Regulations, 2011 CFR
2011-04-01
... FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE PUBLIC UTILITY REGULATORY... production capacity of any other small power production facilities that use the same energy resource, are... production facilities within one mile of such facilities. (b) Fuel use. (1)(i) The primary energy source of...
PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)
NASA Astrophysics Data System (ADS)
Vincenti, Henri
2016-03-01
The advent of exascale computers will enable 3D simulations of a new laser-plasma interaction regimes that were previously out of reach of current Petasale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potentialities of these new computing architectures. Indeed, achieving Exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy consuming part of an algorithm future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute nodes (''fat nodes'') that will have a reduced clock speed to allow for efficient cooling. To compensate for frequency decrease, CPU machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPU's also have a reduced clock speed per core and can process Multiple Instructions on Multiple Datas (MIMD). At the software level Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for Multicore/Manycore CPU) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high performance skeleton PIC code PICSAR to both achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the Pseudo-sepctral quasi-cylindrical code FBPIC on GPUs using the Numba python compiler.
LLNL NESHAPs 2015 Annual Report - June 2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, K. R.; Gallegos, G. M.; MacQueen, D. H.
2016-06-01
Lawrence Livermore National Security, LLC operates facilities at Lawrence Livermore National Laboratory (LLNL) in which radionuclides are handled and stored. These facilities are subject to the U.S. Environmental Protection Agency (EPA) National Emission Standards for Hazardous Air Pollutants (NESHAPs) in Code of Federal Regulations (CFR) Title 40, Part 61, Subpart H, which regulates radionuclide emissions to air from Department of Energy (DOE) facilities. Specifically, NESHAPs limits the emission of radionuclides to the ambient air to levels resulting in an annual effective dose equivalent of 10 mrem (100 μSv) to any member of the public. Using measured and calculated emissions, andmore » building-specific and common parameters, LLNL personnel applied the EPA-approved computer code, CAP88-PC, Version 4.0.1.17, to calculate the dose to the maximally exposed individual member of the public for the Livermore Site and Site 300.« less
NASA Astrophysics Data System (ADS)
Recommended priorities for astronomy and astrophysics in the 1980s are considered along with the frontiers of astrophysics, taking into account large-scale structure in the universe, the evolution of galaxies, violent events, the formation of stars and planets, solar and stellar activity, astronomy and the forces of nature, and planets, life, and intelligence. Approved, continuing, and previously recommended programs are related to the Space Telescope and the associated Space Telescope Science Institute, second-generation instrumentation for the Space Telescope, and Gamma Ray Observatory, facilities for the detection of solar neutrinos, and the Shuttle Infrared Telescope Facility. Attention is given to the prerequisites for new research initiatives, new programs, programs for study and development, high-energy astrophysics, radio astronomy, theoretical and laboratory astrophysics, data processing and computational facilities, organization and education, and ultraviolet, optical, and infrared astronomy.
NASA Astrophysics Data System (ADS)
Nora, R.; Field, J. E.; Peterson, J. Luc; Spears, B.; Kruse, M.; Humbird, K.; Gaffney, J.; Springer, P. T.; Brandon, S.; Langer, S.
2017-10-01
We present an experimentally corroborated hydrodynamic extrapolation of several recent BigFoot implosions on the National Ignition Facility. An estimate on the value and error of the hydrodynamic scale necessary for ignition (for each individual BigFoot implosion) is found by hydrodynamically scaling a distribution of multi-dimensional HYDRA simulations whose outputs correspond to their experimental observables. The 11-parameter database of simulations, which include arbitrary drive asymmetries, dopant fractions, hydrodynamic scaling parameters, and surface perturbations due to surrogate tent and fill-tube engineering features, was computed on the TRINITY supercomputer at Los Alamos National Laboratory. This simple extrapolation is the first step in providing a rigorous calibration of our workflow to provide an accurate estimate of the efficacy of achieving ignition on the National Ignition Facility. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Flying a College on the Computer. The Use of the Computer in Planning Buildings.
ERIC Educational Resources Information Center
Saint Louis Community Coll., MO.
Upon establishment of the St. Louis Junior College District, it was decided to make use of computer si"ulation facilities of a nearby aero-space contractor to develop a master schedule for facility planning purposes. Projected enrollments and course offerings were programmed with idealized student-teacher ratios to project facility needs. In…
NASA Astrophysics Data System (ADS)
Xu, Jun
Topic 1. An Optimization-Based Approach for Facility Energy Management with Uncertainties. Effective energy management for facilities is becoming increasingly important in view of the rising energy costs, the government mandate on the reduction of energy consumption, and the human comfort requirements. This part of dissertation presents a daily energy management formulation and the corresponding solution methodology for HVAC systems. The problem is to minimize the energy and demand costs through the control of HVAC units while satisfying human comfort, system dynamics, load limit constraints, and other requirements. The problem is difficult in view of the fact that the system is nonlinear, time-varying, building-dependent, and uncertain; and that the direct control of a large number of HVAC components is difficult. In this work, HVAC setpoints are the control variables developed on top of a Direct Digital Control (DDC) system. A method that combines Lagrangian relaxation, neural networks, stochastic dynamic programming, and heuristics is developed to predict the system dynamics and uncontrollable load, and to optimize the setpoints. Numerical testing and prototype implementation results show that our method can effectively reduce total costs, manage uncertainties, and shed the load, is computationally efficient. Furthermore, it is significantly better than existing methods. Topic 2. Power Portfolio Optimization in Deregulated Electricity Markets with Risk Management. In a deregulated electric power system, multiple markets of different time scales exist with various power supply instruments. A load serving entity (LSE) has multiple choices from these instruments to meet its load obligations. In view of the large amount of power involved, the complex market structure, risks in such volatile markets, stringent constraints to be satisfied, and the long time horizon, a power portfolio optimization problem is of critical importance but difficulty for an LSE to serve the load, maximize its profit, and manage risks. In this topic, a mid-term power portfolio optimization problem with risk management is presented. Key instruments are considered, risk terms based on semi-variances of spot market transactions are introduced, and penalties on load obligation violations are added to the objective function to improve algorithm convergence and constraint satisfaction. To overcome the inseparability of the resulting problem, a surrogate optimization framework is developed enabling a decomposition and coordination approach. Numerical testing results show that our method effectively provides decisions for various instruments to maximize profit, manage risks, and is computationally efficient.
Ammonia Oxidation by Abstraction of Three Hydrogen Atoms from a Mo–NH 3 Complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharya, Papri; Heiden, Zachariah M.; Wiedner, Eric S.
We report ammonia oxidation by homolytic cleavage of all three H atoms from a Mo-15NH3 complex using the 2,4,6-tri-tert-butylphenoxyl radical to afford a Mo-alkylimido (Mo=15NR) complex (R = 2,4,6-tri-t-butylcyclohexa-2,5-dien-1-one). Reductive cleavage of Mo=15NR generates a terminal Mo≡N nitride, and a [Mo-15NH]+ complex is formed by protonation. Computational analysis describes the energetic profile for the stepwise removal of three H atoms from the Mo-15NH3 complex and the formation of Mo=15NR. Acknowledgment. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Re-search Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Officemore » of Basic Energy Sciences. EPR and mass spectrometry experiments were performed using EMSL, a national scientific user facility sponsored by the DOE’s Office of Biological and Environmental Research and located at PNNL. The authors thank Dr. Eric D. Walter and Dr. Rosalie Chu for assistance in performing EPR and mass spectroscopy analysis, respectively. Computational resources provided by the National Energy Re-search Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific North-west National Laboratory is operated by Battelle for the U.S. DOE.« less
High Energy Astronomical Data Processing and Analysis via the Internet
NASA Astrophysics Data System (ADS)
Valencic, Lynne A.; Snowden, S.; Pence, W.
2012-01-01
The HEASARC at NASA Goddard Space Flight Center and the US XMM-Newton GOF has developed Hera, a data processing facility for analyzing high energy astronomical data over the internet. Hera provides all the disk space and computing resources needed to do general processing of and advanced research on publicly available data from High Energy Astrophysics missions. The data and data products are kept on a server at GSFC and can be downloaded to a user's local machine. Further, the XMM-GOF has developed scripts to streamline XMM data reduction. These are available through Hera, and can also be downloaded to a user's local machine. These are free services provided to students, educators, and researchers for educational and research purposes.
Energy Systems Integration Facility (ESIF) Facility Stewardship Plan: Revision 2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torres, Juan; Anderson, Art
The U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE), has established the Energy Systems Integration Facility (ESIF) on the campus of the National Renewable Energy Laboratory (NREL) and has designated it as a DOE user facility. This 182,500-ft2 research facility provides state-of-the-art laboratory and support infrastructure to optimize the design and performance of electrical, thermal, fuel, and information technologies and systems at scale. This Facility Stewardship Plan provides DOE and other decision makers with information about the existing and expected capabilities of the ESIF and the expected performance metrics to be applied to ESIF operations.more » This plan is a living document that will be updated and refined throughout the lifetime of the facility.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, A.
2001-05-16
Greening Federal Facilities, Second Edition, is a nuts-and-bolts resource guide compiled to increase energy and resource efficiency, cut waste, and improve the performance of Federal buildings and facilities. The guide highlights practical actions that facility managers, design and construction staff, procurement officials, and facility planners can take to save energy and money, improve the comfort and productivity of employees, and benefit the environment. It supports a national effort to promote energy and environmental efficiency in the nation's 500,000 Federal buildings and facilities. Topics covered include current Federal regulations; environmental and energy decision-making; site and landscape issues; building design; energy systems;more » water and wastewater; materials; waste management, and recycling; indoor environmental quality; and managing buildings.« less
Heavy ion linear accelerator for radiation damage studies of materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutsaev, Sergey V.; Mustapha, Brahim; Ostroumov, Peter N.
A new eXtreme MATerial (XMAT) research facility is being proposed at Argonne National Laboratory to enable rapid in situ mesoscale bulk analysis of ion radiation damage in advanced materials and nuclear fuels. This facility combines a new heavy-ion accelerator with the existing high-energy X-ray analysis capability of the Argonne Advanced Photon Source. The heavy-ion accelerator and target complex will enable experimenters to emulate the environment of a nuclear reactor making possible the study of fission fragment damage in materials. Material scientists will be able to use the measured material parameters to validate computer simulation codes and extrapolate the response ofmore » the material in a nuclear reactor environment. Utilizing a new heavy-ion accelerator will provide the appropriate energies and intensities to study these effects with beam intensities which allow experiments to run over hours or days instead of years. The XMAT facility will use a CW heavy-ion accelerator capable of providing beams of any stable isotope with adjustable energy up to 1.2 MeV/u for U-238(50+) and 1.7 MeV for protons. This energy is crucial to the design since it well mimics fission fragments that provide the major portion of the damage in nuclear fuels. The energy also allows damage to be created far from the surface of the material allowing bulk radiation damage effects to be investigated. The XMAT ion linac includes an electron cyclotron resonance ion source, a normal-conducting radio-frequency quadrupole and four normal-conducting multi-gap quarter-wave resonators operating at 60.625 MHz. This paper presents the 3D multi-physics design and analysis of the accelerating structures and beam dynamics studies of the linac.« less
Heavy ion linear accelerator for radiation damage studies of materials
NASA Astrophysics Data System (ADS)
Kutsaev, Sergey V.; Mustapha, Brahim; Ostroumov, Peter N.; Nolen, Jerry; Barcikowski, Albert; Pellin, Michael; Yacout, Abdellatif
2017-03-01
A new eXtreme MATerial (XMAT) research facility is being proposed at Argonne National Laboratory to enable rapid in situ mesoscale bulk analysis of ion radiation damage in advanced materials and nuclear fuels. This facility combines a new heavy-ion accelerator with the existing high-energy X-ray analysis capability of the Argonne Advanced Photon Source. The heavy-ion accelerator and target complex will enable experimenters to emulate the environment of a nuclear reactor making possible the study of fission fragment damage in materials. Material scientists will be able to use the measured material parameters to validate computer simulation codes and extrapolate the response of the material in a nuclear reactor environment. Utilizing a new heavy-ion accelerator will provide the appropriate energies and intensities to study these effects with beam intensities which allow experiments to run over hours or days instead of years. The XMAT facility will use a CW heavy-ion accelerator capable of providing beams of any stable isotope with adjustable energy up to 1.2 MeV/u for 238U50+ and 1.7 MeV for protons. This energy is crucial to the design since it well mimics fission fragments that provide the major portion of the damage in nuclear fuels. The energy also allows damage to be created far from the surface of the material allowing bulk radiation damage effects to be investigated. The XMAT ion linac includes an electron cyclotron resonance ion source, a normal-conducting radio-frequency quadrupole and four normal-conducting multi-gap quarter-wave resonators operating at 60.625 MHz. This paper presents the 3D multi-physics design and analysis of the accelerating structures and beam dynamics studies of the linac.
Method and computer program product for maintenance and modernization backlogging
Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M
2013-02-19
According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
Analyzing industrial energy use through ordinary least squares regression models
NASA Astrophysics Data System (ADS)
Golden, Allyson Katherine
Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. The thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone that possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables. With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and production behavior, and identify opportunities for energy and cost savings. This thesis study also utilizes change-point and degree-day baseline energy models to disaggregate facility annual energy consumption into separate industrial end-user categories. The baseline energy model provides a suitable and economical alternative to sub-metering individual manufacturing equipment. One case study describes the conjoined use of baseline energy models and facility information gathered during a one-day onsite visit to perform an end-point energy analysis of an injection molding facility conducted by the Alabama Industrial Assessment Center. Applying baseline regression model results to the end-point energy analysis allowed the AIAC to better approximate the annual energy consumption of the facility's HVAC system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-03-17
This report summarizes all work for the Energy Survey of Army Industrial Facilities, Energy Engineering Analysis Program (EEAP) at the Western Area Demilitarization Facility (WADF) of the Hawthorne Army Ammunition Plant (HWAAP), Hawthorne, Nevada, authorized under Contract No. DACA05-92-C-0155 with the U.S. Army Corps of Engineers, Sacramento District, California. The purpose of this energy survey is to develop a set of projects and actions that will reduce energy consumption and operating costs of selected facilities at the WADF. A preliminary inspection of facilities at WADF by Keller Gannon that identified potential retrofit opportunities was submitted as the EEAP Study andmore » Criteria Review in December 1993. This document formed the basis of the Detailed Scope of Work for this study. Facilities included in the survey and study, together with operational status.« less
Acid/base equilibria in clusters and their role in proton exchange membranes: Computational insight
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glezakou, Vanda A; Dupuis, Michel; Mundy, Christopher J
2007-10-24
We describe molecular orbital theory and ab initio molecular dynamics studies of acid/base equilibria of clusters AH:(H 2O) n↔A -:H +(H 2O) n in low hydration regime (n = 1-4), where AH is a model of perfluorinated sulfonic acids, RSO 3H (R = CF 3CF 2), encountered in polymeric electrolyte membranes of fuel cells. Free energy calculations on the neutral and ion pair structures for n = 3 indicate that the two configurations are close in energy and are accessible in the fluctuation dynamics of proton transport. For n = 1,2 the only relevant configuration is the neutral form. Thismore » was verified through ab initio metadynamics simulations. These findings suggest that bases are directly involved in the proton transport at low hydration levels. In addition, the gas phase proton affinity of the model sulfonic acid RSO 3H was found to be comparable to the proton affinity of water. Thus, protonated acids can also play a role in proton transport under low hydration conditions and under high concentration of protons. This work was supported by the Division of Chemical Science, Office of Basic Energy Sciences, US Department of Energy (DOE under Contract DE-AC05-76RL)1830. Computations were performed on computers of the Molecular Interactions and Transformations (MI&T) group and MSCF facility of EMSL, sponsored by US DOE and OBER located at PNNL. This work was benefited from resource of the National Energy Research Scientific Computing Centre, supported by the Office of Science of the US DOE, under Contract No. DE-AC03-76SF00098.« less
Scidac-Data: Enabling Data Driven Modeling of Exascale Computing
Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; ...
2017-11-23
Here, the SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulationsmore » are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.« less
Scidac-Data: Enabling Data Driven Modeling of Exascale Computing
NASA Astrophysics Data System (ADS)
Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; Tsaris, Aristeidis; Norman, Andrew; Lyon, Adam; Ross, Robert
2017-10-01
The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.
Scidac-Data: Enabling Data Driven Modeling of Exascale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo
Here, the SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulationsmore » are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.« less
Energy Systems Integration Facility Overview
Arvizu, Dan; Chistensen, Dana; Hannegan, Bryan; Garret, Bobi; Kroposki, Ben; Symko-Davies, Martha; Post, David; Hammond, Steve; Kutscher, Chuck; Wipke, Keith
2018-01-16
The U.S. Department of Energy's Energy Systems Integration Facility (ESIF) is located at the National Renewable Energy Laboratory is the right tool, at the right time... a first-of-its-kind facility that addresses the challenges of large-scale integration of clean energy technologies into the energy systems that power the nation.
Microscopic heavy-ion theory. Final Report. February 2014-June 2015
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ernst, David J.; Oberacker, Volker E.; Umar, A. Sait
The Vanderbilt nuclear theory group conducts research in the areas of low-energy nuclear reactions and in neutrino oscillations. Specically, we study dynamics of nuclear reactions microscopically, in particular for neutron-rich nuclei which will be accessible with current and future radioactive ion beam facilities. The neutrino work concentrates on constructing computational tools for analyzing neutrino oscillation data. The most important of these is the analysis of the Super K atmospheric data. Our research concentrates on the following topics which are part of the DOE Long-Range Plan: STUDIES OF LOW-ENERGY REACTIONS OF EXOTIC NUCLEI (Professors Umar and Oberacker), including sub-barrier fusion crossmore » sections, capture cross sections for superheavy element production, and nuclear astrophysics applications. Our theory project is strongly connected to experiments at RIB facilities around the world, including NSCL-FRIB (MSU) and ATLAS-CARIBU (Argonne). PHENOMENOLOGY OF NEUTRINO OSCILLATIONS (Prof. Ernst), extracting information from existing neutrino oscillation experiments and proposing possible future experiments in order to better understand the oscillation phenomenon.« less
Recent skyshine calculations at Jefferson Lab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degtyarenko, P.
1997-12-01
New calculations of the skyshine dose distribution of neutrons and secondary photons have been performed at Jefferson Lab using the Monte Carlo method. The dose dependence on neutron energy, distance to the neutron source, polar angle of a source neutron, and azimuthal angle between the observation point and the momentum direction of a source neutron have been studied. The azimuthally asymmetric term in the skyshine dose distribution is shown to be important in the dose calculations around high-energy accelerator facilities. A parameterization formula and corresponding computer code have been developed which can be used for detailed calculations of the skyshinemore » dose maps.« less
New NREL Research Facility Slashes Energy Use by 66 Percent
Thermal Test Facility, which serves as a showcase of energy-saving features and the home of NREL's cutting technologies now being developed at the Thermal Test Facility will help us reach this goal." The facility energy-efficient building design, NREL's Thermal Test Facility houses sophisticated equipment for
Low energy ion beam dynamics of NANOGAN ECR ion source
NASA Astrophysics Data System (ADS)
Kumar, Sarvesh; Mandal, A.
2016-04-01
A new low energy ion beam facility (LEIBF) has been developed for providing the mass analyzed highly charged intense ion beams of energy ranging from a few tens of keV to a few MeV for atomic, molecular and materials sciences research. The new facility consists of an all permanent magnet 10 GHz electron cyclotron resonance (ECR) ion source (NANOGAN) installed on a high voltage platform (400 kV) which provides large currents of multiply charged ion beams. Higher emittance at low energy of intense ion beam puts a tremendous challenge to the beam optical design of this facility. The beam line consists of mainly the electrostatic quadrupoles, an accelerating section, analyzing cum switching magnet and suitable beam diagnostics including vacuum components. The accelerated ion beam is analyzed for a particular mass to charge (m/q) ratio as well as guided to three different lines along 75°, 90° and 105° using a large acceptance analyzing cum switching magnet. The details of transverse beam optics to all the beam lines with TRANSPORT and GICOSY beam optics codes are being described. Field computation code, OPERA 3D has been utilized to design the magnets and electrostatic quadrupoles. A theoretical estimation of emittance for optimized geometry of ion source is given so as to form the basis of beam optics calculations. The method of quadrupole scan of the beam is used to characterize the emittance of the final beam on the target. The measured beam emittance increases with m/q ratios of various ion beams similar to the trend observed theoretically.
NASA Technical Reports Server (NTRS)
Biggerstaff, J. A. (Editor)
1985-01-01
Topics related to physics instrumentation are discussed, taking into account cryostat and electronic development associated with multidetector spectrometer systems, the influence of materials and counting-rate effects on He-3 neutron spectrometry, a data acquisition system for time-resolved muscle experiments, and a sensitive null detector for precise measurements of integral linearity. Other subjects explored are concerned with space instrumentation, computer applications, detectors, instrumentation for high energy physics, instrumentation for nuclear medicine, environmental monitoring and health physics instrumentation, nuclear safeguards and reactor instrumentation, and a 1984 symposium on nuclear power systems. Attention is given to the application of multiprocessors to scientific problems, a large-scale computer facility for computational aerodynamics, a single-board 32-bit computer for the Fastbus, the integration of detector arrays and readout electronics on a single chip, and three-dimensional Monte Carlo simulation of the electron avalanche in a proportional counter.
Selection of a computer code for Hanford low-level waste engineered-system performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGrail, B.P.; Mahoney, L.A.
Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical process expected tomore » affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code developed at PNL for the US Department of Energy for evaluation of and land disposal sites.« less
NASA Technical Reports Server (NTRS)
Hegarty, D. M.
1974-01-01
A guidance, navigation, and control system, the Simulated Shuttle Flight Test System (SS-FTS), when interfaced with existing aircraft systems, provides a research facility for studying concepts for landing the space shuttle orbiter and conventional jet aircraft. The SS-FTS, which includes a general-purpose computer, performs all computations for precisely following a prescribed approach trajectory while properly managing the vehicle energy to allow safe arrival at the runway and landing within prescribed dispersions. The system contains hardware and software provisions for navigation with several combinations of possible navigation aids that have been suggested for the shuttle. The SS-FTS can be reconfigured to study different guidance and navigation concepts by changing only the computer software, and adapted to receive different radio navigation information through minimum hardware changes. All control laws, logic, and mode interlocks reside solely in the computer software.
Big Data over a 100G network at Fermilab
Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo; ...
2014-06-11
As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community as a pioneer in Big Data has always been relying on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this resulted regularly in peaks of data movement on the Wide area network (WAN) in and out ofmore » the laboratory of about 30 Gbit/s and on the Local are network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system-and application-level configuration to interface computational systems with the new highspeed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high-speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. Furthermore, this work presents the new R&D facility and the continuation of the evaluation program.« less
Big Data over a 100G network at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo
As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community as a pioneer in Big Data has always been relying on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this resulted regularly in peaks of data movement on the Wide area network (WAN) in and out ofmore » the laboratory of about 30 Gbit/s and on the Local are network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system-and application-level configuration to interface computational systems with the new highspeed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high-speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. Furthermore, this work presents the new R&D facility and the continuation of the evaluation program.« less
Energy efficiency in California laboratory-type facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, E.; Bell, G.; Sartor, D.
The central aim of this project is to provide knowledge and tools for increasing the energy efficiency and performance of new and existing laboratory-type facilities in California. We approach the task along three avenues: (1) identification of current energy use and savings potential, (2) development of a {ital Design guide for energy- Efficient Research Laboratories}, and (3) development of a research agenda for focused technology development and improving out understanding of the market. Laboratory-type facilities use a considerable amount of energy resources. They are also important to the local and state economy, and energy costs are a factor in themore » overall competitiveness of industries utilizing laboratory-type facilities. Although the potential for energy savings is considerable, improving energy efficiency in laboratory-type facilities is no easy task, and there are many formidable barriers to improving energy efficiency in these specialized facilities. Insufficient motivation for individual stake holders to invest in improving energy efficiency using existing technologies as well as conducting related R&D is indicative of the ``public goods`` nature of the opportunity to achieve energy savings in this sector. Due to demanding environmental control requirements and specialized processes, laboratory-type facilities epitomize the important intersection between energy demands in the buildings sector and the industrial sector. Moreover, given the high importance and value of the activities conducted in laboratory-type facilities, they represent one of the most powerful contexts in which energy efficiency improvements stand to yield abundant non-energy benefits if properly applied.« less
Student science enrichment training program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandhu, S.S.
1994-08-01
This is a report on the Student Science Enrichment Training Program, with special emphasis on chemical and computer science fields. The residential summer session was held at the campus of Claflin College, Orangeburg, SC, for six weeks during 1993 summer, to run concomitantly with the college`s summer school. Fifty participants selected for this program, included high school sophomores, juniors and seniors. The students came from rural South Carolina and adjoining states which, presently, have limited science and computer science facilities. The program focused on high ability minority students, with high potential for science engineering and mathematical careers. The major objectivemore » was to increase the pool of well qualified college entering minority students who would elect to go into science, engineering and mathematical careers. The Division of Natural Sciences and Mathematics and engineering at Claflin College received major benefits from this program as it helped them to expand the Departments of Chemistry, Engineering, Mathematics and Computer Science as a result of additional enrollment. It also established an expanded pool of well qualified minority science and mathematics graduates, which were recruited by the federal agencies and private corporations, visiting Claflin College Campus. Department of Energy`s relationship with Claflin College increased the public awareness of energy related job opportunities in the public and private sectors.« less
Fire-protection research for energy technology: Fy 80 year end report
NASA Astrophysics Data System (ADS)
Hasegawa, H. K.; Alvares, N. J.; Lipska, A. E.; Ford, H.; Priante, S.; Beason, D. G.
1981-05-01
This continuing research program was initiated in order to advance fire protection strategies for Fusion Energy Experiments (FEE). The program expanded to encompass other forms of energy research. Accomplishments for fiscal year 1980 were: finalization of the fault-free analysis of the Shiva fire management system; development of a second-generation, fire-growth analysis using an alternate model and new LLNL combustion dynamics data; improvements of techniques for chemical smoke aerosol analysis; development and test of a simple method to assess the corrosive potential of smoke aerosols; development of an initial aerosol dilution system; completion of primary small-scale tests for measurements of the dynamics of cable fires; finalization of primary survey format for non-LLNL energy technology facilities; and studies of fire dynamics and aerosol production from electrical insulation and computer tape cassettes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E
In October 2010, a series of benchmark experiments were conducted at the French Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series of experiments consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereasmore » in the second and third experiments, it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which vertically was located near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available that show much better agreement with the measured values.« less
National Synchrotron Light Source annual report 1991. Volume 1, October 1, 1990--September 30, 1991
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulbert, S.L.; Lazarz, N.M.
1992-04-01
This report discusses the following research conducted at NSLS: atomic and molecular science; energy dispersive diffraction; lithography, microscopy and tomography; nuclear physics; UV photoemission and surface science; x-ray absorption spectroscopy; x-ray scattering and crystallography; x-ray topography; workshop on surface structure; workshop on electronic and chemical phenomena at surfaces; workshop on imaging; UV FEL machine reviews; VUV machine operations; VUV beamline operations; VUV storage ring parameters; x-ray machine operations; x-ray beamline operations; x-ray storage ring parameters; superconducting x-ray lithography source; SXLS storage ring parameters; the accelerator test facility; proposed UV-FEL user facility at the NSLS; global orbit feedback systems; and NSLSmore » computer system.« less
Code of Federal Regulations, 2010 CFR
2010-07-01
... policy must Federal agencies follow in the management of facilities? 102-74.155 Section 102-74.155 Public... MANAGEMENT REGULATION REAL PROPERTY 74-FACILITY MANAGEMENT Facility Management Energy Conservation § 102-74.155 What energy conservation policy must Federal agencies follow in the management of facilities...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sotiropoulos, Fotis; Marr, Jeffrey D.G.; Milliren, Christopher
In January 2010, the University of Minnesota, along with academic and industry project partners, began work on a four year project to establish new facilities and research in strategic areas of wind energy necessary to move the nation towards a goal of 20% wind energy by 2030. The project was funded by the U.S. Department of Energy with funds made available through the American Recovery and Reinvestment Act of 2009. $7.9M of funds were provided by DOE and $3.1M was provided through matching funds. The project was organized into three Project Areas. Project Area 1 focused on design and developmentmore » of a utility scale wind energy research facility to support research and innovation. The project commissioned the Eolos Wind Research Field Station in November of 2011. The site, located 20 miles from St. Paul, MN operates a 2.5MW Clipper Liberty C-96 wind turbine, a 130-ft tall sensored meteorological tower and a robust sensor and data acquisition network. The site is operational and will continue to serve as a site for innovation in wind energy for the next 15 years. Project Areas 2 involved research on six distinct research projects critical to the 20% Wind Energy by 2030 goals. The research collaborations involved faculty from two universities, over nine industry partners and two national laboratories. Research outcomes include new knowledge, patents, journal articles, technology advancements, new computational models and establishment of new collaborative relationships between university and industry. Project Area 3 focused on developing educational opportunities in wind energy for engineering and science students. The primary outcome is establishment of a new graduate level course at the University of Minnesota called Wind Engineering Essentials. The seminar style course provides a comprehensive analysis of wind energy technology, economics, and operation. The course is highly successful and will continue to be offered at the University. The vision of U.S. DOE to establish unique, open-access research facilities and creation of university-industry research collaborations in wind energy were achieved through this project. The University of Minnesota, through the establishment of the Eolos Wind Energy Consortium and the Eolos Wind Research Field Station continue to develop new research collaborations with industry partners.« less
Simulation of the hohlraum for a laser facility of Megajoule scale
NASA Astrophysics Data System (ADS)
Chizhkov, M. N.; Kozmanov, M. Y. U.; Lebedev, S. N.; Lykov, V. A.; Rykovanova, V. V.; Seleznev, V. N.; Selezneva, K. I.; Stryakhnina, O. V.; Shestakov, A. A.; Vronskiy, A. V.
2010-08-01
2D calculations of the promising laser hohlraums were performed with using of the Sinara computer code. These hohlraums are intended for achievement of indirectly-driven thermonuclear ignition at laser energy above 1 MJ. Two calculation variants of the laser assembly with the form close to a rugby ball were carried out: with laser entrance hole shields and without shields. Time dependent hohlraum radiation temperature and x-ray flux asymmetry on a target were obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Office of The Director)
As a national laboratory Argonne concentrates on scientific and technological challenges that can only be addressed through a sustained, interdisciplinary focus at a national scale. Argonne's eight major initiatives, as enumerated in its strategic plan, are Hard X-ray Sciences, Leadership Computing, Materials and Molecular Design and Discovery, Energy Storage, Alternative Energy and Efficiency, Nuclear Energy, Biological and Environmental Systems, and National Security. The purposes of Argonne's Laboratory Directed Research and Development (LDRD) Program are to encourage the development of novel technical concepts, enhance the Laboratory's research and development (R and D) capabilities, and pursue its strategic goals. projects are selectedmore » from proposals for creative and innovative R and D studies that require advance exploration before they are considered to be sufficiently developed to obtain support through normal programmatic channels. Among the aims of the projects supported by the LDRD Program are the following: establishment of engineering proof of principle, assessment of design feasibility for prospective facilities, development of instrumentation or computational methods or systems, and discoveries in fundamental science and exploratory development.« less
Study of Solid State Drives performance in PROOF distributed analysis system
NASA Astrophysics Data System (ADS)
Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.
2010-04-01
Solid State Drives (SSD) is a promising storage technology for High Energy Physics parallel analysis farms. Its combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. It also has lower energy consumption and higher vibration tolerance than Hard Disk Drive (HDD) which makes it an attractive choice in many applications raging from personal laptops to large analysis farms. The Parallel ROOT Facility - PROOF is a distributed analysis system which allows to exploit inherent event level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in PROOF environment. We will compare performance of HDD with SSD in I/O intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with a number of simultaneously running analysis jobs.
Instrument Systems Analysis and Verification Facility (ISAVF) users guide
NASA Technical Reports Server (NTRS)
Davis, J. F.; Thomason, J. O.; Wolfgang, J. L.
1985-01-01
The ISAVF facility is primarily an interconnected system of computers, special purpose real time hardware, and associated generalized software systems, which will permit the Instrument System Analysts, Design Engineers and Instrument Scientists, to perform trade off studies, specification development, instrument modeling, and verification of the instrument, hardware performance. It is not the intent of the ISAVF to duplicate or replace existing special purpose facilities such as the Code 710 Optical Laboratories or the Code 750 Test and Evaluation facilities. The ISAVF will provide data acquisition and control services for these facilities, as needed, using remote computer stations attached to the main ISAVF computers via dedicated communication lines.
ERIC Educational Resources Information Center
RENO, MARTIN; AND OTHERS
A STUDY WAS UNDERTAKEN TO EXPLORE IN A QUALITATIVE WAY THE POSSIBLE UTILIZATION OF COMPUTER AND DATA PROCESSING METHODS IN HIGH SCHOOL EDUCATION. OBJECTIVES WERE--(1) TO ESTABLISH A WORKING RELATIONSHIP WITH A COMPUTER FACILITY SO THAT ABLE STUDENTS AND THEIR TEACHERS WOULD HAVE ACCESS TO THE FACILITIES, (2) TO DEVELOP A UNIT FOR THE UTILIZATION…
Options to improve energy efficiency for educational building
NASA Astrophysics Data System (ADS)
Jahan, Mafruha
The cost of energy is a major factor that must be considered for educational facility budget planning purpose. The analysis of energy related issues and options can be complex and requires significant time and detailed effort. One way to facilitate the inclusion of energy option planning in facility planning efforts is to utilize a tool that allows for quick appraisal of the facility energy profile. Once such an appraisal is accomplished, it is then possible to rank energy improvement options consistently with other facility needs and requirements. After an energy efficiency option has been determined to have meaningful value in comparison with other facility planning options, it is then possible to utilize the initial appraisal as the basis for an expanded consideration of additional facility and energy use detail using the same analytic system used for the initial appraisal. This thesis has developed a methodology and an associated analytic model to assist in these tasks and thereby improve the energy efficiency of educational facilities. A detailed energy efficiency and analysis tool is described that utilizes specific university building characteristics such as size, architecture, envelop, lighting, occupancy, thermal design which allows reducing the annual energy consumption. Improving the energy efficiency of various aspects of an educational building's energy performance can be complex and can require significant time and experience to make decisions. The approach developed in this thesis initially assesses the energy design for a university building. This initial appraisal is intended to assist administrators in assessing the potential value of energy efficiency options for their particular facility. Subsequently this scoping design can then be extended as another stage of the model by local facility or planning personnel to add more details and engineering aspects to the initial screening model. This approach can assist university planning efforts to identify the most cost effective combinations of energy efficiency strategies. The model analyzes and compares the payback periods of all proposed Energy Performance Measures (EPMs) to determine which has the greatest potential value.
Experience with a UNIX based batch computing facility for H1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.
1994-12-31
A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
...; (Formerly FDA-2007D-0393)] Guidance for Industry: Blood Establishment Computer System Validation in the User... Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April 2013. The... document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's...
Monitoring agricultural processing electrical energy use and efficiency
USDA-ARS?s Scientific Manuscript database
Energy costs have become proportionately larger as cotton post-harvest processing facilities have utilized other inputs more efficiently. A discrepancy in energy consumption per unit processed between facilities suggests that energy could be utilized more efficiently. Cotton gin facilities were in...
Mohammadpour, Atefeh; Anumba, Chimay J; Messner, John I
2016-07-01
There is a growing focus on enhancing energy efficiency in healthcare facilities, many of which are decades old. Since replacement of all aging healthcare facilities is not economically feasible, the retrofitting of these facilities is an appropriate path, which also provides an opportunity to incorporate energy efficiency measures. In undertaking energy efficiency retrofits, it is vital that the safety of the patients in these facilities is maintained or enhanced. However, the interactions between patient safety and energy efficiency have not been adequately addressed to realize the full benefits of retrofitting healthcare facilities. To address this, an innovative integrated framework, the Patient Safety and Energy Efficiency (PATSiE) framework, was developed to simultaneously enhance patient safety and energy efficiency. The framework includes a step -: by -: step procedure for enhancing both patient safety and energy efficiency. It provides a structured overview of the different stages involved in retrofitting healthcare facilities and improves understanding of the intricacies associated with integrating patient safety improvements with energy efficiency enhancements. Evaluation of the PATSiE framework was conducted through focus groups with the key stakeholders in two case study healthcare facilities. The feedback from these stakeholders was generally positive, as they considered the framework useful and applicable to retrofit projects in the healthcare industry. © The Author(s) 2016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laros III, James H.; DeBonis, David; Grant, Ryan
Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover themore » entire software space, from generic hardware interfaces to the input from the computer facility manager.« less
Computational studies of solid-state alkali conduction in rechargeable alkali-ion batteries
Deng, Zhi; Mo, Yifei; Ong, Shyue Ping
2016-03-25
The facile conduction of alkali ions in a crystal host is of crucial importance in rechargeable alkali-ion batteries, the dominant form of energy storage today. In this review, we provide a comprehensive survey of computational approaches to study solid-state alkali diffusion. We demonstrate how these methods have provided useful insights into the design of materials that form the main components of a rechargeable alkali-ion battery, namely the electrodes, superionic conductor solid electrolytes and interfaces. We will also provide a perspective on future challenges and directions. Here, the scope of this review includes the monovalent lithium- and sodium-ion chemistries that aremore » currently of the most commercial interest.« less
Costa - Introduction to 2015 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, James E.
In parallel with Sandia National Laboratories having two major locations (NM and CA), along with a number of smaller facilities across the nation, so too is the distribution of scientific, engineering and computing resources. As a part of Sandia’s Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure, as machines grow larger, more complex and heterogeneous. The second is how to ensure thatmore » computer hardware and configurations are optimized for specialized data analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating development and integration of high performance computing into national security missions. Sandia continues to both promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative and work to ensure that the full range of computing services and capabilities are available for all mission responsibilities, from national security to energy to homeland defense.« less
10 CFR 1016.12 - Termination of security facility approval.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Termination of security facility approval. 1016.12 Section 1016.12 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.12 Termination of security facility approval. Security facility approval will be terminated...
10 CFR 1016.12 - Termination of security facility approval.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Termination of security facility approval. 1016.12 Section 1016.12 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.12 Termination of security facility approval. Security facility approval will be terminated...
Grid Facilities | Grid Modernization | NREL
groundbreaking innovations and collaboration in grid research. Photo of the Energy Systems Integration Facility Energy Systems Integration Facility The Energy Systems Integration Facility is the nation's premier user Located in Boulder, Colorado, the National Wind Technology Center (NWTC) offers similar integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crabtree, George; Glotzer, Sharon; McCurdy, Bill
This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. Newmore » materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of abating, has enabled the development of computer simulations and models of unprecedented fidelity. We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power. In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing. Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness. The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop brought together 160 experts in materials science, chemistry, and computational science representing more than 65 universities, laboratories, and industries, and four agencies. The workshop examined seven foundational challenge areas in materials science and chemistry: materials for extreme conditions, self-assembly, light harvesting, chemical reactions, designer fluids, thin films and interfaces, and electronic structure. Each of these challenge areas is critical to the development of advanced energy systems, and each can be accelerated by the integrated application of predictive capability with theory and experiment. The workshop concluded that emerging capabilities in predictive modeling and simulation have the potential to revolutionize the development of new materials and chemical processes. Coupled with world-leading materials characterization and nanoscale science facilities, this predictive capability provides the foundation for an innovation ecosystem that can accelerate the discovery, development, and deployment of new technologies, including advanced energy systems. Delivering on the promise of this innovation ecosystem requires the following: Integration of synthesis, processing, characterization, theory, and simulation and modeling. Many of the newly established Energy Frontier Research Centers and Energy Hubs are exploiting this integration. Achieving/strengthening predictive capability in foundational challenge areas. Predictive capability in the seven foundational challenge areas described in this report is critical to the development of advanced energy technologies. Developing validated computational approaches that span vast differences in time and length scales. This fundamental computational challenge crosscuts all of the foundational challenge areas. Similarly challenging is coupling of analytical data from multiple instruments and techniques that are required to link these length and time scales. Experimental validation and quantification of uncertainty in simulation and modeling. Uncertainty quantification becomes increasingly challenging as simulations become more complex. Robust and sustainable computational infrastructure, including software and applications. For modeling and simulation, software equals infrastructure. To validate the computational tools, software is critical infrastructure that effectively translates huge arrays of experimental data into useful scientific understanding. An integrated approach for managing this infrastructure is essential. Efficient transfer and incorporation of simulation-based engineering and science in industry. Strategies for bridging the gap between research and industrial applications and for widespread industry adoption of integrated computational materials engineering are needed.« less
10 CFR 611.206 - Existing facilities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Existing facilities. 611.206 Section 611.206 Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS ADVANCED TECHNOLOGY VEHICLES MANUFACTURER ASSISTANCE PROGRAM Facility/Funding Awards § 611.206 Existing facilities. The Secretary shall, in making awards to those manufacturers that have existing...
10 CFR 611.206 - Existing facilities.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Existing facilities. 611.206 Section 611.206 Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS ADVANCED TECHNOLOGY VEHICLES MANUFACTURER ASSISTANCE PROGRAM Facility/Funding Awards § 611.206 Existing facilities. The Secretary shall, in making awards to those manufacturers that have existing...
10 CFR 611.206 - Existing facilities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Existing facilities. 611.206 Section 611.206 Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS ADVANCED TECHNOLOGY VEHICLES MANUFACTURER ASSISTANCE PROGRAM Facility/Funding Awards § 611.206 Existing facilities. The Secretary shall, in making awards to those manufacturers that have existing...
10 CFR 611.206 - Existing facilities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Existing facilities. 611.206 Section 611.206 Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS ADVANCED TECHNOLOGY VEHICLES MANUFACTURER ASSISTANCE PROGRAM Facility/Funding Awards § 611.206 Existing facilities. The Secretary shall, in making awards to those manufacturers that have existing...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-10
...] Information Collection: Renewable Energy and Alternate Uses of Existing Facilities on the Outer Continental... requirements in the regulations under ``Renewable Energy and Alternate Uses of Existing Facilities on the Outer..., transportation, or transmission of energy from sources other than oil and gas (renewable energy). Specifically...
Capsule review of the DOE research and development and field facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1980-09-01
A description is given of the roles of DOE's headquarters, field offices, major multiprogram laboratories, Energy Technology and Mining Technology Centers, and other government-owned, contractor-operated facilities, which are located in all regions of the US. Descriptions of DOE facilities are given for multiprogram laboratories (12); program-dedicated facilities (biomedical and environmental facilities-12, fossil energy facilities-7, fusion energy facility-1, nuclear development facilities-3, physical research facilities-4, safeguards facility-1, and solar facilities-2); and Production, Testing, and Fabrication Facilities (nuclear materials production facilities-5, weapon testing and fabrication complex-8). Three appendices list DOE field and project offices; DOE field facilities by state or territory, names, addresses,more » and telephone numbers; DOE R and D field facilities by type, contractor names, and names of directors. (MCW)« less
Feasibility Investigation for a Solar Power Generation Facility
NASA Technical Reports Server (NTRS)
Nathan, Lakshmi
2010-01-01
The Energy Policy Act of 2005 states that by fiscal year 2013, at least 7.5% of the energy consumed by the government must be renewable energy. In an effort to help meet this goal, Johnson Space Center (JSC) is considering installing a solar power generation facility. The purpose of this project is to conduct a feasibility investigation for such a facility. Because Kennedy Space Center (KSC) has a solar power generation facility, the first step in this investigation is to learn about KSC's facility and obtain information on how it was constructed. After collecting this information, the following must be determined: the amount of power desired, the size of the facility, potential locations for it, and estimated construction and maintenance costs. Contacts with JSC's energy provider must also be established to determine if a partnership would be agreeable to both parties. Lastly, all of this data must be analyzed to decide whether or not JSC should construct the facility. The results from analyzing the data collected indicate that a 200 kW facility would provide enough energy to meet 1% of JSC's energy demand. This facility would require less than 1 acre of land. In the map below, potential locations are shown in green. The solar power facility is projected to cost $2 M. So far, the information collected indicates that such a facility could be constructed. The next steps in this investigation include contacting JSC's energy provider, CenterPoint Energy, to discuss entering a partnership; developing a life cycle cost analysis to determine payback time; developing more detailed plans; and securing funding.
High energy physics at UC Riverside
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-07-01
This report discusses progress made for the following two tasks: experimental high energy physics, Task A, and theoretical high energy physics, Task B. Task A1 covers hadron collider physics. Information for Task A1 includes: personnel/talks/publications; D0: proton-antiproton interactions at 2 TeV; SDC: proton-proton interactions at 40 TeV; computing facilities; equipment needs; and budget notes. The physics program of Task A2 has been the systematic study of leptons and hadrons. Information covered for Task A2 includes: personnel/talks/publications; OPAL at LEP; OPAL at LEP200; CMS at LHC; the RD5 experiment; LSND at LAMPF; and budget notes. The research activities of the Theorymore » Group are briefly discussed and a list of completed or published papers for this period is given.« less
10 CFR 611.206 - Existing facilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Existing facilities. 611.206 Section 611.206 Energy... PROGRAM Facility/Funding Awards § 611.206 Existing facilities. The Secretary shall, in making awards to those manufacturers that have existing facilities, give priority to those facilities that are oldest or...
Future Computer Requirements for Computational Aerodynamics
NASA Technical Reports Server (NTRS)
1978-01-01
Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.
Extraordinary Tools for Extraordinary Science: The Impact ofSciDAC on Accelerator Science&Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryne, Robert D.
2006-08-10
Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook''. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now takemore » hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.« less
NASA Astrophysics Data System (ADS)
Ryne, Robert D.
2006-09-01
Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook.'' Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.
Facilities Management via Computer: Information at Your Fingertips.
ERIC Educational Resources Information Center
Hensey, Susan
1996-01-01
Computer-aided facilities management is a software program consisting of a relational database of facility information--such as occupancy, usage, student counts, etc.--attached to or merged with computerized floor plans. This program can integrate data with drawings, thereby allowing the development of "what if" scenarios. (MLF)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... DEPARTMENT OF ENERGY DOE Response to Recommendation 2011-1 of the Defense Nuclear Facilities... Nuclear Facilities Safety Board, Office of Health, Safety and Security, U.S. Department of Energy, 1000... Department of Energy (DOE) acknowledges receipt of Defense Nuclear Facilities Safety Board (Board...
18 CFR 292.205 - Criteria for qualifying cogeneration facilities.
Code of Federal Regulations, 2013 CFR
2013-04-01
... useful thermal energy output of the facility must be no less than 5 percent of the total energy output... the facility plus one-half the useful thermal energy output, during the 12-month period beginning with... (B) If the useful thermal energy output is less than 15 percent of the total energy output of the...
18 CFR 292.205 - Criteria for qualifying cogeneration facilities.
Code of Federal Regulations, 2012 CFR
2012-04-01
... useful thermal energy output of the facility must be no less than 5 percent of the total energy output... the facility plus one-half the useful thermal energy output, during the 12-month period beginning with... (B) If the useful thermal energy output is less than 15 percent of the total energy output of the...
18 CFR 292.205 - Criteria for qualifying cogeneration facilities.
Code of Federal Regulations, 2011 CFR
2011-04-01
... useful thermal energy output of the facility must be no less than 5 percent of the total energy output... the facility plus one-half the useful thermal energy output, during the 12-month period beginning with... (B) If the useful thermal energy output is less than 15 percent of the total energy output of the...
18 CFR 292.205 - Criteria for qualifying cogeneration facilities.
Code of Federal Regulations, 2014 CFR
2014-04-01
... useful thermal energy output of the facility must be no less than 5 percent of the total energy output... the facility plus one-half the useful thermal energy output, during the 12-month period beginning with... (B) If the useful thermal energy output is less than 15 percent of the total energy output of the...
Legal requirements for human-health based appeals of wind energy projects in ontario.
Engel, Albert M
2014-01-01
In 2009, the government of the province of Ontario, Canada passed new legislation to promote the development of renewable energy facilities, including wind energy facilities in the province. Throughout the legislative process, concerns were raised with respect to the effect of wind energy facilities on human health. Ultimately, the government established setbacks and sound level limits for wind energy facilities and provided Ontario residents with the right to appeal the approval of a wind energy facility on the ground that engaging in the facility in accordance with its approval will cause serious harm to human health. The first approval of a wind facility under the new legislation was issued in 2010 and since then, Ontario's Environmental Review Tribunal as well as Ontario's courts has been considering evidence proffered by appellants seeking revocation of approvals on the basis of serious harm to human health. To date, the evidence has been insufficient to support the revocation of a wind facility approval. This article reviews the legal basis for the dismissal of human-health based appeals.
First-Principles Thermodynamics Study of Spinel MgAl 2 O 4 Surface Stability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Qiuxia; Wang, Jian-guo; Wang, Yong
The surface stability of all possible terminations for three low-index (111, 110, 100) structures of the spinel MgAl2O4 has been studied using first-principles based thermodynamic approach. The surface Gibbs free energy results indicate that the 100_AlO2 termination is the most stable surface structure under ultra-high vacuum at T=1100 K regardless of Al-poor or Al-rich environment. With increasing oxygen pressure, the 111_O2(Al) termination becomes the most stable surface in the Al-rich environment. The oxygen vacancy formation is thermodynamically favorable over the 100_AlO2, 111_O2(Al) and the (111) structure with Mg/O connected terminations. On the basis of surface Gibbs free energies for bothmore » perfect and defective surface terminations, the 100_AlO2 and 111_O2(Al) are the most dominant surfaces in Al-rich environment under atmospheric condition. This is also consistent with our previously reported experimental observation. This work was supported by a Laboratory Directed Research and Development (LDRD) project of the Pacific Northwest National Laboratory (PNNL). The computing time was granted by the National Energy Research Scientific Computing Center (NERSC). Part of computing time was also granted by a scientific theme user proposal in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), which is a U.S. Department of Energy national scientific user facility located at PNNL in Richland, Washington.« less
Mapping suitability areas for concentrated solar power plants using remote sensing data
Omitaomu, Olufemi A.; Singh, Nagendra; Bhaduri, Budhendra L.
2015-05-14
The political push to increase power generation from renewable sources such as solar energy requires knowing the best places to site new solar power plants with respect to the applicable regulatory, operational, engineering, environmental, and socioeconomic criteria. Therefore, in this paper, we present applications of remote sensing data for mapping suitability areas for concentrated solar power plants. Our approach uses digital elevation model derived from NASA s Shuttle Radar Topographic Mission (SRTM) at a resolution of 3 arc second (approx. 90m resolution) for estimating global solar radiation for the study area. Then, we develop a computational model built on amore » Geographic Information System (GIS) platform that divides the study area into a grid of cells and estimates site suitability value for each cell by computing a list of metrics based on applicable siting requirements using GIS data. The computed metrics include population density, solar energy potential, federal lands, and hazardous facilities. Overall, some 30 GIS data are used to compute eight metrics. The site suitability value for each cell is computed as an algebraic sum of all metrics for the cell with the assumption that all metrics have equal weight. Finally, we color each cell according to its suitability value. Furthermore, we present results for concentrated solar power that drives a stream turbine and parabolic mirror connected to a Stirling Engine.« less
Computational Tools and Facilities for the Next-Generation Analysis and Design Environment
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)
1997-01-01
This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including, real-time simulations, immersive systems, collaborative engineering environment, Web-based tools and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-03-17
This report summarizes all work for the Energy Survey of Army Industrial Facilities, Energy Engineering Analysis Program (EEAP) at the Western Area Demilitarization Facility (WADF) of the Hawthorne Army Ammunition Plant (HWAAP), Hawthorne, Nevada, authorized under Contract No. DACA03-92-C-0155 with the U.S. Army Corps of Engineers, Sacramento District, California. The purpose of this energy survey is to develop a set of projects and actions that will reduce energy consumption and operating costs of selected facilities at the WADF. A preliminary inspection of facilities at WADF by Keller Gannon that identified potential retrofit opportunities was submitted as the EEAP Study andmore » Criteria Review in December 1993. This document formed the basis of the Detailed Scope of Work for this study. Facilities included in the survey and study, together with operational status, are listed in Table 1 - 1. The complete scope of work appears in Appendix.« less
Evaluating Past and Future USCG Use of Ohmsett Test Facility
2016-10-01
and Renewable Energy Test Facility, that was previously known as a fully capitalized acronym, Ohmsett. This facility is located on the U.S. Naval...Oil Spill Response Research and Renewable Energy Test Facility, that was previously known as a fully capitalized acronym, Ohmsett. This facility is...Incident Management Systems NSF National Strike Force NWS Naval Weapons Station Ohmsett National Oil Spill Response Research and Renewable Energy
18 CFR 292.204 - Criteria for qualifying small power production facilities.
Code of Federal Regulations, 2010 CFR
2010-04-01
... primary energy source of the facility must be biomass, waste, renewable resources, geothermal resources... FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE PUBLIC UTILITY REGULATORY... production facilities that use the same energy resource, are owned by the same person(s) or its affiliates...
2014 Annual Report - Argonne Leadership Computing Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, James R.; Papka, Michael E.; Cerny, Beth A.
The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.
2015 Annual Report - Argonne Leadership Computing Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, James R.; Papka, Michael E.; Cerny, Beth A.
The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.
Code of Federal Regulations, 2012 CFR
2012-01-01
... over nuclear facilities and materials under the Atomic Energy Act. 8.4 Section 8.4 Energy NUCLEAR... nuclear facilities and materials under the Atomic Energy Act. (a) By virtue of the Atomic Energy Act of... Atomic Energy Act of 1954 sets out a pattern for licensing and regulation of certain nuclear materials...
Code of Federal Regulations, 2010 CFR
2010-01-01
... over nuclear facilities and materials under the Atomic Energy Act. 8.4 Section 8.4 Energy NUCLEAR... nuclear facilities and materials under the Atomic Energy Act. (a) By virtue of the Atomic Energy Act of... Atomic Energy Act of 1954 sets out a pattern for licensing and regulation of certain nuclear materials...
Code of Federal Regulations, 2011 CFR
2011-01-01
... over nuclear facilities and materials under the Atomic Energy Act. 8.4 Section 8.4 Energy NUCLEAR... nuclear facilities and materials under the Atomic Energy Act. (a) By virtue of the Atomic Energy Act of... Atomic Energy Act of 1954 sets out a pattern for licensing and regulation of certain nuclear materials...
10 CFR 1004.3 - Public reading facilities and policy on contractor records.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Public reading facilities and policy on contractor records. 1004.3 Section 1004.3 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) FREEDOM OF INFORMATION § 1004.3 Public reading facilities and policy on contractor records. (a) The DOE Headquarters will maintain, in the public reading facilities, the...
10 CFR 1004.3 - Public reading facilities and policy on contractor records.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Public reading facilities and policy on contractor records. 1004.3 Section 1004.3 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) FREEDOM OF INFORMATION § 1004.3 Public reading facilities and policy on contractor records. (a) The DOE Headquarters will maintain, in the public reading facilities, the...
10 CFR 1004.3 - Public reading facilities and policy on contractor records.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Public reading facilities and policy on contractor records. 1004.3 Section 1004.3 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) FREEDOM OF INFORMATION § 1004.3 Public reading facilities and policy on contractor records. (a) The DOE Headquarters will maintain, in the public reading facilities, the...
10 CFR 1004.3 - Public reading facilities and policy on contractor records.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Public reading facilities and policy on contractor records. 1004.3 Section 1004.3 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) FREEDOM OF INFORMATION § 1004.3 Public reading facilities and policy on contractor records. (a) The DOE Headquarters will maintain, in the public reading facilities, the...
Survey of EPA facilities for solar thermal energy applications
NASA Technical Reports Server (NTRS)
Nelson, E. V.; Overly, P. T.; Bell, D. M.
1980-01-01
A study was done to assess the feasibility of applying solar thermal energy systems to EPA facilities. A survey was conducted to determine those EPA facilities where solar energy could best be used. These systems were optimized for each specific application and the system/facility combinations were ranked on the basis of greatest cost effectiveness.
Integration of Panda Workload Management System with supercomputers
NASA Astrophysics Data System (ADS)
De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.
2016-09-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run singlethreaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
The Legnaro-Padova distributed Tier-2: challenges and results
NASA Astrophysics Data System (ADS)
Badoer, Simone; Biasotto, Massimo; Costa, Fulvia; Crescente, Alberto; Fantinel, Sergio; Ferrari, Roberto; Gulmini, Michele; Maron, Gaetano; Michelotto, Michele; Sgaravatto, Massimo; Toniolo, Nicola
2014-06-01
The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources if available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread in two different sites, about 15 km apart: the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (it will be soon updated to 20 Gbps). Nevertheless these resources are seamlessly integrated and are exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites for what concerns reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. Such computing nodes are managed by the LSF local resource management system, and are accessible using a Grid-based interface implemented through multiple CREAM CE front-ends. dCache, xrootd and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the next months, connects the Tier-2 to WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general purpose traffic. In this paper we discuss about the experiences at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, the implementation choices. We also present the tools used for the daily management operations. These include DOCET, a Java-based webtool designed, implemented and maintained at the Legnaro-Padova Tier-2, and deployed also in other sites, such as the LHC Italian T1. DOCET provides an uniform interface to manage all the information about the physical resources of a computing center. It is also used as documentation repository available to the Tier-2 operations team. Finally we discuss about the foreseen developments of the existing infrastructure. This includes in particular the evolution from a Grid-based resource towards a Cloud-based computing facility.
Computer Operating System Maintenance.
1982-06-01
FACILITY The Computer Management Information Facility ( CMIF ) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginovska-Pangovska, Bojana; Autrey, Thomas; Parab, Kshitij K.
We report on a combined computational and experimental study of the activation of hydrogen using for 2,6-lutidine (Lut)/BCl3 Lewis pairs. Herein we describe the synthetic approach used to obtain a new FLP, Lut-BCl3 that activates molecular H2 at ~10 bar, 100 °C in toluene or lutidine as the solvent. The resulting compound is an unexpected neutral hydride, LutBHCl2, rather than the ion pair, which we attribute to ligand redistribution. The mechanism for activation was modeled with density functional theory and accurate G3(MP2)B3 theory. The dative bond in Lut-BCl3 is calculated to have a bond enthalpy of 15 kcal/mol. The separatedmore » pair is calculated to react with H2 and form the [LutH+][HBCl3–] ion pair with a barrier of 13 kcal/mol. Metathesis with LutBCl3 produces LutBHCl2 and [LutH][BCl4]. The overall reaction is exothermic by 8.5 kcal/mol. An alternative pathway was explored involving lutidine–borenium cation pair activating H2. This work was supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Division of Chemical Sciences, Biosciences, and Geosciences, and was performed in part using the Molecular Science Computing Facility (MSCF) in the William R. Wiley Environmental Molecular Sciences Laboratory, a DOE national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at the Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle for DOE.« less
NASA Technical Reports Server (NTRS)
1981-01-01
The development of a coal gasification system design and mass and energy balance simulation program for the TVA and other similar facilities is described. The materials-process-product model (MPPM) and the advanced system for process engineering (ASPEN) computer program were selected from available steady state and dynamic models. The MPPM was selected to serve as the basis for development of system level design model structure because it provided the capability for process block material and energy balance and high-level systems sizing and costing. The ASPEN simulation serves as the basis for assessing detailed component models for the system design modeling program. The ASPEN components were analyzed to identify particular process blocks and data packages (physical properties) which could be extracted and used in the system design modeling program. While ASPEN physical properties calculation routines are capable of generating physical properties required for process simulation, not all required physical property data are available, and must be user-entered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gohar, Y.; Smith, D. L.; Nuclear Engineering Division
2010-04-28
The YALINA facility is a zero-power, sub-critical assembly driven by a conventional neutron generator. It was conceived, constructed, and put into operation at the Radiation Physics and Chemistry Problems Institute of the National Academy of Sciences of Belarus located in Minsk-Sosny, Belarus. This facility was conceived for the purpose of investigating the static and dynamic neutronics properties of accelerator driven sub-critical systems, and to serve as a neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinide nuclei. This report provides a detailed description of this facility and documents the progress of research carried outmore » there during a period of approximately a decade since the facility was conceived and built until the end of 2008. During its history of development and operation to date (1997-2008), the YALINA facility has hosted several foreign groups that worked with the resident staff as collaborators. The participation of Argonne National Laboratory in the YALINA research programs commenced in 2005. For obvious reasons, special emphasis is placed in this report on the work at YALINA facility that has involved Argonne's participation. Attention is given here to the experimental program at YALINA facility as well as to analytical investigations aimed at validating codes and computational procedures and at providing a better understanding of the physics and operational behavior of the YALINA facility in particular, and ADS systems in general, during the period 1997-2008.« less
10 CFR 55.46 - Simulation facilities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 2 2012-01-01 2012-01-01 false Simulation facilities. 55.46 Section 55.46 Energy NUCLEAR... Simulation facilities. (a) General. This section addresses the use of a simulation facility for the... applicants for operator and senior operator licenses. (b) Commission-approved simulation facilities and...
10 CFR 55.46 - Simulation facilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Simulation facilities. 55.46 Section 55.46 Energy NUCLEAR... Simulation facilities. (a) General. This section addresses the use of a simulation facility for the... applicants for operator and senior operator licenses. (b) Commission-approved simulation facilities and...
10 CFR 55.46 - Simulation facilities.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Simulation facilities. 55.46 Section 55.46 Energy NUCLEAR... Simulation facilities. (a) General. This section addresses the use of a simulation facility for the... applicants for operator and senior operator licenses. (b) Commission-approved simulation facilities and...
10 CFR 55.46 - Simulation facilities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Simulation facilities. 55.46 Section 55.46 Energy NUCLEAR... Simulation facilities. (a) General. This section addresses the use of a simulation facility for the... applicants for operator and senior operator licenses. (b) Commission-approved simulation facilities and...
10 CFR 55.46 - Simulation facilities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Simulation facilities. 55.46 Section 55.46 Energy NUCLEAR... Simulation facilities. (a) General. This section addresses the use of a simulation facility for the... applicants for operator and senior operator licenses. (b) Commission-approved simulation facilities and...
10 CFR 26.123 - Testing facility capabilities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 1 2014-01-01 2014-01-01 false Testing facility capabilities. 26.123 Section 26.123 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.123 Testing facility capabilities. Each licensee testing facility shall have the capability, at the same...
10 CFR 26.123 - Testing facility capabilities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 1 2012-01-01 2012-01-01 false Testing facility capabilities. 26.123 Section 26.123 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.123 Testing facility capabilities. Each licensee testing facility shall have the capability, at the same...
10 CFR 26.123 - Testing facility capabilities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 1 2013-01-01 2013-01-01 false Testing facility capabilities. 26.123 Section 26.123 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.123 Testing facility capabilities. Each licensee testing facility shall have the capability, at the same...
10 CFR 26.123 - Testing facility capabilities.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 1 2011-01-01 2011-01-01 false Testing facility capabilities. 26.123 Section 26.123 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.123 Testing facility capabilities. Each licensee testing facility shall have the capability, at the same...
10 CFR 26.123 - Testing facility capabilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Testing facility capabilities. 26.123 Section 26.123 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.123 Testing facility capabilities. Each licensee testing facility shall have the capability, at the same...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, James
Strategic energy management (SEM) focuses on achieving energy-efficiency improvements through systematic and planned changes in facility operations, maintenance, and behaviors (OM&B) and capital equipment upgrades in large energy-using facilities, including industrial buildings, commercial buildings, and multi-facility organizations such as campuses or communities. Facilities can institute a spectrum of SEM actions, ranging from a simple process for regularly identifying energy-savings actions, to establishing a formal, third-party recognized or certified SEM framework for continuous improvement of energy performance. In general, SEM programs that would be considered part of a utility program will contain a set of energy-reducing goals, principles, and practices emphasizingmore » continuous improvements in energy performance or savings through energy management and an energy management system (EnMS).« less
Locations and attributes of wind turbines in Colorado, 2009
Carr, Natasha B.; Diffendorfer, Jay E.; Fancher, Tammy S.; Latysh, Natalie E.; Leib, Kenneth J.; Matherne, Anne-Marie; Turner, Christine
2011-01-01
The Colorado wind-turbine data series provides geospatial data for all wind turbines established within the State as of August 2009. Attributes specific to each turbine include: turbine location, manufacturer and model, rotor diameter, hub height, rotor height, potential megawatt output, land ownership, and county. Wind energy facility data for each turbine include: facility name, facility power capacity, number of turbines associated with each facility to date, facility developer, facility ownership, year the facility went online, and development status of wind facility. Turbine locations were derived from August 2009 1-meter true-color aerial photographs produced by the National Agriculture Imagery Program; the photographs have a positional accuracy of about + or - 5 meters. The location of turbines under construction during August 2009 likely will be less accurate than the location of existing turbines. This data series contributes to an Online Interactive Energy Atlas currently (2011) in development by the U.S. Geological Survey. The Energy Atlas will synthesize data on existing and potential energy development in Colorado and New Mexico and will include additional natural resource data layers. This information may be used by decisionmakers to evaluate and compare the potential benefits and tradeoffs associated with different energy development strategies or scenarios. Interactive maps, downloadable data layers, comprehensive metadata, and decision-support tools will be included in the Energy Atlas. The format of the Energy Atlas will facilitate the integration of information about energy with key terrestrial and aquatic resources for evaluating resource values and minimizing risks from energy development.
Locations and attributes of wind turbines in New Mexico, 2009
Carr, Natasha B.; Diffendorfer, Jay E.; Fancher, Tammy S.; Latysh, Natalie E.; Leib, Kenneth J.; Matherne, Anne-Marie; Turner, Christine
2011-01-01
The New Mexico wind-turbine data series provides geospatial data for all wind turbines established within the State as of August 2009. Attributes specific to each turbine include: turbine location, manufacturer and model, rotor diameter, hub height, rotor height, potential megawatt output, land ownership, and county. Wind energy facility data for each turbine include: facility name, facility power capacity, number of turbines associated with each facility to date, facility developer, facility ownership, year the facility went online, and development status of wind facility. Turbine locations were derived from 1-meter August 2009 true-color aerial photographs produced by the National Agriculture Imagery Program; the photographs have a positional accuracy of about + or - 5 meters. The location of turbines under construction during August 2009 likely will be less accurate than the location of existing turbines. This data series contributes to an Online Interactive Energy Atlas currently (2011) in development by the U.S. Geological Survey. The Energy Atlas will synthesize data on existing and potential energy development in Colorado and New Mexico and will include additional natural resource data layers. This information may be used by decisionmakers to evaluate and compare the potential benefits and tradeoffs associated with different energy development strategies or scenarios. Interactive maps, downloadable data layers, comprehensive metadata, and decision-support tools will be included in the Energy Atlas. The format of the Energy Atlas will facilitate the integration of information about energy with key terrestrial and aquatic resources for evaluating resource values and minimizing risks from energy development.
Oak Ridge Institutional Cluster Autotune Test Drive Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jibonananda, Sanyal; New, Joshua Ryan
2014-02-01
The Oak Ridge Institutional Cluster (OIC) provides general purpose computational resources for the ORNL staff to run computation heavy jobs that are larger than desktop applications but do not quite require the scale and power of the Oak Ridge Leadership Computing Facility (OLCF). This report details the efforts made and conclusions derived in performing a short test drive of the cluster resources on Phase 5 of the OIC. EnergyPlus was used in the analysis as a candidate user program and the overall software environment was evaluated against anticipated challenges experienced with resources such as the shared memory-Nautilus (JICS) and Titanmore » (OLCF). The OIC performed within reason and was found to be acceptable in the context of running EnergyPlus simulations. The number of cores per node and the availability of scratch space per node allow non-traditional desktop focused applications to leverage parallel ensemble execution. Although only individual runs of EnergyPlus were executed, the software environment on the OIC appeared suitable to run ensemble simulations with some modifications to the Autotune workflow. From a standpoint of general usability, the system supports common Linux libraries, compilers, standard job scheduling software (Torque/Moab), and the OpenMPI library (the only MPI library) for MPI communications. The file system is a Panasas file system which literature indicates to be an efficient file system.« less
Spent nuclear fuel assembly inspection using neutron computed tomography
NASA Astrophysics Data System (ADS)
Pope, Chad Lee
The research presented here focuses on spent nuclear fuel assembly inspection using neutron computed tomography. Experimental measurements involving neutron beam transmission through a spent nuclear fuel assembly serve as benchmark measurements for an MCNP simulation model. Comparison of measured results to simulation results shows good agreement. Generation of tomography images from MCNP tally results was accomplished using adapted versions of built in MATLAB algorithms. Multiple fuel assembly models were examined to provide a broad set of conclusions. Tomography images revealing assembly geometric information including the fuel element lattice structure and missing elements can be obtained using high energy neutrons. A projection difference technique was developed which reveals the substitution of unirradiated fuel elements for irradiated fuel elements, using high energy neutrons. More subtle material differences such as altering the burnup of individual elements can be identified with lower energy neutrons provided the scattered neutron contribution to the image is limited. The research results show that neutron computed tomography can be used to inspect spent nuclear fuel assemblies for the purpose of identifying anomalies such as missing elements or substituted elements. The ability to identify anomalies in spent fuel assemblies can be used to deter diversion of material by increasing the risk of early detection as well as improve reprocessing facility operations by confirming the spent fuel configuration is as expected or allowing segregation if anomalies are detected.
On Laminar to Turbulent Transition of Arc-Jet Flow in the NASA Ames Panel Test Facility
NASA Technical Reports Server (NTRS)
Gokcen, Tahir; Alunni, Antonella I.
2012-01-01
This paper provides experimental evidence and supporting computational analysis to characterize the laminar to turbulent flow transition in a high enthalpy arc-jet facility at NASA Ames Research Center. The arc-jet test data obtained in the 20 MW Panel Test Facility include measurements of surface pressure and heat flux on a water-cooled calibration plate, and measurements of surface temperature on a reaction-cured glass coated tile plate. Computational fluid dynamics simulations are performed to characterize the arc-jet test environment and estimate its parameters consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flowfield in the facility nozzle, test box, and flowfield over test articles. Both laminar and turbulent simulations are performed, and the computed results are compared with the experimental measurements, including Stanton number dependence on Reynolds number. Comparisons of computed and measured surface heat fluxes (and temperatures), along with the accompanying analysis, confirm that that the boundary layer in the Panel Test Facility flow is transitional at certain archeater conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The vision described here builds on the present U.S. activities in fusion plasma and materials science relevant to the energy goal and extends plasma science at the frontier of discovery. The plan is founded on recommendations made by the National Academies, a number of recent studies by the Fusion Energy Sciences Advisory Committee (FESAC), and the Administration’s views on the greatest opportunities for U.S. scientific leadership.This report highlights five areas of critical importance for the U.S. fusion energy sciences enterprise over the next decade: 1) Massively parallel computing with the goal of validated whole-fusion-device modeling will enable a transformation inmore » predictive power, which is required to minimize risk in future fusion energy development steps; 2) Materials science as it relates to plasma and fusion sciences will provide the scientific foundations for greatly improved plasma confinement and heat exhaust; 3) Research in the prediction and control of transient events that can be deleterious to toroidal fusion plasma confinement will provide greater confidence in machine designs and operation with stable plasmas; 4) Continued stewardship of discovery in plasma science that is not expressly driven by the energy goal will address frontier science issues underpinning great mysteries of the visible universe and help attract and retain a new generation of plasma/fusion science leaders; 5) FES user facilities will be kept world-leading through robust operations support and regular upgrades. Finally, we will continue leveraging resources among agencies and institutions and strengthening our partnerships with international research facilities.« less
NASA Astrophysics Data System (ADS)
Babaev, A. A.; Pivovarov, Yu L.
2010-04-01
Resonant coherent excitation (RCE) of relativistic hydrogen-like ions is investigated by computer simulations methods. The suggested theoretical model is applied to the simulations of recent experiments on RCE of 390 MeV/u Ar17+ ions under (220) planar channeling in a Si crystal performed by T.Azuma et al at HIMAC (Tokyo). Theoretical results are in a good agreement with these experimental data and clearly show the appearance of the doublet structure of RCE peaks. The simulations are also extended to greater ion energies in order to predict the new RCE features at the future accelerator facility FAIR OSI and as an example, RCE of II GeV/u U91+ ions is considered in detail.
Supercomputing Sheds Light on the Dark Universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Heitmann, Katrin
2012-11-15
At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named amore » finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.« less
New statistical scission-point model to predict fission fragment observables
NASA Astrophysics Data System (ADS)
Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie
2015-09-01
The development of high performance computing facilities makes possible a massive production of nuclear data in a full microscopic framework. Taking advantage of the individual potential calculations of more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY uses the richness of microscopy in a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, with very limited computing cost.
Communication and computing technology in biocontainment laboratories using the NEIDL as a model.
McCall, John; Hardcastle, Kath
2014-07-01
The National Emerging Infectious Diseases Laboratories (NEIDL), Boston University, is a globally unique biocontainment research facility housing biosafety level 2 (BSL-2), BSL-3, and BSL-4 laboratories. Located in the BioSquare area at the University's Medical Campus, it is part of a national network of secure facilities constructed to study infectious diseases of major public health concern. The NEIDL allows for basic, translational, and clinical phases of research to be carried out in a single facility with the overall goal of accelerating understanding, treatment, and prevention of infectious diseases. The NEIDL will also act as a center of excellence providing training and education in all aspects of biocontainment research. Within every detail of NEIDL operations is a primary emphasis on safety and security. The ultramodern NEIDL has required a new approach to communications technology solutions in order to ensure safety and security and meet the needs of investigators working in this complex building. This article discusses the implementation of secure wireless networks and private cloud computing to promote operational efficiency, biosecurity, and biosafety with additional energy-saving advantages. The utilization of a dedicated data center, virtualized servers, virtualized desktop integration, multichannel secure wireless networks, and a NEIDL-dedicated Voice over Internet Protocol (VoIP) network are all discussed. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.
10 CFR 4.127 - Existing facilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Existing facilities. 4.127 Section 4.127 Energy NUCLEAR... 1973, as Amended Discriminatory Practices § 4.127 Existing facilities. (a) Accessibility. A recipient... make each of its existing facilities or every part of an existing facility accessible to and usable by...
10 CFR 1042.410 - Comparable facilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Comparable facilities. 1042.410 Section 1042.410 Energy... Activities Prohibited § 1042.410 Comparable facilities. A recipient may provide separate toilet, locker room, and shower facilities on the basis of sex, but such facilities provided for students of one sex shall...
10 CFR 1040.72 - Existing facilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Existing facilities. 1040.72 Section 1040.72 Energy... § 1040.72 Existing facilities. (a) Accessibility. A recipient shall operate any program or activity to... facilities or every part of a facility accessible to and useable by handicapped persons. (b) Methods. A...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lekov, Alex; Thompson, Lisa; McKane, Aimee
This report summarizes the Lawrence Berkeley National Laboratory?s research to date in characterizing energy efficiency and automated demand response opportunities for wastewater treatment facilities in California. The report describes the characteristics of wastewater treatment facilities, the nature of the wastewater stream, energy use and demand, as well as details of the wastewater treatment process. It also discusses control systems and energy efficiency and automated demand response opportunities. In addition, several energy efficiency and load management case studies are provided for wastewater treatment facilities.This study shows that wastewater treatment facilities can be excellent candidates for open automated demand response and thatmore » facilities which have implemented energy efficiency measures and have centralized control systems are well-suited to shift or shed electrical loads in response to financial incentives, utility bill savings, and/or opportunities to enhance reliability of service. Control technologies installed for energy efficiency and load management purposes can often be adapted for automated demand response at little additional cost. These improved controls may prepare facilities to be more receptive to open automated demand response due to both increased confidence in the opportunities for controlling energy cost/use and access to the real-time data.« less
NASA Technical Reports Server (NTRS)
Redhed, D. D.
1978-01-01
Three possible goals for the Numerical Aerodynamic Simulation Facility (NASF) are: (1) a computational fluid dynamics (as opposed to aerodynamics) algorithm development tool; (2) a specialized research laboratory facility for nearly intractable aerodynamics problems that industry encounters; and (3) a facility for industry to use in its normal aerodynamics design work that requires high computing rates. The central system issue for industry use of such a computer is the quality of the user interface as implemented in some kind of a front end to the vector processor.
2016 Annual Report - Argonne Leadership Computing Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Jim; Papka, Michael E.; Cerny, Beth A.
The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.
48 CFR 926.7103 - Requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... preference in hiring to an eligible employee of Department of Energy Defense Nuclear Facilities. This right... and subcontractors employed at Department of Energy Defense Nuclear Facilities, to the extent... implementation of Section 3161 at the Department of Energy Defense Nuclear Facility and local counsel, should...
48 CFR 926.7103 - Requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... preference in hiring to an eligible employee of Department of Energy Defense Nuclear Facilities. This right... and subcontractors employed at Department of Energy Defense Nuclear Facilities, to the extent... implementation of Section 3161 at the Department of Energy Defense Nuclear Facility and local counsel, should...
48 CFR 926.7103 - Requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... preference in hiring to an eligible employee of Department of Energy Defense Nuclear Facilities. This right... and subcontractors employed at Department of Energy Defense Nuclear Facilities, to the extent... implementation of Section 3161 at the Department of Energy Defense Nuclear Facility and local counsel, should...
48 CFR 926.7103 - Requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... preference in hiring to an eligible employee of Department of Energy Defense Nuclear Facilities. This right... and subcontractors employed at Department of Energy Defense Nuclear Facilities, to the extent... implementation of Section 3161 at the Department of Energy Defense Nuclear Facility and local counsel, should...
48 CFR 926.7103 - Requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... preference in hiring to an eligible employee of Department of Energy Defense Nuclear Facilities. This right... and subcontractors employed at Department of Energy Defense Nuclear Facilities, to the extent... implementation of Section 3161 at the Department of Energy Defense Nuclear Facility and local counsel, should...
48 CFR 952.204-73 - Facility clearance.
Code of Federal Regulations, 2014 CFR
2014-10-01
....204-73 Section 952.204-73 Federal Acquisition Regulations System DEPARTMENT OF ENERGY CLAUSES AND... granted by the Secretary of Energy. In addition, a Facility Clearance and foreign ownership, control and... Department of Energy Facility Clearance generally need not resubmit the following foreign ownership...
48 CFR 952.204-73 - Facility clearance.
Code of Federal Regulations, 2011 CFR
2011-10-01
....204-73 Section 952.204-73 Federal Acquisition Regulations System DEPARTMENT OF ENERGY CLAUSES AND... granted by the Secretary of Energy. In addition, a Facility Clearance and foreign ownership, control and... Department of Energy Facility Clearance generally need not resubmit the following foreign ownership...
48 CFR 952.204-73 - Facility clearance.
Code of Federal Regulations, 2012 CFR
2012-10-01
....204-73 Section 952.204-73 Federal Acquisition Regulations System DEPARTMENT OF ENERGY CLAUSES AND... granted by the Secretary of Energy. In addition, a Facility Clearance and foreign ownership, control and... Department of Energy Facility Clearance generally need not resubmit the following foreign ownership...
48 CFR 952.204-73 - Facility clearance.
Code of Federal Regulations, 2013 CFR
2013-10-01
....204-73 Section 952.204-73 Federal Acquisition Regulations System DEPARTMENT OF ENERGY CLAUSES AND... granted by the Secretary of Energy. In addition, a Facility Clearance and foreign ownership, control and... Department of Energy Facility Clearance generally need not resubmit the following foreign ownership...
48 CFR 952.204-73 - Facility clearance.
Code of Federal Regulations, 2010 CFR
2010-10-01
....204-73 Section 952.204-73 Federal Acquisition Regulations System DEPARTMENT OF ENERGY CLAUSES AND... granted by the Secretary of Energy. In addition, a Facility Clearance and foreign ownership, control and... Department of Energy Facility Clearance generally need not resubmit the following foreign ownership...
NASA Astrophysics Data System (ADS)
Peña-García, A.; Gómez-Lorente, D.; Espín, A.; Rabaza, O.
2016-06-01
New relationships between energy efficiency, illuminance uniformity, spacing and mounting height in public lighting installations were derived from the analysis of a large sample of outputs generated with a widely used software application for lighting design. These new relationships greatly facilitate the calculation of basic lighting installation parameters. The results obtained are also based on maximal energy efficiency and illuminance uniformity as a premise, which are not included in more conventional methods. However, these factors are crucial since they ensure the sustainability of the installations. This research formulated, applied and analysed these new equations. The results of this study highlight their usefulness in rapid planning and urban planning in developing countries or areas affected by natural disasters where engineering facilities and computer applications for this purpose are often unavailable.
Integrated Computational Materials Engineering for Magnesium in Automotive Body Applications
NASA Astrophysics Data System (ADS)
Allison, John E.; Liu, Baicheng; Boyle, Kevin P.; Hector, Lou; McCune, Robert
This paper provides an overview and progress report for an international collaborative project which aims to develop an ICME infrastructure for magnesium for use in automotive body applications. Quantitative processing-micro structure-property relationships are being developed for extruded Mg alloys, sheet-formed Mg alloys and high pressure die cast Mg alloys. These relationships are captured in computational models which are then linked with manufacturing process simulation and used to provide constitutive models for component performance analysis. The long term goal is to capture this information in efficient computational models and in a web-centered knowledge base. The work is being conducted at leading universities, national labs and industrial research facilities in the US, China and Canada. This project is sponsored by the U.S. Department of Energy, the U.S. Automotive Materials Partnership (USAMP), Chinese Ministry of Science and Technology (MOST) and Natural Resources Canada (NRCan).
NASA Astrophysics Data System (ADS)
Jia, Weile; Wang, Jue; Chi, Xuebin; Wang, Lin-Wang
2017-02-01
LS3DF, namely linear scaling three-dimensional fragment method, is an efficient linear scaling ab initio total energy electronic structure calculation code based on a divide-and-conquer strategy. In this paper, we present our GPU implementation of the LS3DF code. Our test results show that the GPU code can calculate systems with about ten thousand atoms fully self-consistently in the order of 10 min using thousands of computing nodes. This makes the electronic structure calculations of 10,000-atom nanosystems routine work. This speed is 4.5-6 times faster than the CPU calculations using the same number of nodes on the Titan machine in the Oak Ridge leadership computing facility (OLCF). Such speedup is achieved by (a) carefully re-designing of the computationally heavy kernels; (b) redesign of the communication pattern for heterogeneous supercomputers.
Energy Efficiency in Water and Wastewater Facilities
Learn how local governments have achieved sustained energy improvements at their water and wastewater facilities through equipment upgrades, operational modifications, and modifications to facility buildings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duignan, Timothy T.; Baer, Marcel D.; Schenter, Gregory K.
Determining the solvation free energies of single ions in water is one of the most fundamental problems in physical chemistry and yet many unresolved questions remain. In particular, the ability to decompose the solvation free energy into simple and intuitive contributions will have important implications for coarse grained models of electrolyte solution. Here, we provide rigorous definitions of the various types of single ion solvation free energies based on different simulation protocols. We calculate solvation free energies of charged hard spheres using density functional theory interaction potentials with molecular dynamics simulation (DFT-MD) and isolate the effects of charge and cavitation,more » comparing to the Born (linear response) model. We show that using uncorrected Ewald summation leads to highly unphysical values for the solvation free energy and that charging free energies for cations are approximately linear as a function of charge but that there is a small non-linearity for small anions. The charge hydration asymmetry (CHA) for hard spheres, determined with quantum mechanics, is much larger than for the analogous real ions. This suggests that real ions, particularly anions, are significantly more complex than simple charged hard spheres, a commonly employed representation. We would like to thank Thomas Beck, Shawn Kathmann, Richard Remsing and John Weeks for helpful discussions. Computing resources were generously allocated by PNNL's Institutional Computing program. This research also used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. TTD, GKS, and CJM were supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. MDB was supported by MS3 (Materials Synthesis and Simulation Across Scales) Initiative, a Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated by Battelle for the U.S. Department of Energy.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Ashley D.; Bernholdt, David E.; Bland, Arthur S.
Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatestmore » number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern California to date. The Titan system provides the largest extant heterogeneous architecture for computing and computational science. Usage is high, delivering on the promise of a system well-suited for capability simulations for science. This success is due in part to innovations in tracking and reporting the activity on the compute nodes, and using this information to further enable and optimize applications, extending and balancing workload across the entire node. The OLCF continues to invest in innovative processes, tools, and resources necessary to meet continuing user demand. The facility’s leadership in data analysis and workflows was featured at the Department of Energy (DOE) booth at SC15, for the second year in a row, highlighting work with researchers from the National Library of Medicine coupled with unique computational and data resources serving experimental and observational data across facilities. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. Building on the exemplary year of 2014, as shown by the 2014 Operational Assessment Report (OAR) review committee response in Appendix A, this OAR delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a multi-petaflop resource for cutting-edge research. This report covers CY 2015, which, unless otherwise specified, denotes January 1, 2015, through December 31, 2015.« less
Energy Sourcebook for Educational Facilities.
ERIC Educational Resources Information Center
Council of Educational Facility Planners, Columbus, OH.
The Council of Educational Facility Planners, International (CEFP/I) has assembled an authoritative and comprehensive sourcebook for the design and management of energy efficient educational facilities. Information that bridges the gap between scientific energy theory/research/technology and the needs of the educational community is published in…
Laser Simulations of the Destructive Impact of Nuclear Explosions on Hazardous Asteroids
NASA Astrophysics Data System (ADS)
Aristova, E. Yu.; Aushev, A. A.; Baranov, V. K.; Belov, I. A.; Bel'kov, S. A.; Voronin, A. Yu.; Voronich, I. N.; Garanin, R. V.; Garanin, S. G.; Gainullin, K. G.; Golubinskii, A. G.; Gorodnichev, A. V.; Denisova, V. A.; Derkach, V. N.; Drozhzhin, V. S.; Ericheva, I. A.; Zhidkov, N. V.; Il'kaev, R. I.; Krayukhin, A. A.; Leonov, A. G.; Litvin, D. N.; Makarov, K. N.; Martynenko, A. S.; Malinov, V. I.; Mis'ko, V. V.; Rogachev, V. G.; Rukavishnikov, A. N.; Salatov, E. A.; Skorochkin, Yu. V.; Smorchkov, G. Yu.; Stadnik, A. L.; Starodubtsev, V. A.; Starodubtsev, P. V.; Sungatullin, R. R.; Suslov, N. A.; Sysoeva, T. I.; Khatunkin, V. Yu.; Tsoi, E. S.; Shubin, O. N.; Yufa, V. N.
2018-01-01
We present the results of preliminary experiments at laser facilities in which the processes of the undeniable destruction of stony asteroids (chondrites) in space by nuclear explosions on the asteroid surface are simulated based on the principle of physical similarity. We present the results of comparative gasdynamic computations of a model nuclear explosion on the surface of a large asteroid and computations of the impact of a laser pulse on a miniature asteroid simulator confirming the similarity of the key processes in the fullscale and model cases. The technology of fabricating miniature mockups with mechanical properties close to those of stony asteroids is described. For mini-mockups 4-10 mm in size differing by the shape and impact conditions, we have made an experimental estimate of the energy threshold for the undeniable destruction of a mockup and investigated the parameters of its fragmentation at a laser energy up to 500 J. The results obtained confirm the possibility of an experimental determination of the criteria for the destruction of asteroids of various types by a nuclear explosion in laser experiments. We show that the undeniable destruction of a large asteroid is possible at attainable nuclear explosion energies on its surface.
10 CFR 5.410 - Comparable facilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Comparable facilities. 5.410 Section 5.410 Energy NUCLEAR... Prohibited § 5.410 Comparable facilities. A recipient may provide separate toilet, locker room, and shower facilities on the basis of sex, but such facilities provided for students of one sex shall be comparable to...
Research Support Facility (RSF): Leadership in Building Performance (Brochure)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This brochure/poster provides information on the features of the Research Support Facility including a detailed illustration of the facility with call outs of energy efficiency and renewable energy technologies. Imagine an office building so energy efficient that its occupants consume only the amount of energy generated by renewable power on the building site. The building, the Research Support Facility (RSF) occupied by the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) employees, uses 50% less energy than if it were built to current commercial code and achieves the U.S. Green Building Council's Leadership in Energy and Environmental Design (LEED{reg_sign})more » Platinum rating. With 19% of the primary energy in the U.S. consumed by commercial buildings, the RSF is changing the way commercial office buildings are designed and built.« less
Neilson, Christine J
2010-01-01
The Saskatchewan Health Information Resources Partnership (SHIRP) provides library instruction to Saskatchewan's health care practitioners and students on placement in health care facilities as part of its mission to provide province-wide access to evidence-based health library resources. A portable computer lab was assembled in 2007 to provide hands-on training in rural health facilities that do not have computer labs of their own. Aside from some minor inconveniences, the introduction and operation of the portable lab has gone smoothly. The lab has been well received by SHIRP patrons and continues to be an essential part of SHIRP outreach.
Microstructural Response of Variably Hydrated Ca-Rich Montmorillonite to Supercritical CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Mal Soon; McGrail, B. Peter; Glezakou, Vassiliki Alexandra
2014-08-05
We report on ab initio molecular dynamics simulations of Ca-rich montmorillonite systems, in different hydration states in the presence of supercritical CO2. Analysis of the molecular trajectories provides estimates of the relative H2O:CO2 ratio per interspatial cation. The vibrational density of states in direct comparison with dipole moment derived IR spectra for these systems provide unique signatures that can used to follow molecular transformation. In a co-sequestration scenario, these signatures could be used to identify the chemical state and fate of Sulfur compounds. Interpretation of CO2 asymmetric stretch shift is given based on a detailed analysis of scCO2 structure andmore » intermolecular interactions of the intercalated species. Based on our simulations, smectites with higher charge interlayer cations at sub-single to single hydration states should be more efficient in capturing CO2, while maintaining caprock integrity. This research would not have been possible without the support of the office of Fossil Energy, Department of Energy. The computational resources were made available through a user proposal of the EMSL User facility, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory.« less
A large-scale computer facility for computational aerodynamics
NASA Technical Reports Server (NTRS)
Bailey, F. R.; Ballhaus, W. F., Jr.
1985-01-01
As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.
Energy Systems Integration Facility Insight Center | Energy Systems
simulation data. Photo of researchers studying data on a 3-D power system profile depicting the interaction of renewable energy resources on the grid. Capabilities The Insight Center offers the following Integration Facility Insight Center Located adjacent to the Energy System Integration Facility's High
10 CFR 205.378 - Disconnection of temporary facilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Disconnection of temporary facilities. 205.378 Section 205.378 Energy DEPARTMENT OF ENERGY OIL ADMINISTRATIVE PROCEDURES AND SANCTIONS Electric Power System... Electric Facilities and the Transfer of Electricity to Alleviate An Emergency Shortage of Electric Power...
10 CFR 850.27 - Hygiene facilities and practices.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Hygiene facilities and practices. 850.27 Section 850.27 Energy DEPARTMENT OF ENERGY CHRONIC BERYLLIUM DISEASE PREVENTION PROGRAM Specific Program Requirements § 850.27 Hygiene facilities and practices. (a) General. The responsible employer must assure that in...
NIF ICCS network design and loading analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tietbohl, G; Bryant, R
The National Ignition Facility (NIF) is housed within a large facility about the size of two football fields. The Integrated Computer Control System (ICCS) is distributed throughout this facility and requires the integration of about 40,000 control points and over 500 video sources. This integration is provided by approximately 700 control computers distributed throughout the NIF facility and a network that provides the communication infrastructure. A main control room houses a set of seven computer consoles providing operator access and control of the various distributed front-end processors (FEPs). There are also remote workstations distributed within the facility that allow providemore » operator console functions while personnel are testing and troubleshooting throughout the facility. The operator workstations communicate with the FEPs which implement the localized control and monitoring functions. There are different types of FEPs for the various subsystems being controlled. This report describes the design of the NIF ICCS network and how it meets the traffic loads that will are expected and the requirements of the Sub-System Design Requirements (SSDR's). This document supersedes the earlier reports entitled Analysis of the National Ignition Facility Network, dated November 6, 1996 and The National Ignition Facility Digital Video and Control Network, dated July 9, 1996. For an overview of the ICCS, refer to the document NIF Integrated Computer Controls System Description (NIF-3738).« less
Kim, Choong-Ki; Toft, Jodie E; Papenfus, Michael; Verutes, Gregory; Guerry, Anne D; Ruckelshaus, Marry H; Arkema, Katie K; Guannel, Gregory; Wood, Spencer A; Bernhardt, Joanna R; Tallis, Heather; Plummer, Mark L; Halpern, Benjamin S; Pinsky, Malin L; Beck, Michael W; Chan, Francis; Chan, Kai M A; Levin, Phil S; Polasky, Stephen
2012-01-01
Many hope that ocean waves will be a source for clean, safe, reliable and affordable energy, yet wave energy conversion facilities may affect marine ecosystems through a variety of mechanisms, including competition with other human uses. We developed a decision-support tool to assist siting wave energy facilities, which allows the user to balance the need for profitability of the facilities with the need to minimize conflicts with other ocean uses. Our wave energy model quantifies harvestable wave energy and evaluates the net present value (NPV) of a wave energy facility based on a capital investment analysis. The model has a flexible framework and can be easily applied to wave energy projects at local, regional, and global scales. We applied the model and compatibility analysis on the west coast of Vancouver Island, British Columbia, Canada to provide information for ongoing marine spatial planning, including potential wave energy projects. In particular, we conducted a spatial overlap analysis with a variety of existing uses and ecological characteristics, and a quantitative compatibility analysis with commercial fisheries data. We found that wave power and harvestable wave energy gradually increase offshore as wave conditions intensify. However, areas with high economic potential for wave energy facilities were closer to cable landing points because of the cost of bringing energy ashore and thus in nearshore areas that support a number of different human uses. We show that the maximum combined economic benefit from wave energy and other uses is likely to be realized if wave energy facilities are sited in areas that maximize wave energy NPV and minimize conflict with existing ocean uses. Our tools will help decision-makers explore alternative locations for wave energy facilities by mapping expected wave energy NPV and helping to identify sites that provide maximal returns yet avoid spatial competition with existing ocean uses.
Kim, Choong-Ki; Toft, Jodie E.; Papenfus, Michael; Verutes, Gregory; Guerry, Anne D.; Ruckelshaus, Marry H.; Arkema, Katie K.; Guannel, Gregory; Wood, Spencer A.; Bernhardt, Joanna R.; Tallis, Heather; Plummer, Mark L.; Halpern, Benjamin S.; Pinsky, Malin L.; Beck, Michael W.; Chan, Francis; Chan, Kai M. A.; Levin, Phil S.; Polasky, Stephen
2012-01-01
Many hope that ocean waves will be a source for clean, safe, reliable and affordable energy, yet wave energy conversion facilities may affect marine ecosystems through a variety of mechanisms, including competition with other human uses. We developed a decision-support tool to assist siting wave energy facilities, which allows the user to balance the need for profitability of the facilities with the need to minimize conflicts with other ocean uses. Our wave energy model quantifies harvestable wave energy and evaluates the net present value (NPV) of a wave energy facility based on a capital investment analysis. The model has a flexible framework and can be easily applied to wave energy projects at local, regional, and global scales. We applied the model and compatibility analysis on the west coast of Vancouver Island, British Columbia, Canada to provide information for ongoing marine spatial planning, including potential wave energy projects. In particular, we conducted a spatial overlap analysis with a variety of existing uses and ecological characteristics, and a quantitative compatibility analysis with commercial fisheries data. We found that wave power and harvestable wave energy gradually increase offshore as wave conditions intensify. However, areas with high economic potential for wave energy facilities were closer to cable landing points because of the cost of bringing energy ashore and thus in nearshore areas that support a number of different human uses. We show that the maximum combined economic benefit from wave energy and other uses is likely to be realized if wave energy facilities are sited in areas that maximize wave energy NPV and minimize conflict with existing ocean uses. Our tools will help decision-makers explore alternative locations for wave energy facilities by mapping expected wave energy NPV and helping to identify sites that provide maximal returns yet avoid spatial competition with existing ocean uses. PMID:23144824
Validating Innovative Renewable Energy Technologies: ESTCP Demonstrations at Two DoD Facilities
2011-11-01
4. TITLE AND SUBTITLE Validating Innovative Renewable Energy Technologies: ESTCP Demonstrations at Two DoD Facilities 5a. CONTRACT NUMBER 5b...goals of 25% of energy consumed required to be from renewable energy by 2025, the DoD has set aggressive, yet achievable targets. With its array of land...holdings facilities, and environments, the potential for renewable energy generation on DoD lands is great. Reaching these goals will require
Quantum-assisted biomolecular modelling.
Harris, Sarah A; Kendon, Vivien M
2010-08-13
Our understanding of the physics of biological molecules, such as proteins and DNA, is limited because the approximations we usually apply to model inert materials are not, in general, applicable to soft, chemically inhomogeneous systems. The configurational complexity of biomolecules means the entropic contribution to the free energy is a significant factor in their behaviour, requiring detailed dynamical calculations to fully evaluate. Computer simulations capable of taking all interatomic interactions into account are therefore vital. However, even with the best current supercomputing facilities, we are unable to capture enough of the most interesting aspects of their behaviour to properly understand how they work. This limits our ability to design new molecules, to treat diseases, for example. Progress in biomolecular simulation depends crucially on increasing the computing power available. Faster classical computers are in the pipeline, but these provide only incremental improvements. Quantum computing offers the possibility of performing huge numbers of calculations in parallel, when it becomes available. We discuss the current open questions in biomolecular simulation, how these might be addressed using quantum computation and speculate on the future importance of quantum-assisted biomolecular modelling.
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
10 CFR 1016.39 - Termination, suspension, or revocation of security facility approval.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Termination, suspension, or revocation of security facility approval. 1016.39 Section 1016.39 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Control of Information § 1016.39 Termination, suspension, or revocation of security facility...
10 CFR 1016.39 - Termination, suspension, or revocation of security facility approval.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Termination, suspension, or revocation of security facility approval. 1016.39 Section 1016.39 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Control of Information § 1016.39 Termination, suspension, or revocation of security facility...
10 CFR 1016.11 - Cancellation of requests for security facility approval.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Cancellation of requests for security facility approval. 1016.11 Section 1016.11 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.11 Cancellation of requests for security facility approval. When a...
10 CFR 1016.11 - Cancellation of requests for security facility approval.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Cancellation of requests for security facility approval. 1016.11 Section 1016.11 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.11 Cancellation of requests for security facility approval. When a...
10 CFR 1016.10 - Grant, denial, or suspension of security facility approval.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Grant, denial, or suspension of security facility approval. 1016.10 Section 1016.10 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.10 Grant, denial, or suspension of security facility approval...
10 CFR 1016.10 - Grant, denial, or suspension of security facility approval.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Grant, denial, or suspension of security facility approval. 1016.10 Section 1016.10 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.10 Grant, denial, or suspension of security facility approval...
Supervisory Control and Data Acquisition System | Energy Systems
Integration Facility | NREL Supervisory Control and Data Acquisition System Supervisory Control supervisory control and data acquisition (SCADA) system monitors and controls safety systems and gathers real Energy Systems Integration Facility control room. The Energy Systems Integration Facility's SCADA system
10 CFR 1016.10 - Grant, denial, or suspension of security facility approval.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Grant, denial, or suspension of security facility approval. 1016.10 Section 1016.10 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.10 Grant, denial, or suspension of security facility approval...
10 CFR 50.78 - Facility information and verification.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 1 2011-01-01 2011-01-01 false Facility information and verification. 50.78 Section 50.78 Energy NUCLEAR REGULATORY COMMISSION DOMESTIC LICENSING OF PRODUCTION AND UTILIZATION FACILITIES Us/iaea... International Atomic Energy Agency (IAEA) and take other action as necessary to implement the US/IAEA Safeguards...
10 CFR 50.78 - Facility information and verification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Facility information and verification. 50.78 Section 50.78 Energy NUCLEAR REGULATORY COMMISSION DOMESTIC LICENSING OF PRODUCTION AND UTILIZATION FACILITIES Us/iaea... International Atomic Energy Agency (IAEA) and take other action as necessary to implement the US/IAEA Safeguards...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-24
... support safe operation of Autoclave 2 of the facility have been constructed in accordance with the... Inspection Reports Regarding Louisiana Energy Services, National Enrichment Facility, Eunice, New Mexico... Louisiana Energy Services (LES), LLC, National Enrichment Facility in Eunice, New Mexico, and has authorized...
NASA Astrophysics Data System (ADS)
Ghosh, Reetuparna; Badwar, Sylvia; Lawriniang, Bioletty; Jyrwa, Betylda; Naik, Haldhara; Naik, Yeshwant; Suryanarayana, Saraswatula Venkata; Ganesan, Srinivasan
2017-08-01
The 58Fe (p , n)58Co reaction cross-section within Giant Dipole Resonance (GDR) region i.e. from 3.38 to 19.63 MeV was measured by stacked-foil activation and off-line γ-ray spectrometric technique using the BARC-TIFR Pelletron facility at Mumbai. The present data were compared with the existing literature data and found to be in good agreement. The 58Fe (p , n)58Co reaction cross-section as a function of proton energy was also theoretically calculated by using the computer code TALYS-1.8 and found to be in good agreement, which shows the validity of the TALYS-1.8 program.
The Nature of Scatter at the DARHT Facility and Suggestions for Improved Modeling of DARHT Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morneau, Rachel Anne; Klasky, Marc Louis
The U.S. Stockpile Stewardship Program [1] is designed to sustain and evaluate the nuclear weapons stockpile while foregoing underground nuclear tests. The maintenance of a smaller, aging U.S. nuclear weapons stockpile without underground testing requires complex computer calculations [14]. These calculations in turn need to be verified and benchmarked [14]. A wide range of research facilities have been used to test and evaluate nuclear weapons while respecting the Comprehensive Nuclear Test-Ban Treaty (CTBT) [2]. Some of these facilities include the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory, the Z machine at Sandia National Laboratories, and the Dual Axismore » Radiographic Hydrodynamic Test (DARHT) facility at Los Alamos National Laboratory. This research will focus largely on DARHT (although some information from Cygnus and the Los Alamos Microtron may be used in this research) by modeling it and comparing to experimental data. DARHT is an electron accelerator that employs high-energy flash x-ray sources for imaging hydro-tests. This research proposes to address some of the issues crucial to understanding DARHT Axis II and the analysis of the radiographic images produced. Primarily, the nature of scatter at DARHT will be modeled and verified with experimental data. It will then be shown that certain design decisions can be made to optimize the scatter field for hydrotest experiments. Spectral effects will be briefly explored to determine if there is any considerable effect on the density reconstruction caused by changes in the energy spectrum caused by target changes. Finally, a generalized scatter model will be made using results from MCNP that can be convolved with the direct transmission of an object to simulate the scatter of that object at the detector plane. The region in which with this scatter model is appropriate will be explored.« less
Code of Federal Regulations, 2011 CFR
2011-07-01
... facilities on my limited lease or any facilities on my project easement proposed under my GAP? 285.651 Section 285.651 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, REGULATION, AND ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER...
Code of Federal Regulations, 2014 CFR
2014-01-01
... ENERGY ENERGY CONSERVATION RENEWABLE ENERGY PRODUCTION INCENTIVES § 451.2 Definitions. As used in this... used at a qualified renewable energy facility to generate electricity. Date of first use means, at the... excludes electric energy used within the renewable energy facility to power equipment such as pumps, motors...
Code of Federal Regulations, 2012 CFR
2012-01-01
... ENERGY ENERGY CONSERVATION RENEWABLE ENERGY PRODUCTION INCENTIVES § 451.2 Definitions. As used in this... used at a qualified renewable energy facility to generate electricity. Date of first use means, at the... excludes electric energy used within the renewable energy facility to power equipment such as pumps, motors...
Code of Federal Regulations, 2013 CFR
2013-01-01
... ENERGY ENERGY CONSERVATION RENEWABLE ENERGY PRODUCTION INCENTIVES § 451.2 Definitions. As used in this... used at a qualified renewable energy facility to generate electricity. Date of first use means, at the... excludes electric energy used within the renewable energy facility to power equipment such as pumps, motors...
Development of an Enhanced Payback Function for the Superior Energy Performance Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Therkelsen, Peter; Rao, Prakash; McKane, Aimee
2015-08-03
The U.S. DOE Superior Energy Performance (SEP) program provides recognition to industrial and commercial facilities that achieve certification to the ISO 50001 energy management system standard and third party verification of energy performance improvements. Over 50 industrial facilities are participating and 28 facilities have been certified in the SEP program. These facilities find value in the robust, data driven energy performance improvement result that the SEP program delivers. Previous analysis of SEP certified facility data demonstrated the cost effectiveness of SEP and identified internal staff time to be the largest cost component related to SEP implementation and certification. This papermore » analyzes previously reported and newly collected data of costs and benefits associated with the implementation of an ISO 50001 and SEP certification. By disaggregating “sunk energy management system (EnMS) labor costs”, this analysis results in a more accurate and detailed understanding of the costs and benefits of SEP participation. SEP is shown to significantly improve and sustain energy performance and energy cost savings, resulting in a highly attractive return on investment. To illustrate these results, a payback function has been developed and is presented. On average facilities with annual energy spend greater than $2M can expect to implement SEP with a payback of less than 1.5 years. Finally, this paper also observes and details decreasing facility costs associated with implementing ISO 50001 and certifying to the SEP program, as the program has improved from pilot, to demonstration, to full launch.« less
Code of Federal Regulations, 2013 CFR
2013-01-01
... Test Procedure,” and Chapter 6, “Definitions and Acronyms,” of the EPA's “ENERGY STAR Testing Facility Guidance Manual: Building a Testing Facility and Performing the Solid State Test Method for ENERGY STAR... specified in Chapter 4, “Equipment Setup and Test Procedure,” of the EPA's “ENERGY STAR Testing Facility...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Test Procedure,” and Chapter 6, “Definitions and Acronyms,” of the EPA's “ENERGY STAR Testing Facility Guidance Manual: Building a Testing Facility and Performing the Solid State Test Method for ENERGY STAR... specified in Chapter 4, “Equipment Setup and Test Procedure,” of the EPA's “ENERGY STAR Testing Facility...
Code of Federal Regulations, 2014 CFR
2014-01-01
... Test Procedure,” and Chapter 6, “Definitions and Acronyms,” of the EPA's “ENERGY STAR Testing Facility Guidance Manual: Building a Testing Facility and Performing the Solid State Test Method for ENERGY STAR... specified in Chapter 4, “Equipment Setup and Test Procedure,” of the EPA's “ENERGY STAR Testing Facility...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Test Procedure,” and Chapter 6, “Definitions and Acronyms,” of the EPA's “ENERGY STAR Testing Facility Guidance Manual: Building a Testing Facility and Performing the Solid State Test Method for ENERGY STAR... specified in Chapter 4, “Equipment Setup and Test Procedure,” of the EPA's “ENERGY STAR Testing Facility...
Local Aqueous Solvation Structure Around Ca2+ During Ca2+---Cl– Pair Formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baer, Marcel D.; Mundy, Christopher J.
2016-03-03
The molecular details of single ion solvation around Ca2+ and ion-pairing of Ca2--Cl- are investigated using ab initio molecular dynamics. The use of empirical dispersion corrections to the BLYP functional are investigated by comparison to experimentally available extended X-ray absorption fine structure (EXAFS) measurements, which probes the first solvation shell in great detail. Besides finding differences in the free-energy for both ion-pairing and the coordination number of ion solvation between the quantum and classical descriptions of interaction, there were important differences found between dispersion corrected and uncorrected density functional theory (DFT). Specifically, we show significantly different free-energy landscapes for bothmore » coordination number of Ca2+ and its ion-pairing with Cl- depending on the DFT simulation protocol. Our findings produce a self-consistent treatment of short-range solvent response to the ion and the intermediate to long-range collective response of the electrostatics of the ion-ion interaction to produce a detailed picture of ion-pairing that is consistent with experiment. MDB is supported by MS3 (Materials Synthesis and Simulation Across Scales) Initiative at Pacific Northwest National Laboratory. It was conducted under the Laboratory Directed Research and Development Program at PNNL, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy. CJM acknowledges support from US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Additional computing resources were generously allocated by PNNL's Institutional Computing program. The authors thank Prof. Tom Beck for discussions regarding QCT, and Drs. Greg Schenter and Shawn Kathmann for insightful comments.« less
The CERN-EU high-energy Reference Field (CERF) facility: applications and latest developments
NASA Astrophysics Data System (ADS)
Silari, Marco; Pozzi, Fabio
2017-09-01
The CERF facility at CERN provides an almost unique high-energy workplace reference radiation field for the calibration and test of radiation protection instrumentation employed at high-energy accelerator facilities and for aircraft and space dosimetry. This paper describes the main features of the facility and supplies a non-exhaustive list of recent (as of 2005) applications for which CERF is used. Upgrade work started in 2015 to provide the scientific and industrial communities with a state-of-the-art reference facility is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, Joseph A.; Wolke, Conrad T.; Johnson, Mark A.
In this Article, we review the role of gas-phase, size-selected protonated water clusters, H+(H2O)n, in the analysis of the microscopic mechanics responsible for the behavior of the excess proton in bulk water. We extend upon previous studies of the smaller, two-dimensional sheet-like structures to larger (n≥10) assemblies with three-dimensional cage morphologies which better mimic the bulk environment. Indeed, clusters in which a complete second solvation shell forms around a surface-embedded hydronium ion yield vibrational spectra where the signatures of the proton defect display strikingly similar positions and breadth to those observed in dilute acids. We investigate effects of the localmore » structure and intermolecular interactions on the large red shifts observed in the proton vibrational signature upon cluster growth using various theoretical methods. We show that, in addition to sizeable anharmonic couplings, the position of the excess proton vibration can be traced to large increases in the electric field exerted on the embedded hydronium ion upon formation of the first and second solvation shells. MAJ acknowledges support from the U.S. Department of Energy under Grant No. DE-FG02- 06ER15800 as well as the facilities and staff of the Yale University Faculty of Arts and Sciences High Performance Computing Center, and by the National Science Foundation under Grant No. CNS 08-21132 that partially funded acquisition of the facilities. SMK and SSX acknowledge support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.« less
Phytozome Comparative Plant Genomics Portal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodstein, David; Batra, Sajeev; Carlson, Joseph
2014-09-09
The Dept. of Energy Joint Genome Institute is a genomics user facility supporting DOE mission science in the areas of Bioenergy, Carbon Cycling, and Biogeochemistry. The Plant Program at the JGI applies genomic, analytical, computational and informatics platforms and methods to: 1. Understand and accelerate the improvement (domestication) of bioenergy crops 2. Characterize and moderate plant response to climate change 3. Use comparative genomics to identify constrained elements and infer gene function 4. Build high quality genomic resource platforms of JGI Plant Flagship genomes for functional and experimental work 5. Expand functional genomic resources for Plant Flagship genomes
1980-08-01
orientation, and HVAC systems have on three Army buildings in five different climatic regions. f Optimization of EnerV Usage in Military Facilities...The clinic’s environment is maintained by a multizone air-handling unit served by its own boiler and chiller . The building was modeled with 30... setpoints for the space temperature. This type of throttling range allows the heating system to control around a throttling range of 67 to 69oF (19 to 200
Federated data storage and management infrastructure
NASA Astrophysics Data System (ADS)
Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.
2016-10-01
The Large Hadron Collider (LHC)’ operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate a growth of storage needs of at least orders of magnitude; it will require new approaches in data storage organization and data handling. In our project we address the fundamental problem of designing of architecture to integrate a distributed heterogeneous disk resources for LHC experiments and other data- intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for Russian T1 and T2 centers located in Moscow, St.-Petersburg and Gatchina, as well as Russian / CERN federation. We have conducted extensive tests of underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including the ones running on supercomputing platform, cloud computing and Grid for ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment wide within National Academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bio-informatics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lampley, C.M.
1979-01-01
An updated version of the SKYSHINE Monte Carlo procedure has been developed. The new computer code, SKYSHINE-II, provides a substantial increase in versatility in that the program possesses the ability to address three types of point-isotropic radiation sources: (1) primary gamma rays, (2) neutrons, and (3) secondary gamma rays. In addition, the emitted radiation may now be characterized by an energy emission spectrum product of a new energy-dependent atmospheric transmission data base developed by Radiation Research Associates, Inc. for each of the three source types described above. Most of the computational options present in the original program have been retainedmore » in the new version. Hence, the SKYSHINE-II computer code provides a versatile and viable tool for the analysis of the radiation environment in the vicinity of a building structure containing radiation sources, situated within the confines of a nuclear power plant. This report describes many of the calculational methods employed within the SKYSHINE-II program. A brief description of the new data base is included. Utilization instructions for the program are provided for operation of the SKYSHINE-II code on the Brookhaven National Laboratory Central Scientific Computing Facility. A listing of the source decks, block data routines, and the new atmospheric transmission data base are provided in the appendices of the report.« less
Energy Efficiency Feasibility Study and Resulting Plan for the Bay Mills Indian Community
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kushman, Chris
In 2011 the Inter-Tribal Council of Michigan, Inc. was awarded an Energy Efficiency Development and Deployment in Indian Country grant from the U.S. Department of Energy’s Tribal Energy Program. This grant aimed to study select Bay Mills Indian Community community/government buildings to determine what is required to reduce each building’s energy consumption by 30%. The Bay Mills Indian Community (BMIC) buildings with the largest expected energy use were selected for this study and included the Bay Mills Ellen Marshall Health Center building, Bay Mills Indian Community Administration Building, Bay Mills Community College main campus, Bay Mills Charter School and themore » Waishkey Community Center buildings. These five sites are the largest energy consuming Community buildings and comprised the study area of this project titled “Energy Efficiency Feasibility Study and Resulting Plan for the Bay Mills Indian Community”. The end objective of this study, plan and the Tribe is to reduce the energy consumption at the Community’s most energy intensive buildings that will, in turn, reduce emissions at the source of energy production, reduce energy expenditures, create long lasting energy conscious practices and positively affect the quality of the natural environment. This project’s feasibility study and resulting plan is intended to act as a guide to the Community’s first step towards planned energy management within its buildings/facilities. It aims to reduce energy consumption by 30% or greater within the subject facilities with an emphasis on energy conservation and efficiency. The energy audits and related power consumption analyses conducted for this study revealed numerous significant energy conservation and efficiency opportunities for all of the subject sites/buildings. In addition, many of the energy conservation measures require no cost and serve to help balance other measures requiring capital investment. Reoccurring deficiencies relating to heating, cooling, thermostat setting inefficiencies, powering computers, lighting, items linked to weatherization and numerous other items were encountered that can be mitigated with the energy conservation measures developed and specified during the course of this project.« less
The UK Human Genome Mapping Project online computing service.
Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W
1992-04-01
This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.
Facility and Laboratory Equipment | Energy Systems Integration Facility |
Energy Systems Integration Facility is its infrastructure. In addition to extensive fixed laboratory . Photo of researchers testing building loads and power networks in the Systems Performance Laboratory
NASA Astrophysics Data System (ADS)
Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.
2016-10-01
The.LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility. Current approach utilizes modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for ALICE and ATLAS experiments and it is in full pro duction for the ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
Facilities | Photovoltaic Research | NREL
Centers (RTCs) The Department of Energy Regional Test Centers for solar technologies serve to validate PV development to provide foundational support for the photovoltaic (PV) industry and PV users. Photo of the Solar Research Energy Facility. Solar Energy Research Facility (SERF) The SERF houses various
Code of Federal Regulations, 2011 CFR
2011-07-01
... ENERGY MANAGEMENT, REGULATION, AND ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF Rights of Use and Easement for Energy- and Marine-Related Activities Using Existing OCS Facilities Decommissioning An Alternate Use Rue § 285...
10 CFR 1016.8 - Approval for processing access permittees for security facility approval.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Approval for processing access permittees for security facility approval. 1016.8 Section 1016.8 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.8 Approval for processing access permittees for security facility...
10 CFR 1016.8 - Approval for processing access permittees for security facility approval.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Approval for processing access permittees for security facility approval. 1016.8 Section 1016.8 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.8 Approval for processing access permittees for security facility...
Thermal Distribution System | Energy Systems Integration Facility | NREL
Thermal Distribution System Thermal Distribution System The Energy Systems Integration Facility's integrated thermal distribution system consists of a thermal water loop connected to a research boiler and . Photo of the roof of the Energy Systems Integration Facility. The thermal distribution bus allows
Code of Federal Regulations, 2014 CFR
2014-07-01
... facilities on my limited lease or any facilities on my project easement proposed under my GAP? 585.651 Section 585.651 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY AND ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF Plans and...
Code of Federal Regulations, 2013 CFR
2013-07-01
... facilities on my limited lease or any facilities on my project easement proposed under my GAP? 585.651 Section 585.651 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY AND ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF Plans and...
Code of Federal Regulations, 2012 CFR
2012-07-01
... facilities on my limited lease or any facilities on my project easement proposed under my GAP? 585.651 Section 585.651 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY AND ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF Plans and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-27
... that cascades number 1.5, 1.6, 1.7, 1.8, 2.1, and 2.4 as well as autoclave one of the facility have... 2.4 as well as autoclave one of the facility have been constructed in accordance with the... Facility Inspection Reports Regarding Louisiana Energy Services LLC, National Enrichment Facility, Eunice...
Analysis of energy conservation alternatives for standard Army building. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hittle, D.C.; O'Brien, R.E.; Percivall, G.S.
1983-03-01
This report describes energy conservation alternatives for five standard Army building designs. By surveying maps of major Army installations and using the Integrated Facilities System, the most popular designs were determined to be a two-company, rolling-pin-shaped barracks for enlisted personnel; a Type 64 barracks; a motor repair shop; a battalion headquarters and classroom building; and an enlisted personnel mess hall. The Building Loads Analysis and System Thermodynamics (BLAST) energy-analysis computer program was used to develop baseline energy consumption for each design based on the building descriptions and calibrated by comparison with the measured energy usage of similar buildings. Once themore » baseline was established, the BLAST program was used to study energy conservation alternatives (ECAs) which could be retrofit to the existing buildings. The ECAs included closing off air-handling units, adding storm windows, adding 2 in. (0.051 m) of exterior insulation to the walls, partially blocking the windows, adding roof insulation, putting up south overhangs, installing programmable thermostats, recovering heat from exhaust fans, installing temperature economizers, replacing lights, and installing partitions between areas of differing temperature.« less
Alcoa: Plant-Wide Energy Assessment Finds Potential Savings at Aluminum Extrusion Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2003-09-01
Alcoa completed an energy assessment of its Engineered Products aluminum extrusion facility in Plant City, Florida, in 2001. The company identified energy conservation opportunities throughout the plant and prepared a report as an example for performing energy assessments at similar Alcoa facilities. If implemented, the cost of energy for the plant would be reduced by more than $800,000 per year by conserving 3 million kWh of electricity and 150,000 MMBtu of natural gas.
The Nike Laser Facility and its Capabilities
NASA Astrophysics Data System (ADS)
Serlin, V.; Aglitskiy, Y.; Chan, L. Y.; Karasik, M.; Kehne, D. M.; Oh, J.; Obenschain, S. P.; Weaver, J. L.
2013-10-01
The Nike laser is a 56-beam krypton fluoride (KrF) system that provides 3 to 4 kJ of laser energy on target. The laser uses induced spatial incoherence to achieve highly uniform focal distributions. 44 beams are overlapped onto target with peak intensities up to 1016 W/cm2. The effective time-averaged illumination nonuniformity is < 0 . 2 %. Nike produces highly uniform ablation pressures on target allowing well-controlled experiments at pressures up to 20 Mbar. The other 12 laser beams are used to generate diagnostic x-rays radiographing the primary laser-illuminated target. The facility includes a front end that generates the desired temporal and spatial laser profiles, two electron-beam pumped KrF amplifiers, a computer-controlled optical system, and a vacuum target chamber for experiments. Nike is used to study the physics and technology issues of direct-drive laser fusion, such as, hydrodynamic and laser-plasma instabilities, studies of the response of materials to extreme pressures, and generation of X rays from laser-heated targets. Nike features a computer-controlled data acquisition system, high-speed, high-resolution x-ray and visible imaging systems, x-ray and visible spectrometers, and cryogenic target capability. Work supported by DOE/NNSA.
Dynamic Discharge Arc Driver. [computerized simulation
NASA Technical Reports Server (NTRS)
Dannenberg, R. E.; Slapnicar, P. I.
1975-01-01
A computer program using nonlinear RLC circuit analysis was developed to accurately model the electrical discharge performance of the Ames 1-MJ energy storage and arc-driver system. Solutions of circuit parameters are compared with experimental circuit data and related to shock speed measurements. Computer analysis led to the concept of a Dynamic Discharge Arc Driver (DDAD) capable of increasing the range of operation of shock-driven facilities. Utilization of mass addition of the driver gas offers a unique means of improving driver performance. Mass addition acts to increase the arc resistance, which results in better electrical circuit damping with more efficient Joule heating, producing stronger shock waves. Preliminary tests resulted in an increase in shock Mach number from 34 to 39 in air at an initial pressure of 2.5 torr.
Nuclear-Recoil Differential Cross Sections for the Two Photon Double Ionization of Helium
NASA Astrophysics Data System (ADS)
Abdel Naby, Shahin; Ciappina, M. F.; Lee, T. G.; Pindzola, M. S.; Colgan, J.
2013-05-01
In support of the reaction microscope measurements at the free-electron laser facility at Hamburg (FLASH), we use the time-dependent close-coupling method (TDCC) to calculate fully differential nuclear-recoil cross sections for the two-photon double ionization of He at photon energy of 44 eV. The total cross section for the double ionization is in good agreement with previous calculations. The nuclear-recoil distribution is in good agreement with the experimental measurements. In contrast to the single-photon double ionization, maximum nuclear recoil triple differential cross section is obtained at small nuclear momenta. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California and the National Institute for Computational Sciences in Knoxville, Tennessee.
Physics through the 1990s: Scientific interfaces and technological applications
NASA Technical Reports Server (NTRS)
1986-01-01
The volume examines the scientific interfaces and technological applications of physics. Twelve areas are dealt with: biological physics-biophysics, the brain, and theoretical biology; the physics-chemistry interface-instrumentation, surfaces, neutron and synchrotron radiation, polymers, organic electronic materials; materials science; geophysics-tectonics, the atmosphere and oceans, planets, drilling and seismic exploration, and remote sensing; computational physics-complex systems and applications in basic research; mathematics-field theory and chaos; microelectronics-integrated circuits, miniaturization, future trends; optical information technologies-fiber optics and photonics; instrumentation; physics applications to energy needs and the environment; national security-devices, weapons, and arms control; medical physics-radiology, ultrasonics, MNR, and photonics. An executive summary and many chapters contain recommendations regarding funding, education, industry participation, small-group university research and large facility programs, government agency programs, and computer database needs.
Energy Efficiency Strategies for Municipal Wastewater Treatment Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daw, J.; Hallett, K.; DeWolfe, J.
2012-01-01
Water and wastewater systems are significant energy consumers with an estimated 3%-4% of total U.S. electricity consumption used for the movement and treatment of water and wastewater. Water-energy issues are of growing importance in the context of water shortages, higher energy and material costs, and a changing climate. In this economic environment, it is in the best interest for utilities to find efficiencies, both in water and energy use. Performing energy audits at water and wastewater treatment facilities is one way community energy managers can identify opportunities to save money, energy, and water. In this paper the importance of energymore » use in wastewater facilities is illustrated by a case study of a process energy audit performed for Crested Butte, Colorado's wastewater treatment plant. The energy audit identified opportunities for significant energy savings by looking at power intensive unit processes such as influent pumping, aeration, ultraviolet disinfection, and solids handling. This case study presents best practices that can be readily adopted by facility managers in their pursuit of energy and financial savings in water and wastewater treatment. This paper is intended to improve community energy managers understanding of the role that the water and wastewater sector plays in a community's total energy consumption. The energy efficiency strategies described provide information on energy savings opportunities, which can be used as a basis for discussing energy management goals with water and wastewater treatment facility managers.« less
Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center
NASA Astrophysics Data System (ADS)
Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.
2012-12-01
Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM&MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.
Computer code for analyzing the performance of aquifer thermal energy storage systems
NASA Astrophysics Data System (ADS)
Vail, L. W.; Kincaid, C. T.; Kannberg, L. D.
1985-05-01
A code called Aquifer Thermal Energy Storage System Simulator (ATESSS) has been developed to analyze the operational performance of ATES systems. The ATESSS code provides an ability to examine the interrelationships among design specifications, general operational strategies, and unpredictable variations in the demand for energy. The uses of the code can vary the well field layout, heat exchanger size, and pumping/injection schedule. Unpredictable aspects of supply and demand may also be examined through the use of a stochastic model of selected system parameters. While employing a relatively simple model of the aquifer, the ATESSS code plays an important role in the design and operation of ATES facilities by augmenting experience provided by the relatively few field experiments and demonstration projects. ATESSS has been used to characterize the effect of different pumping/injection schedules on a hypothetical ATES system and to estimate the recovery at the St. Paul, Minnesota, field experiment.
Evaluation and Selection of Renewable Energy Technologies for Highway Maintenance Facilities
NASA Astrophysics Data System (ADS)
Andrews, Taylor
The interest in renewable energy has been increasing in recent years as attempts to reduce energy costs as well the consumption of fossil fuels are becoming more common. Companies and organizations are recognizing the increasing reliance on limited fossil fuels' resources, and as competition and costs for these resources grow, alternative solutions are becoming more appealing. Many federally run buildings and associations also have the added pressure of meeting the mandates of federal energy policies that dictate specific savings or reductions. Federal highway maintenance facilities run by the Department of Transportation fall into this category. To help meet energy saving goals, an investigation into potential renewable energy technologies was completed for the Ohio Department of Transportation. This research examined several types of renewable energy technologies and the major factors that affect their performance and evaluated their potential for implementation at highway maintenance facilities. Facilities energy usage data were provided, and a facility survey and site visits were completed to enhance the evaluation of technologies and the suitability for specific projects. Findings and technology recommendations were presented in the form of selection matrices, which were designed to help make selections in future projects. The benefits of utilization of other tools such as analysis software and life cycle assessments were also highlighted. These selection tools were designed to be helpful guides when beginning the pursuit of a renewable energy technology for highway maintenance facilities, and can be applied to other similar building types and projects. This document further discusses the research strategies and findings as well as the recommendations that were made to the personnel overseeing Ohio's highway maintenance facilities.
Facilities Upgrade and Retrofit. Strategies for Success.
ERIC Educational Resources Information Center
Kennedy, Mike
2000-01-01
Provides three articles on the subject of educational facility upgrading and retrofiting that address setting guidelines for classroom acoustics, making sports facilities brighter and more energy-efficient, and cutting energy bills and protecting interiors. (GR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron; Shank, James; Ernst, Michael
Under this SciDAC-2 grant the project’s goal w a s t o stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. Wemore » operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid), the European Grids for ESciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.« less
Energy Systems Sensor Laboratory | Energy Systems Integration Facility |
NREL Sensor Laboratory Energy Systems Sensor Laboratory The Energy Systems Integration Facility's Energy Systems Sensor Laboratory is designed to support research, development, testing, and evaluation of advanced hydrogen sensor technologies to support the needs of the emerging hydrogen
NASA Astrophysics Data System (ADS)
Khan, Md. Abdul
2014-09-01
In this paper, energies of the low-lying bound S-states (L = 0) of exotic three-body systems, consisting a nuclear core of charge +Ze (Z being atomic number of the core) and two negatively charged valence muons, have been calculated by hyperspherical harmonics expansion method (HHEM). The three-body Schrödinger equation is solved assuming purely Coulomb interaction among the binary pairs of the three-body systems XZ+μ-μ- for Z = 1 to 54. Convergence pattern of the energies have been checked with respect to the increasing number of partial waves Λmax. For available computer facilities, calculations are feasible up to Λmax = 28 partial waves, however, calculation for still higher partial waves have been achieved through an appropriate extrapolation scheme. The dependence of bound state energies has been checked against increasing nuclear charge Z and finally, the calculated energies have been compared with the ones of the literature.
Valuing uncertain cash flows from investments that enhance energy efficiency.
Abadie, Luis M; Chamorro, José M; González-Eguino, Mikel
2013-02-15
There is a broad consensus that investments to enhance energy efficiency quickly pay for themselves in lower energy bills and spared emission allowances. However, investments that at first glance seem worthwhile usually are not undertaken. One of the plausible, non-excluding explanations is the numerous uncertainties that these investments face. This paper deals with the optimal time to invest in an energy efficiency enhancement at a facility already in place that consumes huge amounts of a fossil fuel (coal) and operates under carbon constraints. We follow the Real Options approach. Our model comprises three sources of uncertainty following different stochastic processes which allows for application in a broad range of settings. We assess the investment option by means of a three-dimensional binomial lattice. We compute the trigger investment cost, i.e., the threshold level below which immediate investment would be optimal. We analyze the major drivers of this decision thus aiming at the most promising policies in this regard. Copyright © 2012 Elsevier Ltd. All rights reserved.
Visualizing Coolant Flow in Sodium Reactor Subassemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-01-01
Uniformity of temperature controls peak power output. Interchannel cross-flow is the principal cross-assembly energy transport mechanism. The areas of fastest flow all occur at the exterior of the assembly. Further, the fast moving region winds around the assembly in a continuous swath. This Nek5000 simulation uses an unstructured mesh with over one billion grid points, resulting in five billion degrees of freedom per time slice. High speed patches of turbulence due to vertex shedding downstream of the wires persist for about a quarter of the wire-wrap periodic length. Credits: Science: Paul Fisher and Aleks Obabko, Argonne National Laboratory. Visualization: Hankmore » Childs and Janet Jacobsen, Lawrence Berkeley National Laboratory. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Dept. of Energy under contract DE-AC02-06CH11357. This research was sponsored by the Department of Energy's Office of Nuclear Energy's NEAMS program.« less
Subscribe to the Energy Systems Integration Newsletter | Energy Systems
Integration Facility | NREL Subscribe to the Energy Systems Integration Newsletter Subscribe to the Energy Systems Integration Newsletter Subscribe to receive regular updates on what's happening at the Energy Systems Integration Facility and in energy systems integration research at NREL and around
Code of Federal Regulations, 2014 CFR
2014-04-01
... PRODUCTION AND COGENERATION Exemption of Qualifying Small Power Production Facilities and Cogeneration... small power production facility with a power production capacity over 30 megawatts if such facility produces electric energy solely by the use of biomass as a primary energy source. (b) Exemption from the...
Code of Federal Regulations, 2012 CFR
2012-04-01
... PRODUCTION AND COGENERATION Exemption of Qualifying Small Power Production Facilities and Cogeneration... small power production facility with a power production capacity over 30 megawatts if such facility produces electric energy solely by the use of biomass as a primary energy source. (b) Exemption from the...
Code of Federal Regulations, 2013 CFR
2013-04-01
... PRODUCTION AND COGENERATION Exemption of Qualifying Small Power Production Facilities and Cogeneration... small power production facility with a power production capacity over 30 megawatts if such facility produces electric energy solely by the use of biomass as a primary energy source. (b) Exemption from the...
NASA Astrophysics Data System (ADS)
Wang, Jianxiong
2014-06-01
This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013) which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in computer, Computing in Earth Sciences, multivariate data analysis, automated computation in Quantum Field Theory as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open-source, knowledge sharing and scientific collaboration stimulate us to think over the issue in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), National Natural Science Foundation of China (NFSC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), Theoretical Physics Cernter for Science facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for the en- thusiastic participation in all its activities of the workshop. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang Institute of High Energy Physics Chinese Academy of Science Details of committees and sponsors are available in the PDF
Ethics and the 7 `P`s` of computer use policies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, T.J.; Voss, R.B.
1994-12-31
A Computer Use Policy (CUP) defines who can use the computer facilities for what. The CUP is the institution`s official position on the ethical use of computer facilities. The authors believe that writing a CUP provides an ideal platform to develop a group ethic for computer users. In prior research, the authors have developed a seven phase model for writing CUPs, entitled the 7 P`s of Computer Use Policies. The purpose of this paper is to present the model and discuss how the 7 P`s can be used to identify and communicate a group ethic for the institution`s computer users.
National resource for computation in chemistry, phase I: evaluation and recommendations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1980-05-01
The National Resource for Computation in Chemistry (NRCC) was inaugurated at the Lawrence Berkeley Laboratory (LBL) in October 1977, with joint funding by the Department of Energy (DOE) and the National Science Foundation (NSF). The chief activities of the NRCC include: assembling a staff of eight postdoctoral computational chemists, establishing an office complex at LBL, purchasing a midi-computer and graphics display system, administering grants of computer time, conducting nine workshops in selected areas of computational chemistry, compiling a library of computer programs with adaptations and improvements, initiating a software distribution system, providing user assistance and consultation on request. This reportmore » presents assessments and recommendations of an Ad Hoc Review Committee appointed by the DOE and NSF in January 1980. The recommendations are that NRCC should: (1) not fund grants for computing time or research but leave that to the relevant agencies, (2) continue the Workshop Program in a mode similar to Phase I, (3) abandon in-house program development and establish instead a competitive external postdoctoral program in chemistry software development administered by the Policy Board and Director, and (4) not attempt a software distribution system (leaving that function to the QCPE). Furthermore, (5) DOE should continue to make its computational facilities available to outside users (at normal cost rates) and should find some way to allow the chemical community to gain occasional access to a CRAY-level computer.« less
Transportation Energy - Sandia Energy
; Components Compatibility Hydrogen Behavior Quantitative Risk Assessment Technical Reference for Hydrogen Combustion jbei Facilities Algae Testbed Battery Abuse Testing Laboratory Center for Infrastructure Research and Innovation Combustion Research Facility Joint BioEnergy Institute Close Energy Research Programs
Haselden/RNL - Research Support Facility Documentary
Haselden, Byron; Baker, Jeff; Glover, Bill; von Luhrte, Rich; Randock, Craig; Andary, John; Macey, Philip; Okada, David
2017-12-12
The US Department of Energy's (DOE) Research Support Facility (RSF) on the campus of the National Renewable Energy Laboratory is positioned to be one of the most energy efficient buildings in the world. It will demonstrate NREL's role in moving advanced technologies and transferring knowledge into commercial applications. Because 19 percent of the country's energy is used by commercial buildings, DOE plans to make this facility a showcase for energy efficiency. DOE hopes the design of the RSF will be replicated by the building industry and help reduce the nation's energy consumption by changing the way commercial buildings are designed and built.
Industrial Facility Combustion Energy Use
McMillan, Colin
2016-08-01
Facility-level industrial combustion energy use is calculated from greenhouse gas emissions data reported by large emitters (>25,000 metric tons CO2e per year) under the U.S. EPA's Greenhouse Gas Reporting Program (GHGRP, https://www.epa.gov/ghgreporting). The calculation applies EPA default emissions factors to reported fuel use by fuel type. Additional facility information is included with calculated combustion energy values, such as industry type (six-digit NAICS code), location (lat, long, zip code, county, and state), combustion unit type, and combustion unit name. Further identification of combustion energy use is provided by calculating energy end use (e.g., conventional boiler use, co-generation/CHP use, process heating, other facility support) by manufacturing NAICS code. Manufacturing facilities are matched by their NAICS code and reported fuel type with the proportion of combustion fuel energy for each end use category identified in the 2010 Energy Information Administration Manufacturing Energy Consumption Survey (MECS, http://www.eia.gov/consumption/manufacturing/data/2010/). MECS data are adjusted to account for data that were withheld or whose end use was unspecified following the procedure described in Fox, Don B., Daniel Sutter, and Jefferson W. Tester. 2011. The Thermal Spectrum of Low-Temperature Energy Use in the United States, NY: Cornell Energy Institute.
Energy Conscious Design: Educational Facilities. [Brief No.] 1.
ERIC Educational Resources Information Center
American Inst. of Architects, Washington, DC.
An energy task group of the American Institute of Architects discusses design features and options that educational facility designers can use to create an energy efficient school building. Design elements cover the building envelope, energy storage system, hydronic heating/cooling systems, solar energy collection, building orientation and shape,…
Energy Systems Integration News | Energy Systems Integration Facility |
NREL News Energy Systems Integration News A monthly recap of the latest happenings at the Energy Systems Integration Facility and developments in energy systems integration (ESI) research at NREL ; said Vahan Gevorgian, chief engineer with NREL's Power Systems Engineering Center. "Results of
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pittman, Jeffery P.; Cassidy, Stephen R.; Mosey, Whitney LC
2013-07-31
Pacific Northwest National Laboratory (PNNL) and the Pacific Northwest Site Office (PNSO) have recently completed an effort to identify the current state of the campus and gaps that exist with regards to space needs, facilities and infrastructure. This effort has been used to establish a campus strategy to ensure PNNL is ready to further the United States (U.S.) Department of Energy (DOE) mission. Ten-year business projections and the impacts on space needs were assessed and incorporated into the long-term facility plans. In identifying/quantifying the space needs for PNNL, the following categories were addressed: Multi-purpose Programmatic (wet chemistry and imaging laboratorymore » space), Strategic (Systems Engineering and Computation Analytics, and Collaboration space), Remediation (space to offset the loss of the Research Technology Laboratory [RTL] Complex due to decontamination and demolition), and Optimization (the exit of older and less cost-effective facilities). The findings of the space assessment indicate a need for wet chemistry space, imaging space, and strategic space needs associated with systems engineering and collaboration space.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2003-09-01
Alcoa completed an energy assessment of its Engineered Products aluminum extrusion facility in Plant City, Florida, in 2001. The company identified energy conservation opportunities throughout the plant and prepared a report as an example for performing energy assessments at similar Alcoa facilities. If implemented, the cost of energy for the plant would be reduced by more than$800,000 per year by conserving 3 million kWh of electricity and 150,000 MMBtu of natural gas.
ESIF Call for High-Impact Integrated Projects | Energy Systems Integration
Integrated Projects As a U.S. Department of Energy user facility, the Energy Systems Integration Facility concepts, tools, and technologies needed to measure, analyze, predict, protect, and control the grid of the Facility | NREL ESIF Call for High-Impact Integrated Projects ESIF Call for High-Impact
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false How does DOE notify persons and entities that defense nuclear facility real property is available for transfer for economic development? 770.5 Section 770.5 Energy DEPARTMENT OF ENERGY TRANSFER OF REAL PROPERTY AT DEFENSE NUCLEAR FACILITIES FOR ECONOMIC...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false May interested persons and entities request that real property at defense nuclear facilities be transferred for economic development? 770.6 Section 770.6 Energy DEPARTMENT OF ENERGY TRANSFER OF REAL PROPERTY AT DEFENSE NUCLEAR FACILITIES FOR ECONOMIC DEVELOPMENT § 770.6...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false May DOE transfer real property at defense nuclear facilities for economic development at less than fair market value? 770.8 Section 770.8 Energy DEPARTMENT OF ENERGY TRANSFER OF REAL PROPERTY AT DEFENSE NUCLEAR FACILITIES FOR ECONOMIC DEVELOPMENT § 770.8 May DOE...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false How does DOE notify persons and entities that defense nuclear facility real property is available for transfer for economic development? 770.5 Section 770.5 Energy DEPARTMENT OF ENERGY TRANSFER OF REAL PROPERTY AT DEFENSE NUCLEAR FACILITIES FOR ECONOMIC...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false What procedures are to be used to transfer real property at defense nuclear facilities for economic development? 770.7 Section 770.7 Energy DEPARTMENT OF ENERGY TRANSFER OF REAL PROPERTY AT DEFENSE NUCLEAR FACILITIES FOR ECONOMIC DEVELOPMENT § 770.7 What...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false May DOE transfer real property at defense nuclear facilities for economic development at less than fair market value? 770.8 Section 770.8 Energy DEPARTMENT OF ENERGY TRANSFER OF REAL PROPERTY AT DEFENSE NUCLEAR FACILITIES FOR ECONOMIC DEVELOPMENT § 770.8 May DOE...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false May interested persons and entities request that real property at defense nuclear facilities be transferred for economic development? 770.6 Section 770.6 Energy DEPARTMENT OF ENERGY TRANSFER OF REAL PROPERTY AT DEFENSE NUCLEAR FACILITIES FOR ECONOMIC DEVELOPMENT § 770.6...
10 CFR 770.1 - What is the purpose of this part?
Code of Federal Regulations, 2013 CFR
2013-01-01
... Energy DEPARTMENT OF ENERGY TRANSFER OF REAL PROPERTY AT DEFENSE NUCLEAR FACILITIES FOR ECONOMIC... or lease real property at defense nuclear facilities for economic development. (b) This part also... DOE activities at the defense nuclear facility. ...
Evaluation of renewable energy alternatives for highway maintenance facilities.
DOT National Transportation Integrated Search
2013-12-01
A considerable annual energy budget is used for heating, lighting, cooling and operating ODOT : maintenance facilities. Such facilities contain vehicle repair and garage bays, which are large open : spaces with high heating demand in winter. The main...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Brian; Gutowska, Izabela; Chiger, Howard
Computer simulations of nuclear reactor thermal-hydraulic phenomena are often used in the design and licensing of nuclear reactor systems. In order to assess the accuracy of these computer simulations, computer codes and methods are often validated against experimental data. This experimental data must be of sufficiently high quality in order to conduct a robust validation exercise. In addition, this experimental data is generally collected at experimental facilities that are of a smaller scale than the reactor systems that are being simulated due to cost considerations. Therefore, smaller scale test facilities must be designed and constructed in such a fashion tomore » ensure that the prototypical behavior of a particular nuclear reactor system is preserved. The work completed through this project has resulted in scaling analyses and conceptual design development for a test facility capable of collecting code validation data for the following high temperature gas reactor systems and events— 1. Passive natural circulation core cooling system, 2. pebble bed gas reactor concept, 3. General Atomics Energy Multiplier Module reactor, and 4. prismatic block design steam-water ingress event. In the event that code validation data for these systems or events is needed in the future, significant progress in the design of an appropriate integral-type test facility has already been completed as a result of this project. Where applicable, the next step would be to begin the detailed design development and material procurement. As part of this project applicable scaling analyses were completed and test facility design requirements developed. Conceptual designs were developed for the implementation of these design requirements at the Oregon State University (OSU) High Temperature Test Facility (HTTF). The original HTTF is based on a ¼-scale model of a high temperature gas reactor concept with the capability for both forced and natural circulation flow through a prismatic core with an electrical heat source. The peak core region temperature capability is 1400°C. As part of this project, an inventory of test facilities that could be used for these experimental programs was completed. Several of these facilities showed some promise, however, upon further investigation it became clear that only the OSU HTTF had the power and/or peak temperature limits that would allow for the experimental programs envisioned herein. Thus the conceptual design and feasibility study development focused on examining the feasibility of configuring the current HTTF to collect validation data for these experimental programs. In addition to the scaling analyses and conceptual design development, a test plan was developed for the envisioned modified test facility. This test plan included a discussion on an appropriate shakedown test program as well as the specific matrix tests. Finally, a feasibility study was completed to determine the cost and schedule considerations that would be important to any test program developed to investigate these designs and events.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mason, John A.; Looman, Marc R.; Poundall, Adam J.
2013-07-01
This paper describes the measurements, testing and performance validation of a sensitive gamma ray camera designed for radiation detection and quantification in the environment and decommissioning and hold-up measurements in nuclear facilities. The instrument, which is known as RadSearch, combines a sensitive and highly collimated LaBr{sub 3} scintillation detector with an optical (video) camera with controllable zoom and focus and a laser range finder in one detector head. The LaBr{sub 3} detector has a typical energy resolution of between 2.5% and 3% at the 662 keV energy of Cs-137 compared to that of NaI detectors with a resolution of typicallymore » 7% to 8% at the same energy. At this energy the tungsten shielding of the detector provides a shielding ratio of greater than 900:1 in the forward direction and 100:1 on the sides and from the rear. The detector head is mounted on a pan/tile mechanism with a range of motion of ±180 degrees (pan) and ±90 degrees (tilt) equivalent to 4 π steradians. The detector head with pan/tilt is normally mounted on a tripod or wheeled cart. It can also be mounted on vehicles or a mobile robot for access to high dose-rate areas and areas with high levels of contamination. Ethernet connects RadSearch to a ruggedized notebook computer from which it is operated and controlled. Power can be supplied either as 24-volts DC from a battery or as 50 volts DC supplied by a small mains (110 or 230 VAC) power supply unit that is co-located with the controlling notebook computer. In this latter case both power and Ethernet are supplied through a single cable that can be up to 80 metres in length. If a local battery supplies power, the unit can be controlled through wireless Ethernet. Both manual operation and automatic scanning of surfaces and objects is available through the software interface on the notebook computer. For each scan element making up a part of an overall scanned area, the unit measures a gamma ray spectrum. Multiple radionuclides may be selected by the operator and will be identified if present. In scanning operation the unit scans a designated region and superimposes over a video image the distribution of measured radioactivity. For the total scanned area or object RadSearch determines the total activity of operator selected radionuclides present and the gamma dose-rate measured at the detector head. Results of hold-up measurements made in a nuclear facility are presented, as are test measurements of point sources distributed arbitrarily on surfaces. These latter results are compared with the results of benchmarked MCNP Monte Carlo calculations. The use of the device for hold-up and decommissioning measurements is validated. (authors)« less
Active Oxygen Vacancy Site for Methanol Synthesis from CO2 Hydrogenation on In2O3(110): A DFT Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Jingyun; Liu, Changjun; Mei, Donghai
2013-06-03
Methanol synthesis from CO2 hydrogenation on the defective In2O3(110) surface with surface oxygen vacancies has been investigated using periodic density functional theory calculations. The relative stabilities of six possible surface oxygen vacancies numbered from Ov1 to Ov6 on the perfect In2O3(110) surface were examined. The calculated oxygen vacancy formation energies show that the D1 surface with the Ov1 defective site is the most thermodynamically favorable while the D4 surface with the Ov4 defective site is the least stable. Two different methanol synthesis routes from CO2 hydrogenation over both D1 and D4 surfaces were studied and the D4 surface was foundmore » to be more favorable for CO2 activation and hydrogenation. On the D4 surface, one of the O atoms of the CO2 molecule fills in the Ov4 site upon adsorption. Hydrogenation of CO2 to HCOO on the D4 surface is both thermodynamically and kinetically favorable. Further hydrogenation of HCOO involves both forming the C-H bond and breaking the C-O bond, resulting in H2CO and hydroxyl. The HCOO hydrogenation is slightly endothermic with an activation barrier of 0.57 eV. A high barrier of 1.14 eV for the hydrogenation of H2CO to H3CO indicates that this step is the rate-limiting step in the methanol synthesis on the defective In2O3(110) surface. We gratefully acknowledge the supports from the National Natural Science Foundation of China (#20990223) and from US Department of Energy, Basic Energy Science program (DE-FG02-05ER46231). D. Mei was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. The computations were performed in part using the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), which is a U.S. Department of Energy national scientific user facility located at Pacific Northwest National Laboratory in Richland, Washington. PNNL is a multiprogram national laboratory operated for DOE by Battelle.« less
Energy Fact Sheets - Sandia Energy
; Components Compatibility Hydrogen Behavior Quantitative Risk Assessment Technical Reference for Hydrogen Combustion jbei Facilities Algae Testbed Battery Abuse Testing Laboratory Center for Infrastructure Research and Innovation Combustion Research Facility Joint BioEnergy Institute Close Energy Research Programs
Expanding the Scope of High-Performance Computing Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uram, Thomas D.; Papka, Michael E.
The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.
Managing Energy in Your Educational Facility.
ERIC Educational Resources Information Center
2001
This booklet explains how to develop and implement a plan to manage energy in educational facilities. It can be used to identify energy savings opportunities and implement a plan to reduce energy costs. It discusses the following steps for creating an effective energy-use plan: (1) get started and organize for success; (2) look at energy use and…
Code of Federal Regulations, 2013 CFR
2013-01-01
... property at defense nuclear facilities be transferred for economic development? 770.6 Section 770.6 Energy DEPARTMENT OF ENERGY TRANSFER OF REAL PROPERTY AT DEFENSE NUCLEAR FACILITIES FOR ECONOMIC DEVELOPMENT § 770.6 May interested persons and entities request that real property at defense nuclear facilities be...
Code of Federal Regulations, 2014 CFR
2014-01-01
... property at defense nuclear facilities be transferred for economic development? 770.6 Section 770.6 Energy DEPARTMENT OF ENERGY TRANSFER OF REAL PROPERTY AT DEFENSE NUCLEAR FACILITIES FOR ECONOMIC DEVELOPMENT § 770.6 May interested persons and entities request that real property at defense nuclear facilities be...
Final Design Report for the RH LLW Disposal Facility (RDF) Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Austad, Stephanie Lee
2015-09-01
The RH LLW Disposal Facility (RDF) Project was designed by AREVA Federal Services (AFS) and the design process was managed by Battelle Energy Alliance (BEA) for the Department of Energy (DOE). The final design report for the RH LLW Disposal Facility Project is a compilation of the documents and deliverables included in the facility final design.
Final Design Report for the RH LLW Disposal Facility (RDF) Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Austad, S. L.
2015-05-01
The RH LLW Disposal Facility (RDF) Project was designed by AREVA Federal Services (AFS) and the design process was managed by Battelle Energy Alliance (BEA) for the Department of Energy (DOE). The final design report for the RH LLW Disposal Facility Project is a compilation of the documents and deliverables included in the facility final design.
Asah, Flora
2013-04-01
This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural healthcare facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula. Computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be lack of information technology infrastructure, restricted access to computers and deficits in regard to the technical and nursing management support. The physical location of computers within the health-care facilities and lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.
High resolution wind turbine wake measurements with a scanning lidar
NASA Astrophysics Data System (ADS)
Herges, T. G.; Maniaci, D. C.; Naughton, B. T.; Mikkelsen, T.; Sjöholm, M.
2017-05-01
High-resolution lidar wake measurements are part of an ongoing field campaign being conducted at the Scaled Wind Farm Technology facility by Sandia National Laboratories and the National Renewable Energy Laboratory using a customized scanning lidar from the Technical University of Denmark. One of the primary objectives is to collect experimental data to improve the predictive capability of wind plant computational models to represent the response of the turbine wake to varying inflow conditions and turbine operating states. The present work summarizes the experimental setup and illustrates several wake measurement example cases. The cases focus on demonstrating the impact of the atmospheric conditions on the wake shape and position, and exhibit a sample of the data that has been made public through the Department of Energy Atmosphere to Electrons Data Archive and Portal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theys, M.
1994-05-06
Beamlet is a high power laser currently being built at Lawrence Livermore National Lab as a proof of concept for the National Ignition Facility (NIF). Beamlet is testing several areas of laser advancements, such as a 37cm Pockels cell, square amplifier, and propagation of a square beam. The diagnostics on beamlet tell the operators how much energy the beam has in different locations, the pulse shape, the energy distribution, and other important information regarding the beam. This information is being used to evaluate new amplifier designs, and extrapolate performance to the NIF laser. In my term at Lawrence Livermore Nationalmore » Laboratory I have designed and built a diagnostic, calibrated instruments used on diagnostics, setup instruments, hooked up communication lines to the instruments, and setup computers to control specific diagnostics.« less
Sandia Technology engineering and science accomplishments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report briefly discusses the following research being conducted at Sandia Laboratories: Advanced Manufacturing -- Sandia technology helps keep US industry in the lead; Microelectronics-Sandia`s unique facilities transform research advances into manufacturable products; Energy -- Sandia`s energy programs focus on strengthening industrial growth and political decisionmaking; Environment -- Sandia is a leader in environmentally conscious manufacturing and hazardous waste reduction; Health Care -- New biomedical technologies help reduce cost and improve quality of health care; Information & Computation -- Sandia aims to help make the information age a reality; Transportation -- This new initiative at the Labs will help improvemore » transportation, safety,l efficiency, and economy; Nonproliferation -- Dismantlement and arms control are major areas of emphasis at Sandia; and Awards and Patents -- Talented, dedicated employees are the backbone of Sandia`s success.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharrati, Hedi
2005-05-01
In this study, a new approach has been introduced for derivation of the effective dose from air kerma to calculate shielding requirements in mammography facilities. This new approach has been used to compute the conversion coefficients relating air kerma to the effective dose for the mammography reference beam series of the Netherlands Metrology Institute Van Swinden Laboratorium, National Institute of Standards and Technology, and International Atomic Energy Agency laboratories. The results show that, in all cases, the effective dose in mammography energy range is less than 25% of the incident air kerma for the primary and the scatter radiations andmore » does not exceed 75% for the leakage radiation.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The High Ranking Facilities Deactivation Project (HRFDP), commissioned by the US Department of Energy Nuclear Materials and Facility Stabilization Program, is to place four primary high-risk surplus facilities with 28 associated ancillary facilities at Oak Ridge National Laboratory in a safe, stable, and environmentally sound condition as rapidly and economically as possible. The facilities will be deactivated and left in a condition suitable for an extended period of minimized surveillance and maintenance (S and M) prior to decontaminating and decommissioning (D and D). These four facilities include two reactor facilities containing spent fuel. One of these reactor facilities also containsmore » 55 tons of sodium with approximately 34 tons containing activated sodium-22, 2.5 tons of lithium hydride, approximately 100 tons of potentially contaminated lead, and several other hazardous materials as well as bulk quantities of contaminated scrap metals. The other two facilities to be transferred include a facility with a bank of hot cells containing high levels of transferable contamination and also a facility containing significant quantities of uranyl nitrate and quantities of transferable contamination. This work plan documents the objectives, technical requirements, and detailed work plans--including preliminary schedules, milestones, and conceptual FY 1996 cost estimates--for the Oak Ridge National Laboratory (ORNL). This plan has been developed by the Environmental Restoration (ER) Program of Lockheed Martin Energy Systems (Energy Systems) for the US Department of Energy (DOE) Oak Ridge Operations Office (ORO).« less