Energy and Environment Guide to Action - Chapter 4.3: Building Codes for Energy Efficiency
Provides guidance and recommendations for establishing, implementing, and evaluating state building energy codes, which improve energy efficiency in new construction and major renovations. State success stories are included for reference.
75 FR 20833 - Building Energy Codes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-21
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket No. EERE-2010-BT-BC-0012] Building Energy Codes AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Request for Information. SUMMARY: The U.S. Department of Energy (DOE) is soliciting...
Energy Efficiency Program Administrators and Building Energy Codes
Explore how energy efficiency program administrators have helped advance building energy codes at federal, state, and local levels—using technical, institutional, financial, and other resources—and discusses potential next steps.
Residential Building Energy Code Field Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Bartlett, M. Halverson, V. Mendon, J. Hathaway, Y. Xie
This document presents a methodology for assessing baseline energy efficiency in new single-family residential buildings and quantifying related savings potential. The approach was developed by Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE) Building Energy Codes Program to assist states as they assess energy efficiency in residential buildings and the implementation of their building energy codes, and to target areas for improvement through energy codes and broader energy-efficiency programs. It is also intended to facilitate a consistent and replicable approach to research studies of this type and to establish a transparent data set representing baseline construction practices across U.S. states.
Overcoming Codes and Standards Barriers to Innovations in Building Energy Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Pamala C.; Gilbride, Theresa L.
2015-02-15
In this journal article, the authors discuss approaches to overcoming building code barriers to energy-efficiency innovations in home construction. Building codes have been a highly motivational force for increasing the energy efficiency of new homes in the United States in recent years. But as quickly as the codes change, new products come to market at an even faster pace, sometimes offering approaches and construction techniques unthought of when the current code was first proposed, which might have been several years before its adoption by various jurisdictions. Because of this delay, the codes themselves can become barriers to innovations that might otherwise help to further increase the efficiency, comfort, health, or durability of new homes. The U.S. Department of Energy's Building America program, dedicated to improving the energy efficiency of America's housing stock through research and education, is working with the U.S. housing industry through its research teams to help builders identify and remove code barriers to innovation in the home construction industry. The article addresses several approaches that builders use to gain approval for innovative building techniques when code barriers appear to exist.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in a wireless sensor network (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis, and BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller, and that lower LDPC encoding rates offer better error characteristics.
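The abstract's energy argument (spatial diversity lets a cooperative link hit a target BER at a much lower SNR, and the gap widens as the target p(b) shrinks) can be illustrated with the standard high-SNR approximation BER ≈ c/SNR^d. The sketch below is a toy comparison, not the paper's model; the constant c and the diversity orders are assumptions.

```python
import numpy as np

# Toy energy comparison (illustrative, not the paper's model): required SNR
# for a target BER under fading, using the high-SNR approximation
# BER ~ c / SNR^d with diversity order d = 1 (SISO) vs d = 4 (2x2 C-MIMO).
def required_snr_db(target_ber, diversity, c=0.25):
    return 10 * np.log10((c / target_ber) ** (1.0 / diversity))

for pb in (1e-3, 1e-4, 1e-5):
    siso = required_snr_db(pb, diversity=1)
    cmimo = required_snr_db(pb, diversity=4)
    print(f"target p(b)={pb:.0e}: SISO {siso:5.1f} dB, "
          f"C-MIMO {cmimo:4.1f} dB, saving {siso - cmimo:4.1f} dB")
```

The transmit-energy saving grows as the target error rate tightens, which is the qualitative effect the authors report; a full comparison would also charge the cooperating nodes' extra circuit energy against that saving.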
National Cost-effectiveness of ASHRAE Standard 90.1-2010 Compared to ASHRAE Standard 90.1-2007
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thornton, Brian; Halverson, Mark A.; Myer, Michael
Pacific Northwest National Laboratory (PNNL) completed this project for the U.S. Department of Energy's (DOE's) Building Energy Codes Program (BECP). DOE's BECP supports upgrading building energy codes and standards, and the states' adoption, implementation, and enforcement of upgraded codes and standards. Building energy codes and standards set minimum requirements for energy-efficient design and construction for new and renovated buildings, and impact energy use and greenhouse gas emissions for the life of buildings. Continuous improvement of building energy efficiency is achieved by periodically upgrading energy codes and standards. Ensuring that changes in the code that may alter costs (for building components, initial purchase and installation, replacement, maintenance, and energy) are cost-effective encourages their acceptance and implementation. ANSI/ASHRAE/IESNA Standard 90.1 is the energy standard for commercial and multi-family residential buildings over three floors.
Building Energy Codes: Policy Overview and Good Practices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, Sadie
2016-02-19
Globally, 32% of total final energy consumption is attributed to the building sector. To reduce energy consumption, energy codes set minimum energy efficiency standards for the building sector. With effective implementation, building energy codes can support energy cost savings and complementary benefits associated with electricity reliability, air quality improvement, greenhouse gas emission reduction, increased comfort, and economic and social development. This policy brief seeks to support building code policymakers and implementers in designing effective building code programs.
A long-term, integrated impact assessment of alternative building energy code scenarios in China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Sha; Eom, Jiyong; Evans, Meredydd
2014-04-01
China is the second largest building energy user in the world, ranking first and third in residential and commercial energy consumption, respectively. Beginning in the early 1980s, the Chinese government has developed a variety of building energy codes to improve building energy efficiency and reduce total energy demand. This paper studies the impact of building energy codes on energy use and CO2 emissions using a detailed building energy model that represents four distinct climate zones, each with three building types, nested in the long-term integrated assessment framework GCAM. An advanced building stock module, coupled with the building energy model, is developed to reflect the characteristics of future building stock and its interaction with the development of building energy codes in China. This paper also evaluates the impacts of building codes on building energy demand in the presence of an economy-wide carbon policy. We find that building energy codes would reduce Chinese building energy use by 13%-22%, depending on the building code scenario, with a similar effect preserved even under the carbon policy. The impact of building energy codes shows regional and sectoral variation due to regionally differentiated responses of heating and cooling services to shell efficiency improvement.
Country Report on Building Energy Codes in Australia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shui, Bin; Evans, Meredydd; Somasundaram, Sriram
2009-04-02
This report is part of a series of reports on building energy efficiency codes in countries associated with the Asia-Pacific Partnership (APP): Australia, South Korea, Japan, China, India, and the United States of America (U.S.). This report gives an overview of the development of building energy codes in Australia, including national energy policies related to building energy codes, the history of building energy codes, and recent national projects and activities to promote building energy codes. The report also provides a review of current building energy code provisions (such as building envelope, HVAC, and lighting) for commercial and residential buildings in Australia.
Increasing Flexibility in Energy Code Compliance: Performance Packages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Rosenberg, Michael I.
Energy codes and standards have provided significant increases in building efficiency over the last 38 years, since the first national energy code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. As the code matures, the prescriptive path becomes more complicated and more restrictive. It is likely that an approach that considers the building as an integrated system will be necessary to achieve the next real gains in building efficiency. Performance code paths are increasing in popularity; however, there remains significant design team overhead in following the performance path, especially for smaller buildings. This paper focuses on the development of one alternative format, prescriptive packages. A method to develop building-specific prescriptive packages is reviewed, based on multiple runs of prototypical building models used in a parametric decision analysis to determine a set of packages with equivalent energy performance. The approach is designed to be cost-effective and flexible for the design team while achieving a desired level of energy efficiency performance. A demonstration of the approach based on mid-sized office buildings with two HVAC system types is shown, along with a discussion of potential applicability in the energy code process.
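The package-building method described above (parametric runs over prototype models, then grouping measure combinations with equivalent modeled performance) can be sketched compactly. Everything in this sketch is a hypothetical stand-in: the measure names, the savings percentages, and the additive-savings "model" in place of real simulation runs.

```python
import itertools

# Enumerate measure combinations, score each with a stand-in energy model,
# and keep every package whose modeled EUI meets the target; all retained
# packages are "equivalent" compliance options for the design team.
measures = {                              # measure -> {option: % EUI savings}
    "wall_insulation": {"code-min": 0.0, "enhanced": 4.0},
    "glazing":         {"code-min": 0.0, "low-SHGC": 3.0},
    "lighting":        {"code-min": 0.0, "reduced-LPD": 6.0},
    "hvac":            {"code-min": 0.0, "high-COP": 5.0},
}
BASELINE_EUI, TARGET_EUI = 100.0, 92.0    # stand-ins for prototype-model runs

names = list(measures)
for options in itertools.product(*(measures[n].items() for n in names)):
    savings = sum(pct for _, pct in options)
    if BASELINE_EUI * (1 - savings / 100) <= TARGET_EUI:
        print({n: opt for n, (opt, _) in zip(names, options)})
```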
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Sha; Tan, Qing; Evans, Meredydd
India is expected to add 40 billion m2 of new buildings by 2050. Buildings are responsible for one third of India's total energy consumption today, and building energy use is expected to continue growing, driven by rapid income and population growth. Implementation of the Energy Conservation Building Code (ECBC) is one measure to improve building energy efficiency. Using the Global Change Assessment Model, this study assesses growth in the buildings sector and the impacts of building energy policies in Gujarat, which would help the state adopt ECBC and expand building energy efficiency programs. Without building energy policies, building energy use in Gujarat would grow 15-fold in commercial buildings and 4-fold in urban residential buildings between 2010 and 2050. ECBC improves energy efficiency in commercial buildings and could reduce building electricity use in Gujarat by 20% in 2050, compared to the no-policy scenario. Having energy codes for both commercial and residential buildings could result in an additional 10% savings in electricity use. To achieve these intended savings, it is critical to build capacity and institutions for robust code implementation.
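As a quick check on the scenario arithmetic reported in this abstract, the 2010-2050 multipliers imply the following average annual growth rates (the compound-growth reading is an interpretation, not stated in the abstract):

```python
# 15-fold commercial growth and 4-fold urban residential growth over the
# 40 years from 2010 to 2050, converted to implied average annual rates.
for sector, multiple in (("commercial", 15), ("urban residential", 4)):
    cagr = multiple ** (1 / 40) - 1
    print(f"{sector}: {multiple}x over 40 years ~ {cagr:.1%} per year")
```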
Country Report on Building Energy Codes in Canada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shui, Bin; Evans, Meredydd
2009-04-06
This report is part of a series of reports on building energy efficiency codes in countries associated with the Asia-Pacific Partnership (APP): Australia, South Korea, Japan, China, India, and the United States of America. This report gives an overview of the development of building energy codes in Canada, including national energy policies related to building energy codes, the history of building energy codes, and recent national projects and activities to promote building energy codes. The report also provides a review of current building energy code provisions (such as building envelope, HVAC, lighting, and water heating) for commercial and residential buildings in Canada.
Alternative Formats to Achieve More Efficient Energy Codes for Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, David R.; Rosenberg, Michael I.; Halverson, Mark A.
2013-01-26
This paper identifies and examines several formats or structures that could be used to create the next generation of more efficient energy codes and standards for commercial buildings. Pacific Northwest National Laboratory (PNNL) is funded by the U.S. Department of Energy's Building Energy Codes Program (BECP) to provide technical support to the development of ANSI/ASHRAE/IES Standard 90.1. While the majority of PNNL's ASHRAE Standard 90.1 support focuses on developing and evaluating new requirements, a portion of its work involves consideration of the format of energy standards. In its current working plan, the ASHRAE 90.1 committee has approved an energy goal of 50% improvement in Standard 90.1-2013 relative to Standard 90.1-2004, and will likely consider higher improvement targets for future versions of the standard. To cost-effectively achieve the 50% goal in a manner that can gain stakeholder consensus, formats other than prescriptive must be considered. Alternative formats that reduce the reliance on prescriptive requirements may make it easier to achieve these aggressive efficiency levels in new codes and standards. The focus on energy code and standard formats is meant to explore approaches to presenting the criteria that will foster compliance, enhance verification, and stimulate innovation while saving energy in buildings. New formats may also make it easier for building designers and owners to design and build to the levels of efficiency called for in the new codes and standards. This paper examines a number of potential formats and structures, including prescriptive, performance-based (with sub-formats of performance equivalency and performance targets), capacity constraint-based, and outcome-based. The paper also discusses the pros and cons of each format from the viewpoints of code users and code enforcers.
City Reach Code Technical Support Document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Chen, Yan; Zhang, Jian
This report describes and analyzes a set of energy efficiency measures that will save 20% energy over ASHRAE Standard 90.1-2013. The measures will be used to formulate a Reach Code for cities aiming to go beyond national model energy codes. A coalition of U.S. cities together with other stakeholders wanted to facilitate the development of voluntary guidelines and standards that can be implemented in stages at the city level to improve building energy efficiency. The coalition's efforts are being supported by the U.S. Department of Energy via Pacific Northwest National Laboratory (PNNL) and in collaboration with the New Buildings Institute.
78 FR 55245 - Activities and Methodology for Assessing Compliance With Building Energy Codes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-10
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket No. EERE-2013-BT-BC... Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of reopening of public..., Office of Energy Efficiency and Renewable Energy, Building Technologies Program, Mailstop EE-2J, 1000...
78 FR 33838 - DOE Participation in Development of the International Energy Conservation Code
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-05
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket No. EERE-2012-BT-BC... Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice and request for comment... Efficiency and Renewable Energy, Building Technologies Office, Mailstop EE-2J, 1000 Independence Avenue SW...
An efficient HZETRN (a galactic cosmic ray transport code)
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.
1992-01-01
An accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy ions is needed. HZETRN is a deterministic code developed at Langley Research Center that is constantly under improvement in both physics and numerical computation and is targeted for such use. One problem area connected with the space-marching technique used in this code is the propagation of the local truncation error. By improving the numerical algorithms for interpolation, integration, and the grid distribution formula, the efficiency of the code is increased by a factor of eight as the number of energy grid points is reduced. A numerical accuracy of better than 2 percent for a shield thickness of 150 g/cm^2 is found when a 45-point energy grid is used. The propagating step size, which is related to the perturbation theory, is also reevaluated.
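The grid-economy result (a coarse, well-placed 45-point energy grid reproducing a smooth spectrum to roughly the quoted 2 percent) can be illustrated with a toy interpolation test. The power-law spectrum and the log-log linear interpolation rule below are stand-ins, not HZETRN's actual algorithms.

```python
import numpy as np

# Interpolate a smooth toy flux spectrum from a 45-point logarithmic energy
# grid and measure the worst-case relative error against a fine reference.
flux = lambda E: E ** -2.7 * np.exp(-E / 5e4)      # toy spectrum, MeV units
energy_fine = np.logspace(1, 5, 2000)              # fine reference grid
grid = np.logspace(1, 5, 45)                       # coarse 45-point grid

# linear interpolation in log-log space
approx = np.exp(np.interp(np.log(energy_fine), np.log(grid), np.log(flux(grid))))
rel_err = np.abs(approx / flux(energy_fine) - 1)
print(f"max relative error on the 45-point grid: {rel_err.max():.2%}")
```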
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Meredydd; Yu, Sha; Staniszewski, Aaron
2018-04-17
Building energy efficiency is an important strategy for reducing greenhouse gas emissions globally. In fact, 55 countries have included building energy efficiency in their Nationally Determined Contributions (NDCs) under the Paris Agreement. This research uses building energy code implementation in six cities across different continents as case studies to assess what it may take for countries to implement the ambitions of their energy efficiency goals. Specifically, we look at the cases of Bogota, Colombia; Da Nang, Vietnam; Eskisehir, Turkey; Mexico City, Mexico; Rajkot, India; and Tshwane, South Africa, all of which are "deep dive" cities under the Sustainable Energy for All's Building Efficiency Accelerator. The research focuses on understanding the baseline and existing gaps in implementation and coordination. The methodology combined surveys on code status, interviews with stakeholders at the local and national levels, and review of published documents. We looked at code development, implementation, and evaluation. The cities are all working to improve implementation; however, the challenges they currently face include gaps in resources, capacity, tools, and institutions to check for compliance. Better coordination between national and local governments could help improve implementation, but that coordination is not yet well established. For example, all six of the cities reported that there was little to no involvement of local stakeholders in development of the national code; only one city reported that it had access to national funding to support code implementation. More robust coordination could better link cities with capacity building and funding for compliance, and ensure that the code reflects local priorities. Understanding gaps in implementation can also help in designing more targeted interventions to scale up energy savings.
Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo
2018-01-01
Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission in neural networks. To understand this issue more deeply, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost, and the total amount of information transmission. We observed in such a network that there exists an optimal E/I synaptic current ratio at which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieved its maximum with balanced synaptic currents. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires a certain part of the energy cost to be associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and the energy efficiency. These results reveal that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at relatively low energy cost. The generality of the neuronal models and the recurrent network configuration used here suggests that the existence of an optimal E/I ratio for highly efficient energy costs and information maximization is a potential principle for cortical circuit networks. Summary: We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission, as well as the energy efficiency of the network, achieves a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding. PMID:29773979
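The optimization the authors describe (scan the E/I ratio, trade information against metabolic cost, find an interior maximum of bits per unit energy) can be caricatured with a toy Gaussian-channel model. Every functional form below is an invented stand-in, not the paper's Hodgkin-Huxley network.

```python
import numpy as np

# Toy stand-in: signal gain saturates with the E/I ratio while noise and
# metabolic cost keep rising, so bits-per-energy peaks at an interior ratio.
ratio = np.linspace(0.1, 8, 400)                  # E/I synaptic current ratio
signal = ratio / (1 + ratio)                      # saturating signal gain
noise = 0.05 + 0.02 * ratio                       # noise grows with excitation
info = 0.5 * np.log2(1 + (signal / noise) ** 2)   # Gaussian-channel bits
energy = 1.0 + 0.8 * ratio                        # metabolic cost proxy
efficiency = info / energy

best = ratio[np.argmax(efficiency)]
print(f"toy optimum near E/I = {best:.2f} "
      f"({efficiency.max():.2f} bits per unit energy)")
```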
Voltage-dependent K+ channels improve the energy efficiency of signalling in blowfly photoreceptors
Heras, Francisco J H; Anderson, John; Laughlin, Simon B; Niven, Jeremy E
2017-04-01
Voltage-dependent conductances in many spiking neurons are tuned to reduce action potential energy consumption, improving the energy efficiency of spike coding. However, the contribution of voltage-dependent conductances to the energy efficiency of analogue coding, by graded potentials in dendrites and non-spiking neurons, remains unclear. We investigate the contribution of voltage-dependent conductances to the energy efficiency of analogue coding by modelling the blowfly R1-6 photoreceptor membrane. Two voltage-dependent delayed rectifier K+ conductances (DRs) shape the membrane's voltage response and contribute to light adaptation. They make two types of energy saving. First, by reducing membrane resistance upon depolarization they convert the cheap, low-bandwidth membrane needed in dim light into the expensive, high-bandwidth membrane needed in bright light. This investment of energy in bandwidth according to functional requirements can halve daily energy consumption. Second, DRs produce negative feedback that reduces membrane impedance and increases bandwidth. This negative feedback allows an active membrane with DRs to consume at least 30% less energy than a passive membrane with the same capacitance and bandwidth. Voltage-dependent conductances in other non-spiking neurons, and in dendrites, might be organized to make similar savings. PMID:28381642
76 FR 57982 - Building Energy Codes Cost Analysis
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-19
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket No. EERE-2011-BT-BC-0046] Building Energy Codes Cost Analysis Correction In notice document 2011-23236 beginning on page... heading ``Table 1. Cash flow components'' should read ``Table 7. Cash flow components''. [FR Doc. C1-2011...
The Marriage of Residential Energy Codes and Rating Systems: Conflict Resolution or Just Conflict?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Zachary T.; Mendon, Vrushali V.
2014-08-21
After three decades of coexistence at a distance, model residential energy codes and residential energy rating systems have come together in the 2015 International Energy Conservation Code (IECC). At the October 2013 International Code Council Public Comment Hearing, a new compliance path based on an Energy Rating Index was added to the IECC. Although not specifically named in the code, RESNET's HERS rating system is the likely candidate Index for most jurisdictions. While HERS has been a mainstay in various beyond-code programs for many years, its direct incorporation into the most popular model energy code raises questions about the equivalence of a HERS-based compliance path and the traditional IECC performance compliance path, especially because the two approaches use different efficiency metrics, are governed by different simulation rules, and have different scopes with regard to the house features that affect energy use. A detailed simulation analysis of more than 15,000 house configurations reveals a very large range of HERS Index values that achieve equivalence with the IECC's performance path. This paper summarizes the results of that analysis and evaluates those results against the specific Energy Rating Index values required by the 2015 IECC. Based on the home characteristics most likely to result in disparities between HERS-based compliance and performance path compliance, potential impacts on the compliance process, state and local adoption of the new code, energy efficiency in the next generation of homes subject to this new code, and the future evolution of model code formats are discussed.
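For readers unfamiliar with the metric, an Energy Rating Index is, roughly, the rated home's normalized annual energy use expressed as a percentage of that of a reference home built to a fixed specification. A minimal sketch with made-up loads follows; the real HERS calculation normalizes individual end uses and includes more terms.

```python
# Hypothetical annual loads (MBtu/yr); an index of 100 means "uses the same
# energy as the reference home", lower is better.
reference_home = {"heating": 45.0, "cooling": 20.0, "hot_water": 15.0}
rated_home     = {"heating": 30.0, "cooling": 14.0, "hot_water": 11.0}

eri = 100 * sum(rated_home.values()) / sum(reference_home.values())
print(f"Energy Rating Index ~ {eri:.0f}")
# The 2015 IECC sets maximum index values by climate zone, so compliance
# reduces to checking the computed index against the zone's limit.
```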
77 FR 16022 - Agency Information Collection Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-19
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy Agency Information Collection Extension AGENCY: Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy... Renewable Energy. [FR Doc. 2012-6546 Filed 3-16-12; 8:45 am] BILLING CODE 6450-01-P ...
Compliance Verification Paths for Residential and Commercial Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, David R.; Makela, Eric J.; Fannin, Jerica D.
2011-10-10
This report looks at different ways to verify energy code compliance and to ensure that the energy efficiency goals of an adopted document are achieved. Conformity assessment is the body of work that ensures compliance, including activities that can ensure residential and commercial buildings satisfy energy codes and standards. This report identifies and discusses conformity-assessment activities and provides guidance for conducting assessments.
Statistical physics inspired energy-efficient coded-modulation for optical communications.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2012-04-15
Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical-physics energy-minimization methods are directly applicable to signal constellation design. We demonstrate that statistical-physics-inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose a discrete-time implementation of the D-dimensional transceiver and the corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America
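The statistical-physics flavour of the design, treating constellation points as mutually repelling particles and minimizing a pairwise potential under an average-energy constraint, can be sketched as follows. This toy Coulomb-repulsion descent illustrates the general idea only and is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Spread M constellation points in 2D by mutual Coulomb-like repulsion,
# renormalizing each step to keep unit mean symbol energy.
M, steps, lr = 8, 2000, 0.01
pts = rng.standard_normal((M, 2))
for _ in range(steps):
    diff = pts[:, None, :] - pts[None, :, :]        # pairwise vectors
    dist2 = (diff ** 2).sum(-1) + np.eye(M)         # 1s on diagonal: no self-force
    pts += lr * (diff / dist2[..., None] ** 1.5).sum(axis=1)
    pts /= np.sqrt((pts ** 2).sum(axis=1).mean())   # unit average energy
print(np.round(pts, 2))    # points settle into a well-separated arrangement
```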
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wise, B.K.; Hughes, K.R.; Danko, S.L.
1994-07-01
This report was prepared for the US Department of Energy (DOE) Office of Codes and Standards by the Pacific Northwest Laboratory (PNL) through its Building Energy Standards Program (BESP). The purpose of this task was to identify demand-side management (DSM) strategies for new construction that utilities have adopted or developed to promote energy-efficient design and construction. PNL conducted a survey of utilities and used the information gathered to extrapolate lessons learned and to identify evolving trends in utility new-construction DSM programs. The ultimate goal of the task is to identify opportunities where states might work collaboratively with utilities to promote the adoption, implementation, and enforcement of energy-efficient building energy codes.
Efficiency transfer using the GEANT4 code of CERN for HPGe gamma spectrometry.
Chagren, S; Tekaya, M Ben; Reguigui, N; Gharbi, F
2016-01-01
In this work we apply the GEANT4 code of CERN to calculate the peak efficiency in High Purity Germanium (HPGe) gamma spectrometry using three different procedures. The first is a direct calculation. The second corresponds to the usual case of efficiency transfer between two different configurations at constant emission energy, assuming a reference point detection configuration. The third, a new procedure, consists of transferring the peak efficiency between two detection configurations emitting the gamma ray at different energies, assuming a "virtual" reference point detection configuration. No pre-optimization of the detector's geometrical characteristics was performed before the transfer, to test the ability of the efficiency transfer to reduce the effect of ignorance of their real magnitude on the quality of the transferred efficiency. The calculated and measured efficiencies were found to be in good agreement for the two investigated methods of efficiency transfer. The obtained agreement proves that the Monte Carlo method, and especially the GEANT4 code, constitutes an efficient tool for obtaining accurate detection efficiency values. The second investigated efficiency transfer procedure is useful for calibrating an HPGe gamma detector at any emission energy for a voluminous source, using one point-source detection efficiency at a different energy as the reference efficiency. The calculations performed in this work were applied to the measurement exercise of the EUROMET428 project, in which full-energy peak efficiencies were evaluated in the energy range 60-2000 keV for a typical coaxial p-type HPGe detector and several source configurations: point sources located at various distances from the detector and a cylindrical box containing three matrices. Copyright © 2015 Elsevier Ltd. All rights reserved.
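In its simplest form, the efficiency transfer described above scales a measured reference efficiency by a ratio of simulated efficiencies, so imperfect knowledge of the detector geometry largely cancels in the ratio. A sketch in assumed notation, with made-up numbers:

```python
# eff_target = eff_ref(measured) * [eff_target(MC) / eff_ref(MC)]
def transfer_efficiency(eff_ref_measured, eff_target_mc, eff_ref_mc):
    return eff_ref_measured * (eff_target_mc / eff_ref_mc)

# Illustrative values only: a measured point source as reference and a
# simulated cylindrical volume source as the target configuration.
eff = transfer_efficiency(eff_ref_measured=2.10e-2,
                          eff_target_mc=1.32e-2,
                          eff_ref_mc=1.95e-2)
print(f"transferred full-energy peak efficiency: {eff:.3e}")
```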
An international survey of building energy codes and their implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Meredydd; Roshchanka, Volha; Graham, Peter
2017-08-01
Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy demand from buildings. Access to the benefits of building energy codes depends on comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and independently tested, rated, and labeled building energy materials. Training and supporting tools are another element of successful code implementation, and their role is growing in importance given the increasing flexibility and complexity of building energy codes. Some countries have also introduced compliance evaluation and compliance-checking protocols to improve implementation. This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.
Energy Efficiency in India: Challenges and Initiatives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajay Mathur
May 13, 2010 EETD Distinguished Lecture: Ajay Mathur is Director General of the Bureau of Energy Efficiency, and a member of the Prime Minister's Council on Climate Change. As Director General of BEE, Dr. Mathur coordinates the national energy efficiency programme, including the standards and labeling programme for equipment and appliances; the energy conservation building code; the industrial energy efficiency programme, and the DSM programmes in the buildings, lighting, and municipal sectors.
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at the low energy consumption required for a Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.
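A stripped-down version of the pipeline, block measurement by a random projection and a learned linear MMSE-style reconstruction, is easy to write down. The training data, block size, and measurement rate below are arbitrary stand-ins; the paper's adaptive per-block rate allocation is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16                                   # 8x8 blocks, 4:1 compression

# Correlated synthetic "image blocks" stand in for training data.
K = np.exp(-0.3 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
train = rng.standard_normal((5000, n)) @ np.linalg.cholesky(K).T
cov = np.cov(train, rowvar=False)

phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
# Linear MMSE-style decoder: W = C Phi^T (Phi C Phi^T + eps I)^-1
W = cov @ phi.T @ np.linalg.inv(phi @ cov @ phi.T + 1e-3 * np.eye(m))

x = train[0]
x_hat = W @ (phi @ x)                           # one matrix multiply each way
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error at 4:1 compression: {err:.1%}")
```

Both encoding and decoding are single matrix-vector products, which is the source of the scheme's low energy cost relative to iterative CS reconstruction.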
Energy-efficient neural information processing in individual neurons and neuronal networks.
Yu, Lianchun; Yu, Yuguo
2017-11-01
Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to the energy-efficient neural code for processing input signals. The factors range from ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, and cortical wiring, to the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy costs and space. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This document contains the State Building Energy Codes Status prepared by Pacific Northwest National Laboratory for the U.S. Department of Energy under Contract DE-AC06-76RL01830, dated September 1996. The U.S. Department of Energy's Office of Codes and Standards has developed this document to provide an information resource for individuals interested in the energy efficiency of buildings and the relevant building energy codes in each state and U.S. territory. This is considered to be an evolving document and will be updated twice a year. In addition, special state updates will be issued as warranted.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-07
... and Renewable Energy, Department of Energy. ACTION: Notice of reopening of public comment period.... James Raba, U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Building... Efficiency and Renewable Energy. [FR Doc. 2013-02755 Filed 2-6-13; 8:45 am] BILLING CODE 6450-01-P ...
P.L. 102-486, "Energy Policy Act" (1992)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2011-12-13
Amends the Energy Conservation and Production Act to set a deadline by which each State must certify to the Secretary of Energy whether its energy efficiency standards with respect to residential and commercial building codes meet or exceed those of the Council of American Building Officials (CABO) Model Energy Code, 1992, and of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers, respectively.
NASA Astrophysics Data System (ADS)
Kajimoto, Tsuyoshi; Shigyo, Nobuhiro; Sanami, Toshiya; Ishibashi, Kenji; Haight, Robert C.; Fotiades, Nikolaos
2011-02-01
Absolute neutron response functions and detection efficiencies of an NE213 liquid scintillator, 12.7 cm in diameter and 12.7 cm in thickness, were measured for neutron energies between 15 and 600 MeV at the Weapons Neutron Research facility of the Los Alamos Neutron Science Center. The experiment was performed with continuous-energy neutrons from a spallation neutron source driven by 800-MeV protons. The incident neutron flux was measured using a 238U fission ionization chamber. Measured response functions and detection efficiencies were compared with corresponding calculations using the SCINFUL-QMD code. The calculated and experimental values were in good agreement below 70 MeV. However, there were discrepancies in the energy region between 70 and 150 MeV. Thus, the code was partly modified, and the revised code provided better agreement with the experimental data.
Energy efficient rateless codes for high speed data transfer over free space optical channels
NASA Astrophysics Data System (ADS)
Prakash, Geetha; Kulkarni, Muralidhar; Acharya, U. S.
2015-03-01
Terrestrial Free Space Optical (FSO) links transmit information using the atmosphere (free space) as the medium. In this paper, we investigate the use of Luby Transform (LT) codes as a means to mitigate the effects of data corruption induced by an imperfect channel, which usually takes the form of lost or corrupted packets. LT codes, a class of Fountain codes, can be used independently of the channel rate, and as many code words as required can be generated to recover all the message bits irrespective of channel performance. Achieving error-free high data rates with limited energy resources is possible in FSO systems if error correction codes with minimal power overhead are used. We employ Binary Phase Shift Keying (BPSK) with provision for threshold modification, together with optimized LT codes decoded by belief propagation. These techniques provide additional protection even under strong turbulence regimes. Automatic Repeat Request (ARQ) is another method of improving link reliability, but its performance is limited by the number of retransmissions and the corresponding time delay. We show through theoretical computations and simulations that LT codes consume less energy per bit, and we validate the feasibility of using energy-efficient LT codes instead of ARQ for FSO links in optical wireless sensor networks within eye-safety limits.
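The rateless mechanism behind LT codes is compact enough to sketch: each encoded symbol is the XOR of a random subset of message blocks, with the subset size drawn from a soliton degree distribution, and a peeling decoder recovers the message from any sufficiently large set of received symbols. The sketch below uses the ideal (rather than robust) soliton distribution and byte-sized blocks for brevity; it is an illustration of the code class, not the paper's optimized design.

```python
import random

def ideal_soliton(k):                     # P(d=1)=1/k, P(d)=1/(d(d-1)) for d>=2
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def encode(blocks, n_symbols, rng):
    k, dist = len(blocks), ideal_soliton(len(blocks))
    symbols = []
    for _ in range(n_symbols):
        d = rng.choices(range(1, k + 1), weights=dist)[0]
        idx = set(rng.sample(range(k), d))
        val = 0
        for i in idx:
            val ^= blocks[i]
        symbols.append([idx, val])        # [covered indices, XOR of blocks]
    return symbols

def peel(symbols, k):                     # iterative peeling decoder
    decoded, changed = [None] * k, True
    while changed:
        changed = False
        for sym in symbols:
            idx, val = sym
            for i in [j for j in idx if decoded[j] is not None]:
                val ^= decoded[i]
                idx.discard(i)
            sym[1] = val
            if len(idx) == 1 and decoded[next(iter(idx))] is None:
                decoded[next(iter(idx))], changed = val, True
    return decoded

rng = random.Random(1)
msg = [rng.getrandbits(8) for _ in range(20)]
decoded = peel(encode(msg, 40, rng), 20)  # 2x overhead; erasures just thin this
print(f"recovered {sum(v == b for v, b in zip(decoded, msg))}/20 blocks")
```

Because the stream is rateless, a lossy FSO channel simply means the receiver collects symbols a little longer; no feedback-driven retransmission is needed, which is the energy argument against ARQ.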
A Network Coding Based Hybrid ARQ Protocol for Underwater Acoustic Sensor Networks
Wang, Hao; Wang, Shilian; Zhang, Eryang; Zou, Jianbin
2016-01-01
Underwater Acoustic Sensor Networks (UASNs) have attracted increasing interest in recent years due to their extensive commercial and military applications. However, the harsh underwater channel poses many challenges for the design of reliable underwater data transport protocols. In this paper, we propose an energy-efficient data transport protocol based on network coding and hybrid automatic repeat request (NCHARQ) to ensure reliability, efficiency, and availability in UASNs. Moreover, an adaptive window length estimation algorithm is designed to optimize the tradeoff between throughput and energy consumption. The algorithm adaptively changes the code rate and is insensitive to environmental change. Extensive simulations and analysis show that NCHARQ significantly reduces energy consumption with short end-to-end delay. PMID:27618044
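The core network-coding trick in protocols of this family is that a single coded retransmission can repair different losses at different receivers. A minimal XOR illustration (NCHARQ's window adaptation and acoustic-channel details are omitted):

```python
# Receiver A missed p2 but holds p1; receiver B missed p1 but holds p2.
# One XOR packet repairs both, where plain ARQ would resend two packets.
p1, p2 = b"\x10\x20\x30", b"\x0a\x0b\x0c"
coded = bytes(a ^ b for a, b in zip(p1, p2))

assert bytes(a ^ c for a, c in zip(p1, coded)) == p2   # A recovers p2
assert bytes(b ^ c for b, c in zip(p2, coded)) == p1   # B recovers p1
print("one coded retransmission repaired two different losses")
```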
Energy and Energy Cost Savings Analysis of the 2015 IECC for Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jian; Xie, YuLong; Athalye, Rahul A.
As required by statute (42 USC 6833), DOE recently issued a determination that ANSI/ASHRAE/IES Standard 90.1-2013 would achieve greater energy efficiency in buildings subject to the code compared to the 2010 edition of the standard. Pacific Northwest National Laboratory (PNNL) conducted an energy savings analysis for Standard 90.1-2013 in support of its determination. While Standard 90.1 is the model energy standard for commercial and multi-family residential buildings over three floors (42 USC 6833), many states have historically adopted the International Energy Conservation Code (IECC) for both residential and commercial buildings. This report provides an assessment of whether buildings constructed to the commercial energy efficiency provisions of the 2015 IECC would save energy and energy costs compared to the 2012 IECC. PNNL also compared the energy performance of the 2015 IECC with the corresponding Standard 90.1-2013. The goal of this analysis is to help states and local jurisdictions make informed decisions regarding model code adoption.
Djordjevic, Ivan B
2011-08-15
In addition to capacity, future high-speed optical transport networks will also be constrained by energy consumption. To address the capacity and energy constraints simultaneously, in this paper we propose the use of energy-efficient hybrid D-dimensional signaling (D>4) that employs all available degrees of freedom for conveying information over a single carrier, including amplitude, phase, polarization, and orbital angular momentum (OAM). Because the OAM eigenstates, associated with the azimuthal phase dependence of the complex electric field, are orthogonal, they can be used as basis functions for multidimensional signaling. Since information capacity is a linear function of the number of dimensions, D-dimensional signal constellations can significantly improve the overall optical channel capacity. The energy-efficiency problem is solved by properly designing the D-dimensional signal constellation such that the mutual information is maximized while taking the energy constraint into account. We demonstrate the high potential of the proposed energy-efficient hybrid D-dimensional coded-modulation scheme by Monte Carlo simulations. © 2011 Optical Society of America
Building Energy Efficiency in India: Compliance Evaluation of Energy Conservation Building Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Sha; Evans, Meredydd; Delgado, Alison
India is experiencing an unprecedented construction boom. The country doubled its floorspace between 2001 and 2005 and is expected to add 35 billion m2 of new buildings by 2050. Buildings account for 35% of total final energy consumption in India today, and building energy use is growing at 8% annually. Studies have shown that carbon policies alone will have little effect on reducing building energy demand. Chaturvedi et al. predicted that, absent sector-specific policies to curb building energy use, final energy demand of the Indian building sector will grow more than five-fold by the end of this century, driven by rapid income and population growth. The growing energy demand in buildings is accompanied by a transition from traditional biomass to commercial fuels, particularly an increase in electricity use. This also leads to a rapid increase in carbon emissions and aggravates power shortages in India. Growth in building energy use poses challenges to the Indian government. To curb energy consumption in buildings, the Indian government issued the Energy Conservation Building Code (ECBC) in 2007, which applies to commercial buildings with a connected load of 100 kW or 120 kVA. It is predicted that implementation of ECBC can help save 25-40% of energy compared to reference buildings without energy-efficiency measures. However, the impact of ECBC depends on the effectiveness of its enforcement and compliance. Currently, the majority of buildings in India are not ECBC-compliant. The United Nations Development Programme projected that code compliance in India would reach 35% by 2015 and 64% by 2017. Whether the projected targets can be achieved depends on how the code enforcement system is designed and implemented. Although the development of ECBC lies in the hands of the national government (the Bureau of Energy Efficiency under the Ministry of Power), the adoption and implementation of ECBC largely rely on state and local governments. Six years after ECBC's enactment, only two states and one territory out of 35 Indian states and union territories had formally adopted ECBC, and six additional states were in the legislative process of approving it. Several barriers slow down the process. First, stakeholders such as architects, developers, and state and local governments lack awareness of building energy efficiency and do not have enough capacity and resources to implement ECBC. Second, the institutions for implementing ECBC are not yet set up; ECBC is not included in local building by-laws or incorporated into the building permit process. Third, there is no systematic approach to measuring and verifying compliance and energy savings, and thus the market does not have enough confidence in ECBC. Energy codes achieve energy savings only when projects comply with them, yet few countries measure compliance consistently, and periodic checks often indicate poor compliance in many jurisdictions. China and the U.S. appear to be two countries with comprehensive systems for code enforcement and compliance. The United States recently developed methodologies for measuring compliance with building energy codes at the state level, and China conducts an annual survey of code compliance rates at the design and construction stages in major cities. Like many developing countries, India has only recently begun implementing an energy code and would benefit from international experience with code compliance. In this paper, we examine lessons learned from the U.S. and China on compliance assessment and how India can apply these lessons to develop its own compliance evaluation approach. This paper also provides policy suggestions to national, state, and local governments to improve compliance and speed up ECBC implementation.
Binary video codec for data reduction in wireless visual sensor networks
NASA Astrophysics Data System (ADS)
Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias
2013-02-01
A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSNs include environmental monitoring, health care, industrial process monitoring, and stadium/airport monitoring for security, among many others. The energy budget in outdoor WVSN applications is limited to batteries, and frequent battery replacement is usually undesirable, so the processing and communication energy consumption of the VSN must be optimized so that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for transmitting the results. Image processing algorithms must therefore be computationally simple while providing a high compression rate. For some WVSN applications, the captured images can be segmented into bi-level images, and bi-level image coding methods then efficiently reduce the information in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm, so there is a need for other intelligent and efficient algorithms that are computationally simpler and provide better compression than bi-level image coding. Change coding is one such algorithm: it is computationally simple (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. Detecting and coding the Regions of Interest (ROIs) in the change frame further reduces the information in the change frame. However, if the number of objects in the change frames exceeds a certain level, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose to implement all three compression techniques, i.e., image coding, change coding, and ROI coding, at the VSN and then select the smallest bit stream among the three results. In this way the compression performance of BVC can never become worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and always better than or equal to that of ROI coding and image coding.
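As a rough illustration of the selection rule described in this abstract (not the authors' implementation), the following Python sketch applies three candidate coders to a bi-level frame and keeps whichever bit stream is smallest; the three coder callables are hypothetical stand-ins for the paper's image, change, and ROI coders.

```python
import numpy as np

def bvc_select(frame, prev_frame, image_code, change_code, roi_code):
    """Pick the smallest bit stream among the three BVC candidate coders.

    image_code, change_code, and roi_code are placeholder callables that
    return encoded bytes for a bi-level frame; their real counterparts
    are the paper's bi-level image, change, and ROI coders.
    """
    change_frame = np.bitwise_xor(frame, prev_frame)  # change coding needs only XOR
    candidates = {
        "image": image_code(frame),
        "change": change_code(change_frame),
        "roi": roi_code(change_frame),
    }
    # BVC rule: transmit whichever representation is smallest, so the
    # result is never worse than plain image coding.
    mode, bits = min(candidates.items(), key=lambda kv: len(kv[1]))
    return mode, bits
```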
An adaptive distributed data aggregation based on RCPC for wireless sensor networks
NASA Astrophysics Data System (ADS)
Hua, Guogang; Chen, Chang Wen
2006-05-01
One of the most important design issues in wireless sensor networks is energy efficiency, and data aggregation has a significant impact on it. With massive deployments of sensor nodes and limited energy supplies, data aggregation has been considered an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been shown to possess several advantages for data aggregation in wireless sensor networks: it can encode sensor data at a lower bit rate without direct communication among sensor nodes. To ensure reliable, high-throughput transmission of the aggregated data, we propose in this research progressive transmission and decoding of Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed rate-1/2 RSC codes with Viterbi decoding for distributed source coding guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two kinds of adaptivity in data aggregation for wireless sensor networks. First, RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data: when the data correlation is high, a higher compression ratio is achieved; otherwise, a lower compression ratio is achieved. Second, the data aggregation is adaptively accumulated, so no energy is wasted in transmission; even when there is no correlation among the data, the energy consumed is at the same level as raw data collection. Experimental results show that the proposed distributed data aggregation based on RCPC achieves high-throughput, low-energy data collection for wireless sensor networks.
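The rate adaptation in this scheme rests on rate-compatible puncturing of a mother convolutional code. The sketch below (a generic illustration, not the paper's codec) shows how a periodic puncturing pattern thins the rate-1/2 mother-code output to raise the code rate; rate compatibility means higher-rate patterns are subsets of lower-rate ones, so the sender can release incremental bits until the receiver decodes.

```python
def puncture(coded_bits, pattern):
    """Apply an RCPC puncturing pattern to a rate-1/2 mother-code output.

    pattern is a list of 0/1 flags repeated periodically; a 0 deletes
    the corresponding coded bit. Deleting parity bits raises the rate.
    """
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

# Example: with a puncturing period of 4 coded bits (= 2 info bits),
# keeping 3 of every 4 bits turns the rate-1/2 mother code into a
# rate-2/3 code (2 info bits carried by 3 transmitted bits).
sent = puncture([1, 0, 1, 1, 0, 0, 1, 0], [1, 1, 1, 0])
```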
Clean Energy in City Codes: A Baseline Analysis of Municipal Codification across the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Jeffrey J.; Aznar, Alexandra; Dane, Alexander
Municipal governments in the United States are well positioned to influence clean energy (energy efficiency and alternative energy) and transportation technology and strategy implementation within their jurisdictions through planning, programs, and codification. Municipal governments are leveraging planning processes and programs to shape their energy futures, but the literature offers limited understanding of codification, the primary way that municipal governments enact enforceable policies. The authors fill this gap by documenting the status of municipal codification of clean energy and transportation across the United States. More directly, we leverage online databases of municipal codes to develop national and state-specific representative samples of municipal governments by population size. Our analysis finds that municipal governments with the authority to set residential building energy codes within their jurisdictions frequently do so. In some cases, communities set codes higher than their respective state governments. Examination of codes across the nation indicates that municipal governments are employing their codes as a policy mechanism to address clean energy and transportation.
Energy Efficiency Building Code for Commercial Buildings in Sri Lanka
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busch, John; Greenberg, Steve; Rubinstein, Francis
2000-09-30
1.1.1 To encourage energy-efficient design or retrofit of commercial buildings so that they may be constructed, operated, and maintained in a manner that reduces the use of energy without constraining the building function, the comfort, health, or productivity of the occupants, and with appropriate regard for economic considerations. 1.1.2 To provide criteria and minimum standards for energy efficiency in the design or retrofit of commercial buildings and to provide methods for determining compliance with them. 1.1.3 To encourage energy-efficient designs that exceed these criteria and minimum standards.
Building Energy Efficiency in Rural China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Meredydd; Yu, Sha; Song, Bo
2014-04-01
Rural buildings in China now account for more than half of China's total building energy use, and forty percent of the floorspace in China is in rural villages and towns. Most of these buildings are very energy inefficient and may struggle to meet basic needs: they are cold in the winter and often experience indoor air pollution from fuel use. The Chinese government plans to adopt a voluntary building energy code, or design standard, for rural homes. The goal is to build on China's success with codes in urban areas to improve efficiency and comfort in rural homes. The Chinese government recognizes that rural buildings represent a major opportunity for improving national building energy efficiency. The challenges in rural China are also greater than those in urban areas in many ways because of limited local capacity and low income levels. The Chinese government wants to expand on new programs that subsidize energy efficiency improvements in rural homes to build capacity for larger-scale improvement. This article summarizes the trends and status of rural building energy use in China. It then provides an overview of the new rural building design standard and describes options and issues for moving forward with implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Continuing the tradition established in prior years, this panel encompasses one of the broadest ranges of topics and issues of any panel at the Summer Study. It includes papers addressing all sectors, from low-income residential to industrial, and views energy efficiency from many perspectives, including programmatic, evaluation, codes, standards, legislation, technical transfer, economic development, and least-cost planning. The papers represent work being performed in most geographic regions of the United States and in the international arena, specifically Thailand, China, Europe, and Scandinavia. This delightful smorgasbord has been organized, based on general content area, into the following eight sessions: (1) new directions for low-income weatherization; (2) pursuing efficiency through legislation and standards; (3) international perspectives on energy efficiency; (4) technical transfer strategies; (5) government energy policy; (6) commercial codes and standards; (7) innovative programs; and (8) state-of-the-art review. For these conference proceedings, individual papers are processed separately for the Energy Data Base.
Hao, Kun; Jin, Zhigang; Shen, Haifeng; Wang, Ying
2015-05-28
Efficient routing protocols for data packet delivery are crucial to underwater sensor networks (UWSNs). However, communication in UWSNs is a challenging task because of the characteristics of the acoustic channel. Network coding is a promising technique for efficient data packet delivery thanks to the broadcast nature of acoustic channels and the relatively high computation capabilities of the sensor nodes. In this work, we present GPNC, a novel geographic routing protocol for UWSNs that incorporates partial network coding to encode data packets and uses sensor nodes' location information to greedily forward data packets toward sink nodes. GPNC can effectively reduce network delays and the retransmission of redundant packets that causes additional network energy consumption. Simulation results show that GPNC significantly improves network throughput and packet delivery ratio while reducing energy consumption and network latency compared with other routing protocols.
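The greedy forwarding step that GPNC builds on can be sketched as follows (a minimal illustration under assumed names, not the published protocol): each hop hands the packet to the neighbor that makes the most progress toward the sink.

```python
import math

def greedy_next_hop(node_pos, neighbors, sink_pos):
    """One greedy geographic forwarding step.

    neighbors maps neighbor id -> (x, y, z) position (UWSNs are 3-D).
    Returns the neighbor closest to the sink, or None if no neighbor
    is closer than the current node (a routing void, which a complete
    protocol must handle with a recovery mode).
    """
    if not neighbors:
        return None
    best = min(neighbors, key=lambda n: math.dist(neighbors[n], sink_pos))
    if math.dist(neighbors[best], sink_pos) < math.dist(node_pos, sink_pos):
        return best
    return None
```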
Nonperturbative methods in HZE ion transport
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Costen, Robert C.; Shinn, Judy L.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport. The code is established to operate on the Langley Research Center nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code is highly efficient and compares well with the perturbation approximations.
NASA Astrophysics Data System (ADS)
Zulai, Luis G. T.; Durand, Fábio R.; Abrão, Taufik
2015-05-01
In this article, an energy-efficiency mechanism for next-generation passive optical networks (PONs) is investigated through heuristic particle swarm optimization. The next-generation PON considered here combines 10-gigabit Ethernet with wavelength division multiplexing and optical code division multiplexing on top of a legacy 10-gigabit Ethernet PON, with the advantage of using only a single en/decoder pair for the optical code division multiplexing technology, thus eliminating the en/decoder at each optical network unit. The proposed joint mechanism is based on the sleep-mode power-saving scheme for a 10-gigabit Ethernet PON, combined with a power control procedure that adjusts the transmitted power of the active optical network units while maximizing overall network energy efficiency. The particle swarm optimization based power control algorithm establishes the optimal transmitted power in each optical network unit according to the network's pre-defined quality-of-service requirements. The objective is to control the power consumption of each optical network unit according to the traffic demand by adjusting its transmitter power, in an attempt to maximize the number of transmitted bits with minimum energy consumption and thus achieve maximal system energy efficiency. Numerical results reveal that it is possible to save 75% of energy consumption with the proposed particle swarm optimization based sleep-mode energy-efficiency mechanism, compared to 55% energy savings when only a sleep-mode-based mechanism is deployed.
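The optimizer itself is standard particle swarm optimization. A generic PSO loop of the kind the abstract describes is sketched below; the fitness function (for example, the negative network energy efficiency of a candidate ONU power vector, with QoS violations penalized inside it) and all parameter values are stand-ins, not the authors' settings.

```python
import numpy as np

def pso_minimize(fitness, dim, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimizer over a box-constrained search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions: one power per ONU
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_f)]                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # respect transmit power limits
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()
```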
Preserving Envelope Efficiency in Performance Based Code Compliance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thornton, Brian A.; Sullivan, Greg P.; Rosenberg, Michael I.
2015-06-20
The City of Seattle 2012 Energy Code (Seattle 2014), one of the most progressive in the country, is under revision for its 2015 edition. Additionally, city personnel participate in the development of the next generation of the Washington State Energy Code and the International Energy Code. Seattle has pledged carbon neutrality by 2050, including buildings, transportation, and other sectors. The United States Department of Energy (DOE), through Pacific Northwest National Laboratory (PNNL), provided technical assistance to Seattle to understand the implications of one potential direction for its code development: limiting trade-offs in which building envelope components less stringent than the prescriptive code envelope requirements are offset by better-than-code but shorter-lived lighting and heating, ventilation, and air-conditioning (HVAC) components through the total building performance modeled energy compliance path. Weaker building envelopes can permanently limit building energy performance even as lighting and HVAC components are upgraded over time, because retrofitting the envelope is less likely and more expensive. Weaker building envelopes may also increase the required size, cost, and complexity of HVAC systems and may adversely affect occupant comfort. This report presents the results of this technical assistance. The use of modeled energy code compliance to trade off envelope components against shorter-lived building components is not unique to Seattle, and the lessons and possible solutions described in this report have implications for other jurisdictions and energy codes.
Hybrid scheduling mechanisms for Next-generation Passive Optical Networks based on network coding
NASA Astrophysics Data System (ADS)
Zhao, Jijun; Bai, Wei; Liu, Xin; Feng, Nan; Maier, Martin
2014-10-01
Network coding (NC) integrated into Passive Optical Networks (PONs) is regarded as a promising solution to achieve higher throughput and energy efficiency. To efficiently support multimedia traffic under this new transmission mode, this paper proposes novel NC-based hybrid scheduling mechanisms for Next-generation PONs (NG-PONs), covering energy management, time slot management, resource allocation, and Quality-of-Service (QoS) scheduling. First, we design an energy-saving scheme based on Bidirectional Centric Scheduling (BCS) to reduce the energy consumption of both the Optical Line Terminal (OLT) and the Optical Network Units (ONUs). Next, we propose intra-ONU and inter-ONU scheduling schemes that take NC into account to support service differentiation and QoS assurance. The presented simulation results show that BCS achieves higher energy efficiency under low traffic loads, clearly outperforming the alternative NC-based Upstream Centric Scheduling (UCS) scheme. Furthermore, BCS is shown to provide better QoS assurance.
Recommendations on Implementing the Energy Conservation Building Code in Rajasthan, India
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Sha; Makela, Eric J.; Evans, Meredydd
India launched the Energy Conservation Building Code (ECBC) in 2007, and the Indian Bureau of Energy Efficiency (BEE) recently indicated that it would move to mandatory implementation in the 12th Five-Year Plan. The State of Rajasthan adopted ECBC with minor modifications; the new regulation is known as the Energy Conservation Building Directives – Rajasthan 2011 (ECBD-R). It became mandatory in Rajasthan on September 28, 2011. This report provides recommendations on an ECBD-R enforcement roadmap for the State of Rajasthan.
NASA Astrophysics Data System (ADS)
Huang, Han-Xiong; Ruan, Xi-Chao; Chen, Guo-Chang; Zhou, Zu-Ying; Li, Xia; Bao, Jie; Nie, Yang-Bo; Zhong, Qi-Ping
2009-08-01
The light output function of a φ50.8 mm × 50.8 mm BC501A scintillation detector was measured in the neutron energy region of 1 to 30 MeV by fitting the pulse height (PH) spectra for neutrons with simulations from the NRESP code at the edge range. Using the new light output function, the neutron detection efficiency was determined with two Monte Carlo codes, NEFF and SCINFUL. The calculated efficiency was corrected by comparing the simulated PH spectra with the measured ones. The determined efficiency was verified in the near-threshold region and normalized with a Proton Recoil Telescope (PRT) in the 8-14 MeV energy region.
Overall Traveling-Wave-Tube Efficiency Improved By Optimized Multistage Depressed Collector Design
NASA Technical Reports Server (NTRS)
Vaden, Karl R.
2002-01-01
The microwave traveling wave tube (TWT) is used widely for space communications and high-power airborne transmitting sources. One of the most important considerations in designing a TWT is overall efficiency, yet overall TWT efficiency is strongly dependent on the efficiency of the electron beam collector, particularly for high values of collector efficiency. For these reasons, the NASA Glenn Research Center developed an optimization algorithm based on simulated annealing to quickly design highly efficient multistage depressed collectors (MDCs). Simulated annealing is a strategy for solving highly nonlinear combinatorial optimization problems; its major advantage over other methods is its ability to avoid becoming trapped in local minima. It is based on an analogy to statistical thermodynamics, specifically the physical process of annealing: heating a material to a temperature that permits many atomic rearrangements and then cooling it carefully and slowly until it freezes into a strong, minimum-energy crystalline structure. This minimum-energy crystal corresponds to the optimal solution of a mathematical optimization problem. The TWT used as a baseline for optimization was the 32-GHz, 10-W helical TWT developed for the Cassini mission to Saturn. The method of collector analysis and design used was a 2-1/2-dimensional computational procedure that employs two types of codes: a large-signal analysis code and an electron trajectory code. The large-signal analysis code produces the spatial, energetic, and temporal distributions of the spent beam entering the MDC, and the electron trajectory code uses the resulting data to perform the actual collector analysis. The MDC was optimized for maximum MDC efficiency and minimum final kinetic energy of all collected electrons (to reduce heat transfer). The optimized collector's geometric and electrical configuration achieved an efficiency of 93.8 percent. The results show an improvement in collector efficiency from 89.7 to 93.8 percent, resulting in an increase of three overall efficiency points. In addition, the time to design a highly efficient MDC was reduced from a month to a few days. All work was done in-house at Glenn for the High Rate Data Delivery Program. Future plans include optimizing the MDC and TWT interaction circuit in tandem to further improve overall TWT efficiency.
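For readers unfamiliar with the method, a minimal simulated-annealing skeleton is sketched below. For the MDC problem, the state would encode electrode geometry and stage voltages, and the cost function would run the trajectory code and return, say, one minus the collector efficiency; both are hypothetical stand-ins here, not Glenn's actual design code.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, t_min=1e-4,
                        alpha=0.95, steps=50):
    """Minimal simulated-annealing loop for minimizing cost(x)."""
    x, fx, t = x0, cost(x0), t0
    while t > t_min:
        for _ in range(steps):
            y = neighbor(x)          # small random perturbation of the design
            fy = cost(y)
            # Accept downhill moves always; accept uphill moves with a
            # Boltzmann probability, which lets the search escape local minima.
            if fy < fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
        t *= alpha                   # cooling schedule
    return x, fx
```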
Green Building Tools for Tribes
Tribal green building tools and funding information to support tribal building code adoption, healthy building, siting, energy efficiency, renewable energy, water conservation, green building materials, recycling, and adaptation and resilience.
Approximate Green's function methods for HZE transport in multilayered materials
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.
Network Coded Cooperative Communication in a Real-Time Wireless Hospital Sensor Network.
Prakash, R; Balaji Ganesh, A; Sivabalan, Somu
2017-05-01
The paper presents a network coded cooperative communication (NC-CC) enabled wireless hospital sensor network architecture for monitoring the health and postural activities of a patient. A wearable device, referred to as a smartband, is interfaced with pulse-rate and body-temperature sensors and an accelerometer, along with wireless protocol services such as Bluetooth, a radio-frequency transceiver, and Wi-Fi. The energy efficiency of the wearable device is improved by embedding a linear acceleration based transmission duty cycling algorithm (LA-TDC). A real-time demonstration is carried out in a hospital environment to evaluate performance characteristics such as power spectral density, energy consumption, signal-to-noise ratio, packet delivery ratio, and transmission offset. The resource sharing and energy efficiency features of the network coding technique are improved by proposing an algorithm referred to as network coding based dynamic retransmit/rebroadcast decision control (NC-DRDC). The experimental results show that the proposed NC-DRDC algorithm reduces network traffic and end-to-end delay by an average of 27.8% and 21.6%, respectively, compared with traditional network coded wireless transmission. The wireless architecture was deployed in a hospital environment and the results were successfully validated.
10 CFR 431.387 - Hearings and appeals.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Hearings and appeals. 431.387 Section 431.387 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL... notwithstanding the provisions of title 28, United States Code, or Section 502(c) of the Department of Energy...
Energy Savings Analysis of the Proposed NYStretch-Energy Code 2018
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Bing; Zhang, Jian; Chen, Yan
This study was conducted by the Pacific Northwest National Laboratory (PNNL) in support of the stretch energy code development led by the New York State Energy Research and Development Authority (NYSERDA). In 2017 NYSERDA developed its 2016 Stretch Code Supplement to the 2016 New York State Energy Conservation Construction Code (hereinafter referred to as “NYStretch-Energy”). NYStretch-Energy is intended as a model energy code for statewide voluntary adoption that anticipates other code advancements, culminating in the goal of a statewide Net Zero Energy Code by 2028. Since then, NYSERDA has continued to develop the NYStretch-Energy Code 2018 edition. To support the effort, PNNL conducted energy simulation analysis to quantify the energy savings of the proposed commercial provisions of the NYStretch-Energy Code (2018) in New York. The focus of this project is a 20% improvement over existing commercial model energy codes. A key requirement of the proposed stretch code is that it be 'adoptable' as an energy code, meaning that it must align with current code scope and limitations and primarily impact building components that are currently regulated by local building departments. It is largely limited to prescriptive measures, which are what most building departments and design projects are most familiar with. This report describes a set of energy-efficiency measures (EEMs) that demonstrate 20% energy savings over ANSI/ASHRAE/IES Standard 90.1-2013 (ASHRAE 2013) across a broad range of commercial building types and all three climate zones in New York. In collaboration with the New Buildings Institute, the EEMs were developed from national model codes and standards, high-performance building codes and standards, regional energy codes, and measures being proposed as part of the ongoing code development process. PNNL analyzed these measures using whole-building energy models for selected prototype commercial buildings and multifamily buildings representing buildings in New York. Section 2 of this report describes the analysis methodology, including the building types and construction area weights update for this analysis, the baseline, and the method used to conduct the energy savings analysis. Section 3 provides detailed specifications of the EEMs and bundles. Section 4 summarizes the results of individual EEMs and EEM bundles by building type, energy end use, and climate zone. Appendix A documents detailed descriptions of the selected prototype buildings. Appendix B provides energy end-use breakdown results by building type for both the baseline code and the stretch code in all climate zones.
BRYNTRN: A baryon transport model
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.
1989-01-01
The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Definitions. 435.2 Section 435.2 Energy DEPARTMENT OF... Mandatory Energy Efficiency Standards for Federal Low-Rise Residential Buildings. § 435.2 Definitions. For... Loan Mortgage Corporation. ICC means International Code Council. IECC means International Energy...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Definitions. 435.2 Section 435.2 Energy DEPARTMENT OF... Mandatory Energy Efficiency Standards for Federal Low-Rise Residential Buildings. § 435.2 Definitions. For... Loan Mortgage Corporation. ICC means International Code Council. IECC means International Energy...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Definitions. 435.2 Section 435.2 Energy DEPARTMENT OF... Mandatory Energy Efficiency Standards for Federal Low-Rise Residential Buildings. § 435.2 Definitions. For... Loan Mortgage Corporation. ICC means International Code Council. IECC means International Energy...
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.; He, Jiali; White, Gregory S.
1997-01-01
Turbo coding using iterative SOVA decoding and M-ary differentially coherent or non-coherent modulation can provide an effective coded modulation solution: (1) it is energy efficient with relatively simple SOVA decoding and small packet lengths, depending on the required BEP; (2) it requires a low number of decoding iterations; and (3) it is robust in fading when channel interleaving is used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael; Jonlin, Duane; Nadel, Steven
Today’s building energy codes focus on prescriptive requirements for features of buildings that are directly controlled by the design and construction teams and verifiable by municipal inspectors. Although these code requirements have had a significant impact, they fail to influence a large slice of the building energy use pie, including not only miscellaneous plug loads, cooking equipment, and commercial/industrial processes, but also the maintenance and optimization of the code-mandated systems. Currently, code compliance is verified only through the end of construction, and there are no limits or consequences for actual energy use in an occupied building. In the future, our suite of energy regulations will likely expand to include building efficiency, energy use, or carbon emission budgets over buildings' full life cycles. Intelligent building systems, extensive renewable energy, and a transition from fossil fuel to electric heating systems will likely be required to meet ultra-low-energy targets. This paper lays out the authors’ perspectives on how buildings may evolve over the course of the 21st century and the roles that codes and regulations will play in shaping those buildings of the future.
Recent Progress in the Development of a Multi-Layer Green's Function Code for Ion Beam Transport
NASA Technical Reports Server (NTRS)
Tweed, John; Walker, Steven A.; Wilson, John W.; Tripathi, Ram K.
2008-01-01
To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiation is needed. To address this need, a new Green's function code capable of simulating high charge and energy ions with either laboratory or space boundary conditions is currently under development. The computational model consists of combinations of physical perturbation expansions based on the scales of atomic interaction, multiple scattering, and nuclear reactive processes, with the use of Neumann asymptotic expansions with non-perturbative corrections. The code includes energy loss due to straggling, nuclear attenuation, and nuclear fragmentation with energy dispersion and downshifts. Previous reports show that the new code accurately models the transport of ion beams through a single slab of material. Current research efforts are focused on enabling the code to handle multiple layers of material, and the present paper reports on progress made toward that end.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Sha; Evans, Meredydd; Shi, Qing
China will account for about half of new construction globally in the coming decade. Its floorspace doubled from 1996 to 2011, and Chinese rural buildings alone have as much floorspace as all U.S. residential buildings. Building energy consumption has also grown, increasing by over 40% since 1990. To curb building energy demand, the Chinese government has launched a series of policies and programs. Combined, this growth in buildings and renovations, along with the policies to promote green buildings, is creating a large market for energy efficiency products and services. This report assesses the impact of China's policies on building energy efficiency and on the market for energy efficiency in the future. The first chapter of this report introduces the trends in China, drawing on both historical analysis and detailed modeling of the drivers behind changes in floorspace and building energy demand, such as economic and population growth, urbanization, and policy. The analysis describes the trends by region, building type, and energy service. The second chapter discusses China's policies to promote green buildings. China began developing building energy codes in the 1980s. Over time, the central government has increased the stringency of the code requirements and the extent of enforcement. The codes are mandatory in all new buildings and major renovations in China's cities, and they have been a driving force behind the expansion of China's markets for insulation, efficient windows, and other green building materials. China also has several other important policies to encourage efficient buildings, including the Three-Star Rating System (somewhat akin to LEED), financial incentives tied to efficiency, appliance standards, a phase-out of incandescent bulbs and promotion of efficient lighting, and several policies to encourage retrofits in existing buildings. In the third chapter, we take deep dives into the trends affecting key building components. This chapter examines insulation in walls and roofs; efficient windows and doors; heating, air conditioning, and controls; and lighting. These markets have seen significant growth because of the strength of the construction sector, but also because of the specific policies that require and promote efficient building components. At the same time, as requirements have become more stringent, there has been fierce competition, and quality has at times suffered, which in turn has created additional challenges. Next, we examine existing buildings in chapter four. China has many Soviet-style, inefficient buildings built before stringent requirements for efficiency were widely enforced. As a result, there are several specific market opportunities related to retrofits, which fall into three categories. First, China now has a code for retrofitting residential buildings in the north; local governments have targets for the number of buildings they must retrofit each year, and they help finance the changes. The requirements focus on insulation, windows, and heat distribution. Second, the Chinese government recently decided to increase the scale of its retrofits of government and state-owned buildings. It hopes to achieve large-scale changes through energy service contracts, which creates an opportunity for energy service companies. Third, there is a small but growing trend of applying energy service contracts to large commercial and residential buildings. By examining the existing literature and interviewing stakeholders from the public, academic, and private sectors, the report offers in-depth insights into the opportunities and barriers for major market segments related to building energy efficiency. The report also discusses trends in building energy use, policies promoting building energy efficiency, and energy performance contracting for public building retrofits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Qing; Yu, Sha; Evans, Meredydd
2016-05-01
India adopted the Energy Conservation Building Code (ECBC) in 2007. Rajasthan is the first state to make ECBC mandatory at the state level. In collaboration with Malaviya National Institute of Technology (MNIT) Jaipur, Pacific Northwest National Laboratory (PNNL) has been working with Rajasthan to facilitate the implementation of ECBC. This report summarizes milestones reached in Rajasthan and PNNL's contributions to institutional set-up, capacity building, compliance enforcement, and pilot building construction.
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency while maintaining equivalent or better power efficiency compared to Binary Phase Shift Keying (BPSK). Bit error rates (BER) versus channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined and traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra versus the Viterbi algorithm, and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes, indicate that LRBC using block codes is a desirable method for high data rate implementations.
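The quoted 2.67 bps/Hz figure follows directly from the modulation order and the ensemble code rate, as this small check confirms:

```python
import math

def spectral_efficiency(m_ary, code_rate):
    """Spectral efficiency of coded M-ary modulation, in information bps/Hz."""
    return math.log2(m_ary) * code_rate

# The 8PSK LRBC ensemble above: log2(8) * 8/9 = 2.666... bps/Hz.
print(spectral_efficiency(8, 8 / 9))
```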
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Definitions. 435.2 Section 435.2 Energy DEPARTMENT OF... Mandatory Energy Efficiency Standards for Federal Low-Rise Residential Buildings. § 435.2 Definitions. For... International Energy Conservation Code, 2004 Supplement Edition, January 2005 (incorporated by reference, see...
Sparse/DCT (S/DCT) two-layered representation of prediction residuals for video coding.
Kang, Je-Won; Gabbouj, Moncef; Kuo, C-C Jay
2013-07-01
In this paper, we propose a cascaded sparse/DCT (S/DCT) two-layer representation of prediction residuals and implement this idea on top of the state-of-the-art High Efficiency Video Coding (HEVC) standard. First, a dictionary is adaptively trained to contain featured patterns of residual signals so that a high portion of the energy in a structured residual can be efficiently coded via sparse coding. It is observed that the sparse representation alone is less effective in rate-distortion (R-D) performance due to the side-information overhead at higher bit rates. To overcome this problem, the DCT representation is cascaded at the second stage and applied to the remaining signal to improve coding efficiency. The two representations successfully complement each other. Experimental results demonstrate that the proposed algorithm outperforms the HEVC reference codec HM5.0 under the Common Test Conditions.
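The cascade structure can be sketched as follows. A simple matching-pursuit loop stands in for the paper's sparse coder, and quantization and entropy coding are omitted; the dictionary (unit-norm columns, trained offline in the paper) and all names are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dct

def sdct_two_layer(residual, dictionary, n_atoms=4):
    """Two-layer S/DCT structure: sparse layer, then DCT of the remainder."""
    r = residual.astype(float).copy()
    sparse_layer = []                          # (atom index, coefficient) pairs
    for _ in range(n_atoms):
        corr = dictionary.T @ r
        k = int(np.argmax(np.abs(corr)))       # best-matching dictionary atom
        sparse_layer.append((k, float(corr[k])))
        r -= corr[k] * dictionary[:, k]        # peel off its contribution
    dct_layer = dct(r, norm="ortho")           # second layer codes what is left
    return sparse_layer, dct_layer
```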
The 2.5 bit/detected photon demonstration program: Phase 2 and 3 experimental results
NASA Technical Reports Server (NTRS)
Katz, J.
1982-01-01
The experimental program for laboratory demonstration of an energy-efficient optical communication channel operating at a rate of 2.5 bits/detected photon is described. Results of the uncoded PPM channel performance are presented. It is indicated that the throughput efficiency can be achieved not only with a Reed-Solomon code, as originally predicted, but with a less complex code as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Mark
This report summarizes activity conducted by the Institute for Market Transformation and a team of American and Chinese partners in the development of a new building energy-efficiency code for the transitional climate zone in the People's Republic of China.
An Energy Model of Place Cell Network in Three Dimensional Space.
Wang, Yihong; Xu, Xuying; Wang, Rubin
2018-01-01
Place cells are important elements in the spatial representation system of the brain, and a considerable body of experimental data and classical models exists in this area. However, an important question has not been addressed: how is three dimensional space represented by place cells? This question is preliminarily surveyed by the energy coding method in this research. The energy coding method argues that neural information can be expressed by neural energy, which is convenient for modeling and computation in neural systems due to the global and linearly additive properties of neural energy. Nevertheless, models of functional neural networks based on the energy coding method have not yet been established. In this work, we construct a place cell network model to represent three dimensional space on an energy level. We then define the place field and place field center and test the locating performance in three dimensional space. The results imply that the model successfully simulates the basic properties of place cells: each place cell obtains unique spatial selectivity, and the place fields in three dimensional space vary in size and energy consumption. Furthermore, the locating error is limited to a certain level, and the simulated place fields agree with experimental results. In conclusion, this is an effective model for representing three dimensional space by the energy method. The research verifies the energy efficiency principle of the brain during neural coding of three dimensional spatial information. It is a first step toward completing the three dimensional spatial representation system of the brain and helps us further understand how the energy efficiency principle directs the locating, navigating, and path planning functions of the brain.
Relay selection in energy harvesting cooperative networks with rateless codes
NASA Astrophysics Data System (ADS)
Zhu, Kaiyan; Wang, Fei
2018-04-01
This paper investigates relay selection in energy harvesting cooperative networks, where the relays harvest energy from the radio frequency (RF) signals transmitted by a source, and the optimal relay is selected and uses the harvested energy to assist the information transmission from the source to its destination. Both the source and the selected relay transmit information using rateless codes, which allow the destination to recover the original information once the collected code bits marginally surpass the entropy of the original information. To improve transmission performance and efficiently utilize the harvested power, the optimal relay is selected, and the optimization problem is formulated to maximize the achievable information rate of the system. Simulation results demonstrate that our proposed relay selection scheme outperforms other strategies.
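A minimal version of such a selection rule, under simplifying assumptions (decode-and-forward relaying, a two-hop rate limited by its weaker hop, source power folded into the source-relay gains), might look like this; it is a sketch, not the paper's formulation.

```python
import math

def select_relay(h_sr, h_rd, harvested_power, noise=1.0):
    """Pick the relay index maximizing a simplified two-hop achievable rate.

    h_sr[i] / h_rd[i] are channel power gains source->relay i and
    relay i->destination; harvested_power[i] is relay i's available
    transmit power from energy harvesting.
    """
    def rate(i):
        r1 = math.log2(1 + h_sr[i] / noise)                        # hop 1
        r2 = math.log2(1 + harvested_power[i] * h_rd[i] / noise)   # hop 2
        return min(r1, r2)   # decode-and-forward is limited by the weaker hop
    return max(range(len(h_sr)), key=rate)
```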
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-20
... and Building Codes, U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy... posted at http://www1.eere.energy.gov/buildings/appliance_standards/asrac.html : Update on Commercial... Energy, Building Technologies Program, Mailstop EE-2J, 1000 Independence Avenue SW., Washington, DC 20585...
A Spherical Active Coded Aperture for 4π Gamma-ray Imaging
Hellfeld, Daniel; Barton, Paul; Gunter, Donald; ...
2017-09-22
Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. To improve upon this limitation, we introduce a novel design that rearranges the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.
Kneifel, Joshua; O'Rear, Eric; Webb, David; O'Fallon, Cheyney
2018-02-01
To conduct a more complete analysis of low-energy and net-zero energy buildings that considers both operating and embodied energy/emissions, members of the building community look to life-cycle assessment (LCA) methods. This paper examines differences in the relative impacts of cost-optimal energy efficiency measure combinations, depicting residential buildings up to and beyond net-zero energy consumption, on operating and embodied flows, using data from the Building Industry Reporting and Design for Sustainability (BIRDS) Low-Energy Residential Database. Results indicate that net-zero performance leads to a large increase in embodied flows (over 40%) that offsets some of the reductions in operational flows, but overall life-cycle flows are still reduced by over 60% relative to the state energy code. Building designs beyond net-zero performance can partially offset embodied flows with negative operational flows by replacing traditional electricity generation with solar production, but would require an additional 8.34 kW (18.54 kW in total) of due-south-facing solar PV to reach net-zero total life-cycle flows. Such a system would meet over 239% of the operational consumption of the most energy-efficient design considered in this study, and over 116% of that of a state code-compliant building design, in its initial year of operation.
76 FR 64924 - Updating State Residential Building Energy Efficiency Codes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-19
...) considers high-rise (greater than three stories) multifamily residential buildings and hotel, motel, and..., duplexes, townhouses, row houses, and low-rise multifamily buildings (not greater than three stories) such... pumps as compared to other electric heating technologies, this code change is expected to increase the...
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1991-01-01
Shannon's capacity bound shows that coding can achieve large reductions in the required signal-to-noise ratio per information bit, E_b/N_0 (where E_b is the energy per bit and N_0/2 is the double-sided noise density), in comparison to uncoded schemes. For bandwidth efficiencies of 2 bit/sym or greater, these improvements were obtained through the use of Trellis Coded Modulation and Block Coded Modulation. A method of obtaining these high efficiencies using multidimensional Multiple Phase Shift Keying (MPSK) and Quadrature Amplitude Modulation (QAM) signal sets with trellis coding is described. These schemes have advantages in decoding speed, phase transparency, and coding gain in comparison to other trellis coding schemes. Finally, a general parity check equation for rotationally invariant trellis codes is introduced, from which non-linear codes for two-dimensional MPSK and QAM signal sets are found. These codes are fully transparent to all rotations of the signal set.
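The capacity bound referred to here can be made concrete: on the AWGN channel, C/W = log2(1 + (E_b/N_0) * eta) at spectral efficiency eta, so reliable communication requires E_b/N_0 >= (2^eta - 1)/eta. A short check of the limit at the 2 bit/sym operating point mentioned above:

```python
import math

def shannon_min_ebn0_db(eta):
    """Minimum Eb/N0 (dB) for reliable AWGN transmission at eta bps/Hz."""
    return 10 * math.log10((2 ** eta - 1) / eta)

# At eta = 2 bps/Hz the limit is about 1.76 dB, far below what uncoded
# schemes need at useful error rates, hence the large coding gains.
print(shannon_min_ebn0_db(2.0))
```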
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-13
.... EERE-2013-BT-STD-0030] RIN 1904-AD01 Energy Conservation Program for Certain Commercial and Industrial... efficiency of certain industrial equipment to conserve the energy resources of the Nation. DATES: DOE will... codification in the U.S. Code, establishes the ``Energy Conservation Program for Certain Industrial Equipment...
Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.
Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk
2018-07-01
Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements at the cost of severely increasing computational complexity. As an alternative, perceptual video coding (PVC) attempts to achieve high coding efficiency by eliminating perceptual redundancy using just-noticeable-distortion (JND) directed PVC. Previous JND models were built by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds of distortions that reduce signal energy. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models automatically adjust JND levels based on given quantization step sizes. The first, called LR-JNQD, is based on linear regression and determines the JNQD model parameters from extracted handcrafted features. The second is based on a convolutional neural network (CNN) and is called CNN-JNQD. To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to High Efficiency Video Coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation compared with input without preprocessing.
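The LR-JNQD idea (a regression from block features and quantization step to a JND level) can be sketched as a plain least-squares fit; the feature set, targets, and solver here are stand-ins for the paper's actual training setup.

```python
import numpy as np

def fit_lr_jnqd(features, qsteps, jnd_targets):
    """Least-squares fit: (handcrafted features, quantization step) -> JND level.

    features: (n_samples, n_features) array of per-block features;
    qsteps: (n_samples,) quantization step sizes; jnd_targets: training
    JND levels. Returns the learned weight vector.
    """
    X = np.column_stack([features, qsteps, np.ones(len(qsteps))])  # bias term
    w, *_ = np.linalg.lstsq(X, jnd_targets, rcond=None)
    return w

def predict_jnqd(w, feature_vec, qstep):
    """Per-block JND threshold used to suppress imperceptible residual energy."""
    return float(np.concatenate([feature_vec, [qstep, 1.0]]) @ w)
```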
NASA Astrophysics Data System (ADS)
Han, B. X.; Welton, R. F.; Stockli, M. P.; Luciano, N. P.; Carmichael, J. R.
2008-02-01
The beam simulation codes PBGUNS, SIMION, and LORENTZ-3D were evaluated by modeling the well-diagnosed SNS baseline ion source and low energy beam transport (LEBT) system. An investigation was then conducted using these codes to assist our ion source and LEBT development effort, which is directed at meeting the SNS operational goals and also the power-upgrade project goals. A high-efficiency H- extraction system, as well as magnetic and electrostatic LEBT configurations capable of transporting up to 100 mA, are studied using these simulation tools.
Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks
ERIC Educational Resources Information Center
Yu, Chao
2013-01-01
In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…
10 CFR 431.323 - Materials incorporated by reference.
Code of Federal Regulations, 2011 CFR
2011-01-01
... of Energy, Office of Energy Efficiency and Renewable Energy, Building Technologies Program, 6th Floor... National Standard for electric lamps: Single-Ended Metal Halide Lamps, approved May 5, 2004, IBR approved... (“NFPA 70”), National Electrical Code 2002 Edition, IBR approved for § 431.326; (2) [Reserved] (e) UL...
10 CFR 431.323 - Materials incorporated by reference.
Code of Federal Regulations, 2013 CFR
2013-01-01
... of Energy, Office of Energy Efficiency and Renewable Energy, Building Technologies Program, 6th Floor... National Standard for electric lamps: Single-Ended Metal Halide Lamps, approved May 5, 2004, IBR approved... (“NFPA 70”), National Electrical Code 2002 Edition, IBR approved for § 431.326; (2) [Reserved] (e) UL...
Reduced discretization error in HZETRN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slaba, Tony C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, Steve R., E-mail: Steve.R.Blattnig@nasa.gov; Tweed, John, E-mail: jtweed@odu.edu
2013-02-01
The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.
Validation of a multi-layer Green's function code for ion beam transport
NASA Astrophysics Data System (ADS)
Walker, Steven; Tweed, John; Tripathi, Ram; Badavi, Francis F.; Miller, Jack; Zeitlin, Cary; Heilbronn, Lawrence
To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiation is needed. In consequence, a new version of the HZETRN code capable of simulating high charge and energy (HZE) ions with either laboratory or space boundary conditions is currently under development. The new code, GRNTRN, is based on a Green's function approach to the solution of Boltzmann's transport equation and, like its predecessor, is deterministic in nature. The computational model consists of the lowest order asymptotic approximation followed by a Neumann series expansion with non-perturbative corrections. The physical description includes energy loss with straggling, nuclear attenuation, and nuclear fragmentation with energy dispersion and downshift. Code validation in the laboratory environment is addressed by showing that GRNTRN accurately predicts energy loss spectra as measured by solid-state detectors in ion beam experiments with multi-layer targets. To validate the code with space boundary conditions, measured particle fluences are propagated through several thicknesses of shielding using both GRNTRN and the current version of HZETRN. The excellent agreement obtained indicates that GRNTRN accurately models the propagation of HZE ions in the space environment as well as in laboratory settings, and it also provides verification of the HZETRN propagator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coddington, M.; Kroposki, B.; Basso, T.
Effectively interconnecting high penetration levels of photovoltaic (PV) systems requires careful technical attention to ensuring compatibility with electric power systems. Standards, codes, and implementation have been cited as major impediments to widespread use of PV within electric power systems. On May 20, 2010, in Denver, Colorado, the National Renewable Energy Laboratory, in conjunction with the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), held a workshop to examine the key technical issues and barriers associated with high PV penetration levels, with an emphasis on codes and standards. This workshop built upon the results of the High Penetration of Photovoltaic (PV) Systems into the Distribution Grid workshop held in Ontario, California, on February 24-25, 2009, and upon the stimulating presentations of diverse stakeholders.
NASA Astrophysics Data System (ADS)
Nelson, Adam
Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions that do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved tallying method is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows for the use of a track-length estimation process, potentially offering even further improvement in tallying efficiency. To produce the needed distributions, however, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than currently used techniques. The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. The method is then tested in a pin cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure-of-merit for generating scattering moment matrices and fission energy spectra was significantly improved.
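A minimal sketch of the pre-integrated tallying idea above: because the outgoing distribution is known a priori, each collision can score the expected Legendre moment directly instead of a single sampled value. The discrete angular distribution, moment order, and event count below are illustrative assumptions, not data from the study.

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(0)

# Toy outgoing-angle distribution for one collision type: discrete bins in mu.
mu_bins = np.linspace(-1.0, 1.0, 41)
pdf = np.exp(2.0 * mu_bins)          # forward-peaked, unnormalized
pdf /= pdf.sum()

def P(l, mu):
    """Legendre polynomial P_l evaluated at mu."""
    c = np.zeros(l + 1); c[l] = 1.0
    return legval(mu, c)

L = 3          # moment order to tally
N = 20000      # number of collision events

# Conventional estimator: score P_l at the single sampled outgoing angle.
samples = rng.choice(mu_bins, size=N, p=pdf)
conventional = P(L, samples)

# Pre-integrated estimator (the NDPP idea): the outgoing distribution is
# known in advance, so every event scores the full expected moment --
# zero variance in this single-distribution toy.
expected_moment = np.sum(pdf * P(L, mu_bins))
preintegrated = np.full(N, expected_moment)

print("true moment      :", expected_moment)
print("conventional mean:", conventional.mean(),
      "+/-", conventional.std(ddof=1) / np.sqrt(N))
print("pre-integrated   :", preintegrated.mean(), "+/- 0")
```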
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stovall, Therese K; Biswas, Kaushik; Song, Bo
In November of 2009, the presidents of China and the U.S. announced the establishment of the Clean Energy Research Center (CERC). This broad research effort is co-funded by both countries and involves a large number of research centers and universities in both countries. One part of this program is focused on improving the energy efficiency of buildings, and one portion of the CERC-BEE was focused on building insulation systems. The research objective of this effort was to identify and investigate candidate high-performance, fire-resistant building insulation technologies that meet the goal of building code compliance for exterior wall applications in green buildings in multiple climate zones. A Joint Work Plan was established between researchers at the China Academy of Building Research and Oak Ridge National Laboratory. Efforts in the first year under this plan focused on information gathering. The objective of this research program is to reduce building energy use in China via improved building insulation technology. In cold regions in China, residents often use inefficient heating systems to provide a minimal comfort level within inefficient buildings. In warmer regions, air conditioning has not been commonly used. As living standards rise, energy consumption in these regions will increase dramatically unless significant improvements are made in building energy performance. Previous efforts that defined the current state of the built environment in China and in the U.S. will be used in this research. In countries around the world, building improvements have typically followed the implementation of more stringent building codes. There have been several changes in building codes in both the U.S. and China within the last few years. New U.S. building codes have increased the amount of wall insulation required in new buildings. New government statements from multiple agencies in China have recently changed the requirements for buildings in terms of energy efficiency and fire safety. A related issue is the degree to which new standards are adopted and enforced. In the U.S., standards are developed using a consensus process, and local government agencies are free to implement these standards or to ignore them. For example, some U.S. states are still using 2003 versions of the building efficiency standards. There is also great variation in the degree to which locally adopted standards are enforced in different U.S. cities and states. With a more central process in China, these issues are different, but possible impacts of variable enforcement efficacy may also exist. Therefore, current building codes in China will be compared to the current state of building fire-safety and energy-efficiency codes in the U.S., and areas for possible improvement in both countries will be explored. In particular, the focus of the applications in China will be on green buildings. The terminology of 'green buildings' has different meanings to different audiences. The U.S. research is interested both in new green buildings and in retrofitting existing inefficient buildings. An initial effort will be made to clarify the scope of the pertinent wall insulation systems for these applications.
Study of solid-conversion gaseous detector based on GEM for high energy X-ray industrial CT.
Zhou, Rifeng; Zhou, Yaling
2014-01-01
The general gaseous ionization detectors are not suitable for high energy X-ray industrial computed tomography (HEICT) because of their inherent limitations, especially low detection efficiency and large volume. The goal of this study was to investigate a new type of gaseous detector to solve these problems. The novel detector uses a metal foil as an X-ray converter to improve the conversion efficiency, and a Gas Electron Multiplier (hereinafter "GEM") as an electron amplifier to reduce its volume. The detection mechanism and signal formation of the detector are discussed in detail. The conversion efficiency was calculated using the EGSnrc Monte Carlo code, and the transport of photons and the secondary-electron avalanche in the detector were simulated with the Maxwell and Garfield codes. The results indicate that this detector has a higher conversion efficiency as well as a smaller volume. Theoretically, this kind of detector could be a strong candidate for replacing conventional detectors in HEICT.
Current Trends in Commercial Energy Codes
ERIC Educational Resources Information Center
Sebesta, James J.; Diemer, Robert; Ierardi, James
2013-01-01
Buildings consume approximately 40 percent of the energy used in the U.S., and efficiency is widely recognized to be the most effective means for containing demand and reducing use. Institutions of higher education make up a significant proportion of building area and annual energy and facility-related costs in the United States. The national…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Ruiz, Andrés
The Monte Carlo simulation of gamma spectroscopy systems is a common practice these days, the most popular software packages for this being the MCNP and Geant4 codes. The intrinsic spatial efficiency method is a general and absolute method to determine the absolute efficiency of a spectroscopy system for any extended source, but it had been demonstrated experimentally only for cylindrical sources. Given the difficulty of preparing sources of arbitrary shape, the simplest way to do this is by simulating the spectroscopy system and the source. In this work we present the validation of the intrinsic spatial efficiency method for sources with different geometries and for photons with an energy of 661.65 keV. In the simulation, matrix effects (the self-attenuation effect) are not considered; therefore these results are only preliminary. The MC simulation is carried out using the FLUKA code, and the absolute efficiency of the detector is determined using two methods: the statistical count of the Full Energy Peak (FEP) area (the traditional method) and the intrinsic spatial efficiency method. The results show complete agreement between the absolute efficiencies determined by the traditional method and the intrinsic spatial efficiency method. The relative bias is less than 1% in all cases.
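For reference, a minimal sketch of the traditional FEP-area method mentioned above: the absolute efficiency is the net peak count divided by the number of photons emitted during the live time. The activity, counts, and live time below are hypothetical; only the Cs-137 branching ratio is a known constant.

```python
# Traditional full-energy-peak (FEP) efficiency, with illustrative numbers.
peak_counts = 48500.0      # gross counts in the 661.65 keV peak region
background = 3500.0        # estimated continuum under the peak
activity_bq = 5.0e4        # source activity (Bq), hypothetical
branching = 0.851          # 661.65 keV emission probability of Cs-137
live_time_s = 600.0

net_counts = peak_counts - background
emitted = activity_bq * branching * live_time_s   # photons emitted in live time
efficiency = net_counts / emitted
print(f"absolute FEP efficiency: {efficiency:.4%}")
```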
Characteristic evaluation of a Lithium-6 loaded neutron coincidence spectrometer.
Hayashi, M; Kaku, D; Watanabe, Y; Sagara, K
2007-01-01
Characteristics of a (6)Li-loaded neutron coincidence spectrometer were investigated through both measurements and Monte Carlo simulations. The spectrometer consists of three (6)Li-glass scintillators embedded in a liquid organic scintillator BC-501A, and it can selectively detect neutrons that deposit their total energy in the BC-501A using a coincidence signal generated by the capture of thermalised neutrons in the (6)Li-glass scintillators. The relative efficiency and the energy response were measured using 4.7, 7.2 and 9.0 MeV monoenergetic neutrons. The measurements were compared with Monte Carlo calculations performed by combining the neutron transport code PHITS and the scintillator response calculation code SCINFUL. The experimental light output spectra were in good agreement in shape with the calculated ones, and the energy dependence of the detection efficiency was reproduced by the calculation. The response matrices for 1-10 MeV neutrons were finally obtained.
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
1984-01-01
The efficiency and accuracy of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations are compared. The methods examined include two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature does not result in significant errors; in addition, this method is more efficient than evaluating the temperature by integrating its time derivative. Significant reductions in computational work are realized by updating the rate constants (k = A T^N exp(-E/RT)) only when the temperature change exceeds an amount delta T that is problem dependent. An approximate expression for the automatic evaluation of delta T is derived and is shown to result in increased efficiency.
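A minimal sketch of that caching strategy, assuming the modified Arrhenius form above; the reaction parameters and the delta T threshold here are hypothetical.

```python
import numpy as np

R = 8.314  # J/(mol K)

def rate_constant(A, n, Ea, T):
    """Modified Arrhenius form k = A * T**n * exp(-Ea / (R*T))."""
    return A * T**n * np.exp(-Ea / (R * T))

class CachedRates:
    """Re-evaluate the rate constant only when |T - T_last| exceeds delta_T,
    mirroring the update strategy described above (delta_T is problem
    dependent; the value here is an assumption)."""
    def __init__(self, A, n, Ea, delta_T=5.0):
        self.A, self.n, self.Ea, self.delta_T = A, n, Ea, delta_T
        self.T_last = None
        self.k = None

    def get(self, T):
        if self.T_last is None or abs(T - self.T_last) > self.delta_T:
            self.k = rate_constant(self.A, self.n, self.Ea, T)
            self.T_last = T
        return self.k

# Hypothetical reaction parameters, for illustration only.
rates = CachedRates(A=1.0e10, n=0.5, Ea=1.2e5)
for T in [1500.0, 1502.0, 1504.0, 1520.0]:
    print(T, rates.get(T))   # recomputes only at 1500.0 and 1520.0
```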
A soft X-ray source based on a low divergence, high repetition rate ultraviolet laser
NASA Astrophysics Data System (ADS)
Crawford, E. A.; Hoffman, A. L.; Milroy, R. D.; Quimby, D. C.; Albrecht, G. F.
The CORK code is utilized to evaluate the applicability of low divergence ultraviolet lasers for efficient production of soft X-rays. The use of the axial hydrodynamic code with a one-zone radial expansion to estimate radial motion and laser energy is examined. The calculation of plasma ionization levels and radiation rates employing the atomic physics and radiation model included in the CORK code is described. Computations using the hydrodynamic code to determine the effect of laser intensity, spot size, and wavelength on plasma electron temperature are provided, and the X-ray conversion efficiencies of the lasers are analyzed. It is observed that for 1 GW laser power the X-ray conversion efficiency is a function of spot size and only weakly dependent on pulse length for time scales exceeding 100 psec, and that better conversion efficiencies are obtained at shorter wavelengths. It is concluded that these small lasers, focused to 30 micron spot sizes and 10^14 W/cm^2 intensities, are useful sources of 1-2 keV radiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, R. W.; Petrov, Yu. V.
2013-12-03
Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Demonstration of proof-of-principle achievement of this goal has been carried out in research under Phase I of the SBIR award. Two DOE-sponsored codes, the CQL3D bounce-average Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma edge-oriented Fokker-Planck code which has been constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists, were coupled. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher collisionality edge plasma regions where that extended capability is necessary for accurate representation of the plasma. A more efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker-Planck codes will advance computational modeling of plasma devices important to the USDOE magnetic fusion energy program, in particular the DIII-D tokamak at General Atomics, San Diego; the NSTX spherical tokamak at Princeton, New Jersey; and the MST reversed-field pinch in Madison, Wisconsin. Validation studies of the code against experiments will improve understanding of physics important for magnetic fusion and will increase our design capabilities for achieving the goals of the International Tokamak Experimental Reactor (ITER) project, in which the US is a participant and which seeks to demonstrate at least a factor of five in fusion power production divided by input power.
The efficiency of convective energy transport in the sun
NASA Technical Reports Server (NTRS)
Schatten, Kenneth H.
1988-01-01
Mixing length theory (MLT) utilizes adiabatic expansion (as well as radiative transport) to diminish the energy content of rising convective elements. Thus in MLT, the rising elements lose their energy to the environment most efficiently and consequently transport heat with the least efficiency. On the other hand, Malkus proposed that convection would maximize the efficiency of energy transport. A new stellar envelope code is developed to first examine this other extreme, wherein rising turbulent elements transport heat with the greatest possible efficiency. This extreme model differs from MLT by providing a small reduction in the upper convection zone temperatures but greatly diminished turbulent velocities below the top few hundred kilometers. Using the findings of deep atmospheric models with the Navier-Stokes equation allows the calculation of an intermediate solar envelope model. Consideration is given to solar observations, including recent helioseismology, to examine the position of the solar envelope compared with the envelope models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Woohyun; Lutes, Robert G.; Katipamula, Srinivas
This document is a user's guide for OpenEIS, a software package designed to provide standard methods for authoring, sharing, testing, using, and improving algorithms for operational building energy efficiency.
Warm Body Temperature Facilitates Energy Efficient Cortical Action Potentials
Yu, Yuguo; Hill, Adam P.; McCormick, David A.
2012-01-01
The energy efficiency of neural signal transmission is important not only as a limiting factor in brain architecture, but it also influences the interpretation of functional brain imaging signals. Action potential generation in mammalian, versus invertebrate, axons is remarkably energy efficient. Here we demonstrate that this increase in energy efficiency is due largely to a warmer body temperature. Increases in temperature result in an exponential increase in energy efficiency for single action potentials by increasing the rate of Na+ channel inactivation, resulting in a marked reduction in overlap of the inward Na+, and outward K+, currents and a shortening of action potential duration. This increase in single spike efficiency is, however, counterbalanced by a temperature-dependent decrease in the amplitude and duration of the spike afterhyperpolarization, resulting in a nonlinear increase in the spike firing rate, particularly at temperatures above approximately 35°C. Interestingly, the total energy cost, as measured by the multiplication of total Na+ entry per spike and average firing rate in response to a constant input, reaches a global minimum between 37°C and 42°C. Our results indicate that increases in temperature result in an unexpected increase in energy efficiency, especially near normal body temperature, thus allowing the brain to utilize an energy-efficient neural code. PMID:22511855
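The cost minimum described above can be illustrated with a toy calculation: energy per unit time is Na+ entry per spike multiplied by firing rate. The functional forms and constants below are illustrative assumptions chosen only to reproduce the qualitative shape, not the paper's fitted model.

```python
import numpy as np

# Illustrative-only functional forms (assumptions, not fits to data):
# Na+ entry per spike falls with temperature as channel kinetics sharpen,
# while the firing rate for a fixed input rises steeply above ~35 C.
T = np.linspace(25.0, 45.0, 201)                        # deg C
na_per_spike = 1.0 + 4.0 * np.exp(-(T - 25.0) / 6.0)    # arbitrary units
firing_rate = 10.0 + 0.02 * np.exp((T - 25.0) / 3.5)    # spikes/s, toy

total_cost = na_per_spike * firing_rate                 # energy per unit time
T_opt = T[np.argmin(total_cost)]
print(f"toy-model cost minimum near {T_opt:.1f} C")     # lands near 40 C here
```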
Neutron Transport Models and Methods for HZETRN and Coupling to Low Energy Light Ion Transport
NASA Technical Reports Server (NTRS)
Blattnig, S.R.; Slaba, T.C.; Heinbockel, J.H.
2008-01-01
Exposure estimates inside space vehicles, surface habitats, and high altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS and FLUKA, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light ion (A<4) transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Ryan M.; Rowland, Kelly L.
2017-04-12
WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to execute efficiently on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in either criticality or fixed-source mode, but fixed-source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly built with GPU coprocessor cards in their nodes to increase computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.
Design and Evaluation of Energy Efficient Modular Classroom Structures.
ERIC Educational Resources Information Center
Brown, G. Z.; And Others
This paper describes a study that developed innovations that would enable modular builders to improve the energy performance of their classrooms without increasing their first cost. The Modern Building Systems' classroom building conforms to the stringent Oregon and Washington energy codes, and, at $18 per square foot, it is at the low end of the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas
2016-01-06
In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and, more generally, unsupervised learning.
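A digital sketch of the two crossbar kernels named above: a read is a vector-matrix multiply over the conductance matrix, and a write is a rank-1 update. The matrix size and learning-rate constant are illustrative; real analog hardware performs these in one physical step rather than via numpy.

```python
import numpy as np

# Conductances G encode the matrix, input voltages v the vector, and the
# column currents the result of a read. This only mirrors the math.
N = 8
G = np.random.default_rng(1).uniform(0.0, 1.0, size=(N, N))  # conductances

def crossbar_read(G, v):
    """Parallel read: vector-matrix multiply, I_j = sum_i G_ij * v_i."""
    return G.T @ v

def crossbar_write(G, u, v, lr=0.1):
    """Parallel write: rank-1 update G += lr * outer(u, v)."""
    return G + lr * np.outer(u, v)

v = np.ones(N)
print("read:", crossbar_read(G, v))
G = crossbar_write(G, np.ones(N), v)
```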
Low-complex energy-aware image communication in visual sensor networks
NASA Astrophysics Data System (ADS)
Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran
2013-10-01
A low-complexity, low bit rate, energy-efficient image compression algorithm explicitly designed for resource-constrained visual sensor networks (applied in surveillance, battlefield, and habitat monitoring, among others) is presented, for settings where voluminous amounts of image data must be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy efficient and extremely fast since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code without any floating point operations. Experiments were performed using the Atmel ATmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by a conventional DCT and only 6% of the energy needed by the Independent JPEG Group (fast) version, making it suitable for embedded systems requiring low power consumption. The proposed scheme is unique in that it significantly enhances the lifetime of the camera sensor node and the network without any need for the distributed processing traditionally required by existing algorithms.
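A sketch of the two ideas named above, zonal DCT and Rice coding, under simplifying assumptions: the zone is a square low-frequency corner rather than the paper's zone shape, floating point is used for clarity (the paper's binary DCT avoids it), and a plain Rice coder stands in for the enhanced complementary variant.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def zonal_dct(block, zone=3):
    """Compute only the low-frequency zone x zone corner of the 2-D DCT;
    the remaining coefficients are never computed."""
    C = dct_matrix(block.shape[0])
    return C[:zone, :] @ block @ C[:zone, :].T

def rice_encode(value, k=2):
    """Golomb-Rice code for a non-negative integer: unary quotient then a
    k-bit binary remainder."""
    value = int(value)
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

block = np.arange(64, dtype=float).reshape(8, 8)    # stand-in image block
coeffs = zonal_dct(block)
# Signs dropped for brevity; a real coder would also code them.
quantized = np.abs(np.round(coeffs / 8)).astype(int).ravel()
bitstream = "".join(rice_encode(v) for v in quantized)
print(f"{quantized.size} coefficients -> {len(bitstream)} bits")
```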
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Reshma; Ravache, Baptiste; Sartor, Dale
India launched the Energy Conservation Building Code (ECBC) in 2007, and a revised version in 2017, as ambitious first steps towards promoting energy efficiency in the building sector. Pioneering early adopters—building owners, A&E firms, and energy consultants—have taken the lead to design customized solutions for their energy-efficient buildings. This Guide offers a synthesizing framework, critical lessons, and guidance to meet and exceed ECBC. Its whole-building lifecycle assurance framework provides a user-friendly methodology to achieve high performance in terms of energy, environmental, and societal impact. Class A offices are selected as a target typology, being a high-growth sector with significant opportunities for energy savings. The practices may be extrapolated to other commercial building sectors, as well as extended to other regions with similar cultural, climatic, construction, and developmental contexts.
Energy-efficient sensing in wireless sensor networks using compressed sensing.
Razzaque, Mohammad Abdur; Dobson, Simon
2014-02-12
Sensing of the application environment is the main purpose of a wireless sensor network. Most existing energy management strategies and compression techniques assume that the sensing operation consumes significantly less energy than radio transmission and reception. This assumption does not hold in a number of practical applications. Sensing energy consumption in these applications may be comparable to, or even greater than, that of the radio. In this work, we support this claim by a quantitative analysis of the main operational energy costs of popular sensors, radios and sensor motes. In light of the importance of sensing level energy costs, especially for power hungry sensors, we consider compressed sensing and distributed compressed sensing as potential approaches to provide energy efficient sensing in wireless sensor networks. Numerical experiments investigating the effectiveness of compressed sensing and distributed compressed sensing using real datasets show their potential for efficient utilization of sensing and overall energy costs in wireless sensor networks. It is shown that, for some applications, compressed sensing and distributed compressed sensing can provide greater energy efficiency than transform coding and model-based adaptive sensing in wireless sensor networks.
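A compact sketch of the compressed-sensing pipeline discussed above: the node transmits m random projections of an n-sample signal, and the sink recovers the sparse signal greedily. The dimensions, sparsity level, and the use of Orthogonal Matching Pursuit as the reconstruction algorithm are illustrative choices, not the paper's specific setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# A node takes m << n random projections of a signal that is k-sparse
# (here: sparse in the identity basis for simplicity) and sends m values.
n, m, k = 256, 64, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)   # k-sparse signal
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))                # measurement matrix
y = Phi @ x                                                # transmitted data

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse signal."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("recovery error:", np.linalg.norm(x - x_hat))
```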
Optimized nonorthogonal transforms for image compression.
Guleryuz, O G; Orchard, M T
1997-01-01
The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operation of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.
D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.
Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B
2018-01-01
Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with massive numbers of sensors, towards the realization of the Internet of Sensing Things (IoST).
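A structural sketch of one such round, under heavy simplifying assumptions (scalar samples, a fixed residual range, and an ad hoc rate-feedback rule): the clusterhead's raw sample serves as side information, members send quantized residuals, and the sink adapts the rate from the observed correlation. This illustrates the message flow only, not the paper's codec.

```python
import numpy as np

rng = np.random.default_rng(2)

head = 20.0 + rng.normal(0, 0.5)                 # clusterhead sample (side info)
members = head + rng.normal(0, 0.3, size=8)      # spatially correlated samples

def quantize(residual, bits):
    """Uniform quantizer for residuals assumed to lie in [-1, 1]."""
    step = 2.0 / (2 ** bits)
    return np.clip(np.round(residual / step) * step, -1.0, 1.0)

bits = 3
decoded = head + quantize(members - head, bits)  # sink-side reconstruction

# Sink feedback: high observed correlation -> fewer residual bits next round.
corr_spread = np.std(members - head)
next_bits = 2 if corr_spread < 0.5 else 4
print("reconstruction RMSE:", np.sqrt(np.mean((decoded - members) ** 2)))
print("next-round residual bits:", next_bits)
```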
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrar, Sara; Rothgeb, Stacey; Polly, Ben
The U.S. Department of Energy (DOE) Building America Program enables the transformation of the U.S. housing industry to achieve energy savings through energy-efficient, high-performance homes with improved durability, comfort, and health for occupants. Building America bridges the gap between the development of emerging technologies and the adoption of codes and standards by engaging industry partners in applied research, development, and demonstration of high-performance solutions.
Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh
Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics, and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
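A generic sketch of such a tuning loop over those three dimensions. The configuration space, workload, and power model below are stand-ins (the study itself used OpenTuner with real measurements on the Tilera platform), so treat this as the search skeleton only.

```python
import itertools
import random
import time

# Stand-in configuration space for the three dimensions explored above.
layouts = ["row-major", "blocked", "struct-of-arrays"]
unroll = [1, 2, 4, 8]
schedules = ["static", "dynamic", "guided"]

def measure(layout, u, sched):
    """Stand-in for a real run: returns (runtime_s, energy_J)."""
    t0 = time.perf_counter()
    sum(i * u for i in range(50000))             # placeholder workload
    runtime = time.perf_counter() - t0 + random.random() * 1e-3
    energy = runtime * (30.0 if sched == "dynamic" else 25.0)  # toy power model
    return runtime, energy

# Exhaustive search here; a real tuner samples this space adaptively.
best = min(itertools.product(layouts, unroll, schedules),
           key=lambda cfg: measure(*cfg)[1])     # minimize energy
print("best config:", best)
```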
Kim, Daehee; Kim, Dongwan; An, Sunshin
2016-07-09
Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Because WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent work on dynamic packet size control in WSNs enhances the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop, where the packet size can vary according to that hop's link quality. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in environments where the packet size changes at each hop, with smaller energy consumption.
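One way to see the re-packetization problem: if authentication tokens cover fixed-size chunks rather than whole packets, any hop can regroup chunks without invalidating them. The sketch below is an illustrative construction along those lines, assuming a single shared key and a hypothetical chunk size; it is not one of the paper's three schemes.

```python
import hashlib
import hmac

KEY = b"network-wide-demo-key"   # assumption: a simple shared key, for the sketch
CHUNK = 32                       # bytes; smaller than any allowed packet size

def chunk_tokens(image: bytes):
    """Token per fixed-size chunk, so tokens survive per-hop re-packetization."""
    chunks = [image[i:i + CHUNK] for i in range(0, len(image), CHUNK)]
    return [hmac.new(KEY, c, hashlib.sha256).digest()[:8] for c in chunks]

def verify_packet(packet_chunks, packet_tokens):
    """A packet may carry any number of chunks; verify each independently."""
    return all(
        hmac.compare_digest(hmac.new(KEY, c, hashlib.sha256).digest()[:8], t)
        for c, t in zip(packet_chunks, packet_tokens))

image = bytes(range(256)) * 4            # stand-in code image (1024 bytes)
tokens = chunk_tokens(image)
pkt = [image[0:32], image[32:64]]        # one hop's grouping of two chunks
print(verify_packet(pkt, tokens[:2]))    # True
```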
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Fujibuchi, T
Purpose: Secondary neutrons, which are harmful to the human body, are generated by photonuclear reactions in high-energy photon therapy. Their characteristics are not known in detail, since the calculations needed to evaluate them take a very long time. The PHITS (Particle and Heavy Ion Transport code System) Monte Carlo code, since version 2.80, has a new parameter, “pnimul”, which forcibly raises the probability of photonuclear reactions to make the calculation more efficient. We investigated the optimum value of “pnimul” for high-energy photon therapy. Methods: An accelerator head geometry based on the specifications of a Varian Clinac 21EX was used with PHITS ver. 2.80. A phantom (30 cm * 30 cm * 30 cm) filled with the composition defined by ICRU (International Commission on Radiation Units) was placed at a source-surface distance of 100 cm. We calculated the neutron energy spectra at the surface of the ICRU phantom with “pnimul” set to 1, 10, 100, 1000, and 10000, and compared the total calculation time and the photon behavior using PDD (percentage depth dose) and OCR (off-center ratio). Next, cutoff energies of 4, 5, 6, and 7 MeV for photons, electrons, and positrons were investigated for calculation efficiency. Results: The total calculation time needed for the neutron fluence errors to fall within 1% decreased with increasing “pnimul”. PDD and OCR showed no differences due to the parameter. The calculation time decreased as the cutoff energy was raised; however, the time needed for the photon errors to fall within 1% did not decrease with the cutoff energy. Conclusion: The optimum values of “pnimul” and the cutoff energy were investigated for high-energy photon therapy. It is suggested that using the optimum “pnimul” improves calculation efficiency. The choice of cutoff energy needs further investigation.
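The kind of biasing such a parameter performs can be sketched generically: scale a rare interaction probability up by a factor B and carry particle weight 1/B, so the estimator stays unbiased while far more events score. The probabilities and factor below are illustrative, and this is not PHITS's internal implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

p_true = 1e-4          # rare photonuclear interaction probability per step
B = 1000.0             # biasing factor (cf. pnimul = 1000)
n = 200_000            # photon histories, one step each

hits_analog = rng.random(n) < p_true          # unbiased ("analog") sampling
hits_biased = rng.random(n) < p_true * B      # forced interactions

analog_est = hits_analog.mean()               # weight 1 per scoring event
biased_est = (hits_biased / B).mean()         # weight 1/B per scoring event

print(f"true p : {p_true:.2e}")
print(f"analog : {analog_est:.2e} from {hits_analog.sum()} events")
print(f"biased : {biased_est:.2e} from {hits_biased.sum()} events")
```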
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillon, Heather E.; Antonopoulos, Chrissi A.; Solana, Amy E.
As the model energy codes are improved to reach efficiency levels 50 percent greater than current codes, use of on-site renewable energy generation is likely to become a code requirement. This requirement will be needed because the traditional mechanisms for code improvement, including envelope, mechanical, and lighting measures, have been pressed to the end of reasonable limits. Research has been conducted to determine the mechanism for implementing this requirement (Kaufmann 2011). Kaufmann et al. determined that the most appropriate way to structure an on-site renewable requirement for commercial buildings is to define the requirement in terms of an installed power density per unit of roof area. This provides a mechanism that is suitable for the installation of photovoltaic (PV) systems on future buildings to offset electricity and reduce the total building energy load. Kaufmann et al. suggested that an appropriate maximum for the requirement in the commercial sector would be 4 W/ft² of roof area or 0.5 W/ft² of conditioned floor area. As with all code requirements, there must be an alternative compliance path for buildings that may not reasonably meet the renewables requirement. This might include conditions like shading (which makes rooftop PV arrays less effective), unusual architecture, undesirable roof pitch, unsuitable building orientation, or other issues. In the short term, alternative compliance paths including high-performance mechanical equipment, dramatic envelope changes, or controls changes may be feasible. These options may be less expensive than many renewable systems, which will require careful balancing of energy measures when setting the code requirement levels. As the stringency of the code continues to increase, however, efficiency trade-offs will be maximized, requiring alternative compliance options to be focused solely on renewable electricity trade-offs or equivalent programs. One alternate compliance path includes purchase of Renewable Energy Credits (RECs). Each REC represents a specified amount of renewable electricity production and provides an offset of the environmental externalities associated with non-renewable electricity production. The purpose of this paper is to explore the possible issues with RECs and comparable alternative compliance options. Existing codes have been examined to determine energy equivalence between the energy generation requirement and the RECs alternative over the life of the building. The price equivalence of the requirement and the alternative is determined to consider the economic drivers for a market decision. This research includes case studies that review how the few existing codes have incorporated RECs and some of the issues inherent in REC markets. Section 1 of the report reviews compliance options including RECs, green energy purchase programs, shared solar agreements and leases, and other options. Section 2 provides detailed case studies on codes that include RECs and community-based alternative compliance methods; the ways existing code requirements structure alternative compliance options like RECs are the focus of the case studies. Section 3 explores the possible structure of the renewable energy generation requirement in the context of energy and price equivalence. REC prices have shown high variation by market and over time, which makes it critical for code language implementing a renewable energy generation requirement to be updated frequently, or the requirement will not remain price-equivalent over time.
Section 4 of the report provides a maximum-case estimate of the impact on the PV market and the REC market based on the Kaufmann et al. proposed requirement levels. If all new buildings in the commercial sector complied with the requirement to install rooftop PV arrays, nearly 4,700 MW of solar would be installed in 2012, a major increase from EIA estimates of 640 MW of solar generation capacity installed in 2009. The residential sector could contribute roughly an additional 2,300 MW based on the same code requirement level of 4 W/ft² of roof area. Section 5 of the report provides a basic framework for draft code language recommendations based on the analysis of the alternative compliance levels.
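A quick worked example of the 4 W/ft² requirement, using an assumed roof area and capacity factor to show the scale of the resulting installation:

```python
# Worked example of the proposed requirement (4 W/ft2 of roof area). The
# building size and capacity factor are assumptions for illustration.
roof_area_ft2 = 25_000           # hypothetical flat-roof commercial building
requirement_w_per_ft2 = 4.0
capacity_factor = 0.18           # typical fixed-tilt PV, assumed

installed_w = roof_area_ft2 * requirement_w_per_ft2          # 100 kW
annual_kwh = installed_w / 1000 * capacity_factor * 8760
print(f"required PV: {installed_w / 1000:.0f} kW "
      f"-> ~{annual_kwh:,.0f} kWh/yr at CF={capacity_factor}")
```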
An Improved Neutron Transport Algorithm for HZETRN2006
NASA Astrophysics Data System (ADS)
Slaba, Tony
NASA's new space exploration initiative includes plans for long-term human presence in space, thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation, and uncertainty quantification of the tools commonly used in radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced by a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points would render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach numerically integrates the elastic spectral distributions with adequate resolution in the energy domain without affecting the run-time of the code, and it is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of these efforts is given, along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.
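The underlying numerical issue generalizes: a fixed coarse grid can grossly mis-integrate a distribution much narrower than its spacing, while integrating the known shape analytically within each cell does not. The narrow Gaussian below is a stand-in with illustrative units; HZETRN's actual spectral distributions differ.

```python
import numpy as np
from math import erf, sqrt

# Narrow distribution (width << grid spacing), normalized to integrate to 1.
E0, width = 1000.0, 0.5
f = lambda x: np.exp(-0.5 * ((x - E0) / width) ** 2) / (width * np.sqrt(2 * np.pi))

grid = np.linspace(0.0, 2000.0, 101)      # spacing of 20 >> width
fg = f(grid)
trapezoid = np.sum(0.5 * (fg[:-1] + fg[1:]) * np.diff(grid))

# Analytic integral of the known shape over each cell, then summed.
cdf = lambda x: 0.5 * (1.0 + erf((x - E0) / (width * sqrt(2.0))))
analytic = sum(cdf(b) - cdf(a) for a, b in zip(grid[:-1], grid[1:]))

print(f"trapezoid on coarse grid: {trapezoid:.3f}  (true integral = 1)")
print(f"analytic per-cell sum   : {analytic:.3f}")
```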
Coupled Neutron Transport for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.
2009-01-01
Exposure estimates inside space vehicles, surface habitats, and high altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS, FLUKA, and MCNPX, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light particle transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.
NASA Astrophysics Data System (ADS)
Yu, Lianchun; Liu, Liwei
2014-03-01
The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented by synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the population, as well as the number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of neural systems when energy use is constrained.
10 CFR 436.42 - Evaluation of Life-Cycle Cost Effectiveness.
Code of Federal Regulations, 2011 CFR
2011-01-01
... the life-cycle cost analysis method in part 436, subpart A, of title 10 of the Code of Federal Regulations ... PROGRAMS, Agency Procurement of Energy Efficient Products, § 436.42: Evaluation of Life-Cycle Cost Effectiveness ...
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Born, U.
1970-01-01
A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma-photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation fields of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
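A minimal sketch of the iterative part of such an unfolding, on a toy 3-bin response matrix (drift, decay, and background corrections are omitted). The Van Cittert-style fixed-point update is an assumption about the general approach, not the 1970 code's exact scheme, and the response values are invented.

```python
import numpy as np

# Toy detector response: columns are true-energy bins (low to high), rows are
# measured bins. High-energy events partially deposit into lower channels,
# so mass sits above the diagonal; the diagonal is the full-energy peak.
R = np.array([[0.80, 0.15, 0.05],
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 0.60]])
true_flux = np.array([100.0, 50.0, 200.0])
measured = R @ true_flux                  # what the spectrometer records

f = measured.copy()                       # initial guess for the unfolding
for _ in range(200):
    f = f + (measured - R @ f)            # Van Cittert fixed-point iteration

print("unfolded:", np.round(f, 3))
print("true    :", true_flux)
```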
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbride, Theresa L.
2009-03-30
This is a case study of the Lakeland, Florida, Habitat for Humanity affiliate, which has partnered with DOE's Building America program to build homes that achieve energy savings of 30% or more over the Building America baseline home (a home built to the 1993 Model Energy Code). The article includes a description of the energy-efficiency features used. The Lakeland affiliate built several of its homes with ducts in conditioned space, which minimizes heat losses and gains. They also used high-efficiency SEER 14 air conditioners; radiant barriers in the roof to keep attics cooler; above-code, high-performance, dual-pane, vinyl-framed low-emissivity windows; a passive fresh air duct to the air handler; and duct blaster and blower door testing of every home to ensure the home's air tightness. This case study was also prepared as a flier titled "High Performance Builder Spotlight: Lakeland Habitat for Humanity, Lakeland, Florida," which was cleared as PNNL-SA-59068 and distributed at the International Builders' Show, Feb. 13-16, 2008, in Orlando, Florida.
Establishing a commercial building energy data framework for India
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iyer, Maithili; Kumar, Satish; Mathew, Sangeeta
Buildings account for over 40% of the world's energy consumption and are therefore a key contributor to a country's energy and carbon budgets. Understanding how buildings use energy is critical to understanding how related policies may impact energy use. Data enables decision making, and good quality data arms consumers with the tools to compare their energy performance to their peers, allowing them to differentiate their buildings in the real estate market on the basis of their energy footprint. Good quality data are also essential for policy makers to prioritize their energy saving strategies and track implementation. The United States' Commercial Building Energy Consumption Survey (CBECS) is an example of a successful data framework that is highly useful for governmental and nongovernmental initiatives related to benchmarking, energy forecasting, rating systems and metrics, and more. The Bureau of Energy Efficiency (BEE) in India developed the Energy Conservation Building Code (ECBC) and launched the Star Labeling program for a few energy-intensive building segments as significant first steps. However, a data-driven policy framework for systematically targeting energy efficiency in both new construction and existing buildings has largely been missing. There is no quantifiable mechanism currently in place to track the impact of code adoption through regular reporting/surveys of energy consumption in the commercial building stock. In this paper we present findings from our study, which explored use cases and approaches for establishing a commercial buildings data framework for India.
Improvements in the MGA Code Provide Flexibility and Better Error Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruhter, W D; Kerr, J
2005-05-26
The Multi-Group Analysis (MGA) code is widely used to determine nondestructively the relative isotopic abundances of plutonium by gamma-ray spectrometry. MGA users have expressed concern about the lack of flexibility and transparency in the code. Users often have to ask the code developers for modifications to accommodate new measurement situations, such as additional peaks being present in the plutonium spectrum or expected peaks being absent. We are testing several new improvements to a prototype, general gamma-ray isotopic analysis tool with the intent of either revising or replacing the MGA code. These improvements will give the user the ability to modify, add, or delete the gamma- and x-ray energies and branching intensities used by the code in determining a more precise gain and in determining the relative detection efficiency. We have also fully integrated the determination of the relative isotopic abundances with the determination of the relative detection efficiency to provide a more accurate determination of the errors in the relative isotopic abundances. We provide details in this paper on these improvements and a comparison of results obtained with current versions of the MGA code.
Progress Towards Highly Efficient Windows for Zero—Energy Buildings
NASA Astrophysics Data System (ADS)
Selkowitz, Stephen
2008-09-01
Energy efficient windows could save 4 quads/year, with an additional 1 quad/year gain from daylighting in commercial buildings. This corresponds to 13% of the energy used by US buildings and 5% of all energy used by the US. The technical potential is thus very large, and the economic potential is slowly becoming a reality. This paper describes progress in energy efficient windows that employ low-emissivity glazing, electrochromic switchable coatings, and other novel materials. Dynamic systems are being developed that use sensors and controls to modulate daylighting and shading contributions in response to occupancy, comfort, and energy needs. Improving the energy performance of windows involves physics in a variety of applications: optics, heat transfer, materials science, and applied engineering. Technical solutions must also be compatible with national policy, codes and standards, economics, business practice and investment, real and perceived risks, comfort, health, safety, productivity, amenities, and occupant preferences and values. The challenge is to optimize energy performance by understanding and reinforcing the synergistic coupling between these many issues.
40% Whole-House Energy Savings in the Hot-Humid Climate
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
This guide book is a resource to help builders design and construct highly energy-efficient homes, while addressing building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the hot-humid climate can build homes that achieve whole house energy savings of 40% over the Building America benchmark (the 1993 Model Energy Code) with no added overall costs for consumers.
40% Whole-House Energy Savings in the Mixed-Humid Climate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baechler, Michael C.; Gilbride, T. L.; Hefty, M. G.
2011-09-01
This guide book is a resource to help builders design and construct highly energy-efficient homes, while addressing building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the mixed-humid climate can build homes that achieve whole house energy savings of 40% over the Building America benchmark (the 1993 Model Energy Code) with no added overall costs for consumers.
Research Support Facility (RSF): Leadership in Building Performance (Brochure)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This brochure/poster provides information on the features of the Research Support Facility including a detailed illustration of the facility with call outs of energy efficiency and renewable energy technologies. Imagine an office building so energy efficient that its occupants consume only the amount of energy generated by renewable power on the building site. The building, the Research Support Facility (RSF) occupied by the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) employees, uses 50% less energy than if it were built to current commercial code and achieves the U.S. Green Building Council's Leadership in Energy and Environmental Design (LEED®) Platinum rating. With 19% of the primary energy in the U.S. consumed by commercial buildings, the RSF is changing the way commercial office buildings are designed and built.
Numerical Modeling and Testing of Inductively-Driven and High-Energy Pulsed Plasma Thrusters
NASA Technical Reports Server (NTRS)
Parma, Brian
2004-01-01
Pulsed Plasma Thrusters (PPTs) are advanced electric space propulsion devices that are characterized by simplicity and robustness. They suffer, however, from low thrust efficiencies. This summer, two approaches to improve the thrust efficiency of PPTs will be investigated through both numerical modeling and experimental testing. The first approach, an inductively-driven PPT, uses a double-ignition circuit to fire two PPTs in succession. This effectively changes the PPT's configuration from an LRC circuit to an LR circuit. The LR circuit is expected to provide better impedance matching and improve the efficiency of the energy transfer to the plasma. An added benefit of the LR circuit is an exponential decay of the current, whereas a traditional PPT's underdamped LRC circuit experiences the characteristic "ringing" of its current. The exponential decay may provide improved lifetime and sustained electromagnetic acceleration. The second approach, a high-energy PPT, is a traditional PPT with a variable-size capacitor bank. This PPT will be simulated and tested at energy levels between 100 and 450 joules in order to investigate the relationship between efficiency and energy level. The Multi-block Arbitrary Coordinate Hydromagnetic (MACH2) code is used. The MACH2 code, designed by the Center for Plasma Theory and Computation at the Air Force Research Laboratory, has been used to gain insight into a variety of plasma problems, including electric plasma thrusters. The goals for this summer include numerical predictions of performance for both the inductively-driven PPT and the high-energy PPT, experimental validation of the numerical models, and numerical optimization of the designs. These goals will be met through numerical and experimental investigation of the PPTs' current waveforms, mass loss (or ablation), and impulse bit characteristics.
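The contrast between the two current waveforms can be reproduced with textbook circuit formulas. The Python sketch below uses hypothetical component values (not measured PPT parameters) to show the underdamped LRC ringing against the monotonic LR exponential decay the abstract describes.

```python
import math

# Illustrative circuit parameters (hypothetical, not measured PPT values).
L = 100e-9   # inductance, H
C = 20e-6    # capacitance, F
R = 30e-3    # resistance, ohm
V0 = 1500.0  # initial capacitor voltage, V
I0 = V0 * math.sqrt(C / L)  # current scale for the LR comparison, A

def i_lrc(t):
    # Underdamped series LRC discharge: the characteristic "ringing".
    alpha = R / (2 * L)
    wd = math.sqrt(1.0 / (L * C) - alpha**2)
    return (V0 / (wd * L)) * math.exp(-alpha * t) * math.sin(wd * t)

def i_lr(t):
    # LR circuit: monotonic exponential decay, no current reversal.
    return I0 * math.exp(-R * t / L)

for t_us in (0.5, 2.0, 5.0, 10.0):
    t = t_us * 1e-6
    print(f"t = {t_us:5.1f} us   LRC: {i_lrc(t):+10.1f} A   LR: {i_lr(t):10.1f} A")
```

With these values the LRC current changes sign within a few microseconds while the LR current simply decays, which is the waveform difference claimed to improve electrode lifetime.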
An efficient parallel algorithm for the calculation of unrestricted canonical MP2 energies.
Baker, Jon; Wolinski, Krzysztof
2011-11-30
We present details of our efficient implementation of full accuracy unrestricted open-shell second-order canonical Møller-Plesset (MP2) energies, both serial and parallel. The algorithm is based on our previous restricted closed-shell MP2 code using the Saebo-Almlöf direct integral transformation. Depending on system details, UMP2 energies take from less than 1.5 to about 3.0 times as long as a closed-shell RMP2 energy on a similar system using the same algorithm. Several examples are given including timings for some large stable radicals with 90+ atoms and over 3600 basis functions. Copyright © 2011 Wiley Periodicals, Inc.
Adaptive software-defined coded modulation for ultra-high-speed optical transport
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Zhang, Yequn
2013-10-01
In optically-routed networks, different wavelength channels carrying traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is impacted differently by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of destination, we adjust the forward error correction (FEC) strength: depending on the information obtained from the monitoring channels, we select the code rate matched to the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the use of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to address the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme that, in addition to amplitude, phase, and polarization state, employs spatial modes as additional basis functions for multidimensional coded modulation.
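The rate-adaptation rule (pick the bits-per-symbol and code-rate pair whose product best approaches capacity) can be sketched directly. In this Python fragment the candidate rate and constellation sets and the AWGN capacity reference are illustrative assumptions, not the paper's actual mode table.

```python
import math

# Candidate QC-LDPC code rates and constellation sizes (bits/symbol).
# These sets, and the AWGN capacity reference, are illustrative only.
CODE_RATES = (0.75, 0.80, 0.85, 0.90)
BITS_PER_SYMBOL = (2, 3, 4, 5, 6)

def pick_mode(snr_linear):
    """Choose (bits/symbol, rate) whose product best approaches capacity
    without exceeding it, mimicking the rate-adaptation rule described."""
    capacity = math.log2(1.0 + snr_linear)  # bits/symbol, AWGN reference
    best, best_eff = None, -1.0
    for b in BITS_PER_SYMBOL:
        for r in CODE_RATES:
            eff = b * r
            if eff <= capacity and eff > best_eff:
                best, best_eff = (b, r), eff
    return capacity, best

for snr_db in (8, 12, 16, 20):
    cap, mode = pick_mode(10 ** (snr_db / 10))
    print(f"SNR {snr_db:2d} dB  capacity {cap:4.2f} b/sym  ->  mode {mode}")
```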
Hybrid model for simulation of plasma jet injection in tokamak
NASA Astrophysics Data System (ADS)
Galkin, Sergei A.; Bogatu, I. N.
2016-10-01
The hybrid kinetic model of plasma treats the ions as kinetic particles and the electrons as a charge-neutralizing massless fluid. The model is applicable when most of the energy is concentrated in the ions rather than in the electrons, i.e., it is well suited for the high-density hyper-velocity C60 plasma jet. The hybrid model separates the slower ion time scale from the faster electron time scale, which can then be neglected. That is why hybrid codes consistently outperform traditional PIC codes in computational efficiency while still resolving kinetic ion effects. We discuss a 2D hybrid model and code with an exactly energy-conserving numerical algorithm and present some results of its application to the simulation of C60 plasma jet penetration through a tokamak-like magnetic barrier. We also examine the 3D model/code extension and its possible applications to tokamak and ionospheric plasmas. The work is supported in part by US DOE Grant DE-SC0015776.
Boltzmann Transport Code Update: Parallelization and Integrated Design Updates
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.; Nealy, J. E.; DeAngelis, G.; Feldman, G. A.; Chokshi, S.
2003-01-01
The ongoing effort to develop a web site for radiation analysis is expected to result in increased usage of the high charge and energy transport code HZETRN, so the requested calculations should be performed quickly and efficiently. This raised the question: could the implementation of parallel processing speed up the required calculations? To answer this question, two modifications of the HZETRN computer code were created. The first modification used the shield materials Al(2219), then polyethylene, then Al(2219); this modified Fortran code was labeled 1SSTRN.F. The second modification considered shield materials of CO2 and Martian regolith; this modified Fortran code was labeled MARSTRN.F.
Potential reduction of energy consumption in public university library
NASA Astrophysics Data System (ADS)
Noranai, Z.; Azman, ADF
2017-09-01
Efficient electrical energy usage has been recognized as one of the important factors in reducing the cost of electrical energy consumption, and various parties have emphasized the importance of using electrical energy efficiently. Inefficient use of electrical energy is among the biggest factors increasing administration costs at Universiti Tun Hussein Onn Malaysia. With this in view, a project to investigate the potential reduction of electrical energy consumption at Universiti Tun Hussein Onn Malaysia was carried out, comprising a case study of the electrical energy consumption of Perpustakaan Tunku Tun Aminah. The scopes of this project are to identify energy consumption in the selected building and to find the factors contributing to wastage of electrical energy. MS1525:2001, the Malaysian Standard Code of Practice on energy efficiency and use of renewable energy for non-residential buildings, was used as reference. From the results, four saving measures were proposed: changing the lamp type, installing occupancy sensors, reducing the number of lamps, and improving the shading coefficient of the glazing. These measures are suggested to improve the efficiency of electrical energy consumption. Improving human behaviour toward energy-saving measures can deliver 10% of the total cost savings, while technical building measures can deliver the remaining 90%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Gowans, Dakers; Telarico, Chad
The Commercial and Industrial Lighting Evaluation Protocol (the protocol) describes methods to account for gross energy savings resulting from the programmatic installation of efficient lighting equipment in large populations of commercial, industrial, and other nonresidential facilities. This protocol does not address savings resulting from changes in codes and standards, or from education and training activities. A separate Uniform Methods Project (UMP) protocol, Chapter 3: Commercial and Industrial Lighting Controls Evaluation Protocol, addresses methods for evaluating savings resulting from lighting control measures such as adding time clocks, tuning energy management system commands, and adding occupancy sensors.
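Gross lighting savings in protocols of this type reduce to connected-load deltas multiplied by operating hours and an HVAC interaction factor. The Python sketch below follows that general structure; every number in it is a hypothetical placeholder, not a value from the protocol.

```python
# Gross-savings arithmetic of the form used in lighting evaluation
# protocols: (baseline kW - efficient kW) x annual operating hours x an
# HVAC interaction factor. All values below are hypothetical placeholders.
def gross_kwh_savings(baseline_kw, efficient_kw, annual_hours, hvac_factor=1.0):
    """Annual gross energy savings for one lighting measure."""
    return (baseline_kw - efficient_kw) * annual_hours * hvac_factor

fixtures = [
    # (baseline kW, efficient kW, annual operating hours)
    (0.458, 0.220, 3900),   # e.g., fluorescent troffer -> LED retrofit
    (1.000, 0.450, 8400),   # e.g., HID high-bay -> LED high-bay
]

total = sum(gross_kwh_savings(b, e, h, hvac_factor=1.05) for b, e, h in fixtures)
print(f"Portfolio gross savings: {total:,.0f} kWh/yr")
```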
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Mendon, Vrushali V.; Goel, Supriya
2012-06-01
The 2009 and 2012 International Energy Conservation Codes (IECC) require a substantial improvement in energy efficiency compared to the 2006 IECC. This report presents the average energy use savings for a typical new residential dwelling unit built to the 2009 and 2012 IECC relative to the 2006 IECC. Results are reported for each of the eight climate zones in the IECC and as a national average.
DEGAS: Dynamic Exascale Global Address Space Programming Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demmel, James
The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds and/or communication-avoiding algorithms (that either meet the lower bound, or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning, and genomics.
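As one representative instance of the communication lower bounds referenced above (not the project's full catalogue), the classical result for dense n x n matrix multiplication on P processors, each with local memory of size M, can be written as:

```latex
% Communication lower bound for classical O(n^3) matrix multiplication
% (per processor, along the critical path):
W = \Omega\!\left(\frac{n^3}{P\sqrt{M}}\right) \ \text{words moved},
\qquad
S = \Omega\!\left(\frac{n^3}{P\,M^{3/2}}\right) \ \text{messages}.
```

Communication-avoiding algorithms are those that attain such bounds, rather than the larger communication volume of conventional formulations.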
Design of a tubular skylight system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, B.L.
1996-10-01
Since its introduction to the US market in 1991, the tubular skylight has provided a solution to the problem of lighting dark corners in a house. Over the years, designs of similar products have emphasized quantity of light alone, while a range of other equally important issues (an efficient collecting system, selection of higher-specular-reflectance materials, seals, and the distribution and quality of light) received little attention. In this paper, the fundamental design concept of an efficient tubular skylight and the possibility of collimating diffuse light are reviewed. The importance of the specular reflectance of the tube material to the performance of a tubular skylight is demonstrated. The visual appearance (quality) of light transmitted down the tube is related in part to the yellowness index of various materials. The adequacy of current building and energy code requirements for tubular skylights is briefly discussed, and energy simulation results based on a numerical code are presented.
Distributed Joint Source-Channel Coding in Wireless Sensor Networks
Zhu, Xuqi; Liu, Yu; Zhang, Lin
2009-01-01
Given that sensors in wireless sensor networks are energy-limited and wireless channel conditions are harsh, there is an urgent need for a low-complexity coding method with a high compression ratio and noise resistance. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels, and broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency.
NASA Astrophysics Data System (ADS)
Blum, Volker
This talk describes recent advances of a general, efficient, accurate all-electron electronic theory approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark quality codes for materials, but scalable to large system sizes (1,000s of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O (N) hybrid functional based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be efficiently yet accurately applied using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition metal compound based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erez, Mattan; Yelick, Katherine; Sarkar, Vivek
The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. Our approach is to provide an efficient and scalable programming model that can be adapted to application needs through the use of dynamic runtime features and domain-specific languages for computational kernels. We address the following technical challenges: Programmability: a rich set of programming constructs based on a Hierarchical Partitioned Global Address Space (HPGAS) model, demonstrated in UPC++. Scalability: hierarchical locality control, lightweight communication (extended GASNet), and efficient synchronization mechanisms (Phasers). Performance portability: just-in-time specialization (SEJITS) for generating hardware-specific code and scheduling libraries for domain-specific adaptive runtimes (Habanero). Energy efficiency: communication-optimal code generation to optimize energy efficiency by reducing data movement. Resilience: Containment Domains for flexible, domain-specific resilience, using state-capture mechanisms and lightweight, asynchronous recovery mechanisms. Interoperability: runtime and language interoperability with MPI and OpenMP to encourage broad adoption.
Transport calculations and accelerator experiments needed for radiation risk assessment in space.
Sihver, Lembit
2008-01-01
The major uncertainties in space radiation risk estimates for humans are associated with the poor knowledge of the biological effects of low- and high-LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties in the biological effects and increase the accuracy of the risk coefficients for charged-particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk of radiation-induced failures in advanced microelectronics, such as single-event effects, and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g., the ability to predict particle fluence, dose, and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space- and ground-based accelerator experiments are needed. The efficiency of passive shielding and the protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper, different multipurpose particle and heavy ion transport codes are presented, different concepts of shielding and protection are discussed, and future accelerator experiments needed for testing and validating codes and shielding materials are outlined.
SIP Shear Walls: Cyclic Performance of High-Aspect-Ratio Segments and Perforated Walls
Vladimir Kochkin; Douglas R. Rammer; Kevin Kauffman; Thomas Wiliamson; Robert J. Ross
2015-01-01
The increasing stringency of energy codes and the growing market demand for more energy-efficient buildings give structural insulated panel (SIP) construction an opportunity to increase its use in commercial and residential buildings. However, shear wall aspect ratio limitations and a lack of knowledge on how to design SIPs with window and door openings are barriers to the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrar, Sara; Rothgeb, Stacey; Polly, Ben
The U.S. Department of Energy (DOE) Building America Program enables the transformation of the U.S. housing industry to achieve energy savings through energy-efficient, high-performance homes with improved durability, comfort, and health for occupants. Building America bridges the gap between the development of emerging technologies and the adoption of codes and standards by engaging industry partners in applied research, development, and demonstration of high-performance solutions.
HZETRN: A heavy ion/nucleon transport code for space radiations
NASA Technical Reports Server (NTRS)
Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.
1991-01-01
The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computationally efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed, including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation are discussed, and comparison is made with simplified analytic solutions to test the numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.
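For reference, the straight-ahead, continuous-slowing-down form of the Boltzmann equation that underlies transport codes of this family is commonly written as below; this is a sketch of the standard textbook form, not a transcription of the code's exact discretization. Here phi_j is the flux of ion type j, S_j the stopping power, sigma_j the total macroscopic cross section, and sigma_jk the cross section for producing type j from type k.

```latex
\left[\frac{\partial}{\partial x}
      - \frac{\partial}{\partial E}\,\tilde{S}_j(E)
      + \sigma_j(E)\right]\phi_j(x,E)
  \;=\; \sum_{k} \int_{E}^{\infty} \sigma_{jk}(E,E')\,\phi_k(x,E')\,dE'
```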
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vine, E.
1990-11-01
As part of Lawrence Berkeley Laboratory's (LBL) technical assistance to the Sustainable City Project, compliance and enforcement activities related to local and state building codes for existing and new construction were evaluated in two case studies. The analysis of the City of San Francisco's Residential Energy Conservation Ordinance (RECO) showed that a limited, prescriptive energy conservation ordinance for existing residential construction can be enforced relatively easily with little administrative cost, and that compliance with such ordinances can be quite high. Compliance with the code was facilitated by extensive publicity, an informed public concerned with the cost of energy and knowledgeable about energy efficiency, the threat of punishment (Order of Abatement), the use of private inspectors, and training workshops for City and private inspectors. The analysis of California's Title 24 Standards for new residential and commercial construction showed that enforcement of this type of code for many climate zones is more complex and requires extensive administrative support for education and training of inspectors, architects, engineers, and builders. Under this code, prescriptive and performance approaches for compliance are permitted, resulting in the demand for alternative methods of enforcement: technical assistance, plan review, field inspection, and computer analysis. In contrast to existing construction, building design and new materials and construction practices are of critical importance in new construction, creating a need for extensive technical assistance and extensive interaction between enforcement personnel and the building community. Compliance problems associated with building design and installation did occur in both residential and nonresidential buildings. Because statewide codes are enforced by local officials, these problems may increase over time as energy standards change and become more complex and as other standards (e.g., health and safety codes) remain a higher priority. The California Energy Commission realizes that code enforcement by itself is insufficient and expects that additional educational and technical assistance efforts (e.g., manuals, training programs, and toll-free telephone lines) will ameliorate these problems.
Small Changes Yield Large Results at NIST's Net-Zero Energy Residential Test Facility.
Fanney, A Hunter; Healy, William; Payne, Vance; Kneifel, Joshua; Ng, Lisa; Dougherty, Brian; Ullah, Tania; Omar, Farhad
2017-12-01
The Net-Zero Energy Residential Test Facility (NZERTF) was designed to be approximately 60% more energy efficient than homes meeting the 2012 International Energy Conservation Code (IECC) requirements. The thermal envelope minimizes heat loss/gain through the use of advanced framing and enhanced insulation. A continuous air/moisture barrier resulted in an air exchange rate of 0.6 air changes per hour at 50 Pa. The home incorporates a vast array of extensively monitored renewable and energy-efficient technologies including an air-to-air heat pump system with a dedicated dehumidification cycle; a ducted heat-recovery ventilation system; a whole house dehumidifier; a photovoltaic system; and a solar domestic hot water system. During its first year of operation the NZERTF produced an energy surplus of 1023 kWh. Based on observations during the first year, changes were made to determine if further improvements in energy performance could be obtained. The changes consisted of installing a thermostat that incorporated control logic to minimize the use of auxiliary heat, using a whole house dehumidifier in lieu of the heat pump's dedicated dehumidification cycle, and reducing the ventilation rate to a value that met but did not exceed code requirements. During the second year of operation the NZERTF produced an energy surplus of 2241 kWh. This paper describes the facility, compares the performance data for the two years, and quantifies the energy impact of the weather conditions and operational changes.
Implementing energy standards for motors and buildings in the Philippines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiel, S.; Busch, J.; Sanchez, C.
1998-07-01
The Philippines' master plan for energy makes energy standards for appliances, buildings, and motors cornerstones of its energy efficiency effort. Significant progress has been made in implementing appliance standards for some products, but it has lagged for others, partly because the resources allocated have dictated a cautious, deliberate approach. Products for which there has been a lack of information about the respective markets have received the lowest priority; motors fall in this category. In developing building codes, the Philippine government has also taken a cautious, deliberate approach and is just now attending to compliance with a commercial building energy performance standard that was enacted into law in 1994. This paper describes the results of recent new-building and motor market assessments carried out in the Philippines and a survey of building energy code implementation in other countries, and shows how these products are being used to further the implementation of energy standards in the Philippines. Lessons for other countries are drawn from this experience.
NASA Astrophysics Data System (ADS)
Werner, Brian Thomas
Composite structures have long been used in many industries where it is advantageous to reduce weight while maintaining high stiffness and strength. Composites can now be found in an ever broadening range of applications: sporting equipment, automobiles, marine and aerospace structures, and energy production. These structures are typically sandwich panels composed of fiber reinforced polymer composite (FRPC) facesheets which provide the stiffness and the strength and a low density polymeric foam core that adds bending rigidity with little additional weight. The expanding use of composite structures exposes them to high energy, high velocity dynamic loadings which produce multi-axial dynamic states of stress. This circumstance can present quite a challenge to designers, as composite structures are highly anisotropic and display properties that are sensitive to loading rates. Computer codes are continually in development to assist designers in the creation of safe, efficient structures. While the design of an optimal composite structure is more complex, engineers can take advantage of the effect of enhanced energy dissipation displayed by a composite when loaded at high strain rates. In order to build and verify effective computer codes, the underlying assumptions must be verified by laboratory experiments. Many of these codes look to use a micromechanical approach to determine the response of the structure. For this, the material properties of the constituent materials must be verified, three-dimensional constitutive laws must be developed, and failure of these materials must be investigated under static and dynamic loading conditions. In this study, simple models are sought not only to ease their implementation into such codes, but to allow for efficient characterization of new materials that may be developed. Characterization of composite materials and sandwich structures is a costly, time intensive process. A constituent based design approach evaluates potential combinations of materials in a much faster and more efficient manner.
Construction, classification and parametrization of complex Hadamard matrices
NASA Astrophysics Data System (ADS)
Szöllősi, Ferenc
To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousands of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
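Rayleigh quotient iteration is compact enough to sketch. The following Python/NumPy fragment is a minimal dense-matrix illustration, not Denovo's implementation (which operates matrix-free on transport sweeps); it also shows why the shifted systems become ill-conditioned near convergence, which is the problem the multigrid-in-energy preconditioner addresses.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-10, max_iter=50):
    """Minimal dense-matrix sketch of Rayleigh quotient iteration (RQI).
    Each step solves a system shifted by the current Rayleigh quotient;
    that shifted matrix becomes nearly singular as the iterate converges,
    which is the ill-conditioning the abstract's preconditioner targets."""
    x = x0 / np.linalg.norm(x0)
    sigma = x @ A @ x
    for _ in range(max_iter):
        sigma = x @ A @ x                      # Rayleigh quotient shift
        try:
            y = np.linalg.solve(A - sigma * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break                              # exactly singular: converged
        x = y / np.linalg.norm(y)
        if np.linalg.norm(A @ x - sigma * x) < tol:
            break
    return sigma, x

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = (B + B.T) / 2                              # symmetric test matrix
lam, v = rayleigh_quotient_iteration(A, rng.standard_normal(50))
print("RQI eigenvalue:", lam)
```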
Through the Past Decade: How Advanced Energy Design Guides have influenced the Design Industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Bing; Athalye, Rahul A.
Advanced Energy Design Guides (AEDGs) were originally intended to provide a simple approach for building professionals seeking energy-efficient building designs better than ASHRAE Standard 90.1. Since the first guide was released in 2004, the AEDG series has provided inspiration for the design industry and has been seen by designers as a starting point for buildings intended to go beyond minimum codes and standards. In addition, the U.S. Department of Energy's successful Commercial Building Partnerships (CBP) program leveraged many of the recommendations from the AEDGs to achieve 50% energy savings over ASHRAE Standard 90.1-2004 for prototypical designs of large commercial entities in the retail, banking, and lodging sectors. Low-energy technologies and strategies developed during the CBP process have been applied by commercial partners throughout their national portfolios of buildings. Later, the AEDGs served as the perfect platform for both Standard 90.1 and ASHRAE's high-performance buildings standard, Standard 189.1. What was high performance a few years ago, however, has become minimum code today. Indeed, most of the prescriptive envelope component requirements in ASHRAE Standard 90.1-2013 are values recommended in the 50% AEDGs several years ago. Similarly, AEDG strategies and recommendations have penetrated the lighting and HVAC sections of both Standard 189.1 and Standard 90.1. Finally, as we look to the future of codes and standards, the AEDGs are serving as a blueprint for how minimum code requirements could be expressed: by customizing codes to specific building types, design strategies tailored for individual buildings could be prescribed as minimum code, just as in the AEDGs. This paper describes the impact that the AEDGs have had over the last decade on the design industry and how they continue to influence the future of codes and standards. From design professionals to code officials, everyone in the building industry has been affected by the AEDGs.
Data on European non-residential buildings.
D'Agostino, Delia; Cuniberti, Barbara; Bertoldi, Paolo
2017-10-01
This data article relates to the research paper "Energy consumption and efficiency technology measures in European non-residential buildings" (D'Agostino et al., 2017) [1]. The reported data were collected in the framework of the Green Building Programme, which ran from 2006 to 2014 and encouraged the adoption of efficiency measures to boost energy savings in European non-residential buildings. The data focus on the one thousand buildings that joined the Programme, which together save around 985 GWh/year. The main requirement to join the Programme was a reduction of at least 25% in primary energy consumption in a new or retrofitted building. Energy consumption before and after renovation is provided for retrofitted buildings, while a new construction had to be designed to use at least 25% less energy than required by the country's building codes. The following data are linked within this article: energy consumption, absolute and relative savings in primary energy, saving percentages, implemented efficiency measures, and renewables. Further information is given about each building's geometry, envelope, materials, lighting, and systems.
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11
The calculations presented compare the performance of the three Monte Carlo codes PENELOPE-1999, MCNP-4C, and PITS for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder equivalent for the three codes but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because of its superior representation of the actual shape and dimensions of a cell and for its improved computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, outside the spherical scoring volume. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and energies involved, which are far below the normal use for which the code was created. Further work (initiated in summer 2002) is still needed, however, to create a user code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes, and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
A network coding based routing protocol for underwater sensor networks.
Wu, Huayang; Chen, Min; Guan, Xin
2012-01-01
Due to the particularities of the underwater environment, several negative factors seriously interfere with data transmission rates, reliability of data communication, communication range, and network throughput and energy consumption in underwater sensor networks (UWSNs). Thus, full consideration of node energy savings, while maintaining quick, correct, and effective data transmission and extending the network life cycle, is essential when routing protocols for underwater sensor networks are studied. In this paper, we propose a novel routing algorithm for UWSNs. To increase energy consumption efficiency and extend network lifetime, we propose a time-slot based routing algorithm (TSBR) and design a probability-balanced mechanism applied to it. The theory of network coding is then introduced to TSBR to further reduce node energy consumption and extend network lifetime; hence, time-slot based balanced network coding (TSBNC) comes into being. We evaluated the proposed time-slot based balanced routing algorithm and compared it with other classical underwater routing protocols. The simulation results show that the proposed protocol can reduce the probability of node conflicts, shorten the routing construction process, balance the energy consumption of each node, and effectively prolong the network lifetime.
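The network-coding idea the protocol builds on can be shown in miniature. The Python fragment below is a generic illustration of XOR coding at a relay (not the TSBNC algorithm itself): one coded broadcast replaces two plain transmissions, which is where the energy saving comes from.

```python
# Generic XOR network coding at a relay: each sink already holds one of
# the two packets and recovers the other from a single coded broadcast.

def xor_packets(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b), "pad packets to equal length first"
    return bytes(x ^ y for x, y in zip(a, b))

pkt_from_node1 = b"sensor-1 reading"
pkt_from_node2 = b"sensor-2 reading"

coded = xor_packets(pkt_from_node1, pkt_from_node2)   # one relay broadcast

# Sink A already holds pkt_from_node1 and decodes node 2's packet:
assert xor_packets(coded, pkt_from_node1) == pkt_from_node2
# Sink B already holds pkt_from_node2 and decodes node 1's packet:
assert xor_packets(coded, pkt_from_node2) == pkt_from_node1
print("both sinks decoded with a single coded broadcast")
```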
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yunzhi; Gowri, Krishnan
2011-02-28
This report summarizes code requirements and energy savings of commercial buildings in Climate Zone 2B built to the 2009 IECC and ASHRAE Standard 90.1-2007 when compared to the 2003 IECC and the 2006 IECC. In general, the 2009 IECC and ASHRAE Standard 90.1-2007 have higher insulation requirements for exterior walls, roofs, and windows and higher efficiency requirements for HVAC equipment. HVAC equipment efficiency requirements are governed by the National Appliance Energy Conservation Act of 1987 (NAECA) and are applicable irrespective of the IECC version adopted. The energy analysis results show that commercial buildings meeting the 2009 IECC requirements save 4.4% to 9.5% site energy and 4.1% to 9.9% energy cost when compared to the 2006 IECC, and save 10.6% to 29.4% site energy and 10.3% to 29.3% energy cost when compared to the 2003 IECC. Similar analysis comparing ASHRAE Standard 90.1-2007 requirements to the 2006 IECC shows that the energy savings are in the 4.0% to 10.7% range for multi-family and retail buildings, but less than 2% for office buildings. Further comparison of ASHRAE Standard 90.1-2007 requirements to the 2003 IECC shows site energy savings in the range of 7.7% to 30.6% and energy cost savings from 7.9% to 30.3%. Both the 2009 IECC and ASHRAE Standard 90.1-2007 have the potential to save energy by comparable levels for most building types.
Vladimirov, N V; Likhoshvaĭ, V A; Matushkin, Iu G
2007-01-01
Gene expression is known to correlate with the degree of codon bias in many unicellular organisms; however, such correlation is absent in some organisms. Recently we demonstrated that inverted complementary repeats within a coding DNA sequence must be considered for proper estimation of translation efficiency, since they may form secondary structures that obstruct ribosome movement. We have developed a program for estimating the potential expression of a coding DNA sequence in a given unicellular organism using its genome sequence. The program computes an elongation efficiency index. The computation is based on an estimate of coding DNA sequence elongation efficiency that takes into account three key factors: codon bias, the average number of inverted complementary repeats, and the free energy of potential stem-loop structures formed by the repeats. The influence of these factors on translation is estimated numerically, and an optimal proportion of these factors is computed for each organism individually. Quantitative translational characteristics of 384 unicellular organisms (351 bacteria, 28 archaea, 5 eukaryota) have been computed using their annotated genomes from NCBI GenBank. Five potential evolutionary strategies of translational optimization have been identified among the studied organisms, and a considerable difference in preferred translational strategies between Bacteria and Archaea has been revealed. Significant correlations between the elongation efficiency index and gene expression levels have been shown for two organisms (S. cerevisiae and H. pylori) using available microarray data. The proposed method allows numerical estimation of coding DNA sequence translation efficiency and optimization of the nucleotide composition of heterologous genes in unicellular organisms. http://www.mgs.bionet.nsc.ru/mgs/programs/eei-calculator/.
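One ingredient of such an index, the codon-bias component, can be sketched as a CAI-style geometric mean of relative adaptiveness. The Python fragment below is a simplified stand-in: the synonymous-codon table is a truncated toy (two amino acids only), the reference counts are hypothetical, and the published index's repeat and stem-loop free-energy terms are omitted.

```python
import math
from collections import Counter

# Truncated toy synonymous-codon table, for illustration only.
SYNONYMS = {
    "Leu": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
    "Lys": ["AAA", "AAG"],
}

def codon_bias_score(cds: str, reference_counts: Counter) -> float:
    """Geometric mean of relative adaptiveness w = f(codon)/f(best synonym),
    computed over the codons of `cds` covered by the toy table."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    logs = []
    for group in SYNONYMS.values():
        fmax = max(reference_counts[c] for c in group) or 1
        for c in codons:
            if c in group:
                w = reference_counts[c] / fmax   # relative adaptiveness
                logs.append(math.log(max(w, 1e-9)))
    return math.exp(sum(logs) / len(logs)) if logs else 0.0

# Hypothetical reference usage from highly expressed genes:
ref = Counter({"CTG": 50, "TTA": 5, "CTT": 10, "CTC": 12, "CTA": 3,
               "TTG": 8, "AAA": 30, "AAG": 40})
print(codon_bias_score("CTGAAGCTGTTAAAA", ref))
```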
Development of the Off-line Analysis Code for GODDESS
NASA Astrophysics Data System (ADS)
Garland, Heather; Cizewski, Jolie; Lepailleur, Alex; Walters, David; Pain, Steve; Smith, Karl
2016-09-01
Determining (n,γ) cross sections for unstable nuclei is important for understanding the r-process that is theorized to occur in supernovae and neutron-star mergers. However, (n,γ) reactions are difficult to measure directly because of the short lifetimes of the neutron-rich nuclei involved. A possible surrogate for the (n,γ) reaction is the (d,pγ) reaction; the measurement of these reactions in inverse kinematics is part of the scope of GODDESS - Gammasphere ORRUBA (Oak Ridge Rutgers University Barrel Array): Dual Detectors for Experimental Structure Studies. The development of an accurate and efficient off-line analysis code for GODDESS experiments is not only essential but also provides a unique opportunity to create an analysis code designed specifically for transfer reaction experiments. The off-line analysis code has been developed to produce histograms from the binary data file to determine how best to sort events. Recent developments in the off-line analysis code are presented, as well as details of the energy and position calibrations for the ORRUBA detectors. This work is supported in part by the U.S. Department of Energy and the National Science Foundation.
Comparison of EGS4 and MCNP Monte Carlo codes when calculating radiotherapy depth doses.
Love, P A; Lewis, D G; Al-Affan, I A; Smith, C W
1998-05-01
The Monte Carlo codes EGS4 and MCNP have been compared when calculating radiotherapy depth doses in water. The aims of the work were to study (i) the differences between calculated depth doses in water for a range of monoenergetic photon energies and (ii) the relative efficiency of the two codes for different electron transport energy cut-offs. The depth doses from the two codes agree with each other within the statistical uncertainties of the calculations (1-2%). The relative depth doses also agree with data tabulated in the British Journal of Radiology Supplement 25. A discrepancy in the dose build-up region may be attributed to the different electron transport algorithms used by EGS4 and MCNP. This discrepancy is considerably reduced when the improved electron transport routines are used in the latest (4B) version of MCNP. Timing calculations show that EGS4 is at least 50% faster than MCNP for the geometries used in the simulations.
76 FR 42688 - Updating State Residential Building Energy Efficiency Codes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... 19, 2013. ADDRESSES: Certification Statements must be addressed to the Buildings Technologies Program...-rise (greater than three stories) multifamily residential buildings and hotel, motel, and other..., townhouses, row houses, and low-rise multifamily buildings (not greater than three stories) such as...
A Study of Neutron Leakage in Finite Objects
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2015-01-01
A computationally efficient 3DHZETRN code, capable of simulating high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, was recently developed for simple shielded objects. Monte Carlo (MC) benchmarks were used to verify the 3DHZETRN methodology in slab and spherical geometry, and it was shown that 3DHZETRN agrees with MC codes to the degree that various MC codes agree among themselves. One limitation in the verification process is that all of the codes (3DHZETRN and three MC codes) utilize different nuclear models/databases. In the present report, the new algorithm, with well-defined convergence criteria, is used to quantify the neutron leakage from simple geometries to provide a means of verifying 3D effects and to provide guidance for further code development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deru, Michael; Field-Macumber, Kristin
This document provides guidance for modeling and inspecting energy-efficient property in commercial buildings for certification of the energy and power cost savings related to Section 179D of the Internal Revenue Code (IRC) enacted in Section 1331 of the 2005 Energy Policy Act (EPAct) of 2005, noted in Internal Revenue Service (IRS) Notices 2006-52 (IRS 2006), 2008-40 (IRS 2008) and 2012-26 (IRS 2012), and updated by the Protecting Americans from Tax Hikes (PATH) Act of 2015. Specifically, Section 179D provides federal tax deductions for energy-efficient property related to a commercial building's envelope; interior lighting; heating, ventilating, and air conditioning (HVAC); and service hot water (SHW) systems. This document applies to buildings placed in service on or after January 1, 2016.
Airside HVAC BESTEST: HVAC Air-Distribution System Model Test Cases for ASHRAE Standard 140
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ronald; Neymark, Joel; Kennedy, Mike D.
This paper summarizes recent work to develop new airside HVAC equipment model analytical verification test cases for ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs. The analytical verification test method allows comparison of simulation results from a wide variety of building energy simulation programs with quasi-analytical solutions, further described below. Standard 140 is widely cited for evaluating software for use with performance-path energy efficiency analysis, in conjunction with well-known energy-efficiency standards including ASHRAE Standard 90.1, the International Energy Conservation Code, and other international standards. Airside HVAC equipment is a common area of modeling not previously explicitly tested by Standard 140. Integration of the completed test suite into Standard 140 is in progress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
New, Joshua Ryan; Kumar, Jitendra; Hoffman, Forrest M.
Statement of the Problem: ASHRAE releases updates to Standard 90.1, "Energy Standard for Buildings except Low-Rise Residential Buildings," every three years, resulting in a 3.7%-17.3% increase in energy efficiency for buildings with each release. The standard is adopted by, or informs, building codes in nations across the globe and is the national standard for the US; individual states elect which release year of the standard they will enforce. These codes are built upon Standard 169, "Climatic Data for Building Design Standards," the latest 2017 release of which defines climate zones based on 8,118 weather stations throughout the world and data from the past 8-25 years. These data may not be indicative of the weather that buildings built today will see during their 30-120 year lifespans. Methodology & Theoretical Orientation: Using more modern, high-resolution datasets from climate satellites, IPCC climate models (PCM and HadGCM), high-performance computing resources (Titan), and new capabilities for clustering and optimization, the authors analyzed different methods for redefining climate zones, using bottom-up analysis of multiple meteorological variables that subject-matter experts selected as being important to energy consumption, rather than the heating/cooling degree days currently used. Findings: We analyzed the accuracy of the redefined climate zones compared to the current climate zones, examined how the climate zones moved under different climate change scenarios, and quantified the accuracy of these methods at a local level and at national scale for the US. Conclusion & Significance: Significant annual national energy and cost savings (billions USD) could likely be realized by adjusting climate zones to take into account anticipated trends or scenarios in regional weather patterns.
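The bottom-up clustering idea can be sketched with a standard k-means pass over multi-variable station records. The Python fragment below assumes scikit-learn is available and uses synthetic stand-in data; the variables, their distributions, and the choice of eight clusters are illustrative assumptions, not the study's actual inputs.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for multi-variable weather-station records.
rng = np.random.default_rng(42)
n_stations = 8118
X = np.column_stack([
    rng.normal(14, 8, n_stations),     # mean annual temperature, C
    rng.normal(900, 400, n_stations),  # annual precipitation, mm
    rng.normal(65, 12, n_stations),    # mean relative humidity, %
    rng.normal(5.5, 1.5, n_stations),  # solar irradiance, kWh/m2/day
])

# Standardize so no single variable dominates the distance metric.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

# Cluster stations on several variables at once, rather than on
# heating/cooling degree days alone.
zones = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(Xz)
print("stations per candidate climate zone:", np.bincount(zones))
```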
NASA Technical Reports Server (NTRS)
Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.
1981-01-01
A computer simulation code was employed to evaluate several generic types of solar power systems (up to 10 MWe). Details of the simulation methodology and the solar plant concepts are given along with cost and performance results. The Solar Energy Simulation computer code (SESII) was used, which optimizes the size of the collector field and energy storage subsystem for given engine-generator and energy-transport characteristics. Nine plant types were examined, employing combinations of different technology options, such as: distributed or central receivers with one- or two-axis tracking or no tracking; point- or line-focusing concentrators; central or distributed power conversion; Rankine, Brayton, or Stirling thermodynamic cycles; and thermal or electrical storage. Optimal cost curves were plotted as a function of levelized busbar energy cost and annualized plant capacity. Point-focusing distributed receiver systems were found to be most efficient (17-26 percent).
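Levelized busbar energy cost, the figure of merit on those cost curves, reduces in its simplest annualized form to annualized capital plus O&M divided by annual energy. The Python sketch below uses hypothetical placeholder numbers, not values from the study.

```python
# Levelized busbar energy cost in its simplest annualized form.
def levelized_busbar_cost(capital_cost, fixed_charge_rate,
                          annual_om_cost, annual_energy_kwh):
    """(annualized capital + O&M) / annual energy, in $/kWh."""
    return (capital_cost * fixed_charge_rate + annual_om_cost) / annual_energy_kwh

# A notional 10-MWe plant at a 22% annual capacity factor (illustrative):
annual_energy = 10_000 * 8760 * 0.22        # kWh/yr
lec = levelized_busbar_cost(capital_cost=35e6, fixed_charge_rate=0.10,
                            annual_om_cost=0.8e6, annual_energy_kwh=annual_energy)
print(f"levelized energy cost: {lec:.3f} $/kWh")
```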
ANSI/ASHRAE/IES Standard 90.1-2013 Preliminary Determination: Qualitative Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halverson, Mark A.; Hart, Reid; Athalye, Rahul A.
2014-03-01
Section 304(b) of the Energy Conservation and Production Act (ECPA), as amended, requires the Secretary of Energy to make a determination each time a revised version of ASHRAE Standard 90.1 is published with respect to whether the revised standard would improve energy efficiency in commercial buildings. When the U.S. Department of Energy (DOE) issues an affirmative determination on Standard 90.1, states are statutorily required to certify within two years that they have reviewed and updated the commercial provisions of their building energy code, with respect to energy efficiency, to meet or exceed the revised standard. This report provides a preliminary qualitative analysis of all addenda to ANSI/ASHRAE/IES Standard 90.1-2010 (referred to as Standard 90.1-2010 or the 2010 edition) that were included in ANSI/ASHRAE/IES Standard 90.1-2013 (referred to as Standard 90.1-2013 or the 2013 edition).
NASA Astrophysics Data System (ADS)
Apostol, A. I.; Pantelica, A.; Sima, O.; Fugaru, V.
2016-09-01
Non-destructive methods were applied to determine the isotopic composition and the time elapsed since the last chemical purification of nine uranium samples. The applied methods are based on measuring the gamma and X radiation of the uranium samples with a high-resolution low-energy gamma spectrometric system with a planar high-purity germanium detector and a low-background gamma spectrometric system with a coaxial high-purity germanium detector. The "Multigroup γ-ray Analysis Method for Uranium" (MGAU) code was used for the precise determination of the samples' isotopic composition. The age of the samples was determined from the isotopic ratio 214Bi/234U. This ratio was calculated from the analyzed spectra of each uranium sample using relative detection efficiency. Special attention is paid to the coincidence summing corrections that have to be taken into account when performing this type of analysis. In addition, an alternative approach to the age determination, using full energy peak efficiencies obtained by Monte Carlo simulations with the GESPECOR code, is described.
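The arithmetic behind a 214Bi/234U chronometer can be sketched under standard simplifying assumptions. The Python fragment below assumes complete purification at t = 0, ages much shorter than the 230Th and 226Ra half-lives, and secular equilibrium of 214Bi with 226Ra, in which case the activity ratio grows approximately as lam230*lam226*t^2/2; the measured ratio used is a hypothetical value, not data from the paper.

```python
import math

YEAR = 365.25 * 86400
lam230 = math.log(2) / (75380 * YEAR)   # 230Th decay constant, 1/s
lam226 = math.log(2) / (1600 * YEAR)    # 226Ra decay constant, 1/s

def age_years(activity_ratio_bi214_u234: float) -> float:
    # R(t) ~ lam230 * lam226 * t**2 / 2  =>  t = sqrt(2 R / (lam230 lam226))
    t = math.sqrt(2.0 * activity_ratio_bi214_u234 / (lam230 * lam226))
    return t / YEAR

# Hypothetical measured activity ratio:
print(f"estimated age: {age_years(2.0e-7):.1f} years")
```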
NASA Technical Reports Server (NTRS)
Miki, Kenji; Moder, Jeff; Liou, Meng-Sing
2016-01-01
In this paper, we present recent enhancements of the Open National Combustion Code (OpenNCC) and apply it to model a realistic combustor configuration, the Energy Efficient Engine (E3). First, we perform a series of validation tests for the newly implemented advection upstream splitting method (AUSM) and the extended version of the AUSM-family schemes (AUSM+-up), achieving good agreement with the analytical and experimental data. In the steady-state E3 cold-flow results using the Reynolds-averaged Navier-Stokes (RANS) equations, we find a noticeable difference between the flow fields calculated by the two numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the AUSM scheme. The main differences are that the AUSM scheme is less numerically dissipative and predicts much stronger reverse flow in the recirculation zone. This study indicates that the two schemes could yield different flame-holding predictions and overall flame structures.
Evaluation of Savings in Energy-Efficient Public Housing in the Pacific Northwest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, A.; Lubliner, M.; Howard, L.
2013-10-01
This report presents the results of an energy performance and cost-effectiveness analysis. The Salishan phase 7 and demonstration homes were compared to Salishan phase 6 homes built to 2006 Washington State Energy Code specifications. Predicted annual energy savings (over Salishan phase 6) were 19% for Salishan phase 7 and 19-24% for the demonstration homes (depending on ventilation strategy). Approximately two-thirds of the savings are attributable to the ductless heat pump (DHP). Working with the electric utility provider, Tacoma Public Utilities, researchers conducted a billing analysis for Salishan phase 7.
Schneider, Thomas D
2010-10-01
The relationship between information and energy is key to understanding biological systems. We can display the information in DNA sequences specifically bound by proteins by using sequence logos, and we can measure the corresponding binding energy. These can be compared by noting that one of the forms of the second law of thermodynamics defines the minimum energy dissipation required to gain one bit of information. Under the isothermal conditions in which molecular machines function, this is k_B T ln 2 joules per bit (k_B is Boltzmann's constant and T is the absolute temperature). An efficiency of binding can then be computed by dividing the information in a logo by the free energy of binding after it has been converted to bits. The isothermal efficiencies of not only genetic control systems but also visual pigments are near 70%. From information and coding theory, the theoretical efficiency limit for bistate molecular machines is ln 2 = 0.6931. Evolutionary convergence to maximum efficiency is limited by the constraint that molecular states must be distinct from each other. The result indicates that natural molecular machines operate close to their information processing maximum (the channel capacity), and implies that nanotechnology can attain this goal.
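For reference, the energy bound this abstract invokes is the Landauer limit, and the efficiency it describes is the ratio of the information displayed in a sequence logo to the binding free energy expressed in bits (the notation below is added here for clarity, not taken from the paper):

```latex
E_{\min} = k_B T \ln 2 \;\;\text{[joules per bit]},
\qquad
\epsilon = \frac{R_{\text{sequence}}}{\;\lvert \Delta G \rvert / (k_B T \ln 2)\;}
```

With |ΔG| the free energy of binding, the denominator converts energy into bits, so ε is dimensionless; the abstract's roughly 70% efficiencies and the ln 2 ≈ 0.6931 limit refer to this ratio.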
Building America Top Innovations 2012: Building Science-Based Climate Maps
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-01-01
This Building America Top Innovations profile describes the Building America-developed climate zone map, which serves as a consistent framework for energy-efficiency requirements in the national model energy code starting with the 2004 IECC Supplement and the ASHRAE 90.1 2004 edition. The map also provides a critical foundation for climate-specific guidance in the widely disseminated EEBA Builder Guides and Building America Best Practice Guides.
An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks
Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian
2018-05-10
Due to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after updating their program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty-cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius causes some transmitting nodes to consume more energy, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can actually be reduced because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which does not affect the network lifetime, to nodes at different distances from the code source, and then provides an algorithm to construct a broadcast backbone. Finally, a comprehensive performance analysis and simulation shows that the proposed ABRCD scheme performs better across different broadcast situations: compared to previous schemes, transmission delay is reduced by 41.11-78.42%, the number of broadcasts is reduced by 36.18-94.27%, the energy utilization ratio is improved by up to 583.42%, and network lifetime is prolonged by up to 274.99%.
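As a reading aid, here is a toy sketch of the two ABRCD ingredients named above: assigning larger broadcast radii where energy is in surplus, then greedily building a broadcast backbone. The surplus threshold, the 2x radius cap, and the example layout are illustrative assumptions, not values from the paper:

```python
import math

def assign_broadcast_radii(nodes, base_radius, sink=(0.0, 0.0)):
    """Toy version of the ABRCD idea: nodes far from the sink (where
    energy is typically in surplus) get a larger broadcast radius, while
    nodes in the hot-spot near the sink keep the base radius so network
    lifetime is not shortened. `nodes` maps id -> {'pos', 'energy'}."""
    max_d = max(math.dist(n['pos'], sink) for n in nodes.values()) or 1.0
    radii = {}
    for nid, n in nodes.items():
        d = math.dist(n['pos'], sink)
        surplus = n['energy'] > 0.5              # assumed surplus threshold
        scale = 1.0 + d / max_d if surplus else 1.0
        radii[nid] = min(2.0, scale) * base_radius   # assumed cap at 2x base
    return radii

def build_backbone(nodes, radii, source):
    """Greedy broadcast backbone: BFS out from the code source; a node
    rebroadcasts only if it reaches nodes not yet covered."""
    reached, backbone, frontier = {source}, set(), [source]
    while frontier:
        nxt = []
        for u in frontier:
            new = [v for v in nodes if v not in reached and
                   math.dist(nodes[u]['pos'], nodes[v]['pos']) <= radii[u]]
            if new:
                backbone.add(u)
                reached.update(new)
                nxt.extend(new)
        frontier = nxt
    return backbone, reached

# Example: three nodes on a line, sink at the origin, source is node 0.
nodes = {0: {'pos': (1.0, 0.0), 'energy': 0.9},
         1: {'pos': (2.0, 0.0), 'energy': 0.8},
         2: {'pos': (3.5, 0.0), 'energy': 0.9}}
radii = assign_broadcast_radii(nodes, base_radius=1.0)
print(build_backbone(nodes, radii, source=0))
```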
Towards energy-efficient nonoscillatory forward-in-time integrations on lat-lon grids
NASA Astrophysics Data System (ADS)
Polkowski, Marcin; Piotrowski, Zbigniew; Ryczkowski, Adam
2017-04-01
The design of next-generation weather prediction models calls for new algorithmic approaches allowing robust integration of atmospheric flow over complex orography at sub-km resolutions. These need to be accompanied by efficient implementations exposing multi-level parallelism, capable of running on modern supercomputing architectures. Here we present recent advances in the energy-efficient implementation of the consistent soundproof/implicit compressible EULAG dynamical core of the COSMO weather prediction framework. Building on experience with the atmospheric dwarfs developed within the H2020 ESCAPE project, we develop efficient, architecture-agnostic implementations of fully three-dimensional MPDATA advection schemes and a generalized diffusion operator in curvilinear coordinates and spherical geometry. We compare an optimized Fortran implementation with a preliminary C++ implementation employing the Gridtools library, allowing integration on CPU and GPU while maintaining a single source code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Wei
2015-07-21
The question of the energy composition of the jets/outflows in high-energy astrophysical systems (e.g., GRBs, AGNs) is taken up first: matter-flux-dominated (MFD), σ < 1, and/or Poynting-flux-dominated (PFD), σ > 1? The standard fireball internal shock (IS) model and the dissipative photosphere model are MFD, while the ICMART (Internal-Collision-induced MAgnetic Reconnection and Turbulence) model is PFD. Motivated by the ICMART model and other relevant problems, such as the "jets in a jet" model of AGNs, the author investigates the models from the points of view of EMF energy dissipation efficiency, relativistic outflow generation, and σ evolution, and simulates collisions between high-σ blobs to mimic the interactions inside PFD jets/outflows using a 3D SRMHD code that solves the conservative form of the ideal MHD equations. σ_b,f is calculated from the simulation results (threshold = 1). The efficiency obtained from this hybrid method is similar to that obtained from the energy evolution of the simulations (35.2%). The efficiency is nearly σ-independent, which is also confirmed by the hybrid method. The relation between σ_b,i and σ_b,f is found to be nearly linear. Results of several parameter studies of EMF energy dissipation efficiency are shown.
Effects of recent energy system changes on CO2 projections for the United States.
Lenox, Carol S; Loughlin, Daniel H
2017-09-21
Recent projections of future United States carbon dioxide (CO2) emissions are considerably lower than projections made just a decade ago. A myriad of factors have contributed to the lower forecasts, including reductions in end-use energy service demands, improvements in energy efficiency, and technological innovations. Policies that have encouraged these changes include renewable portfolio standards, corporate vehicle efficiency standards, smart growth initiatives, revisions to building codes, and air and climate regulations. Understanding the effects of these and other factors can be advantageous as society evaluates opportunities for achieving additional CO2 reductions. Energy system models provide a means to develop such insights. In this analysis, the MARKet ALlocation (MARKAL) model was applied to estimate the relative effects of various energy system changes since the year 2005 on CO2 projections for the year 2025. The results indicate that transformations in the transportation and buildings sectors have played major roles in lowering projections. Particularly influential changes include improved vehicle efficiencies, reductions in projected travel demand, reductions in miscellaneous commercial electricity loads, and higher-efficiency lighting. Electric sector changes have also contributed significantly to the lowered forecasts, driven by demand reductions, renewable portfolio standards, and air quality regulations.
NASA Astrophysics Data System (ADS)
Marcolongo, Juan P.; Zeida, Ari; Semelak, Jonathan A.; Foglia, Nicolás O.; Morzan, Uriel N.; Estrin, Dario A.; González Lebrero, Mariano C.; Scherlis, Damián A.
2018-03-01
In this work we present current advances in the development and applications of LIO, a lab-made code designed for density functional theory calculations on graphics processing units (GPUs) that can be coupled with different classical molecular dynamics engines. The code has been thoroughly optimized to perform efficient molecular dynamics simulations at the QM/MM DFT level, allowing exhaustive sampling of the configurational space. Selected examples are presented for the description of chemical reactivity in terms of free energy profiles, and for the computation of optical properties, such as vibrational and electronic spectra in solvent and protein environments.
NASA Astrophysics Data System (ADS)
Drobny, Jon; Curreli, Davide; Ruzic, David; Lasa, Ane; Green, David; Canik, John; Younkin, Tim; Blondel, Sophie; Wirth, Brian
2017-10-01
Surface roughness greatly impacts material erosion and thus plays an important role in plasma-surface interactions. Developing strategies for efficiently introducing rough surfaces into ion-solid interaction codes will be an important step towards whole-device modeling of plasma devices and future fusion reactors such as ITER. Fractal TRIDYN (F-TRIDYN) is an upgraded version of the Monte Carlo binary collision approximation (BCA) program TRIDYN, developed for this purpose, that includes an explicit fractal model of surface roughness and extended input and output options for file-based code coupling. Code coupling with both plasma and material codes has been achieved and allows for multi-scale, whole-device modeling of plasma experiments; these code-coupling results will be presented. F-TRIDYN has been further upgraded with an alternative, statistical model of surface roughness. The statistical model is significantly faster than, and compares favorably to, the fractal model. Additionally, the statistical model compares well to alternative computational surface roughness models and to experiments. Theoretical links between the fractal and statistical models are made, and further connections to experimental measurements of surface roughness are explored. This work was supported by the PSI-SciDAC Project funded by the U.S. Department of Energy through contract DOE-DE-SC0008658.
Lin, Hsin-Hon; Chuang, Keh-Shih; Lin, Yi-Hsing; Ni, Yu-Ching; Wu, Jay; Jan, Meei-Ling
2014-10-21
GEANT4 Application for Tomographic Emission (GATE) is a powerful Monte Carlo simulator that combines the advantages of the general-purpose GEANT4 simulation code and the specific software tool implementations dedicated to emission tomography. However, the detailed physical modelling of GEANT4 is highly computationally demanding, especially when tracking particles through voxelized phantoms. To circumvent the relatively slow simulation of voxelized phantoms in GATE, another efficient Monte Carlo code can be used to simulate photon interactions and transport inside a voxelized phantom. The simulation system for emission tomography (SimSET), a dedicated Monte Carlo code for PET/SPECT systems, is well-known for its efficiency in simulation of voxel-based objects. An efficient Monte Carlo workflow integrating GATE and SimSET for simulating pinhole SPECT has been proposed to improve voxelized phantom simulation. Although the workflow achieves a desirable increase in speed, it sacrifices the ability to simulate decaying radioactive sources such as non-pure positron emitters or multiple emission isotopes with complex decay schemes and lacks the modelling of time-dependent processes due to the inherent limitations of the SimSET photon history generator (PHG). Moreover, a large volume of disk storage is needed to store the huge temporal photon history file produced by SimSET that must be transported to GATE. In this work, we developed a multiple photon emission history generator (MPHG) based on SimSET/PHG to support a majority of the medically important positron emitters. We incorporated the new generator codes inside GATE to improve the simulation efficiency of voxelized phantoms in GATE, while eliminating the need for the temporal photon history file. The validation of this new code based on a MicroPET R4 system was conducted for (124)I and (18)F with mouse-like and rat-like phantoms. Comparison of GATE/MPHG with GATE/GEANT4 indicated there is a slight difference in energy spectra for energy below 50 keV due to the lack of x-ray simulation from (124)I decay in the new code. The spatial resolution, scatter fraction and count rate performance are in good agreement between the two codes. For the case studies of (18)F-NaF ((124)I-IAZG) using MOBY phantom with 1 × 1 × 1 mm(3) voxel sizes, the results show that GATE/MPHG can achieve acceleration factors of approximately 3.1 × (4.5 ×), 6.5 × (10.7 ×) and 9.5 × (31.0 ×) compared with GATE using the regular navigation method, the compressed voxel method and the parameterized tracking technique, respectively. In conclusion, the implementation of MPHG in GATE allows for improved efficiency of voxelized phantom simulations and is suitable for studying clinical and preclinical imaging.
New schemes for internally contracted multi-reference configuration interaction
NASA Astrophysics Data System (ADS)
Wang, Yubin; Han, Huixian; Lei, Yibo; Suo, Bingbing; Zhu, Haiyan; Song, Qi; Wen, Zhenyi
2014-10-01
In this work we present a new internally contracted multi-reference configuration interaction (MRCI) scheme by applying the graphical unitary group approach and the hole-particle symmetry. The latter allows a Distinct Row Table (DRT) to split into a number of sub-DRTs in the active space. In the new scheme a contraction is defined as a linear combination of arcs within a sub-DRT, and connected to the head and tail of the DRT through up-steps and down-steps to generate internally contracted configuration functions. The new scheme deals with the closed-shell (hole) orbitals and external orbitals in the same manner and thus greatly simplifies calculations of coupling coefficients and CI matrix elements. As a result, the number of internal orbitals is no longer a bottleneck of MRCI calculations. The validity and efficiency of the new ic-MRCI code are tested by comparing with the corresponding WK code of the MOLPRO package. The energies obtained from the two codes are essentially identical, and the computational efficiencies of the two codes have their own advantages.
Jackson Park Hospital Green Building Medical Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
William Dorsey; Nelson Vasquez
2010-03-31
Jackson Park Hospital completed the construction of a new Medical Office Building on its campus this spring. The new building's construction reflects the City of Chicago's recent focus on protecting the environment and conserving energy and resources through the introduction of green building codes. Located in a poor, inner-city neighborhood on the South Side of Chicago, Jackson Park Hospital has chosen green building strategies to help make the area a better place to live and work. The new green building houses the hospital's Family Medicine Residency Program and Specialty Medical Offices. The residency program has been vital in attracting new, young physicians to this medically underserved area, and the new outpatient center will also help attract needed medical providers to the community. The facility also has areas designated for women's health and community education. The Community Education Conference Room will provide learning opportunities to area residents, with emphasis on conserving resources and protecting the environment, as well as on healthcare access and preventive medicine. The new Medical Office Building was constructed with numerous energy-saving features. The exterior cladding is an innovative, locally manufactured precast concrete panel system with integral insulation that achieves an R-value in excess of building code requirements. The roof is a 'green roof' covered by native plantings, lessening the impact of solar heat gain on the building and reducing air conditioning requirements. The windows are low-E, tinted, and insulated to reduce cooling requirements in summer and heating requirements in winter. The main entrance has an air lock to prevent unconditioned air from entering the building and affecting interior air temperatures. Since much of the traffic in and out of the office building comes from the adjacent Jackson Park Hospital, a pedestrian bridge connects the two buildings, further decreasing the amount of unconditioned air that enters the office building. The HVAC system has an Energy Efficiency Rating 29% greater than required, and no CFC-based refrigerants were used, reducing the emission of compounds that contribute to ozone depletion and global warming. In addition, interior light fixtures employ the latest energy-efficient lamp and ballast technology, and lighting throughout the building is operated by occupancy sensors that automatically turn off lights in unoccupied rooms. The electrical traction elevators use less energy than typical elevators and are made of 95% recycled material. Further, locally manufactured products were used throughout, minimizing the energy required to construct the building. The primary objective was to construct a 30,000-square-foot medical office building on the Jackson Park Hospital campus that would comply with the newly adopted City of Chicago green building codes focused on protecting the environment and conserving energy and resources. The energy-saving systems demonstrate a state-of-the-art whole-building approach to energy-efficient design and construction, and the building's green aspects contribute to the community by emphasizing the environmental and economic benefits of conserving resources.
The building highlights the integration of Chicago's new green building codes into a poor, inner-city neighborhood project, and it is designed to attract medical providers and physicians to a medically underserved area.
A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node.
Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing
2018-01-01
Energy efficiency remains the key obstacle to long-term, real-time wireless ECG monitoring. In this paper, a digital compressed sensing (CS)-based single-spot Bluetooth ECG node is proposed to address this challenge. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node consisting of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and supporting components. The efficiency improvement and the node's performance are evidenced by experiments using ECG signals sampled by the proposed node during daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially without distortion, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total.
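A minimal sketch of the compression pipeline described above, under stated assumptions: a sparse binary sensing matrix, a DCT sparsity basis, and the PRD metric. Recovery here uses simple ISTA soft-thresholding in place of the paper's BSBL, and the "ECG" frame is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 128, 8            # frame length, measurements, ones per SBM column

# Orthonormal DCT-II basis; columns are basis vectors, so x = D @ c.
j = np.arange(n)
D = np.cos(np.pi * (2 * j[:, None] + 1) * j[None, :] / (2 * n))
D[:, 0] *= 1 / np.sqrt(n)
D[:, 1:] *= np.sqrt(2 / n)

# Sparse binary sensing matrix: k ones per column.
Phi = np.zeros((m, n))
for col in range(n):
    Phi[rng.choice(m, size=k, replace=False), col] = 1.0

# Synthetic test frame that is sparse in the DCT basis (not real ECG data).
c_true = np.zeros(n)
c_true[[1, 3, 7, 12]] = [5.0, -3.0, 2.0, 1.0]
x = D @ c_true
y = Phi @ x                      # compressed samples sent over the radio

# ISTA recovery of the sparse coefficients (stand-in for the paper's BSBL).
A = Phi @ D
L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the data-fit gradient
lam, c = 0.1, np.zeros(n)
for _ in range(500):
    g = c + A.T @ (y - A @ c) / L
    c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold

x_hat = D @ c
prd = 100 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"PRD = {prd:.2f}%")       # small PRD -> near-undistorted recovery
```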
Low-Latency and Energy-Efficient Data Preservation Mechanism in Low-Duty-Cycle Sensor Networks.
Jiang, Chan; Li, Tao-Shen; Liang, Jun-Bin; Wu, Heng
2017-05-06
As in traditional wireless sensor networks (WSNs), nodes in low-duty-cycle sensor networks (LDC-WSNs) have only limited memory and energy. Unlike in WSNs, however, nodes in an LDC-WSN sleep most of the time to preserve energy, and this sleeping causes serious data transmission delay. Yet each source node that has sensed data needs to disseminate it quickly to other nodes for redundant storage; otherwise, the data could be lost if its source node were destroyed by outside forces in a harsh environment. The quick-dissemination requirement thus conflicts with the sleeping delay in the network. How to quickly disseminate all source data to all nodes with limited memory for effective preservation is a challenging issue. In this paper, a low-latency and energy-efficient data preservation mechanism for LDC-WSNs is proposed. The mechanism is fully distributed. Data are disseminated with low latency using a revised probabilistic broadcasting mechanism and then stored by the nodes with LT (Luby Transform) codes, a well-known rateless erasure code. After dissemination and storage complete, some nodes may die from outside forces. If a mobile sink enters the network at any time and from any place to collect the data, it can recover all of the source data by visiting a small portion of the surviving nodes. Theoretical analyses and simulation results show that our mechanism outperforms existing mechanisms in data dissemination delay and energy efficiency.
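For orientation, the storage primitive this mechanism builds on is LT encoding: each stored packet XORs a randomly chosen set of source blocks, with set sizes drawn from a robust soliton degree distribution. The sketch below shows only the encoder, with illustrative parameters; the peeling decoder and the dissemination protocol are omitted:

```python
import math
import random

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution (standard LT-code construction;
    c and delta are illustrative tuning parameters)."""
    s = c * math.log(k / delta) * math.sqrt(k)
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    tau = [0.0] * (k + 1)
    pivot = int(round(k / s))
    for d in range(1, pivot):
        tau[d] = s / (k * d)
    if 1 <= pivot <= k:
        tau[pivot] = s * math.log(s / delta) / k
    z = sum(rho) + sum(tau)
    return [(rho[d] + tau[d]) / z for d in range(k + 1)]

def lt_encode(blocks, dist, rng):
    """Produce one LT-encoded packet: draw a degree from `dist`,
    then XOR that many distinct source blocks together."""
    k = len(blocks)
    degree = rng.choices(range(k + 1), weights=dist)[0]
    idx = rng.sample(range(k), degree)
    pkt = 0
    for i in idx:
        pkt ^= blocks[i]
    return idx, pkt

rng = random.Random(1)
blocks = [rng.getrandbits(32) for _ in range(16)]   # 16 source blocks
dist = robust_soliton(len(blocks))
idx, pkt = lt_encode(blocks, dist, rng)
print(sorted(idx), hex(pkt))
```

A mobile sink that collects slightly more than 16 such packets from surviving nodes can, with high probability, peel back the XORs and recover every source block, which is what makes rateless storage attractive here.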
Plasma separation process. Betacell (BCELL) code, user's manual
NASA Astrophysics Data System (ADS)
Taherzadeh, M.
1987-11-01
The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems new life. Moreover, the availability of the Plasma Separation Process (PSP) at TRW now makes it possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept using available up-to-date source/cell parameters. The program attempts to determine the betacell energy device's maximum efficiency, the degradation due to the emitting source's radiation, and source/cell lifetime power reduction processes. Additionally, a comparison is made between Schottky and PN junction devices for betacell battery design purposes. Computer runs were made to determine the J-V distribution function and the upper limit of betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source, and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a promethium source are also given for comparison.
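The J-V and maximum-power calculation mentioned above reduces, for an idealized cell, to the standard single-diode model with the beta-generated current playing the role of photocurrent. The sketch below uses that generic model with illustrative parameter values; it is not the BCELL implementation:

```python
import numpy as np

def diode_power_curve(j_beta, j0, t_kelvin=300.0, n_ideality=1.0):
    """Ideal-diode J-V sweep for a betavoltaic cell: the beta-generated
    current density j_beta takes the place of photocurrent in a solar
    cell. Returns (V, P) arrays; all parameter values are illustrative."""
    kT_q = 8.617e-5 * t_kelvin           # thermal voltage in volts
    v_oc = n_ideality * kT_q * np.log(j_beta / j0 + 1.0)
    v = np.linspace(0.0, v_oc, 500)
    j = j_beta - j0 * (np.exp(v / (n_ideality * kT_q)) - 1.0)
    return v, v * j

v, p = diode_power_curve(j_beta=1e-6, j0=1e-12)   # A/cm^2, assumed values
i_max = p.argmax()
print(f"max power {p[i_max]:.2e} W/cm^2 at {v[i_max]:.3f} V")
```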
75 FR 54131 - Updating State Residential Building Energy Efficiency Codes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-03
... and 95 degrees F for heating (for heat pumps), the 2000 IECC insulation requirement for supply ducts in unconditioned spaces is R-5 (minimum) for nearly all cases. Insulation required by the 2000 IECC... Duct Insulation Requirements Duct insulation requirements generally increased in the 2003 IECC. The...
Nikezic, D; Shahmohammadi Beni, Mehrdad; Krstic, D; Yu, K N
2016-01-01
The Monte Carlo method was used to determine the efficiency of proton production and to study the energy and angular distributions of the generated protons. The ENDF cross-section library is used to simulate the interactions between neutrons and atoms in a polyethylene (PE) layer, while the ranges of protons of different energies in PE are determined using the Stopping and Range of Ions in Matter (SRIM) computer code. The efficiency of proton production increases with the PE layer thickness. However, the fraction of protons escaping from a given polyethylene volume depends strongly on the neutron energy and target thickness, except for very thin PE layers. The energy and angular distributions of the protons are also estimated, showing that, for the range of energies and thicknesses considered, the escaping proton flux depends on the PE layer thickness, with an optimal thickness existing for a fixed primary neutron energy.
Design Analysis of SNS Target Station Biological Shielding Monolith with Proton Power Uprate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bekar, Kursat B.; Ibrahim, Ahmad M.
2017-05-01
This report documents the analysis of the dose rate in the experiment area outside the Spallation Neutron Source (SNS) target station shielding monolith with a proton beam energy of 1.3 GeV. The analysis implemented a coupled three-dimensional (3D)/two-dimensional (2D) approach using both the Monte Carlo N-Particle Extended (MCNPX) 3D Monte Carlo code and the Discrete Ordinates Transport (DORT) 2D deterministic code. The analysis at a proton beam energy of 1.3 GeV showed that the dose rate in continuously occupied areas on the lateral surface outside the SNS target station shielding monolith is less than 0.25 mrem/h, which complies with the SNS facility design objective. However, the methods and codes used in this analysis are out of date and unsupported, and the 2D approximation of the target shielding monolith does not accurately represent the geometry. We recommend that this analysis be updated with modern codes and libraries such as ADVANTG or SHIFT, which have demonstrated very high efficiency in full 3D radiation shielding analyses of similar and even more difficult problems.
COMPTEL neutron response at 17 MeV
NASA Technical Reports Server (NTRS)
O'Neill, Terrence J.; Ait-Ouamer, Farid; Morris, Joann; Tumer, O. Tumay; White, R. Stephen; Zych, Allen D.
1992-01-01
The Compton imaging telescope (COMPTEL) instrument of the Gamma Ray Observatory was exposed to 17 MeV d,t neutrons prior to launch. These data were analyzed and compared with Monte Carlo calculations using the MCNP (LANL) code. Energy and angular resolutions are compared, and absolute efficiencies are calculated at incident angles of 0 and 30 degrees. The COMPTEL neutron responses at 17 MeV and higher energies are needed to understand solar flare neutron data.
Dimitrov, I. K.; Zhang, X.; Solovyov, V. F.; ...
2015-07-07
Recent advances in second-generation (YBCO) high-temperature superconducting wire could potentially enable the design of super-high-performance energy storage devices that combine the high energy density of chemical storage with the high power of superconducting magnetic storage. However, the high aspect ratio and considerable filament size of these wires require the concomitant development of dedicated optimization methods that account for the critical current density in type-II superconductors. In this study, we report on the novel application and results of a CPU-efficient semianalytical computer code based on the Radia 3-D magnetostatics software package. Our algorithm is used to simulate and optimize the energy density of a superconducting magnetic energy storage device model, based on design constraints such as overall size and number of coils. The rapid performance of the code rests on analytical calculations of the magnetic field, based on an efficient implementation of the Biot-Savart law for a large variety of 3-D "base" geometries in the Radia package. The significantly reduced CPU time and simple data input, together with the consideration of realistic input variables such as material-specific, temperature- and magnetic-field-dependent critical current densities, have enabled the Radia-based algorithm to outperform finite-element approaches in CPU time at the same accuracy levels. Comparative simulations of MgB2 and YBCO-based devices are performed at 4.2 K to ascertain the realistic efficiency of the design configurations.
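The analytical field evaluation that gives the code its speed is the Biot-Savart law, which for a filamentary current I along a closed path C reads (standard form, reproduced here for reference):

```latex
\mathbf{B}(\mathbf{r}) \;=\; \frac{\mu_0 I}{4\pi}
\oint_{C} \frac{d\boldsymbol{\ell}' \times (\mathbf{r}-\mathbf{r}')}
               {\lvert \mathbf{r}-\mathbf{r}' \rvert^{3}}
```

Because this integral has closed-form solutions for many simple conductor shapes, a library of such "base" geometries can assemble the total field by superposition, avoiding the volume meshing that drives finite-element cost.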
Hester, Nathan; Li, Ke; Schramski, John R; Crittenden, John
2012-04-30
Globally, residential energy consumption continues to rise due to trends such as increasing access to modern appliances, population growth, and expanding electricity distribution. Residential energy consumption currently accounts for approximately one-fifth of total U.S. energy consumption. This research analyzes the effectiveness of a range of energy-saving measures for residential houses in semi-arid climates. These measures include structural insulated panels (SIPs) for exterior wall construction, daylight control, increased window area, efficient window glass suited to the local weather, and several combinations of these. Our model determined that energy consumption is reduced by up to 6.1% when multiple energy-saving technologies are combined. In addition, pre-construction technologies (SIPs, daylight control, and increased window area) provide roughly four times the energy savings of post-construction technologies (window blinds and efficient window glass). The model also illuminated the importance of variations in local climate and building configuration, highlighting the site-specific nature of this type of energy consumption quantification for policy and building code considerations. Published by Elsevier Ltd.
The limited role of recombination energy in common envelope removal
NASA Astrophysics Data System (ADS)
Grichener, Aldana; Sabach, Efrat; Soker, Noam
2018-05-01
We calculate the outward energy transport time by convection and photon diffusion in an inflated common envelope and find this time to be shorter than the envelope expansion time. We conclude, therefore, that most of the hydrogen recombination energy ends up in radiation rather than in kinetic energy of the outflowing envelope. We use the stellar evolution code MESA and inject energy inside the envelope of an asymptotic giant branch star to mimic energy deposition by a spiraling-in stellar companion. Over 1.7 years the envelope expands by more than a factor of 2. Throughout the evolution, convection can carry the energy very efficiently outwards to the radius where radiative transfer becomes more efficient. The total energy transport time stays within several months, shorter than the dynamical time of the envelope. Had we included rapid mass loss, as is expected in common envelope evolution, the energy transport time would have been even shorter. It seems that calculations assuming that most of the recombination energy ends up in the outflowing gas might be inaccurate.
Quantitative Kα line spectroscopy for energy transport in ultra-intense laser plasma interaction
NASA Astrophysics Data System (ADS)
Zhang, Z.; Nishimura, H.; Namimoto, T.; Fujioka, S.; Arikawa, Y.; Nakai, M.; Koga, M.; Shiraga, H.; Kojima, S.; Azechi, H.; Ozaki, T.; Chen, H.; Park, J.; Williams, G. J.; Nishikino, M.; Kawachi, T.; Sagisaka, A.; Orimo, S.; Ogura, K.; Pirozhkov, A.; Yogo, A.; Kiriyama, H.; Kondo, K.; Okano, Y.
2012-10-01
X-ray line spectra ranging from 17 to 77 keV were quantitatively measured with a Laue spectrometer composed of a cylindrically curved crystal and a detector. The absolute sensitivity of the spectrometer system was calibrated using pre-characterized laser-produced x-ray sources for the detectors and radioisotopes for the crystal. The integrated reflectivity of the crystal is in good agreement with predictions from an open x-ray diffraction code. The energy transfer efficiency from the incident laser beams to hot electrons, the agents of energy transfer for Au Kα x-ray line emission, is derived as a consequence of this work. Taking the hot-electron temperature into account, the transfer efficiency from the LFEX laser to the Au plate target is about 8% to 10%.
Technology change and energy consumption: A comparison of residential subdivisions
NASA Astrophysics Data System (ADS)
Nieves, L. A.; Nieves, A. L.
The energy savings in residential buildings likely to result from implementation of the building energy performance standards (BEPS) were assessed. The goals were to compare energy use in new homes designed to meet or exceed BEPS levels of energy efficiency with that in similar but older homes designed to meet conventional building codes, to survey the home owners regarding their energy conservation attitudes and behaviors, and to ascertain the degree to which conservation attitudes and behaviors are related to residential energy use. The consumer demand theory that provides the framework for the empirical analysis is presented, the sample residences are described, the data collection method is discussed, and the definition and measurement of major variables are presented.
Photon Throughput Calculations for a Spherical Crystal Spectrometer
NASA Astrophysics Data System (ADS)
Gilman, C. J.; Bitter, M.; Delgado-Aparicio, L.; Efthimion, P. C.; Hill, K.; Kraus, B.; Gao, L.; Pablant, N.
2017-10-01
X-ray imaging crystal spectrometers of the type described in Refs. have become a standard diagnostic for Doppler measurements of ion temperature profiles and plasma flow velocities in magnetically confined, hot fusion plasmas. These instruments have by now been implemented on major tokamak and stellarator experiments in Korea, China, Japan, and Germany, and are currently being designed by PPPL for ITER. A still-missing piece of the present data analysis is an efficient code for photon throughput calculations to evaluate the chord-integrated spectral data. Existing ray-tracing codes cannot be used for data analysis between shots, since they require extensive and time-consuming numerical calculations. Here, we present a detailed analysis of the geometrical properties of the ray pattern; this method allows us to minimize the extent of the numerical calculations and to create a more efficient code. This work was performed under the auspices of the U.S. Department of Energy by Princeton Plasma Physics Laboratory under contract DE-AC02-09CH11466.
NASA Astrophysics Data System (ADS)
Aleksandrov, A. P.; Berezovoy, A. N.; Galper, A. M.; Grachev, V. M.; Dmitrenko, V. V.; Kirillov-Ugryumov, V. G.; Lebedev, V. V.; Lyakhov, V. A.; Moiseyev, A. A.; Ulin, S. Y.
1985-09-01
Coding collimators are used to improve the angular resolution of gamma-ray telescopes at energies above 50 MeV. However, the interaction of cosmic rays with the collimator material can produce a background gamma-ray flux that has a deleterious effect on measurement efficiency. An experiment was performed on the Salyut-6-Soyuz spacecraft system with the Elena-F small-scale gamma-ray telescope to measure the magnitude of this background. It is shown that, even at a zenith angle of approximately zero degrees (the angle at which the gamma-ray observations are made), the coding collimator has only an insignificant effect on the background conditions.
NASA Astrophysics Data System (ADS)
Yu, Sha; Evans, Meredydd; Kyle, Page; Vu, Linh; Tan, Qing; Gupta, Ashu; Patel, Pralit
2018-03-01
Nationally Determined Contributions allow countries to examine options for reducing emissions through a range of domestic policies. India, like many developing countries, has committed to reducing emissions through specific policies, including building energy codes. Here we assess the potential of these sectoral policies to help achieve mitigation targets; collectively, the potential impact of such policies across developing countries on national and global emission goals is critically important. Buildings accounted for around one-third of global final energy use in 2010, and building energy consumption is expected to increase as incomes grow in developing countries. Using the Global Change Assessment Model, this study finds that robustly implementing a range of energy efficiency policies can reduce total Indian building energy use by 22% and lower total Indian carbon dioxide emissions by 9% in 2050 compared to the business-as-usual scenario. Among the policies considered, energy codes for new buildings yield the most significant savings. For all building energy policies, well-coordinated, consistent implementation is critical, which requires coordination across departments and agencies, improving stakeholder capacity, and developing appropriate institutions to facilitate policy implementation.
Sengupta, Biswa; Laughlin, Simon Barry; Niven, Jeremy Edward
2014-01-01
Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na+ and K+ channels, with generator potential and graded potential models lacking voltage-gated Na+ channels. We identify three causes of information loss in the generator potential that are by-products of action potential generation: (1) the voltage-gated Na+ channels necessary for action potential generation increase intrinsic noise and (2) introduce non-linearities, and (3) the finite duration of the action potential creates a 'footprint' in the generator potential that obscures incoming signals. These three processes reduce information rates by ∼50% in generator potentials, to ∼3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of their lower information rates, generator potentials are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains, due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital: information loss and cost inflation.
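The efficiency comparison above is the usual bits-per-joule ratio (notation added here, not from the paper): an information rate R divided by a metabolic power P,

```latex
\epsilon \;=\; \frac{R \;[\text{bits s}^{-1}]}{P \;[\text{J s}^{-1}]}
\quad [\text{bits per joule}]
```

so graded potentials, which retain nearly the information rate of generator potentials at far lower energy cost, come out most efficient, and spike trains least.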
Users manual for updated computer code for axial-flow compressor conceptual design
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1992-01-01
An existing computer code that determines the flow path for an axial-flow compressor, either for a given number of stages or for a given overall pressure ratio, was modified for use in air-breathing engine conceptual design studies. The code uses a rapid approximate design methodology based on isentropic simple radial equilibrium. Calculations are performed at constant-span-fraction locations from tip to hub. Energy addition per stage is controlled by specifying maximum allowable values for several aerodynamic design parameters. New modeling was introduced to overcome perceived limitations: a variable rather than constant tip radius, flow-path inclination added to the continuity equation, input of mass flow rate directly rather than indirectly as inlet axial velocity, solution for the exact value of overall pressure ratio rather than for any value that met or exceeded it, and internal computation of efficiency rather than use of input values. The modified code was shown to be capable of computing efficiencies compatible with those of five multistage compressors and one fan tested experimentally. This report serves as a user's manual for the revised code, Compressor Spanline Analysis (CSPAN). The modeling modifications, including two internal loss correlations, are presented, program input and output are described, and a sample case for a multistage compressor is included.
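As an illustration of the kind of meanline relation such conceptual-design codes evaluate, the sketch below computes a stage pressure ratio from the specified energy addition (total temperature rise) and an assumed adiabatic efficiency. This is the generic textbook relation, not necessarily CSPAN's exact formulation:

```python
def stage_pressure_ratio(t0_in, dT0, eta_ad, gamma=1.4):
    """Stage pressure ratio from energy addition: total temperature
    rises by dT0 from inlet total temperature t0_in; eta_ad is the
    assumed stage adiabatic efficiency (generic meanline relation)."""
    return (1.0 + eta_ad * dT0 / t0_in) ** (gamma / (gamma - 1.0))

# Example: 288 K inlet, 40 K rise per stage, 88% adiabatic efficiency
print(stage_pressure_ratio(288.0, 40.0, 0.88))   # ~1.5 per stage
```

Capping dT0 (or equivalent aerodynamic loading parameters) per stage and multiplying such ratios stage by stage is how a spanline code arrives at either the stage count for a target overall pressure ratio or the ratio for a given stage count.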
Annual Performance Evaluation of a Pair of Energy Efficient Houses (WC3 and WC4) in Oak Ridge, TN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biswas, Kaushik; Christian, Jeffrey E; Gehl, Anthony C
2012-04-01
Beginning in 2008, two pairs of energy-saver houses were built at Wolf Creek in Oak Ridge, TN. These houses were designed to maximize energy efficiency using new ultra-high-efficiency components emerging from ORNL's Cooperative Research and Development Agreement (CRADA) partners and others. The first two houses contained 3713 square feet of conditioned area and were designated WC1 and WC2; the second pair, designated WC3 and WC4, had 2721 square feet of conditioned area with a crawlspace foundation. This report focuses on the annual energy performance of WC3 and WC4 and how they compare against a previously benchmarked maximum-energy-efficiency house of similar footprint. WC3 and WC4 are both about 55-60% more efficient than traditional new construction. Each house showcases a different envelope system: WC3 is built with advanced framing and features cellulose insulation partially mixed with phase change materials (PCM), while WC4 has cladding composed of an exterior insulation and finish system (EIFS). The previously benchmarked house was one of three built in the Campbell Creek subdivision in Knoxville, TN. That house (CC3) was designed as a transformation of a builder house (CC1) with the most advanced energy-efficiency features, including solar electricity and hot water, that market conditions were likely to permit within the 2012-2015 period. The builder house itself was representative of a standard, IECC 2006 code-certified, all-electric house built by the builder to sell around 2005-2008.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Eleanor; Pang, Xiufeng; McNeil, Andrew; ...
2015-05-29
Here, as rapid growth in the construction industry continues in China, increased demand for a higher standard of living is driving significant growth in energy use and demand across the country. Building codes and standards have been implemented to head off this trend, tightening prescriptive requirements for fenestration component measures using methods similar to the US model energy code, American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) 90.1. The objective of this study is to (a) provide an overview of applicable code requirements and current efforts within China to enable characterization and comparison of window and shading products, and (b) quantify the load reduction and energy savings potential of several key advanced window and shading systems, given the divergent views on how space conditioning requirements will be met in the future. System-level heating and cooling loads and energy use performance were evaluated for a code-compliant large office building using the EnergyPlus building energy simulation program. Commercially available, highly insulating, low-emittance windows were found to produce 24-66% lower perimeter zone HVAC electricity use compared to the mandated energy-efficiency standard in force (GB 50189-2005) in cold climates like Beijing. Low-e windows with operable exterior shading produced up to 30-80% reductions in perimeter zone HVAC electricity use in Beijing and 18-38% reductions in Shanghai compared to the standard. The economic context of China is unique, since the cost of labor and materials for the building industry is low. Broad deployment of these commercially available technologies, with the proper supporting infrastructure for design, specification, and verification in the field, would enable significant reductions in energy use and greenhouse gas emissions in the near term.
Energy Efficient Sparse Connectivity from Imbalanced Synaptic Plasticity Rules
Sacramento, João; Wichert, Andreas; van Rossum, Mark C. W.
2015-01-01
It is believed that energy efficiency is an important constraint in brain evolution. As synaptic transmission dominates energy consumption, energy can be saved by ensuring that only a few synapses are active. It is therefore likely that the formation of sparse codes and sparse connectivity are fundamental objectives of synaptic plasticity. In this work we study how sparse connectivity can result from a synaptic learning rule of excitatory synapses. Information is maximised when potentiation and depression are balanced according to the mean presynaptic activity level, and the resulting fraction of zero-weight synapses is around 50%. However, an imbalance towards depression increases the fraction of zero-weight synapses without significantly affecting performance. We show that imbalanced plasticity corresponds to imposing a regularising constraint on the L1-norm of the synaptic weight vector, a procedure that is well known to induce sparseness. Imbalanced plasticity is biophysically plausible and leads to more efficient synaptic configurations than a previously suggested approach that prunes synapses after learning. Our framework gives a novel interpretation to the high fraction of silent synapses found in brain regions like the cerebellum. PMID:26046817
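The correspondence stated above can be written as an explicit objective (notation added here): learning maximizes the stored information I(w) subject to an L1 penalty whose weight λ grows with the depression/potentiation imbalance,

```latex
J(\mathbf{w}) \;=\; -\,I(\mathbf{w}) \;+\; \lambda \,\lVert \mathbf{w} \rVert_{1},
\qquad
\lVert \mathbf{w} \rVert_{1} = \sum_{i} \lvert w_{i} \rvert
```

Minimizing J drives many weights exactly to zero, which is the standard sparsifying effect of L1 regularization and matches the abstract's finding that extra depression raises the fraction of zero-weight (silent) synapses at little performance cost.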
SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations
Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.
2016-02-25
Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.
Laedermann, Jean-Pascal; Valley, Jean-François; Bulling, Shelley; Bochud, François O
2004-06-01
The detection process used in a commercial dose calibrator was modeled using the GEANT 3 Monte Carlo code. Dose calibrator efficiency for gamma and beta emitters, and the response to monoenergetic photons and electrons, were calculated. The model shows that beta emitters below 2.5 MeV deposit energy indirectly in the detector through bremsstrahlung produced in the chamber wall or in the source itself. Higher energy beta emitters (E > 2.5 MeV) deposit energy directly in the chamber sensitive volume, and dose calibrator sensitivity increases abruptly for these radionuclides. The Monte Carlo calculations were compared with gamma and beta emitter measurements. The calculations show that the variation in dose calibrator efficiency with measuring conditions (source volume, container diameter, container wall thickness and material, position of the source within the calibrator) is relatively small and can be considered insignificant for routine measurement applications. However, dose calibrator efficiency depends strongly on the inner-wall thickness of the detector.
NASA Astrophysics Data System (ADS)
Chan, Chia-Hsin; Tu, Chun-Chuan; Tsai, Wen-Jiin
2017-01-01
High efficiency video coding (HEVC) not only improves the coding efficiency drastically compared to the well-known H.264/AVC but also introduces coding tools for parallel processing, one of which is tiles. Tile partitioning is allowed to be arbitrary in HEVC, but how to decide tile boundaries remains an open issue. An adaptive tile boundary (ATB) method is proposed to select a better tile partitioning to improve load balancing (ATB-LoadB) and coding efficiency (ATB-Gain) with a unified scheme. Experimental results show that, compared to ordinary uniform-space partitioning, the proposed ATB can save up to 17.65% of encoding times in parallel encoding scenarios and can reduce up to 0.8% of total bit rates for coding efficiency.
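As a rough sketch of what load-balanced tile partitioning involves (a simplification, not the paper's ATB-LoadB algorithm), boundaries can be placed so that each tile covers a near-equal share of a measured per-column encoding cost:

```python
import numpy as np

def balanced_boundaries(costs, n_parts):
    """Split per-column costs into n_parts contiguous chunks of near-equal
    total cost using prefix sums (simplified stand-in for ATB-LoadB)."""
    cum = np.cumsum(costs)
    target = cum[-1] / n_parts
    return [int(np.searchsorted(cum, k * target)) for k in range(1, n_parts)]

# Hypothetical per-CTU-column encoding cost: a busy region in the middle.
col_cost = np.concatenate([np.full(20, 1.0), np.full(10, 5.0), np.full(20, 1.0)])
print(balanced_boundaries(col_cost, 3))   # tile boundary column indices
```

Non-uniform boundaries cluster around the expensive region, so each parallel encoder thread gets similar work.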
NASA Astrophysics Data System (ADS)
Balkcum, Adam J.
In the ubitron, also known as the free electron laser, high power coherent radiation is generated from the interaction of an undulating electron beam with an electromagnetic signal and a static periodic magnetic wiggler field. These devices have experimentally produced high power spanning the microwave to x-ray regimes. Potential applications range from microwave radar to the study of solid state material properties. In this dissertation, the efficient production of high power microwaves (HPM) is investigated for a ubitron employing a coaxial circuit and wiggler. Designs for the particular applications of an advanced high gradient linear accelerator driver and a directed energy source are presented. The coaxial ubitron is inherently suited for the production of HPM. It utilizes an annular electron beam to drive the low loss, RF breakdown resistant TE01 mode of a large coaxial circuit. The device's large cross-sectional area greatly reduces RF wall heat loading and the current density loading at the cathode required to produce the moderate energy (500 keV) but high current (1-10 kA) annular electron beam. Focusing and wiggling of the beam is achieved using coaxial annular periodic permanent magnet (PPM) stacks without a solenoidal guide magnetic field. This wiggler configuration is compact, efficient and can propagate the multi-kiloampere electron beams required for many HPM applications. The coaxial PPM ubitron in a traveling wave amplifier, cavity oscillator and klystron configuration is investigated using linear theory and simulation codes. A condition for the dc electron beam stability in the coaxial wiggler is derived and verified using the 2-1/2 dimensional particle-in-cell code, MAGIC. New linear theories for the cavity start-oscillation current and gain in a klystron are derived. A self-consistent nonlinear theory for the ubitron-TWT and a new nonlinear theory for the ubitron oscillator are presented. These form the basis for simulation codes which, along with MAGIC, are used to design a representative 200 MW, 40% efficient, X-band amplifier for linear accelerators and a 1 GW, 21% efficient, S-band oscillator for directed energy. The technique of axial mode profiling in the ubitron cavity oscillator is also proposed and shown to increase the simulated interaction efficiency to 46%. These devices are realizable and their experimental implementation, including electron beam formation and spurious mode suppression techniques, is discussed.
A hydrodynamic approach to cosmology - Methodology
NASA Technical Reports Server (NTRS)
Cen, Renyue
1992-01-01
The present study describes an accurate and efficient hydrodynamic code for evolving self-gravitating cosmological systems. The hydrodynamic code is a flux-based mesh code originally designed for engineering hydrodynamical applications. A variety of checks were performed which indicate that the resolution of the code is a few cells, providing accuracy for integral energy quantities in the present simulations of 1-3 percent over the whole runs. Six species (H I, H II, He I, He II, He III, and electrons) are tracked separately, and relevant ionization and recombination processes, as well as line and continuum heating and cooling, are computed. The background radiation field is simultaneously determined in the range 1 eV to 100 keV, allowing for absorption, emission, and cosmological effects. It is shown how the inevitable numerical inaccuracies can be estimated and to some extent overcome.
Hadron polarizability data analysis: GoAT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stegen, H., E-mail: hkstegen@mta.ca; Hornidge, D.; Collicott, C.
The A2 Collaboration at the Institute for Nuclear Physics in Mainz, Germany, is working towards determining the polarizabilities of hadrons from nonperturbative quantum chromodynamics through Compton scattering experiments at low energies. The asymmetry observables are directly related to the scalar and spin polarizabilities of the hadrons. Online analysis software, which will give real-time feedback on asymmetries, efficiencies, energies, and angle distributions, has been developed. The new software is a big improvement over the existing online code and will greatly improve the quality of the acquired data.
Hadron polarizability data analysis: GoAT
NASA Astrophysics Data System (ADS)
Stegen, H.; Collicott, C.; Hornidge, D.; Martel, P.; Ott, P.
2015-12-01
The A2 Collaboration at the Institute for Nuclear Physics in Mainz, Germany, is working towards determining the polarizabilities of hadrons from nonperturbative quantum chromodynamics through Compton scattering experiments at low energies. The asymmetry observables are directly related to the scalar and spin polarizabilities of the hadrons. Online analysis software, which will give real-time feedback on asymmetries, efficiencies, energies, and angle distributions, has been developed. The new software is a big improvement over the existing online code and will greatly improve the quality of the acquired data.
Recommendations on Implementing the Energy Conservation Building Code in Visakhapatnam, AP, India
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Meredydd; Madanagobalane, Samhita S.; Yu, Sha
Visakhapatnam can play an important role in improving energy efficiency in its buildings by implementing ECBC. This document captures stakeholder recommendations on a road map for implementation, which can help all market players plan accordingly. Visakhapatnam also has an opportunity to serve as a role model for other Smart Cities, and for cities in general in India. The road map and steps that VUDA adopts to implement ECBC can provide helpful examples to these other cities.
Plasma Separation Process: Betacell (BCELL) code: User's manual [Bipolar barrier junction]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taherzadeh, M.
1987-11-13
The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the plasma separation program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, degradation due to the emitting source radiation and source/cell lifetime power reduction processes. Additionally, comparison is made between the Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison. 16 refs.
Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.
Will, Sebastian; Jabbari, Hosna
2016-01-01
RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible in a trivial extension of the method for simple energy models. Then, we present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold that guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is intentionally generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA-interaction prediction are expected to benefit even more strongly than "standard" MFE folding. SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.
NASA Astrophysics Data System (ADS)
Aleksandrov, A. P.; Berezovoj, A. N.; Gal'Per, A. M.; Grachev, V. M.; Dmitrenko, V. V.; Kirillov-Ugryumov, V. G.; Lebedev, V. V.; Lyakhov, V. A.; Moiseev, A. A.; Ulin, S. E.; Shchvets, N. I.
1984-11-01
Coding collimators are used to improve the angular resolution of gamma-ray telescopes at energies above 50 MeV. However, the interaction of cosmic rays with the collimator material can lead to the appearance of a gamma-ray background flux which can have a deleterious effect on measurement efficiency. An experiment was performed on the Salyut-6-Soyuz spacecraft system with the Elena-F small-scale gamma-ray telescope in order to measure the magnitude of this background. It is shown that, even at a zenith angle of approximately zero degrees (the angle at which the gamma-ray observations are made), the coding collimator has only an insignificant effect on the background conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brower, Richard C.
This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimizations of community codes for the next generation of leadership class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.
Impacts of Model Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.
The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.
On Applicability of Network Coding Technique for 6LoWPAN-based Sensor Networks.
Amanowicz, Marek; Krygier, Jaroslaw
2018-05-26
In this paper, the applicability of the network coding technique in 6LoWPAN-based sensor multihop networks is examined. The 6LoWPAN is one of the standards proposed for the Internet of Things architecture. Thus, we can expect significant growth of traffic in such networks, which can lead to overload and a decrease in the sensor network lifetime. The authors propose an inter-session network coding mechanism that can be implemented in resource-limited sensor motes. The solution reduces the overall traffic in the network, and in consequence, the energy consumption is decreased. The proposed procedures take into account the deep header compression of native 6LoWPAN packets and the hop-by-hop changes of the header structure. The applied simplifications reduce the signaling traffic that typically occurs in network coding deployments, keeping the solution useful for wireless sensor networks with limited resources. The authors validate the proposed procedures in terms of end-to-end packet delay, packet loss ratio, traffic in the air, total energy consumption, and network lifetime. The solution has been tested in a real wireless sensor network. The results confirm the efficiency of the proposed technique, mostly in delay-tolerant sensor networks.
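The core idea of inter-session coding at a relay fits in a few lines; the sketch below is illustrative only and models neither 6LoWPAN header compression nor the paper's signaling procedures.

```python
# Two nodes exchange packets through a relay. Instead of forwarding each
# packet separately, the relay broadcasts one XOR-coded packet; each
# destination removes its own packet to recover the other's, saving one
# radio transmission. Payloads are zero-padded to a common length.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"temp=21.5C"        # from node A, destined for node B
pkt_b = b"humidity=47%"      # from node B, destined for node A

n = max(len(pkt_a), len(pkt_b))
pa, pb = pkt_a.ljust(n, b"\x00"), pkt_b.ljust(n, b"\x00")

coded = xor_bytes(pa, pb)    # single broadcast from the relay

assert xor_bytes(coded, pa) == pb    # node A decodes B's packet
assert xor_bytes(coded, pb) == pa    # node B decodes A's packet
print("A recovers:", xor_bytes(coded, pa).rstrip(b"\x00"))
```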
NASA Astrophysics Data System (ADS)
Vijayakumar, Ganesh; Sprague, Michael
2017-11-01
Demonstrating expected convergence rates with spatial- and temporal-grid refinement is the ``gold standard'' of code and algorithm verification. However, the lack of analytical solutions, and the difficulty of generating manufactured solutions, presents challenges for verifying codes for complex systems. The application of the method of manufactured solutions (MMS) for verification of coupled multi-physics phenomena like fluid-structure interaction (FSI) has only seen recent investigation. While many FSI algorithms for aeroelastic phenomena have focused on boundary-resolved CFD simulations, the actuator-line representation of the structure is widely used for FSI simulations in wind-energy research. In this work, we demonstrate the verification of an FSI algorithm using MMS for actuator-line CFD simulations with a simplified structural model. We use a manufactured solution for the fluid velocity field and the displacement of the spring-mass-damper (SMD) system. We demonstrate the convergence of both the fluid and structural solvers to second-order accuracy with grid and time-step refinement. This work was funded by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Wind Energy Technologies Office, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.
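For readers unfamiliar with MMS, the workflow can be demonstrated on a toy 1-D Poisson problem, unrelated to the FSI solver above: pick an exact solution, derive the forcing analytically, and confirm that the discrete error converges at the scheme's design order.

```python
import numpy as np

# MMS sketch: manufactured solution u(x) = sin(pi x) for -u'' = f on (0, 1)
# with u(0) = u(1) = 0, solved by a 2nd-order central difference. The observed
# convergence order under grid refinement should approach 2.

def max_error(n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi**2 * np.sin(np.pi * x)          # forcing derived from u by hand
    A = (np.diag(np.full(n - 1, 2.0))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f[1:-1])
    return np.max(np.abs(u - np.sin(np.pi * x)))

e_coarse, e_fine = max_error(32), max_error(64)
print(f"observed order: {np.log2(e_coarse / e_fine):.2f}")   # ~2.0
```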
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Tammie Renee; Tretiak, Sergei
2017-01-06
Understanding and controlling excited state dynamics lies at the heart of all our efforts to design photoactive materials with desired functionality. This tailor-design approach has become the standard for many technological applications (e.g., solar energy harvesting) including the design of organic conjugated electronic materials with applications in photovoltaic and light-emitting devices. Over the years, our team has developed efficient LANL-based codes to model the relevant photophysical processes following photoexcitation (spatial energy transfer, excitation localization/delocalization, and/or charge separation). The developed approach allows the non-radiative relaxation to be followed on up to ~10 ps timescales for large realistic molecules (hundreds of atoms in size) in the realistic solvent dielectric environment. The Collective Electronic Oscillator (CEO) code is used to compute electronic excited states, and the Non-adiabatic Excited State Molecular Dynamics (NA-ESMD) code is used to follow the non-adiabatic dynamics on multiple coupled Born-Oppenheimer potential energy surfaces. Our preliminary NA-ESMD simulations have revealed key photoinduced mechanisms controlling competing interactions and relaxation pathways in complex materials, including organic conjugated polymer materials, and have provided a detailed understanding of photochemical products and intermediates and the internal conversion process during the initiation of energetic materials. This project will be using LANL-based CEO and NA-ESMD codes to model nonradiative relaxation in organic and energetic materials. The NA-ESMD and CEO codes belong to a class of electronic structure/quantum chemistry codes that require large memory, “long-queue-few-core” distribution of resources in order to make useful progress. The NA-ESMD simulations are trivially parallelizable requiring ~300 processors for up to one week runtime to reach a meaningful restart point.
PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)
NASA Astrophysics Data System (ADS)
Vincenti, Henri
2016-03-01
The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current Petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potentialities of these new computing architectures. Indeed, achieving Exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node (''fat nodes'') that will have a reduced clock speed to allow for efficient cooling. To compensate for frequency decrease, CPU machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for Multicore/Manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high performance skeleton PIC code PICSAR to both achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the Pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba python compiler.
A stimulus-dependent spike threshold is an optimal neural coder
Jones, Douglas L.; Johnson, Erik C.; Ratnam, Rama
2015-01-01
A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code. PMID:26082710
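A minimal encode/decode loop in the spirit of the description above (illustrative, not the authors' closed-form optimal coder): a leaky integrator of past spikes acts as the internal decoder, and a spike is emitted whenever the reconstruction error reaches the threshold.

```python
import numpy as np

# Error-driven spiking with an internal decoder: the reconstruction decays
# like a post-synaptic low-pass filter; a spike adds a fixed quantum whenever
# the coding error (signal minus reconstruction) reaches the threshold.
dt, tau, theta = 1e-3, 0.02, 0.05
t = np.arange(0.0, 1.0, dt)
signal = 0.5 + 0.3 * np.sin(2 * np.pi * 3 * t)   # slowly varying input

recon, spike_times = 0.0, []
for k, s in enumerate(signal):
    recon *= np.exp(-dt / tau)        # decoder decay between samples
    if s - recon >= theta:            # coding error hits the threshold
        recon += theta                # decoder's copy of the emitted spike
        spike_times.append(k * dt)

print(f"{len(spike_times)} spikes emitted over 1 s of signal")
```

Lowering `theta` raises the spike rate and fidelity together, which is the energy/fidelity trade-off the abstract describes.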
An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes
NASA Astrophysics Data System (ADS)
Vincenti, H.; Lobet, M.; Lehe, R.; Sasanka, R.; Vay, J.-L.
2017-01-01
In current computer architectures, data movement (from die to network) is by far the most energy consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node that will have a reduced clock speed to allow for efficient cooling. To compensate for frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering routines, among the most time consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performances on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit wide data registers). Results show a factor of ×2 to ×2.5 speed-up in double precision for particle shape factors of orders 1-3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include AVX-512 instruction sets with 512-bit register lengths (8 doubles/16 singles).
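A NumPy stand-in for the deposition step itself (order-1, cloud-in-cell, 1-D) is shown below; it uses a scatter-add, so it illustrates what is being vectorized, not the paper's SIMD-friendly data layout that avoids gather/scatter.

```python
import numpy as np

# Minimal 1-D charge deposition sketch: each particle's charge is split
# linearly between its two neighboring grid nodes. np.add.at performs the
# scatter-add safely when several particles land in the same cell.
rng = np.random.default_rng(2)
nx, dx = 64, 1.0
x = rng.uniform(0, nx * dx, 10_000)    # particle positions
q = np.full_like(x, 1.0e-3)            # particle charges

i = np.floor(x / dx).astype(int)       # left node index
frac = x / dx - i                      # normalized distance to the left node

rho = np.zeros(nx + 1)
np.add.at(rho, i, q * (1 - frac))      # weight deposited on the left node
np.add.at(rho, i + 1, q * frac)        # weight deposited on the right node
print(f"total deposited charge: {rho.sum():.3f} (expected {q.sum():.3f})")
```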
Bandwidth efficient coding for satellite communications
NASA Technical Reports Server (NTRS)
Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.
1992-01-01
An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite low, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good short bandwidth efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.
Ghosh, Arindam; Lee, Jae-Won; Cho, Ho-Shin
2013-01-01
Due to their efficiency, reliability and better channel and resource utilization, cooperative transmission technologies have been attractive options in underwater as well as terrestrial sensor networks. Their performance can be further improved if merged with forward error correction (FEC) techniques. In this paper, we propose and analyze a retransmission protocol named Cooperative-Hybrid Automatic Repeat reQuest (C-HARQ) for underwater acoustic sensor networks, which exploits both the reliability of cooperative ARQ (CARQ) and the efficiency of incremental redundancy-hybrid ARQ (IR-HARQ) using rate-compatible punctured convolutional (RCPC) codes. Extensive Monte Carlo simulations are performed to investigate the performance of the protocol, in terms of both throughput and energy efficiency. The results clearly reveal the enhancement in performance achieved by the C-HARQ protocol, which outperforms both CARQ and conventional stop and wait ARQ (S&W ARQ). Further, using computer simulations, optimum values of various network parameters are estimated so as to extract the best performance out of the C-HARQ protocol. PMID:24217359
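The incremental-redundancy idea can be sketched with a repetition "mother code" standing in for the RCPC codes above: each NACK releases more coded bits, and the receiver decodes from everything accumulated so far. Entirely illustrative; a real receiver would use a CRC rather than comparing against the known message.

```python
import random

# IR-HARQ sketch: first transmission sends one copy of each bit; every NACK
# releases two more copies (odd totals keep majority voting unambiguous).
random.seed(3)
p_flip = 0.1
msg = [random.randint(0, 1) for _ in range(200)]

def channel(bits):
    return [b ^ (random.random() < p_flip) for b in bits]

copies = []
for total in (1, 3, 5):                      # effective rates 1, 1/3, 1/5
    while len(copies) < total:
        copies.append(channel(msg))          # incremental redundancy
    votes = [sum(col) for col in zip(*copies)]
    decoded = [2 * v > total for v in votes]
    errors = sum(d != m for d, m in zip(decoded, msg))  # demo-only error count
    print(f"{total} copies/bit (rate 1/{total}): {errors} bit errors")
    if errors == 0:
        break                                # receiver ACKs, stop here
```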
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taleei, R; Qin, N; Jiang, S
2016-06-15
Purpose: Biological treatment plan optimization is of great interest for proton therapy. It requires extensive Monte Carlo (MC) simulations to compute physical dose and biological quantities. Recently, a gPMC package was developed for rapid MC dose calculations on a GPU platform. This work investigated its suitability for proton therapy biological optimization in terms of accuracy and efficiency. Methods: We performed simulations of a proton pencil beam with energies of 75, 150 and 225 MeV in a homogeneous water phantom using gPMC and FLUKA. Physical dose and energy spectra for each ion type on the central beam axis were scored. Relative Biological Effectiveness (RBE) was calculated using the repair-misrepair-fixation model. Microdosimetry calculations were performed using Monte Carlo Damage Simulation (MCDS). Results: Ranges computed by the two codes agreed within 1 mm. Physical dose difference was less than 2.5 % at the Bragg peak. RBE-weighted dose agreed within 5 % at the Bragg peak. Differences in microdosimetric quantities such as dose average lineal energy transfer and specific energy were < 10%. The simulation time per source particle with FLUKA was 0.0018 sec, while gPMC was ∼ 600 times faster. Conclusion: Physical dose computed by FLUKA and gPMC were in a good agreement. The RBE differences along the central axis were small, and RBE-weighted dose difference was found to be acceptable. The combined accuracy and efficiency makes gPMC suitable for proton therapy biological optimization.
A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks.
Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang
2017-08-08
Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and decodes packets jointly using encoded packets received from multiple potential nodes across the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when detecting inefficient relay nodes. Substantial simulations in underwater environment by Network Simulator 3 (NS-3) show that NCRP significantly improves the network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs.
Colorado Better Buildings Project. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strife, Susie; Yancey, Lea
The Colorado Better Buildings project intended to bring new and existing energy efficiency model programs to market with regional collaboration and funding partnerships. The goals for Boulder County and its program partners were to advance energy efficiency investments, stimulate economic growth in Colorado and advance the state’s energy independence. Collectively, three counties set out to complete 9,025 energy efficiency upgrades in 2.5 years and they succeeded in doing so. Energy efficiency upgrades have been completed in more than 11,000 homes and businesses in these communities. Boulder County and its partners received a $25 million BetterBuildings grant from the U.S. Department of Energy under the American Recovery and Reinvestment Act in the summer of 2010. This was also known as the Energy Efficiency and Conservation Block Grants program. With this funding, Boulder County, the City and County of Denver, and Garfield County set out to design programs for the residential and commercial sectors to overcome key barriers in the energy upgrade process. Since January 2011, these communities have paired homeowners and business owners with an Energy Advisor – an expert to help move from assessment to upgrade with minimal hassle. Pairing this step-by-step assistance with financing incentives has effectively addressed many key barriers, resulting in energy efficiency improvements and happy customers. An expert energy advisor guides the building owner through every step of the process, coordinating the energy assessment, interpreting results for a customized action plan, providing a list of contractors, and finding and applying for all available rebates and low-interest loans. In addition to the expert advising and financial incentives, the programs also included elements of social marketing, technical assistance, workforce development and contractor trainings, project monitoring and verification, and a cloud-based customer data system to coordinate among field advisors and across local governments and local service vendors. A portion of the BetterBuildings grant went to the Metro Mayors Caucus (MMC) who worked in partnership with the Denver Regional Council of Governments (DRCOG) to conduct a series of 10 energy efficiency workshops for local government officials and other interested parties. The workshops helped showcase lessons learned on energy efficiency and helped guide other local governments in the establishment of similar programs. The workshops covered a wide range of energy efficiency and renewable energy topics such as clean energy finance, social mobilization and communications, specific case studies of Colorado towns, energy efficiency codes, net zero buildings and solar power. Since the programs launched in January 2011, these communities have collectively spurred economic investments in energy efficiency, achieved greater than 5:1 leveraging of grant funds, saved energy and reduced greenhouse gas emissions, provided trainings for a robust local energy contractor network, and proved out viable and replicable program models that local utilities and other communities are adopting, with long lasting market transformation.
A comparison of models for supernova remnants including cosmic rays
NASA Astrophysics Data System (ADS)
Kang, Hyesung; Drury, L. O'C.
1992-11-01
A simplified model which can follow the dynamical evolution of a supernova remnant including the acceleration of cosmic rays without carrying out full numerical simulations has been proposed by Drury, Markiewicz, & Voelk in 1989. To explore the accuracy and the merits of using such a model, we have recalculated with the simplified code the evolution of the supernova remnants considered in Jones & Kang, in which more detailed and accurate numerical simulations were done using a full hydrodynamic code based on the two-fluid approximation. For the total energy transferred to cosmic rays the two codes are in good agreement, the acceleration efficiency being the same within a factor of 2 or so. The dependence of the results of the two codes on the closure parameters for the two-fluid approximation is also qualitatively similar. The agreement is somewhat degraded in those cases where the shock is smoothed out by the cosmic rays.
Status Report on NEAMS PROTEUS/ORIGEN Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A
2016-02-18
The US Department of Energy’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program has contributed significantly to the development of the PROTEUS neutron transport code at Argonne National Laboratory and to the Oak Ridge Isotope Generation and Depletion Code (ORIGEN) depletion/decay code at Oak Ridge National Laboratory. PROTEUS’s key capability is the efficient and scalable (up to hundreds of thousands of cores) neutron transport solver on general, unstructured, three-dimensional finite-element-type meshes. The scalability and mesh generality enable the transfer of neutron and power distributions to other codes in the NEAMS toolkit for advanced multiphysics analysis. Recently, ORIGEN has received considerable modernization to provide the high-performance depletion/decay capability within the NEAMS toolkit. This work presents a description of the initial integration of ORIGEN in PROTEUS, mainly performed during FY 2015, with minor updates in FY 2016.
ALPHACAL: A new user-friendly tool for the calibration of alpha-particle sources.
Timón, A Fernández; Vargas, M Jurado; Gallardo, P Álvarez; Sánchez-Oro, J; Peralta, L
2018-05-01
In this work, we present and describe the program ALPHACAL, specifically developed for the calibration of alpha-particle sources. It is therefore more user-friendly and less time-consuming than multipurpose codes developed for a wide range of applications. The program is based on the recently developed code AlfaMC, which simulates specifically the transport of alpha particles. Both cylindrical and point sources mounted on the surface of polished backings can be simulated, as is the convention in experimental measurements of alpha-particle sources. In addition to the efficiency calculation and determination of the backscattering coefficient, some additional tools are available to the user, like the visualization of the energy spectrum and the use of energy cut-off or low-energy tail corrections. ALPHACAL has been implemented in C++ using the Qt library, so it is available for Windows, MacOS and Linux platforms. It is free and available upon request from the authors.
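For orientation, the kind of quantity ALPHACAL computes can be mocked up with a few lines of Monte Carlo; this toy reduces backscattering from the backing to a single assumed probability rather than the detailed transport AlfaMC performs.

```python
import numpy as np

# Toy counting-efficiency estimate for a source on a flat backing under a
# 2-pi detector: emit alphas isotropically, count upward emissions, and add
# an assumed (illustrative, not measured) backscatter probability for the
# downward ones.
rng = np.random.default_rng(4)
n = 1_000_000
cos_theta = rng.uniform(-1.0, 1.0, n)       # isotropic emission directions

up = cos_theta > 0.0                        # emitted toward the detector
p_backscatter = 0.02                        # assumed illustrative value
scattered_up = (~up) & (rng.random(n) < p_backscatter)

eff = (up | scattered_up).mean()
print(f"estimated efficiency: {eff:.4f} (geometric 0.5 plus backscatter)")
```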
Applying Quantum Monte Carlo to the Electronic Structure Problem
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2016-06-01
Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
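To make the VMC idea concrete, a self-contained toy for the 1-D harmonic oscillator is sketched below (unrelated to the CASINO or NECI production codes): Metropolis sampling of |psi|^2 for a trial wavefunction psi = exp(-a x^2) yields the variational energy with a statistical error bar.

```python
import numpy as np

# VMC toy: trial wavefunction psi = exp(-a x^2) for H = -1/2 d2/dx2 + x^2/2.
# Local energy E_L(x) = a + x^2 (1/2 - 2 a^2); a = 1/2 is exact (E = 1/2).
rng = np.random.default_rng(5)

def vmc_energy(a, n_steps=100_000, step=1.0):
    x, samples = 0.0, []
    for _ in range(n_steps):
        xp = x + rng.uniform(-step, step)
        if rng.random() < np.exp(-2.0 * a * (xp**2 - x**2)):   # Metropolis test
            x = xp
        samples.append(a + x**2 * (0.5 - 2.0 * a**2))          # local energy
    e = np.array(samples[n_steps // 10:])                      # drop equilibration
    return e.mean(), e.std() / np.sqrt(len(e))                 # naive error bar
                                                               # (ignores autocorrelation)
mean, err = vmc_energy(a=0.4)
print(f"<E> = {mean:.4f} +/- {err:.4f}  (exact ground state: 0.5000)")
```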
Latent uncertainties of the precalculated track Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, Marc-André; Seuntjens, Jan; Roberge, David
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed a 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
Latent uncertainties of the precalculated track Monte Carlo method.
Renaud, Marc-André; Roberge, David; Seuntjens, Jan
2015-01-01
While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤ 1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed a 807 × efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508 × for 16 MeV electrons in bone. The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
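The Poisson-like latent uncertainty can be seen in a toy resampling experiment (purely illustrative; "dose per track" is reduced to one random number): with histories far exceeding the bank size, the spread across independently generated banks falls roughly as one over the square root of the number of unique tracks.

```python
import numpy as np

# Latent uncertainty toy: dose is scored by resampling a finite pregenerated
# track bank. The spread across independently generated banks (the latent
# part) scales ~ 1/sqrt(bank size), even though each run uses many histories.
rng = np.random.default_rng(6)

def pmc_dose(bank_size, histories=50_000):
    bank = rng.normal(1.0, 0.3, bank_size)          # "dose per unique track"
    return bank[rng.integers(0, bank_size, histories)].mean()

for n_tracks in (100, 1_000, 10_000):
    doses = [pmc_dose(n_tracks) for _ in range(100)]
    print(f"bank of {n_tracks:>6} tracks: spread across banks = {np.std(doses):.4f}")
```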
Rep. Grayson, Alan [D-FL-9
2014-01-28
House - 01/28/2014 Referred to the House Committee on Ways and Means. Notes: For further action, see H.R.5771, which became Public Law 113-295 on 12/19/2014.
Strategies and Challenges for Energy Efficient Retrofitting: Study of the Empire State Building
NASA Astrophysics Data System (ADS)
De, B.; Mukherjee, M.
2013-11-01
Operational and maintenance costs of existing buildings are escalating, burdening both owners and tenants. Retrofitting with state-of-the-art technologies helps buildings keep pace with amended code provisions, giving older building stock one more chance to perform responsively. Retrofitted iconic buildings can thus retain their status in a commerce-driven real estate sector, while also reducing greenhouse gas emissions. The world's iconic skyscraper, the Empire State Building (ESB), has undergone an exemplary retrofit process since 2008 to reduce its energy demands. To achieve the goal of reducing operational cost and energy consumption, stiff challenges were addressed systematically to realize benefits throughout the entire lifespan of the ESB. Minimizing disturbance to tenants and handling components on site required precise planning. The present paper explores the strategies and process adopted for retrofitting the ESB and derives insightful guidelines for achieving operational cost savings and energy efficiency in existing buildings through retrofitting.
Strategies to Save 50% Site Energy in Grocery and General Merchandise Stores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirsch, A.; Hale, E.; Leach, M.
2011-03-01
This paper summarizes the methodology and main results of two recently published Technical Support Documents. These reports explore the feasibility of designing general merchandise and grocery stores that use half the energy of a minimally code-compliant building, as measured on a whole-building basis. We used an optimization algorithm to trace out a minimum cost curve and identify designs that satisfy the 50% energy savings goal. We started from baseline building energy use and progressed to more energy-efficient designs by sequentially adding energy design measures (EDMs). Certain EDMs figured prominently in reaching the 50% energy savings goal for both building types: (1) reduced lighting power density; (2) optimized area fraction and construction of view glass or skylights, or both, as part of a daylighting system tuned to 46.5 fc (500 lux); (3) reduced infiltration with a main entrance vestibule or an envelope air barrier, or both; and (4) energy recovery ventilators, especially in humid and cold climates. In grocery stores, the most effective EDM, which was chosen for all climates, was replacing baseline medium-temperature refrigerated cases with high-efficiency models that have doors.
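The sequential search can be caricatured in a few lines: starting from the baseline, repeatedly add whichever energy design measure buys the most savings per unit cost until the 50% target is met. The EDM names, costs, and savings below are invented placeholders, and interactions between measures are ignored.

```python
# Greedy sketch of sequentially adding EDMs along a cost-vs-savings curve.
# All numbers are hypothetical; real EDM savings are not simply additive.
edms = {                         # name: (cost in $k, site energy savings in %)
    "reduced LPD":            (40, 12),
    "daylighting system":     (90, 15),
    "vestibule/air barrier":  (25, 6),
    "energy recovery vent.":  (60, 10),
    "high-eff. refrig. cases": (120, 18),
}

savings, cost, remaining = 0.0, 0.0, dict(edms)
while savings < 50 and remaining:
    name = max(remaining, key=lambda n: remaining[n][1] / remaining[n][0])
    c, s = remaining.pop(name)
    cost += c
    savings += s                 # additivity assumed for illustration only
    print(f"add {name:24s} -> {savings:4.1f}% savings, ${cost:.0f}k spent")
```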
A Survey on Multimedia-Based Cross-Layer Optimization in Visual Sensor Networks
Costa, Daniel G.; Guedes, Luiz Affonso
2011-01-01
Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSN applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks. PMID:22163908
Data compression for satellite images
NASA Technical Reports Server (NTRS)
Chen, P. H.; Wintz, P. A.
1976-01-01
An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
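Delta and double delta coding are easy to show on a smooth scan line: successive differencing concentrates values near zero (cheap to entropy-code), and the original is recovered exactly by cumulative sums. A minimal sketch:

```python
import numpy as np

# First and second differences of a smooth scan line; the small residuals
# are cheaper to entropy-code. The leading entries carry the base value,
# and reconstruction via cumulative sums is exact.
line = np.array([100, 102, 105, 109, 114, 120, 127], dtype=np.int64)

delta = np.diff(line, prepend=0)       # delta coding
ddelta = np.diff(delta, prepend=0)     # double delta coding

assert np.array_equal(np.cumsum(np.cumsum(ddelta)), line)
print("delta: ", delta)
print("double:", ddelta)
```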
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
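The quasi-cyclic extension step mentioned above amounts to "lifting" a small base matrix into a large sparse parity-check matrix built from circulant blocks; a minimal sketch follows (the base matrix and shift values are arbitrary placeholders, not a PEG-designed graph):

```python
import numpy as np

# Quasi-cyclic lifting: each base-matrix entry becomes a Z x Z block.
# Entry -1 -> all-zero block; entry s >= 0 -> identity cyclically shifted by s.
Z = 4
base = np.array([[0, -1, 1],
                 [2,  0, -1]])

def lift(base, Z):
    H = np.zeros((base.shape[0] * Z, base.shape[1] * Z), dtype=int)
    for r, row in enumerate(base):
        for c, s in enumerate(row):
            if s >= 0:
                H[r*Z:(r+1)*Z, c*Z:(c+1)*Z] = np.roll(np.eye(Z, dtype=int), s, axis=1)
    return H

print(lift(base, Z))   # 8 x 12 parity-check matrix built from circulants
```

The circulant structure is what allows short PEG-designed codes to be extended to long block lengths with compact storage.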
Single event upsets in semiconductor devices induced by highly ionising particles.
Sannikov, A V
2004-01-01
A new model of single event upsets (SEUs), created in memory cells by heavy ions and high energy hadrons, has been developed. The model takes into account the spatial distribution of charge collection efficiency over the cell area not considered in previous approaches. Three-dimensional calculations made by the HADRON code have shown good agreement with experimental data for the energy dependence of proton SEU cross sections, sensitive depths and other SEU observables. The model is promising for prediction of SEU rates for memory chips exposed in space and in high-energy experiments as well as for the development of a high-energy neutron dosemeter based on the SEU effect.
Plessow, Philipp N
2018-02-13
This work explores how constrained linear combinations of bond lengths can be used to optimize transition states in periodic structures. Scanning of constrained coordinates is a standard approach for molecular codes with localized basis functions, where a full set of internal coordinates is used for optimization. Common plane-wave codes for periodic boundary conditions rely almost exclusively on Cartesian coordinates. An implementation of constrained linear combinations of bond lengths in Cartesian coordinates is described. Along with an optimization of the value of the constrained coordinate toward the transition state, this allows transition-state optimization within a single calculation. The approach is suitable for transition states that can be well described in terms of broken and formed bonds. In particular, the implementation is shown to be effective and efficient in the optimization of transition states in zeolite-catalyzed reactions, which have high relevance for industrial processes.
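The scanning procedure can be mimicked on a toy 2-D surface: fix a linear combination of two "bond lengths" (for example, breaking minus forming), relax everything else subject to that equality constraint, and take the highest relaxed point as the transition-state estimate. The surface and constants below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy model energy surface E(r1, r2) with a double well along r1.
def energy(r):
    r1, r2 = r
    return (r1 - 1.0)**2 * (r1 - 2.0)**2 + 2.0 * (r2 - 1.5)**2 + 0.3 * r1 * r2

# Relaxed scan: fix c = r1 - r2, minimize everything else (SLSQP handles
# the equality constraint), then take the maximum of the relaxed profile.
profile = []
for c in np.linspace(-1.5, 1.5, 13):
    res = minimize(energy, x0=[1.5, 1.5], method="SLSQP",
                   constraints=[{"type": "eq",
                                 "fun": lambda r, c=c: (r[0] - r[1]) - c}])
    profile.append((c, res.fun))

c_ts, e_ts = max(profile, key=lambda t: t[1])
print(f"TS estimate: constraint value {c_ts:+.2f}, energy {e_ts:.3f}")
```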
Symmetry-Based Variance Reduction Applied to 60Co Teletherapy Unit Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Sheikh-Bagheri, D.
A new variance reduction technique (VRT) is implemented in the BEAM code [1] to specifically improve the efficiency of calculating penumbral distributions of in-air fluence profiles for isotopic sources. The simulations focus on 60Co teletherapy units. The VRT splits photons exiting the source capsule of a 60Co teletherapy source according to a splitting recipe and distributes the split photons randomly on the periphery of a circle, preserving the direction cosine along the beam axis as well as the energy of the photon. It is shown that the use of the VRT developed in this work can lead to a 6-9 fold improvement in the efficiency of calculating the penumbral photon fluence of a 60Co beam compared to the standard optimized BEAM code [1] (i.e., one with the proper selection of electron transport parameters).
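The splitting step can be pictured with a short Python sketch (a toy version of azimuthal splitting for a cylindrically symmetric beam, not the BEAM implementation; the particle-state layout is invented):

    import numpy as np

    def split_photon(state, n_split, rng):
        # One photon becomes n_split copies of weight/n_split, each
        # rotated to a random azimuth about the beam (z) axis. The same
        # rotation is applied to position and direction, so the direction
        # cosine w along the axis and the photon energy are preserved.
        x, y, z, u, v, w, energy, weight = state
        copies = []
        for phi in rng.uniform(0.0, 2.0 * np.pi, n_split):
            c, s = np.cos(phi), np.sin(phi)
            copies.append((c*x - s*y, s*x + c*y, z,
                           c*u - s*v, s*u + c*v, w,
                           energy, weight / n_split))
        return copies

    rng = np.random.default_rng(1)
    photons = split_photon((0.5, 0.0, 10.0, 0.05, 0.0, 0.9987, 1.25, 1.0),
                           n_split=8, rng=rng)

Because the source is azimuthally symmetric, the rotated copies are statistically equivalent, so the penumbral fluence tally accumulates statistics faster for the same number of source histories.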
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Pamala C.; Richman, Eric E.
2008-09-01
Feeling dim from energy code confusion? Read on to give your inspections a charge. The U.S. Department of Energy’s Building Energy Codes Program addresses hundreds of inquiries from the energy codes community every year. This article offers clarification for topics of confusion submitted to BECP Technical Support of interest to electrical inspectors, focusing on the residential and commercial energy code requirements based on the most recently published 2006 International Energy Conservation Code® and ANSI/ASHRAE/IESNA Standard 90.1-2004.
NASA Technical Reports Server (NTRS)
Perkins, Hugh Douglas
2010-01-01
In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
ogs6 - a new concept for porous-fractured media simulations
NASA Astrophysics Data System (ADS)
Naumov, Dmitri; Bilke, Lars; Fischer, Thomas; Rink, Karsten; Wang, Wenqing; Watanabe, Norihiro; Kolditz, Olaf
2015-04-01
OpenGeoSys (OGS) is a scientific open-source initiative for numerical simulation of thermo-hydro-mechanical/chemical (THMC) processes in porous and fractured media, continuously developed since the mid-eighties. The basic concept is to provide a flexible numerical framework for solving coupled multi-field problems. OGS mainly targets applications in environmental geoscience, e.g. in the fields of contaminant hydrology, water resources management, waste deposits, or geothermal energy systems, but it has recently also been successfully applied to new topics in energy storage. OGS actively participates in several international benchmarking initiatives, e.g. DECOVALEX (waste management), CO2BENCH (CO2 storage and sequestration), SeSBENCH (reactive transport processes) and HM-Intercomp (coupled hydrosystems). Despite the broad applicability of OGS in geo-, hydro- and energy-sciences, several shortcomings became obvious: computational efficiency was limited, and the code structure had become too complicated for further efficient development. OGS-5 was designed for object-oriented FEM applications; however, in many multi-field problems a certain flexibility of tailored numerical schemes is essential. Therefore, a new concept was designed to overcome the existing bottlenecks. The paradigms for ogs6 are: flexibility of numerical schemes (FEM, FVM, FDM); computational efficiency (PetaScale ready); developer- and user-friendliness. ogs6 has a module-oriented architecture based on thematic libraries (e.g. MeshLib, NumLib) on the large scale and uses an object-oriented approach for the small-scale interfaces. Usage of a linear algebra library (Eigen3) for the mathematical operations together with the ISO C++11 standard increases the expressiveness of the code and makes it more developer-friendly. The new C++ standard also makes the template meta-programming used for compile-time optimizations more compact. We have transitioned the main code development to the GitHub code hosting system (https://github.com/ufz/ogs). The very flexible revision control system Git, in combination with issue tracking, developer feedback and code review options, improves the code quality and the development process in general. The continuous testing procedure of the benchmarks as established for OGS-5 is maintained. Additionally, unit testing, automatically triggered by any code change, is executed by two continuous integration frameworks (Jenkins CI, Travis CI) which build and test the code on different operating systems (Windows, Linux, Mac OS), in multiple configurations and with different compilers (GCC, Clang, Visual Studio). To further improve the testing possibilities, XML-based file input formats are introduced, helping with automatic validation of the user-contributed benchmarks. The first ogs6 prototype, version 6.0.1, has been implemented for solving generic elliptic problems. Next steps envisage extension to transient, non-linear and coupled problems. Literature: [1] Kolditz O, Shao H, Wang W, Bauer S (eds) (2014): Thermo-Hydro-Mechanical-Chemical Processes in Fractured Porous Media: Modelling and Benchmarking - Closed Form Solutions. In: Terrestrial Environmental Sciences, Vol. 1, Springer, Heidelberg, ISBN 978-3-319-11893-2, 315pp.
http://www.springer.com/earth+sciences+and+geography/geology/book/978-3-319-11893-2 [2] Naumov D (2015): Computational Fluid Dynamics in Unconsolidated Sediments: Model Generation and Discrete Flow Simulations, PhD thesis, Technische Universität Dresden.
Dark matter annihilation in the circumgalactic medium at high redshifts
NASA Astrophysics Data System (ADS)
Schön, S.; Mack, K. J.; Wyithe, J. S. B.
2018-03-01
Annihilating dark matter (DM) models offer promising avenues for future DM detection, in particular via modification of astrophysical signals. However, when modelling such potential signals at high redshift, the emergence of both DM and baryonic structure, as well as the complexities of the energy transfer process, needs to be taken into account. In this paper, we present a detailed energy deposition code and use this to examine the energy transfer efficiency of annihilating DM at high redshift, including the effects on baryonic structure. We employ the PYTHIA code to model neutralino-like DM candidates and their subsequent annihilation products for a range of masses and annihilation channels. We also compare different density profiles and mass-concentration relations for 10^5-10^7 M⊙ haloes at redshifts 20 and 40. For these DM halo and particle models, we show radially dependent ionization and heating curves and compare the deposited energy to the haloes' gravitational binding energy. We use the `filtered' annihilation spectra escaping the halo to calculate the heating of the circumgalactic medium and show that the mass of the minimal star-forming object is increased by a factor of 2-3 at redshift 20 and 4-5 at redshift 40 for some DM models.
Computer modeling of pulsed CO2 lasers for lidar applications
NASA Technical Reports Server (NTRS)
Spiers, Gary D.; Smithers, Martin E.; Murty, Rom
1991-01-01
The experimental results will enable a comparison of the numerical code output with experimental data, ensuring verification of the validity of the code. The measurements were made on a modified commercial CO2 laser. The results are as follows. (1) The pulse shape and energy dependence on gas pressure were measured. (2) The intrapulse frequency chirp due to plasma and laser-induced medium perturbation effects was determined; a simple numerical model showed quantitative agreement with these measurements. The pulse-to-pulse frequency stability was also determined. (3) The dependence of the laser transverse mode stability on cavity length was measured, and a simple analysis of this dependence in terms of changes to the equivalent Fresnel number and the cavity magnification was performed. (4) An analysis was made of the discharge pulse shape, which enabled the low efficiency of the laser to be explained in terms of poor coupling of the electrical energy into the vibrational levels. (5) The existing laser resonator code was changed to allow it to run on the Cray XMP under the new operating system.
University of Arizona High Energy Physics Program at the Cosmic Frontier 2014-2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abate, Alex; Cheu, Elliott
This is the final technical report from the University of Arizona High Energy Physics program at the Cosmic Frontier covering the period 2014-2016. The work aims to advance the understanding of dark energy using the Large Synoptic Survey Telescope (LSST). Progress on the engineering design of the power supplies for the LSST camera is discussed. A variety of contributions to photometric redshift measurement uncertainties were studied. The effect of the intergalactic medium on the photometric redshift of very distant galaxies was evaluated. Computer code was developed realizing the full chain of calculations needed to accurately and efficiently run large-scale simulations.
Cavity-enhanced eigenmode and angular hybrid multiplexing in holographic data storage systems.
Miller, Bo E; Takashima, Yuzuru
2016-12-26
Resonant optical cavities have been demonstrated to improve energy efficiency in Holographic Data Storage Systems (HDSS). The orthogonal reference beams supported as cavity eigenmodes can provide another multiplexing degree of freedom to push storage densities toward the limit of 3D optical data storage. While keeping the increased energy efficiency of a cavity-enhanced reference arm, image-bearing holograms are multiplexed by orthogonal phase code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at two Bragg angles. We experimentally confirmed that write rates are enhanced by an average factor of 1.1, and page crosstalk is about 2.5%. This hybrid multiplexing opens up a pathway to increase storage density while minimizing modification of current angular multiplexing HDSS.
Towards Efficient Wireless Body Area Network Using Two-Way Relay Cooperation.
Waheed, Maham; Ahmad, Rizwan; Ahmed, Waqas; Drieberg, Micheal; Alam, Muhammad Mahtab
2018-02-13
The fabrication of lightweight, ultra-thin, low power and intelligent body-borne sensors has led to novel advances in wireless body area networks (WBANs). Depending on the placement of the nodes, a WBAN is characterized as in-body or on-body; thus, the channel is largely affected by body posture, clothing, muscle movement, body temperature and climatic conditions. The energy resources are limited and it is not feasible to replace a sensor's battery frequently, so channel resources should be conserved to keep the sensor in working condition. The lifetime of the sensor is crucial and depends strongly on transmission among sensor nodes and energy consumption. Reliability and energy efficiency therefore play a vital role in WBAN applications. In this paper, analytical expressions for energy efficiency (EE) and packet error rate (PER) are formulated for two-way relay cooperative communication. The results show better reliability and efficiency compared to direct and one-way relay communication. The effective performance range of direct vs. cooperative communication is separated by a threshold distance. Based on the EE calculations, an optimal packet size is observed that provides maximum efficiency over a certain link length, as sketched below. A smart and energy-efficient system is articulated that utilizes all three communication modes, namely direct, one-way relay and two-way relay, as the direct link performs better within a certain range while cooperative communication gives better results in terms of EE as the distance increases. The efficacy of the proposed hybrid scheme is also demonstrated over a practical quasi-static channel. Furthermore, link length extension and diversity are achieved by joint network-channel (JNC) coding on the cooperative link. PMID:29438278
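A rough numerical sketch of the packet-size trade-off in Python (an illustrative EE definition with an invented header size and bit error rate, not the paper's analytical expressions):

    import numpy as np

    def packet_error_rate(ber, bits):
        # With independent bit errors, a packet survives only if every
        # one of its bits does.
        return 1.0 - (1.0 - ber) ** bits

    def energy_efficiency(payload, header, ber):
        # Fraction of transmitted bits that are useful payload delivered
        # without error: small packets waste energy on headers, while
        # large packets are lost more often.
        bits = payload + header
        return (payload / bits) * (1.0 - packet_error_rate(ber, bits))

    payloads = np.arange(64, 4096, 64)
    eff = [energy_efficiency(p, header=128, ber=1e-4) for p in payloads]
    best_payload = payloads[int(np.argmax(eff))]   # optimal size for this link

The same calculation, repeated per link length and relay mode, is what lets a hybrid scheme pick direct, one-way or two-way relaying as the distance grows.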
Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs
NASA Astrophysics Data System (ADS)
Ringenburg, Michael F.
Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first online tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. Our prototyping, profiling, and autotuning tools allow us to experiment with approximation strategies and identify new strategies, our online tools succeed in providing new insights into the effects of approximation on output quality, and our monitors succeed in controlling output quality while still maintaining significant energy efficiency gains.
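The online-monitoring idea can be sketched in a few lines of Python (a toy approximate kernel and sampling monitor, invented for illustration; the thesis tools instrument OCaml programs):

    import random

    def approx_sum(xs, keep=0.9):
        # Toy approximation: sample a fraction of the inputs to save
        # "energy", then rescale to estimate the full sum.
        kept = [x for x in xs if random.random() < keep]
        return sum(kept) / keep

    def monitored_sum(xs, sample_rate=0.05, tol=0.05):
        # Low-cost online monitor: occasionally recompute the precise
        # result and raise a quality alarm if the relative error drifts.
        approx = approx_sum(xs)
        if random.random() < sample_rate:
            exact = sum(xs)
            rel_err = abs(approx - exact) / max(abs(exact), 1e-12)
            if rel_err > tol:
                print(f"quality alarm: relative error {rel_err:.2%}")
        return approx

    values = [float(i) for i in range(10_000)]
    result = monitored_sum(values)

The monitor itself must stay cheap, which is why it samples: spending more energy on checking than the approximation saves would defeat the purpose.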
Bremsstrahlung Dose Yield for High-Intensity Short-Pulse Laser–Solid Experiments
Liang, Taiee; Bauer, Johannes M.; Liu, James C.; ...
2016-12-01
A bremsstrahlung source term has been developed by the Radiation Protection (RP) group at SLAC National Accelerator Laboratory for high-intensity short-pulse laser–solid experiments between 10^17 and 10^22 W cm^-2. This source term couples the particle-in-cell plasma code EPOCH and the radiation transport code FLUKA to estimate the bremsstrahlung dose yield from laser–solid interactions. EPOCH characterizes the energy distribution, angular distribution, and laser-to-electron conversion efficiency of the hot electrons from laser–solid interactions, and FLUKA utilizes this hot electron source term to calculate a bremsstrahlung dose yield (mSv per J of laser energy on target). The goal of this paper is to provide RP guidelines and hazard analysis for high-intensity laser facilities. A comparison of the calculated bremsstrahlung dose yields to radiation measurement data is also made.
The application of nonlinear programming and collocation to optimal aeroassisted orbital transfers
NASA Astrophysics Data System (ADS)
Shi, Y. Y.; Nelson, R. L.; Young, D. H.; Gill, P. E.; Murray, W.; Saunders, M. A.
1992-01-01
Sequential quadratic programming (SQP) and collocation of the differential equations of motion were applied to optimal aeroassisted orbital transfers. The Optimal Trajectory by Implicit Simulation (OTIS) computer program codes with updated nonlinear programming code (NZSOL) were used as a testbed for the SQP nonlinear programming (NLP) algorithms. The state-of-the-art sparse SQP method is considered to be effective for solving large problems with a sparse matrix. Sparse optimizers are characterized in terms of memory requirements and computational efficiency. For the OTIS problems, less than 10 percent of the Jacobian matrix elements are nonzero. The SQP method encompasses two phases: finding an initial feasible point by minimizing the sum of infeasibilities and minimizing the quadratic objective function within the feasible region. The orbital transfer problem under consideration involves the transfer from a high energy orbit to a low energy orbit.
Automated variance reduction for MCNP using deterministic methods.
Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B
2005-01-01
In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
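The core of the method can be sketched as follows (a simplified 1-D toy of adjoint-based weight windows, not the MCNP5/PARTISN implementation; the flux values and window ratio are invented):

    import numpy as np

    def weight_windows(adjoint_flux, source_cell, ratio=5.0):
        # Weight-window centers are taken inversely proportional to the
        # adjoint (importance) flux, normalized so that source particles
        # are born at the center of their window.
        phi = np.asarray(adjoint_flux, dtype=float)
        center = phi[source_cell] / phi
        lower = center / np.sqrt(ratio)     # window spans [lower, upper]
        upper = lower * ratio
        return lower, upper

    phi_adj = [1.0, 0.3, 0.08, 0.02, 0.005]   # toy deep-penetration mesh
    lo, hi = weight_windows(phi_adj, source_cell=0)

Deep in the shield the adjoint flux is small, so the windows force splitting there (many low-weight particles), while rouletting removes particles that wander into unimportant regions.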
Modeling multi-GeV class laser-plasma accelerators with INF&RNO
NASA Astrophysics Data System (ADS)
Benedetti, Carlo; Schroeder, Carl; Bulanov, Stepan; Geddes, Cameron; Esarey, Eric; Leemans, Wim
2016-10-01
Laser plasma accelerators (LPAs) can produce accelerating gradients on the order of tens to hundreds of GV/m, making them attractive as compact particle accelerators for radiation production or as drivers for future high-energy colliders. Understanding and optimizing the performance of LPAs requires detailed numerical modeling of the nonlinear laser-plasma interaction. We present simulation results, obtained with the computationally efficient, PIC/fluid code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde), concerning present (multi-GeV stages) and future (10 GeV stages) LPA experiments performed with the BELLA PW laser system at LBNL. In particular, we will illustrate the issues related to the guiding of a high-intensity, short-pulse, laser when a realistic description for both the laser driver and the background plasma is adopted. Work Supported by the U.S. Department of Energy under contract No. DE-AC02-05CH11231.
Computer code for the optimization of performance parameters of mixed explosive formulations.
Muthurajan, H; Sivabalan, R; Talawar, M B; Venugopalan, S; Gandhe, B R
2006-08-25
LOTUSES is a novel computer code developed for the prediction of various thermodynamic properties such as heat of formation, heat of explosion, volume of gaseous explosion products and other related performance parameters. In this paper, we report on the LOTUSES (Version 1.4) code, which has been utilized for the optimization of various high explosives in different combinations to obtain the maximum possible velocity of detonation. LOTUSES (Version 1.4) automatically varies the composition of mixed explosives over the range 1-100% and computes the oxygen balance as well as the velocity of detonation for the various compositions in preset steps. Further, the code suggests the compositions for which the least oxygen balance and the highest velocity of detonation could be achieved. Presently, the code can be applied to two-component explosive compositions. The code has been validated with well-known explosives like TNT, HNS, HNF, TATB, RDX, HMX, AN, DNA, CL-20 and TNAZ in different combinations. The new algorithm incorporated in LOTUSES (Version 1.4) enhances the efficiency and makes it a more powerful tool for scientists/researchers working in the field of high energy materials/hazardous materials.
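The oxygen-balance part of such a composition scan is easy to illustrate in Python (the standard oxygen-balance formula and published molecular formulas; the scan logic is a guess at the spirit of the method, not the actual LOTUSES algorithm):

    import numpy as np

    def oxygen_balance(c, h, o, mw):
        # Oxygen balance (%) of an explosive CcHh..Oo relative to
        # complete combustion to CO2 and H2O (nitrogen leaves as N2).
        return -1600.0 * (2.0 * c + h / 2.0 - o) / mw

    ob_tnt = oxygen_balance(7, 5, 6, 227.13)   # TNT, C7H5N3O6: about -74%
    ob_rdx = oxygen_balance(3, 6, 6, 222.12)   # RDX, C3H6N6O6: about -21.6%

    # Sweep a two-component mixture in preset 1% steps and report the
    # mass fraction whose mixture oxygen balance is closest to zero.
    fractions = np.linspace(0.0, 1.0, 101)
    mix_ob = fractions * ob_tnt + (1.0 - fractions) * ob_rdx
    best = fractions[np.argmin(np.abs(mix_ob))]

A velocity-of-detonation model evaluated over the same composition grid would then complete the search for the optimal formulation.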
NASA Astrophysics Data System (ADS)
Tarditi, Alfonso G.; Shebalin, John V.
2002-11-01
A simulation study with the NIMROD code [1] is being carried out to investigate the efficiency of the thrust generation process and the properties of the plasma detachment in a magnetic nozzle. In the simulation, hot plasma is injected into the magnetic nozzle, modeled as a 2D, axisymmetric domain. NIMROD has two-fluid, 3D capabilities, but the present runs are being conducted within the MHD, 2D approximation. As the plasma travels through the magnetic field, part of its thermal energy is converted into longitudinal kinetic energy along the axis of the nozzle. The plasma eventually detaches from the magnetic field at a certain distance from the nozzle throat, where the kinetic energy becomes larger than the magnetic energy. Preliminary NIMROD 2D runs have been benchmarked with a particle trajectory code, showing satisfactory results [2]. Further testing is reported here, with emphasis on the analysis of the diffusion rate across the field lines and of the overall nozzle efficiency. These simulation runs are specifically designed to obtain comparisons with laboratory measurements of the VASIMR experiment, by looking at the evolution of the radial plasma density and temperature profiles in the nozzle. VASIMR (Variable Specific Impulse Magnetoplasma Rocket, [3]) is an advanced space propulsion concept currently under experimental development at the Advanced Space Propulsion Laboratory, NASA Johnson Space Center. A plasma (typically ionized hydrogen or helium) is generated by a RF (helicon) discharge and heated by an ion cyclotron resonance heating antenna. The heated plasma is then guided into a magnetic nozzle to convert the thermal plasma energy into effective thrust. The VASIMR system has no electrodes, and a solenoidal magnetic field produced by an asymmetric mirror configuration ensures magnetic insulation of the plasma from the material surfaces. By powering the plasma source and the heating antenna at different levels it is possible to vary the thrust-to-specific-impulse ratio smoothly while maintaining maximum power utilization. [1] http://www.nimrodteam.org [2] A. V. Ilin et al., Proc. 40th AIAA Aerospace Sciences Meeting, Reno, NV, Jan. 2002 [3] F. R. Chang-Diaz, Scientific American, p. 90, Nov. 2000
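The detachment criterion quoted above (kinetic energy density exceeding magnetic energy density) can be checked along the nozzle axis with a few lines of Python; the profiles below are invented toy functions, not VASIMR or NIMROD data:

    import numpy as np

    MU0 = 4e-7 * np.pi   # vacuum permeability, SI units

    def kinetic_to_magnetic(rho, v, B):
        # Ratio of directed kinetic energy density to magnetic energy
        # density; detachment is expected where this exceeds unity.
        return (0.5 * rho * v**2) / (B**2 / (2.0 * MU0))

    z = np.linspace(0.0, 2.0, 400)           # axial distance, m
    B = 0.2 * np.exp(-z / 0.3)               # field decays downstream, T
    v = 2.0e4 * (1.0 + z)                    # plasma accelerates, m/s
    rho = 1.0e-8 * np.exp(-z)                # mass density drops, kg/m^3
    ratio = kinetic_to_magnetic(rho, v, B)
    z_detach = z[np.argmax(ratio > 1.0)]     # first point past the threshold,
                                             # valid only if a crossing exists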
77 FR 29322 - Updating State Residential Building Energy Efficiency Codes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-17
... supporting the change to the SHGC requirements in climate zone 4. Specifically, RECA supported the... to change Climate Zone 3 from R13 to either R20 or R13+5 ci.'' (CFEC, No. 2 at p. 2) In response, DOE... difference of 50 Pascals (5 ACH50) in climate zone 1 and climate zone 2; and 3 air changes/hour (3 ACH50) in...
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations.
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving the computational efficiency of SPECT imaging simulations.
Optimization of the Efficiency of a Neutron Detector to Measure (α, n) Reaction Cross-Section
NASA Astrophysics Data System (ADS)
Perello, Jesus; Montes, Fernando; Ahn, Tony; Meisel, Zach; Joint InstituteNuclear Astrophysics Team
2015-04-01
Nucleosynthesis, the origin of the elements, is one of the greatest mysteries in physics. A recent nucleosynthesis process of particular interest is the charged-particle process (cpp), in which elements form by nuclear fusion reactions during supernovae. This process of nuclear fusion, (α,n), will be studied by colliding beams produced and accelerated at the National Superconducting Cyclotron Laboratory (NSCL) with a helium-filled gas cell target. The beam nuclei will fuse with α particles (helium nuclei) and emit neutrons during the reaction. The neutrons will be detected to count the fusion reactions, thus providing the probability of such reactions. The neutrons will be detected using the Neutron Emission Ratio Observer (NERO). Currently, NERO's efficiency varies for neutrons over the expected energy range (0-12 MeV). To study (α,n), NERO's efficiency must be near-constant at these energies. The Monte-Carlo N-Particle Transport Code (MCNP6), a software package that simulates nuclear processes, was used to optimize the NERO configuration for the experiment. MCNP6 was used to simulate neutron interaction with different NERO configurations at the expected neutron energies. By adding additional 3He detectors and polyethylene, a near-constant efficiency at these energies was obtained in the simulations. With the new NERO configuration, study of the (α,n) reactions can begin, which may explain how elements are formed in the cpp. SROP MSU, NSF, JINA, McNair Society.
NASA Astrophysics Data System (ADS)
Kemp, G. E.; Colvin, J. D.; Fournier, K. B.; May, M. J.; Barrios, M. A.; Patel, M. V.; Scott, H. A.; Marinak, M. M.
2015-05-01
Tailored, high-flux, multi-keV x-ray sources are desirable for studying x-ray interactions with matter for various civilian, space and military applications. For this study, we focus on designing an efficient laser-driven non-local thermodynamic equilibrium 3-5 keV x-ray source from photon-energy-matched Ar K-shell and Ag L-shell targets at sub-critical densities (˜nc/10) to ensure supersonic, volumetric laser heating with minimal losses to kinetic energy, thermal x rays and laser-plasma instabilities. Using Hydra, a multi-dimensional, arbitrary Lagrangian-Eulerian, radiation-hydrodynamics code, we performed a parameter study by varying initial target density and laser parameters for each material using conditions readily achievable on the National Ignition Facility (NIF) laser. We employ a model, benchmarked against Kr data collected on the NIF, that uses flux-limited Lee-More thermal conductivity and multi-group implicit Monte-Carlo photonics with non-local thermodynamic equilibrium, detailed super-configuration accounting opacities from Cretin, an atomic-kinetics code. While the highest power laser configurations produced the largest x-ray yields, we report that the peak simulated laser to 3-5 keV x-ray conversion efficiencies of 17.7% and 36.4% for Ar and Ag, respectively, occurred at lower powers between ˜100-150 TW. For identical initial target densities and laser illumination, the Ag L-shell is observed to have ≳10× higher emissivity per ion per deposited laser energy than the Ar K-shell. Although such low-density Ag targets have not yet been demonstrated, simulations of targets fabricated using atomic layer deposition of Ag on silica aerogels (˜20% by atomic fraction) suggest similar performance to atomically pure metal foams and that either fabrication technique may be worth pursuing for an efficient 3-5 keV x-ray source on NIF.
10 CFR 434.99 - Explanation of numbering system for codes.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Explanation of numbering system for codes. 434.99 Section 434.99 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS § 434.99 Explanation of numbering system for codes. (a) For...
10 CFR 434.99 - Explanation of numbering system for codes.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Explanation of numbering system for codes. 434.99 Section 434.99 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS § 434.99 Explanation of numbering system for codes. (a) For...
Potential Job Creation in Rhode Island as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Potential Job Creation in Minnesota as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Potential Job Creation in Tennessee as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Potential Job Creation in Nevada as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Using Third-Party Inspectors in Building Energy Codes Enforcement in India
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Sha; Evans, Meredydd; Kumar, Pradeep
India is experiencing fast income growth and urbanization, and this leads to unprecedented increases in demand for building energy services and resulting energy consumption. In response to rapid growth in building energy use, the Government of India issued the Energy Conservation Building Code (ECBC) in 2007, which is consistent with and based on the 2001 Energy Conservation Act. ECBC implementation has been voluntary since its enactment and a few states have started to make progress towards mandatory implementation. Rajasthan is the first state in India to adopt ECBC as a mandatory code. The State adopted ECBC with minor additions on March 28, 2011 through a stakeholder process; it became mandatory in Rajasthan on September 28, 2011. Tamil Nadu, Gujarat, and Andhra Pradesh have started to draft an implementation roadmap and build capacity for its implementation. The Bureau of Energy Efficiency (BEE) plans to encourage more states to adopt ECBC in the near future, including Haryana, Uttar Pradesh, Karnataka, Maharashtra, West Bengal, and Delhi. Since its inception, India has applied the code on a voluntary basis, but the Government of India is developing a strategy to mandate compliance. Implementing ECBC requires coordination between the Ministry of Power and the Ministry of Urban Development at the national level as well as interdepartmental coordination at the state level. One challenge is that the Urban Local Bodies (ULBs), the enforcement entities of building by-laws, lack capacity to implement ECBC effectively. For example, ULBs in some states might find the building permitting procedures to be too complex; in other cases, lack of awareness and technical knowledge on ECBC slows down the amendment of local building by-laws as well as ECBC implementation. The intent of this white paper is to share with Indian decision-makers code enforcement approaches: through code officials, third-party inspectors, or a hybrid approach. Given the limited capacity and human resources available in the state and local governments, involving third-party inspectors could rapidly expand the capacity for plan reviews and broad implementation. However, the procedures for involving third parties need to be carefully designed in order to guarantee a fair process. For example, there should be multiple checks and certification requirements for third-party inspectors, and the government should have the final approval when third-party inspectors are used in a project. This paper discusses different approaches of involving third parties in ECBC enforcement; the Indian states may choose the approaches that work best in their given circumstances.
Whitmyre, Gary K; Pandian, Muhilan D
2018-06-01
Use of vent-free gas heating appliances for supplemental heating in U.S. homes is increasing. However, there is currently a lack of information on the potential impact of these appliances on indoor air quality for homes constructed according to energy-efficient and green building standards. A probabilistic analysis was conducted to estimate the impact of vent-free gas heating appliances on indoor air concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), carbon dioxide (CO2), water vapor, and oxygen in "tight" energy-efficient homes in the United States. A total of 20,000 simulations were conducted for each Department of Energy (DOE) heating region to capture a wide range of home sizes, appliance features, and conditions, by varying a number of parameters, e.g., room volume, house volume, outdoor humidity, air exchange rates, appliance input rates (Btu/hr), and house heat loss factors. Predicted airborne levels of CO were below the U.S. Environmental Protection Agency (EPA) standard of 9 ppm for all modeled cases. The airborne concentrations of NO2 were below the U.S. Consumer Product Safety Commission (CPSC) guideline of 0.3 ppm and the Health Canada benchmark of 0.25 ppm in all cases, and were below the World Health Organization (WHO) standard of 0.11 ppm in 99-100% of all cases. Predicted levels of CO2 were below the Health Canada standard of 3500 ppm for all simulated cases. Oxygen levels in the room of vent-free heating appliance use were not significantly reduced. The great majority of cases in all DOE regions were associated with relative humidity (RH) levels from all indoor water vapor sources that were less than the EPA-recommended 70% RH maximum to avoid active mold and mildew growth. The conclusion of this investigation is that when installed in accordance with the manufacturer's instructions, vent-free gas heating appliances maintain acceptable indoor air quality in tight energy-efficient homes, as defined by the standards referenced in this report. This probabilistic modeling provides new data indicating that use of these devices is consistent with acceptable indoor air quality in "tight" energy-efficient homes, will assist authoritative bodies such as the International Code Council in developing future versions of national building codes, and provides an evaluation of the performance of unvented gas heating products in energy conservation homes.
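The flavor of such a probabilistic analysis can be conveyed with a small Monte Carlo sketch in Python (a steady-state well-mixed mass-balance model with invented parameter ranges, far simpler than the study's multi-zone simulations):

    import numpy as np

    rng = np.random.default_rng(7)
    N = 20_000                                  # simulations per region

    ach = rng.uniform(0.2, 1.0, N)              # air changes per hour
    volume = rng.uniform(30.0, 120.0, N)        # room volume, m^3
    emission = rng.uniform(0.5, 5.0, N)         # CO emission rate, mg/h

    # Steady-state well-mixed concentration: source strength divided by
    # the ventilation flow (air-exchange rate times volume).
    conc = emission / (ach * volume)            # mg/m^3

    # Share of simulated homes below an illustrative 10 mg/m^3 cap
    # (roughly the 9 ppm CO benchmark cited in the abstract).
    frac_below = np.mean(conc < 10.0)

Sampling every input across its plausible range, rather than picking single worst-case values, is what supports statements like "below the guideline in 99-100% of cases".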
A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system OLTARIS for shield design and validation, and provides a basis for personal computer software capable of space shield analysis and optimization.
SIMULATIONS OF BOOSTER INJECTION EFFICIENCY FOR THE APS-UPGRADE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calvey, J.; Borland, M.; Harkay, K.
2017-06-25
The APS-Upgrade will require the injector chain to provide high single bunch charge for swap-out injection. One possible limiting factor to achieving this is an observed reduction of injection efficiency into the booster synchrotron at high charge. We have simulated booster injection using the particle tracking code elegant, including a model for the booster impedance and beam loading in the RF cavities. The simulations point to two possible causes for reduced efficiency: energy oscillations leading to losses at high dispersion locations, and a vertical beam size blowup caused by ions in the Particle Accumulator Ring. We also show that the efficiency is much higher in an alternate booster lattice with smaller vertical beta function and zero dispersion in the straight sections.
Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison
NASA Astrophysics Data System (ADS)
van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder
2000-04-01
Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristics of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base-layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we can conclude that for an optimal complexity versus coding-efficiency trade-off, only limited wavelet decomposition (e.g. 2 stages) needs to be performed for the FGS-residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g. natural still-images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.
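The enhancement-layer construction described above reduces to a residual computation plus embedded (bit-plane) coding; a compact Python sketch (invented frame data; real FGS operates on DCT coefficients with entropy coding):

    import numpy as np

    def fgs_residual(original, decoded_base):
        # Enhancement-layer input: difference between the original frame
        # and the reconstructed (lossy) base layer.
        return original.astype(np.int16) - decoded_base.astype(np.int16)

    def bitplane_scan(residual, n_planes=6):
        # Embedded coding sketch: emit magnitude bit-planes from most to
        # least significant, so the stream can be truncated at any point
        # to match the instantaneous channel bandwidth.
        mag = np.abs(residual)
        planes = [(mag >> p) & 1 for p in range(n_planes - 1, -1, -1)]
        return planes, np.sign(residual)

    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, (8, 8))
    base = np.clip(frame + rng.integers(-4, 5, (8, 8)), 0, 255)
    planes, signs = bitplane_scan(fgs_residual(frame, base))

Truncating the plane list early simply coarsens the residual, which is exactly the fine-granular SNR scalability the framework targets.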
High Efficiency and Low Cost Thermal Energy Storage System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sienicki, James J.; Lv, Qiuping; Moisseytsev, Anton
BgtL, LLC (BgtL) is focused on developing and commercializing its proprietary compact technology for processes in the energy sector. One such application is a compact high efficiency Thermal Energy Storage (TES) system that utilizes the heat of fusion through phase change between solid and liquid to store and release energy at high temperatures, and incorporates state-of-the-art insulation to minimize heat dissipation. BgtL’s TES system would greatly improve the economics of existing nuclear and coal-fired power plants by allowing the power plant to store energy when power prices are low and sell power into the grid when prices are high. Compared to existing battery storage technology, BgtL’s novel thermal energy storage solution can be significantly less costly to acquire and maintain, does not have any waste or environmental emissions, and does not deteriorate over time; it can keep constant efficiency and operates cleanly and safely. BgtL’s engineers are experienced in this field and are able to design and engineer such a system to a specific power plant’s requirements. BgtL also has a strong manufacturing partner to fabricate the system such that it qualifies for an ASME code stamp. BgtL’s vision is to be the leading provider of compact systems for various applications including energy storage. BgtL requests that all technical information about the TES designs be protected as proprietary information. To honor that request, only non-proprietary summaries are included in this report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mokhov, Nikolai
MARS is a Monte Carlo code for inclusive and exclusive simulation of three-dimensional hadronic and electromagnetic cascades, muon, heavy-ion and low-energy neutron transport in accelerator, detector, spacecraft and shielding components in the energy range from a fraction of an electronvolt up to 100 TeV. Recent developments in the MARS15 physical models of hadron, heavy-ion and lepton interactions with nuclei and atoms include a new nuclear cross section library, a model for soft pion production, the cascade-exciton model, the quark gluon string models, deuteron-nucleus and neutrino-nucleus interaction models, detailed description of negative hadron and muon absorption and a unified treatment of muon, charged hadron and heavy-ion electromagnetic interactions with matter. New algorithms are implemented into the code and thoroughly benchmarked against experimental data. The code capabilities to simulate cascades and generate a variety of results in complex media have also been enhanced. Other changes in the current version concern the improved photo- and electro-production of hadrons and muons, improved algorithms for the 3-body decays, particle tracking in magnetic fields, synchrotron radiation by electrons and muons, significantly extended histograming capabilities and material description, and improved computational performance. In addition to direct energy deposition calculations, a new set of fluence-to-dose conversion factors for all particles including neutrino are built into the code. The code includes new modules for calculation of Displacement-per-Atom and nuclide inventory. The powerful ROOT geometry and visualization model implemented in MARS15 provides a large set of geometrical elements with a possibility of producing composite shapes and assemblies and their 3D visualization along with a possible import/export of geometry descriptions created by other codes (via the GDML format) and CAD systems (via the STEP format). The built-in MARS-MAD Beamline Builder (MMBLB) was redesigned for use with the ROOT geometry package, which allows a very efficient and highly accurate description, modeling and visualization of beam loss induced effects in arbitrary beamlines and accelerator lattices. The MARS15 code includes links to the MCNP-family codes for neutron and photon production and transport below 20 MeV, to the ANSYS code for thermal and stress analyses and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings.
PARVMEC: An Efficient, Scalable Implementation of the Variational Moments Equilibrium Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Sudip K; Hirshman, Steven Paul; Wingen, Andreas
The ability to sustain magnetically confined plasma in a state of stable equilibrium is crucial for optimal and cost-effective operation of fusion devices like tokamaks and stellarators. The Variational Moments Equilibrium Code (VMEC) is the de-facto serial application used by fusion scientists to compute magnetohydrodynamics (MHD) equilibria and study the physics of three-dimensional plasmas in confined configurations. Modern fusion energy experiments have larger system scales and more interactive experimental workflows, both demanding faster analysis turnaround times on computational workloads that are stressing the capabilities of sequential VMEC. In this paper, we present PARVMEC, an efficient, parallel version of its sequential counterpart, capable of scaling to thousands of processors on distributed memory machines. PARVMEC is a non-linear code, with multiple numerical physics modules, each with its own computational complexity. A detailed speedup analysis supported by scaling results on 1,024 cores of a Cray XC30 supercomputer is presented. Depending on the mode of PARVMEC execution, speedup improvements of one to two orders of magnitude are reported. PARVMEC equips fusion scientists for the first time with a state-of-the-art capability for rapid, high fidelity analyses of magnetically confined plasmas at unprecedented scales.
A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks
Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang
2017-01-01
Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and decodes packets jointly with encoded packets received from multiple potential nodes in the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when inefficient relay nodes are detected. Substantial simulations in an underwater environment with Network Simulator 3 (NS-3) show that NCRP significantly improves network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs. PMID:28786915
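As background for the network-coding primitive that protocols like NCRP build on, the following minimal Python sketch shows the classic XOR combination of packets; the two-packet scenario and packet contents are illustrative assumptions, not the NCRP protocol itself.

    def xor_combine(packets):
        """Bytewise XOR of equal-length packets -- the elementary network
        coding operation (illustrative; NCRP itself is more elaborate)."""
        out = bytearray(len(packets[0]))
        for pkt in packets:
            for i, byte in enumerate(pkt):
                out[i] ^= byte
        return bytes(out)

    # A relay forwards one coded packet instead of two; a sink that already
    # holds p1 recovers p2 by XOR-ing p1 back out of the coded packet.
    p1, p2 = b"hello", b"world"
    coded = xor_combine([p1, p2])
    assert xor_combine([coded, p1]) == p2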
Calculating the n-point correlation function with general and efficient python code
NASA Astrophysics Data System (ADS)
Genier, Fred; Bellis, Matthew
2018-01-01
There are multiple approaches to understanding the evolution of large-scale structure in our universe and with it the role of baryonic matter, dark matter, and dark energy at different points in history. One approach is to calculate the n-point correlation function estimator for galaxy distributions, sometimes choosing a particular type of galaxy, such as luminous red galaxies. The standard way to calculate these estimators is with pair counts (for the 2-point correlation function) and with triplet counts (for the 3-point correlation function). These are O(n^2) and O(n^3) problems, respectively, and with the number of galaxies that will be characterized in future surveys, having efficient and general code will be of increasing importance. Here we show a proof-of-principle approach to the 2-point correlation function that relies on pre-calculating galaxy locations in coarse "voxels", thereby reducing the total number of necessary calculations. The code is written in Python, making it easily accessible and extensible, and is open-sourced to the community. Basic results and performance tests using SDSS/BOSS data will be shown, and we discuss the application of this approach to the 3-point correlation function.
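The coarse-voxel idea can be sketched in a few lines of Python: bucket galaxies into voxels, then count pairs only against the neighboring voxels that can lie within the maximum separation. All names and parameters below are illustrative assumptions; this is not the authors' released code.

    import numpy as np

    def voxel_pair_counts(points, box_size, voxel_size, r_max, n_bins):
        """Histogram pair separations by searching only neighboring voxels.
        `points` is an (N, 3) array of positions in [0, box_size).
        Each pair is visited twice and zero-distance self pairs land in the
        first bin; a production version would correct for both."""
        n_vox = int(np.ceil(box_size / voxel_size))
        idx = np.minimum((points / voxel_size).astype(int), n_vox - 1)
        buckets = {}
        for p, key in zip(points, map(tuple, idx)):
            buckets.setdefault(key, []).append(p)
        edges = np.linspace(0.0, r_max, n_bins + 1)
        counts = np.zeros(n_bins, dtype=np.int64)
        reach = int(np.ceil(r_max / voxel_size))  # voxel shells to search
        for (i, j, k), pts in buckets.items():
            pts = np.asarray(pts)
            for di in range(-reach, reach + 1):
                for dj in range(-reach, reach + 1):
                    for dk in range(-reach, reach + 1):
                        nbr = buckets.get((i + di, j + dj, k + dk))
                        if nbr is None:
                            continue
                        d = np.linalg.norm(
                            pts[:, None, :] - np.asarray(nbr)[None, :, :],
                            axis=-1)
                        counts += np.histogram(d, bins=edges)[0]
        return counts

The payoff is that only O(reach^3) neighboring voxels are visited per voxel instead of all n galaxies, which is where the reduction in total calculations comes from.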
Spallation neutron production and the current intra-nuclear cascade and transport codes
NASA Astrophysics Data System (ADS)
Filges, D.; Goldenbaum, F.; Enke, M.; Galin, J.; Herbach, C.-M.; Hilscher, D.; Jahnke, U.; Letourneau, A.; Lott, B.; Neef, R.-D.; Nünighoff, K.; Paul, N.; Péghaire, A.; Pienkowski, L.; Schaal, H.; Schröder, U.; Sterzenbach, G.; Tietze, A.; Tishchenko, V.; Toke, J.; Wohlmuther, M.
A recent resurgence of interest in energetic proton-induced production of neutrons originates largely from the inception of projects for target stations of intense spallation neutron sources, like the planned European Spallation Source (ESS), accelerator-driven nuclear reactors, and nuclear waste transmutation, and also from applications for radioactive beams. In the framework of such neutron production, of major importance is the search for the most efficient conversion of the primary beam energy into neutron production. Although the issue has been quite successfully addressed experimentally by varying the incident proton energy for various target materials and by covering a huge collection of different target geometries (providing an exhaustive matrix of benchmark data), the ultimate challenge is to increase the predictive power of the transport codes currently on the market. To scrutinize these codes, calculations of reaction cross-sections, hadronic interaction lengths, average neutron multiplicities, neutron multiplicity and energy distributions, and the development of hadronic showers are confronted with recent experimental data of the NESSI collaboration. Program packages like HERMES, LCS or MCNPX master the prediction of reaction cross-sections, hadronic interaction lengths, average neutron multiplicities and neutron multiplicity distributions in thick and thin targets for a wide spectrum of incident proton energies, geometrical shapes and target materials, generally within less than 10% deviation, while production cross-section measurements for light charged particles on thin targets show that appreciable distinctions exist among these models.
Energy minimization on manifolds for docking flexible molecules
Mirzaei, Hanieh; Zarbafian, Shahrooz; Villar, Elizabeth; Mottarella, Scott; Beglov, Dmitri; Vajda, Sandor; Paschalidis, Ioannis Ch.; Vakili, Pirooz; Kozakov, Dima
2015-01-01
In this paper we extend a recently introduced rigid body minimization algorithm, defined on manifolds, to the problem of minimizing the energy of interacting flexible molecules. The goal is to integrate moving the ligand in six dimensional rotational/translational space with internal rotations around rotatable bonds within the two molecules. We show that adding rotational degrees of freedom to the rigid moves of the ligand results in an overall optimization search space that is a manifold to which our manifold optimization approach can be extended. The effectiveness of the method is shown for three different docking problems of increasing complexity. First we minimize the energy of fragment-size ligands with a single rotatable bond as part of a protein mapping method developed for the identification of binding hot spots. Second, we consider energy minimization for docking a flexible ligand to a rigid protein receptor, an approach frequently used in existing methods. In the third problem we account for flexibility in both the ligand and the receptor. Results show that minimization using the manifold optimization algorithm is substantially more efficient than minimization using a traditional all-atom optimization algorithm while producing solutions of comparable quality. In addition to the specific problems considered, the method is general enough to be used in a large class of applications such as docking multidomain proteins with flexible hinges. The code is available under open source license (at http://cluspro.bu.edu/Code/Code_Rigtree.tar), and with minimal effort can be incorporated into any molecular modeling package. PMID:26478722
Efficient Polar Coding of Quantum Information
NASA Astrophysics Data System (ADS)
Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato
2012-08-01
Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the rate of preshared entanglement required is zero.
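For readers unfamiliar with the classical ingredient, here is a minimal Python sketch of Arıkan's recursive polar transform over GF(2); it is only the classical encoder building block, not the quantum coding scheme of the paper.

    import numpy as np

    def polar_transform(u):
        """Recursive polar encoding x = u * F^(tensor log2 N) over GF(2),
        with kernel F = [[1, 0], [1, 1]] and len(u) a power of two."""
        u = np.asarray(u, dtype=int) % 2
        if len(u) == 1:
            return u
        half = len(u) // 2
        top = polar_transform((u[:half] + u[half:]) % 2)  # u1 XOR u2
        bottom = polar_transform(u[half:])                # u2
        return np.concatenate([top, bottom])

    # Example: the 4-bit transform of [1, 0, 0, 1]
    print(polar_transform([1, 0, 0, 1]))  # -> [0 1 1 1]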
Input-output relation and energy efficiency in the neuron with different spike threshold dynamics.
Yi, Guo-Sheng; Wang, Jiang; Tsang, Kai-Ming; Wei, Xi-Le; Deng, Bin
2015-01-01
A neuron encodes and transmits information by generating sequences of output spikes, which is a highly energy-consuming process. The spike is initiated when membrane depolarization reaches a threshold voltage. In many neurons, the threshold is dynamic and depends on the rate of membrane depolarization (dV/dt) preceding a spike. Identifying the metabolic energy involved in neural coding and its relationship to threshold dynamics is critical to understanding neuronal function and evolution. Here, we use a modified Morris-Lecar model to investigate neuronal input-output properties and energy efficiency associated with different spike threshold dynamics. We find that neurons with a dynamic threshold sensitive to dV/dt generate a discontinuous frequency-current curve and a type II phase response curve (PRC) through a Hopf bifurcation, and weak noise can prohibit spiking when the bifurcation has just occurred. A threshold that is insensitive to dV/dt, instead, results in a continuous frequency-current curve, a type I PRC and a saddle-node on invariant circle bifurcation, and weak noise cannot inhibit spiking. It is also shown that the bifurcation, frequency-current curve and PRC type associated with different threshold dynamics arise from the distinct subthreshold interactions of membrane currents. Further, we observe that the energy consumption of the neuron is related to its firing characteristics. The depolarization of spike threshold improves neuronal energy efficiency by reducing the overlap of Na+ and K+ currents during an action potential. High energy efficiency is achieved at a more depolarized spike threshold and high stimulus current. These results provide a fundamental biophysical connection that links spike threshold dynamics, input-output relation, energetics and spike initiation, which could contribute to uncovering the neural encoding mechanism. PMID:26074810
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torcellini, Paul A; Scheib, Jennifer G; Pless, Shanti
New construction could account for more than 25% of U.S. energy consumption by 2030. Millions of square feet are built every year that will not perform as expected - despite advancing codes, rating systems, super-efficient technologies, and advanced utility programs. With retrofits of these under-performers decades away, savings potential will be lost for years to come. Only the building owner is in the driver's seat to demand - and verify - higher-performing buildings. Yet our current policy and market interventions really target the design team, not the owner. Accelerate Performance, a U.S. Department of Energy funded initiative, is changing the building procurement approach to drive deeper, verified savings in three pilot states: Illinois, Minnesota, and Connecticut. Performance-based procurement ties energy performance to design and contractor team compensation while freeing them to meet energy targets with the strategies most familiar to them. The process teases out the creativity of the design and contracting teams to deliver energy performance - without driving up the construction cost. The paper will share early results and lessons learned from new procurement and contract approaches in government, public, and private sector building projects. The paper provides practical guidance for building owners, facilities managers, design, and contractor teams who wish to incorporate effective performance-based procurement for deeper energy savings in their buildings.
Simulations of electron transport and ignition for direct-drive fast-ignition targets
NASA Astrophysics Data System (ADS)
Solodov, A. A.; Anderson, K. S.; Betti, R.; Gotcheva, V.; Myatt, J.; Delettrez, J. A.; Skupsky, S.; Theobald, W.; Stoeckl, C.
2008-11-01
The performance of high-gain, fast-ignition fusion targets is investigated using one-dimensional hydrodynamic simulations of implosion and two-dimensional (2D) hybrid fluid-particle simulations of hot-electron transport, ignition, and burn. The 2D/3D hybrid-particle-in-cell code LSP [D. R. Welch et al., Nucl. Instrum. Methods Phys. Res. A 464, 134 (2001)] and the 2D fluid code DRACO [P. B. Radha et al., Phys. Plasmas 12, 056307 (2005)] are integrated to simulate the hot-electron transport and heating for direct-drive fast-ignition targets. LSP simulates the transport of hot electrons from the place where they are generated to the dense fuel core where their energy is absorbed. DRACO includes the physics required to simulate compression, ignition, and burn of fast-ignition targets. The self-generated resistive magnetic field is found to collimate the hot-electron beam, increase the coupling efficiency of hot electrons with the target, and reduce the minimum energy required for ignition. Resistive filamentation of the hot-electron beam is also observed. The minimum energy required for ignition is found for hot electrons with realistic angular spread and Maxwellian energy-distribution function.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1988-01-01
During the period December 1, 1987 through May 31, 1988, progress was made in the following areas: construction of Multi-Dimensional Bandwidth Efficient Trellis Codes with MPSK modulation; performance analysis of Bandwidth Efficient Trellis Coded Modulation schemes; and performance analysis of Bandwidth Efficient Trellis Codes on Fading Channels.
Młynarski, Wiktor
2014-01-01
To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. First, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
A reduced complexity highly power/bandwidth efficient coded FQPSK system with iterative decoding
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
2001-01-01
Based on a representation of FQPSK as a trellis-coded modulation, this paper investigates the potential improvement in power efficiency obtained from the application of simple outer codes to form a concatenated coding arrangement with iterative decoding.
Seismic imaging using finite-differences and parallel computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ober, C.C.
1997-12-31
A key to reducing the risks and costs associated with oil and gas exploration is the fast, accurate imaging of complex geologies, such as salt domes in the Gulf of Mexico and overthrust regions in US onshore regions. Prestack depth migration generally yields the most accurate images, and one approach to this is to solve the scalar wave equation using finite differences. As part of an ongoing ACTI project funded by the US Department of Energy, a finite difference, 3-D prestack, depth migration code has been developed. The goal of this work is to demonstrate that massively parallel computers can be used efficiently for seismic imaging, and that sufficient computing power exists (or soon will exist) to make finite difference, prestack, depth migration practical for oil and gas exploration. Several problems had to be addressed to get an efficient code for the Intel Paragon. These include efficient I/O, efficient parallel tridiagonal solves, and high single-node performance. Furthermore, to provide portable code the author has been restricted to the use of high-level programming languages (C and Fortran) and interprocessor communications using MPI. He has been using the SUNMOS operating system, which has affected many of his programming decisions. He will present images created from two verification datasets (the Marmousi Model and the SEG/EAEG 3D Salt Model). Also, he will show recent images from real datasets, and point out locations of improved imaging. Finally, he will discuss areas of current research which will hopefully improve the image quality and reduce computational costs.
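As one concrete kernel from this class of codes, the Thomas algorithm below solves the tridiagonal systems that implicit finite-difference migration schemes sweep over; a parallel code distributes many such independent solves across nodes. This is a plain NumPy illustration of the standard algorithm, not the Paragon implementation.

    import numpy as np

    def thomas_solve(a, b, c, d):
        """Solve a tridiagonal system: a is the sub-diagonal (a[0] unused),
        b the diagonal, c the super-diagonal (c[-1] unused), d the RHS."""
        n = len(b)
        cp = np.empty(n)
        dp = np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):            # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):   # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x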
Least reliable bits coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Budinger, James; Wagner, Paul
1992-01-01
LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra vs the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
Measuring effects of climate change and energy efficiency regulations in U.S. households
NASA Astrophysics Data System (ADS)
Koirala, Bishwa Shakha
The first chapter explains the human causes of climate change and its costs, which are estimated to be about 3.6% of GDP by the end of the 21st century (NRDC, 2008). The second chapter investigates how projected July temperatures will increase the demand for electricity in the U.S. by 0.8%, while projected January temperatures will decrease the demand for natural gas and heating oil by 1% and 2.3%, respectively. This chapter further examines the effects of the energy-efficiency building codes IECC 2003 and IECC 2006 in reducing energy consumption in U.S. households. This study finds that these state-level building codes are effective in reducing energy demand: adoption of these codes reduces electricity demand by 1.8%, natural gas by 1.3%, and heating oil by 2.8%. A total of about 7.54 MMT per year of CO2 emission reductions is possible from the residential sector by applying such energy-efficiency building codes. This chapter further estimates an average of 1,342 kWh/month of electricity consumption, 3,429 CFt/month of natural gas consumption, and 277 gallons/year of heating oil consumption per household. It also identifies the existence of state heterogeneity that affects household-level energy demand, and finds that the assumption of independence of the error term is violated. Chapter 3 estimates the implicit prices of climate in dollars by analyzing hedonic rent and wage models for homeowners and apartment renters. The estimated results show that January temperature is a disamenity for which both homeowners and renters are being compensated (negative marginal willingness to pay) across the U.S., by $16 and $25 per month at the 2004 price level, respectively. It also finds that the January temperature is productive, whereas the July temperatures and annual precipitation are amenities and less productive. This study suggests that households would be willing to pay for higher temperature and increased precipitation; the estimated threshold point for July temperature is 75°F and for annual precipitation is 50 inches. It further reports that homeowners pay more than renters for climate amenities in the Northeast and West relative to the Midwest, whereas in the South these values do not differ much, suggesting that firms have an incentive to invest in those regions. This chapter also identifies that both the housing and labor markets are segmented across the regions in the U.S. Chapter 4 uses meta-analysis to explore the environmental Kuznets curve (EKC) relationship for CO2 and several other environmental quality measures. Results indicate the presence of an EKC-type relationship for CO2 and other environmental quality measures in relative terms. However, the predicted value of the income turning point for CO2 is both extremely large in relative terms (about 10 times the world GDP per capita at the 2007 price level) and far outside the range of the data. Therefore, this study cannot accept the existence of the EKC relationship for CO2.
Research on lossless compression of true color RGB image with low time and space complexity
NASA Astrophysics Data System (ADS)
Pan, ShuLin; Xie, ChengJun; Xu, Lin
2008-12-01
This paper eliminates the correlated redundancy of space and energy by using a DWT lifting scheme and reduces the complexity of the image by using an algebraic transform among the RGB components. It proposes an improved Rice coding algorithm with an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking; it supports LOCO-I and can also be applied to a coder/decoder. Simulation analysis indicates that the proposed method achieves high image compression: compared with Lossless-JPG, PNG(Microsoft), PNG(Rene), PNG(Photoshop), PNG(Anix PicViewer), PNG(ACDSee), PNG(Ulead photo Explorer), JPEG2000, PNG(KoDa Inc), SPIHT and JPEG-LS, the lossless image compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10% respectively on 24 RGB images provided by KoDa Inc. Running from main memory on a Pentium IV CPU (2.20 GHz) with 256 MB RAM, the proposed coder is about 21 times faster than SPIHT with roughly 166% better performance efficiency, and the decoder is about 17 times faster than SPIHT with roughly 128% better performance efficiency.
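The abstract does not spell out its RGB transform, so as an illustration of the same idea, here is the JPEG 2000 reversible color transform: an integer algebraic transform among the RGB components that a lifting-based lossless coder could use. Treat it as a stand-in, not the paper's actual transform.

    def rct_forward(r, g, b):
        # JPEG 2000 reversible color transform (integer, exactly invertible):
        # decorrelates RGB into a luma-like Y and two chroma differences.
        y = (r + 2 * g + b) >> 2
        u = b - g
        v = r - g
        return y, u, v

    def rct_inverse(y, u, v):
        g = y - ((u + v) >> 2)
        b = u + g
        r = v + g
        return r, g, b

    # Round trip is exact, which is what makes the transform lossless.
    assert rct_inverse(*rct_forward(200, 120, 45)) == (200, 120, 45)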
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngirmang, Gregory K., E-mail: ngirmang.1@osu.edu; Orban, Chris; Feister, Scott
We present 3D Particle-in-Cell (PIC) modeling of an ultra-intense laser experiment by the Extreme Light group at the Air Force Research Laboratory using the Large Scale Plasma (LSP) PIC code. This is the first time PIC simulations have been performed in 3D for this experiment, which involves an ultra-intense, short-pulse (30 fs) laser interacting with a water jet target at normal incidence. The laser-energy-to-ejected-electron-energy conversion efficiency observed in 2D(3v) simulations was comparable to the conversion efficiencies seen in the 3D simulations, but the angular distribution of ejected electrons in the 2D(3v) simulations displayed interesting differences with the 3D simulations' angular distribution; the observed differences between the 2D(3v) and 3D simulations were more noticeable for the simulations with higher intensity laser pulses. An analytic plane-wave model is discussed which provides some explanation for the angular distribution and energies of ejected electrons in the 2D(3v) simulations. We also performed a 3D simulation with circularly polarized light and found a significantly higher conversion efficiency and peak electron energy, which is promising for future experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W.; Carlson, Stephen
This Commercial and Industrial Lighting Controls Evaluation Protocol (the protocol) describes methods to account for energy savings resulting from programmatic installation of lighting control equipment in large populations of commercial, industrial, government, institutional, and other nonresidential facilities. This protocol does not address savings resulting from changes in codes and standards, or from education and training activities. When lighting controls are installed in conjunction with a lighting retrofit project, the lighting control savings must be calculated parametrically with the lighting retrofit project so savings are not double counted.
Comparison of stochastic optimization methods for all-atom folding of the Trp-Cage protein.
Schug, Alexander; Herges, Thomas; Verma, Abhinav; Lee, Kyu Hwan; Wenzel, Wolfgang
2005-12-09
The performances of three different stochastic optimization methods for all-atom protein structure prediction are investigated and compared. We use the recently developed all-atom free-energy force field (PFF01), which was demonstrated to correctly predict the native conformation of several proteins as the global optimum of the free energy surface. The trp-cage protein (PDB-code 1L2Y) is folded with the stochastic tunneling method, a modified parallel tempering method, and the basin-hopping technique. All the methods correctly identify the native conformation, and their relative efficiency is discussed.
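For orientation, the snippet below runs one of the three strategies compared above, basin hopping, on a toy one-dimensional surrogate energy surface via SciPy; the real study minimizes the all-atom PFF01 free-energy force field, so the objective function here is purely illustrative.

    import numpy as np
    from scipy.optimize import basinhopping

    def toy_energy(x):
        # Rugged 1-D stand-in for a free-energy surface (illustrative only).
        return np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0]

    # Basin hopping alternates random perturbations with local minimization,
    # hopping between basins of attraction on the energy landscape.
    result = basinhopping(toy_energy, x0=[1.0], niter=200, seed=1)
    print(result.x, result.fun)  # global minimum near x ~ -0.2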
High Power Orbit Transfer Vehicle
2003-07-01
multijunction device is a stack of individual single-junction cells in descending order of band gap. The top cell captures the high-energy photons and passes...the rest of the photons on to be absorbed by lower-band-gap cells. Multijunction devices achieve a higher total conversion efficiency because they...minimum temperatures on the thruster modules and main bus. In the MATLAB code for these calculations, maximum and minimum temperatures are plotted
Implementation of a modular software system for multiphysical processes in porous media
NASA Astrophysics Data System (ADS)
Naumov, Dmitri; Watanabe, Norihiro; Bilke, Lars; Fischer, Thomas; Lehmann, Christoph; Rink, Karsten; Walther, Marc; Wang, Wenqing; Kolditz, Olaf
2016-04-01
Subsurface georeservoirs are a candidate technology for the large-scale energy storage required as part of the transition to renewable energy sources. The increased use of the subsurface results in competing interests and possible impacts on protected entities. To optimize and plan the use of the subsurface in large scale scenario analyses, powerful numerical frameworks are required that aid process understanding and can capture the coupled thermal (T), hydraulic (H), mechanical (M), and chemical (C) processes with high computational efficiency. Because of the multitude of different couplings between the basic T, H, M, and C processes and the necessity of implementing new numerical schemes, the development focus has moved to the software's modularity. The decreased coupling between the components results in two major advantages: easier addition of specialized processes and improvement of the code's testability and therefore its quality. The idea of modularization is implemented on several levels, in addition to library-based separation of the previous code version, by using generalized algorithms available in the Standard Template Library and the Boost library, relying on efficient implementations of linear algebra solvers, using concepts when designing new types, and localizing frequently accessed data structures. This procedure shows certain benefits for a flexible high-performance framework applied to the analysis of multipurpose georeservoirs.
Computationally efficient description of relativistic electron beam transport in dense plasma
NASA Astrophysics Data System (ADS)
Polomarov, Oleg; Sefkov, Adam; Kaganovich, Igor; Shvets, Gennady
2006-10-01
A reduced model of the Weibel instability and electron beam transport in dense plasma is developed. Beam electrons are modeled by macro-particles and the background plasma is represented by an electron fluid. Conservation of generalized vorticity and quasineutrality of the plasma-beam system are used to simplify the governing equations. Our approach is motivated by the conditions of the fast ignition (FI) scenario, where the beam density is likely to be much smaller than the plasma density and the beam energy is likely to be very high. For this case the growth rate of the Weibel instability is small, making its modeling by conventional PIC codes exceedingly time-consuming. The present approach does not require resolving the plasma period, only the plasma collisionless skin depth, and is suitable for modeling the long-time behavior of the beam-plasma interaction. An efficient code based on this reduced description is developed and benchmarked against the LSP PIC code. The dynamics of low and high current electron beams in dense plasma is simulated. Special emphasis is placed on peculiarities of the non-linear stages, such as filament formation and merger, saturation, and post-saturation field and energy oscillations. *Supported by DOE Fusion Science through grant DE-FG02-05ER54840.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, K.A.; Tahir, N.A.
In this paper we present an analysis of the theory of the energy deposition of ions in cold materials and hot dense plasmas, together with numerical calculations for heavy and light ions of interest to ion-beam fusion. We have used the GORGON computer code of Long, Moritz, and Tahir (which is an extension of the code originally written for protons by Nardi, Peleg, and Zinamon) to carry out these calculations. The energy-deposition data calculated in this manner have been used in the design of heavy-ion-beam-driven fusion targets suitable for a reactor, through their inclusion in the MEDUSA code of Christiansen, Ashby, and Roberts as extended by Tahir and Long. A number of other improvements have been made in this code and these are also discussed. Various aspects of the theoretical analysis of such targets are discussed, including the calculation of the hydrodynamic stability, the hydrodynamic efficiency, and the gain. Various different target designs have been used, some of them new. In general these targets are driven by Bi+ ions of energy 8--12 GeV, with an input energy of 4--6.5 MJ, with output energies in the range 600--900 MJ, and with gains in the range 120--180. The peak powers are in the range of 500--750 TW. We present detailed calculations of the ablation, compression, ignition, and burn phases. By the application of a new stability analysis which includes ablation and density-gradient effects we show that these targets appear to implode in a stable manner. Thus the targets designed offer working examples suited for use in a future inertial-confinement fusion reactor.
NASA Astrophysics Data System (ADS)
Saizu, Mirela Angela
2016-09-01
The developments of high-purity germanium detectors match very well the requirements of in-vivo human body measurements regarding the gamma energy ranges of the radionuclides intended to be measured, the shape of the extended radioactive sources, and the measurement geometries. The Whole Body Counter (WBC) at IFIN-HH is based on an "over-square" high-purity germanium detector (HPGe) and performs accurate measurements of incorporated radionuclides emitting X and gamma rays in the energy range of 10 keV-1500 keV, under conditions of good shielding, suitable collimation, and calibration. As an alternative to the experimental efficiency calibration method, which uses reference calibration sources with gamma energy lines covering the whole considered energy range, it is proposed to calibrate the WBC efficiency with the Monte Carlo method using the radiation transport code MCNP5. The HPGe detector was modelled and the gamma energy lines of 241Am, 57Co, 133Ba, 137Cs, 60Co, and 152Eu were simulated in order to obtain the virtual efficiency calibration curve of the WBC. The Monte Carlo method was validated by comparing the simulated results with experimental measurements using point-like sources. For their optimum matching, the impact of varying the front dead-layer thickness and the materials of the detector's photon-absorbing layers on the HPGe detector efficiency was studied, and the detector's model was refined. In order to perform the WBC efficiency calibration for realistic monitoring of people, further numerical calculations were performed, simulating extended sources of specific shape according to standard-man characteristics.
Oliva, Eduardo; Zeitoun, Philippe; Velarde, Pedro; Fajardo, Marta; Cassou, Kevin; Ros, David; Sebban, Stephan; Portillo, David; le Pape, Sebastien
2010-11-01
Plasma-based seeded soft-x-ray lasers have the potential to generate high-energy and highly coherent short pulse beams. Due to their high density, plasmas created by the interaction of an intense laser with a solid target should store the highest energy density among all plasma amplifiers. Our previous numerical work with a two-dimensional (2D) adaptive mesh refinement hydrodynamic code demonstrated that careful tailoring of plasma shapes leads to a dramatic enhancement of both soft-x-ray laser output energy and pumping efficiency. Benchmarking of our 2D hydrodynamic code against previous experiments demonstrated a high level of confidence, allowing us to perform a full study aimed at paving the way for 10-100 μJ seeded soft-x-ray lasers. In this paper, we describe in detail the mechanisms that drive the hydrodynamics of plasma columns. We observed transitions between narrow plasmas, where very strong two-dimensional flow prevents them from storing energy, and large plasmas that store a high amount of energy. Millimeter-sized plasmas are outstanding amplifiers, but they have the limitation of transverse lasing. In this paper, we provide a preliminary solution to this problem.
libvdwxc: a library for exchange-correlation functionals in the vdW-DF family
NASA Astrophysics Data System (ADS)
Hjorth Larsen, Ask; Kuisma, Mikael; Löfgren, Joakim; Pouillon, Yann; Erhart, Paul; Hyldgaard, Per
2017-09-01
We present libvdwxc, a general library for evaluating the energy and potential for the family of vdW-DF exchange-correlation functionals. libvdwxc is written in C, provides an efficient implementation of the vdW-DF method, and can be interfaced with various general-purpose DFT codes. Currently, the GPAW and Octopus codes implement interfaces to libvdwxc. The present implementation emphasizes scalability and parallel performance, and thereby enables ab initio calculations of nanometer-scale complexes. The numerical accuracy is benchmarked on the S22 test set, whereas parallel performance is benchmarked on ligand-protected gold nanoparticles (Au144(SC11NH25)60) up to 9696 atoms.
Harnessing high-dimensional hyperentanglement through a biphoton frequency comb
NASA Astrophysics Data System (ADS)
Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wong, Chee Wei
2015-08-01
Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.
Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.
Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf
2016-01-01
One of the light field capturing techniques is focused plenoptic capturing. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both spatial and angular information of a scene in each microlens image and across microlens images. The capturing results in a significant amount of redundant information, and the captured image is usually of high resolution. A coding scheme that removes the redundancy before coding can be advantageous for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. The reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is later employed as a prediction reference for the coding of the full plenoptic image. As an outcome of the representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently, with over 60 percent bit rate reduction compared with High Efficiency Video Coding intra coding, and over 20 percent compared with the High Efficiency Video Coding block copying mode.
NASA Astrophysics Data System (ADS)
Thoyre, Autumn
In this research, I have analyzed the production of consuming less electricity through a case study of promotions of compact fluorescent light bulbs (CFLs). I focused on the CFL because it has been heavily promoted by environmentalists and electricity companies as a key tool for solving climate change, yet such promotions appear counter-intuitive. The magnitude of CFL promotions by environmentalists is surprising because CFLs can only impact less than 1% of U.S. greenhouse gas emissions. CFL promotions by electricity providers are surprising given such companies' normal incentives to sell more of their product. I used political ecological and symbolic interactionist theories, qualitative methods of data collection (including interviews, participant-observation, texts, and images), and a grounded theory analysis to understand this case. My findings suggest that, far from being a self-evident technical entity, energy efficiency is produced as an idea, a part of identities, a resource, and a source of value through social, political, and economic processes. These processes include identity formation and subjectification; gender-coded household labor; and corporate appropriation of household value resulting from environmental governance. I show how environmentalists use CFLs to make and claim neoliberal identities, proposing the concept of green neoliberal identity work as a mechanism through which neoliberal ideologies are translated into practices. I analyze how using this seemingly easy energy efficient technology constitutes labor that is gendered in ways that reflect and reproduce inequalities. I show how electricity companies have used environmental governance to valorize and appropriate home energy efficiency as an accumulation strategy. I conclude by discussing the symbolic power of CFLs, proposing a theory of green obsolescence, and framing the production of energy efficiency as a global production network. I found that promoting energy efficiency involves consuming less energy by consuming more technologies. This research contributes to understandings of how environmentalists become laboring subjects in an era of neoliberalism and how energy companies are responding to the threat of climate change by turning mitigation into an opportunity for profit.
An Efficient Variable Length Coding Scheme for an IID Source
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary coding in certain circumstances.
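A minimal sketch of the alternating idea follows, under the assumption that one Huffman code covers run lengths of the dominant symbol and a second covers the literal symbols that terminate each run; the paper's exact construction may differ. All function names are illustrative.

    import heapq
    from collections import Counter

    def build_huffman(freqs):
        """Return a symbol -> bitstring code book for a frequency table."""
        heap = [(w, i, sym) for i, (sym, w) in enumerate(freqs.items())]
        heapq.heapify(heap)
        next_id = len(heap)
        while len(heap) > 1:
            w1, _, t1 = heapq.heappop(heap)
            w2, _, t2 = heapq.heappop(heap)
            heapq.heappush(heap, (w1 + w2, next_id, (t1, t2)))
            next_id += 1
        codes = {}
        def walk(node, prefix):
            if isinstance(node, tuple):      # internal node
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:                            # leaf symbol
                codes[node] = prefix or "0"
        walk(heap[0][2], "")
        return codes

    def arh_encode(data, dominant):
        """Alternate two Huffman codes: one over run lengths of `dominant`,
        one over the literal symbols that break each run."""
        runs, literals, run = [], [], 0
        for s in data:
            if s == dominant:
                run += 1
            else:
                runs.append(run)
                literals.append(s)
                run = 0
        runs.append(run)  # trailing run (possibly zero-length)
        run_code = build_huffman(Counter(runs))
        lit_code = build_huffman(Counter(literals)) if literals else {}
        bits = run_code[runs[0]]
        for lit, r in zip(literals, runs[1:]):
            bits += lit_code[lit] + run_code[r]
        return bits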
Coding For Compression Of Low-Entropy Data
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu
1994-01-01
Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
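The point about beating the one-bit-per-symbol floor is easy to make concrete: a binary source dominated by one symbol has entropy far below 1 bit, as the short computation below shows for an assumed 99%-redundant source (the probability is illustrative).

    import math

    p = 0.99  # assumed probability of the dominant (redundant) symbol
    H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    print(f"{H:.3f} bits/symbol")  # ~0.081; symbol-by-symbol Huffman
                                   # coding cannot go below 1 bit/symbol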
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehrhart, Brian David; Gill, David Dennis
The current study has examined four cases of a central receiver concentrated solar power plant with thermal energy storage using the DELSOL and SOLERGY computer codes. The current state-of-the-art base case was compared with a theoretical high temperature case which was based on the scaling of some input parameters and the estimation of other parameters based on performance targets from the Department of Energy SunShot Initiative. This comparison was done for both current and high temperature cases in two configurations: a surround field with an external cylindrical receiver and a north field with a single cavity receiver. There is a fairly dramatic difference between the design point and annual average performance, especially in the solar field and receiver subsystems, and also in energy losses due to the thermal energy storage being full to capacity. Additionally, there are relatively small differences (<2%) in annual average efficiencies between the Base and High Temperature cases, despite an increase in thermal-to-electric conversion efficiency of over 8%. This is due to the increased thermal losses at higher temperature and operational losses due to subsystem start-up and shut-down. Thermal energy storage can mitigate some of these losses by utilizing larger thermal energy storage to ensure that the electric power production system does not need to stop and re-start as often, but solar energy is inherently transient. Economic and cost considerations were not considered here, but will have a significant impact on solar thermal electric power production strategy and sizing.
Designing an efficient LT-code with unequal error protection for image transmission
NASA Astrophysics Data System (ADS)
Marques, F. S.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.
2015-10-01
The use of images from earth observation satellites spans different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by onboard imaging devices and must be transmitted to the Earth using a communication system. Even though a high resolution image can produce a better Quality of Service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In a remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bits; a channel code can be used to reduce the effect of such failures. In 2002, the Luby Transform code (LT-code) was introduced and shown to be very efficient under the binary erasure channel model. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression recommended by CCSDS. To design an LT-code with unequal error protection, the bit stream produced by the algorithm recommended by CCSDS must be partitioned into M disjoint sets of bits. Using the weighted approach, the LT-code produces M different failure probabilities, p1, ..., pM, one for each set of bits, leading to a total failure probability p which is an average of p1, ..., pM. In general, the parameters of the LT-code with unequal error protection are chosen using a heuristic procedure. In this work, we analyze the problem of choosing the LT-code parameters to optimize two figures of merit: (a) the probability of achieving a minimum acceptable PSNR, and (b) the mean PSNR, given that the minimum acceptable PSNR has been achieved. Given the rate-distortion curve achieved by the CCSDS-recommended algorithm, this work establishes a closed form for the mean PSNR (given that the minimum acceptable PSNR has been achieved) as a function of p1, ..., pM. The main contribution of this work is the study of a criterion to select the parameters p1, ..., pM to optimize the performance of image transmission.
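To make the weighted approach concrete, the sketch below generates one LT-encoded symbol with a biased choice of source blocks, so that blocks in more important sets are covered more often and therefore see a lower failure probability p_i. The degree distribution, weights, and block layout are all illustrative assumptions.

    import random

    def lt_encode_symbol(blocks, degree_dist, weights, rng=random):
        """Return (chosen indices, XOR of chosen blocks) for one LT symbol.
        `degree_dist` is a list of (degree, probability) pairs; `weights`
        biases block selection toward the better-protected sets."""
        degrees, probs = zip(*degree_dist)
        d = rng.choices(degrees, weights=probs)[0]
        chosen = set()
        while len(chosen) < d:
            chosen.add(rng.choices(range(len(blocks)), weights=weights)[0])
        out = 0
        for i in chosen:
            out ^= blocks[i]
        return sorted(chosen), out

    # Toy use: 8 integer blocks, the first 4 twice as likely to be covered.
    blocks = [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0]
    dist = [(1, 0.1), (2, 0.5), (3, 0.4)]
    print(lt_encode_symbol(blocks, dist, weights=[2] * 4 + [1] * 4))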
Galactic Cosmic Ray Event-Based Risk Model (GERM) Code
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.
2013-01-01
This software describes the transport and energy deposition during the passage of galactic cosmic rays through astronaut tissues during space travel, or of heavy ion beams through patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe temporal and microspatial densities in order to correlate DNA and oxidative damage with non-targeted effects such as signaling and bystander effects. These are ignored or impossible in the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of beam line, shielding of target samples, and sample holders; and estimation of basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes numerical estimates of basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at the NASA Space Radiation Laboratory (NSRL) for the purpose of simulating space radiation biological effects. In the first option, properties of monoenergetic beams are treated. In the second option, the transport of beams in different materials is treated; similar biophysical properties as in the first option are evaluated for the primary ion and its secondary particles, and additional properties related to the nuclear fragmentation of the beam are evaluated. The GERM code is a computationally efficient Monte-Carlo heavy-ion-beam model. It includes accurate models of LET, range, residual energy, and straggling, and the quantum multiple scattering fragmentation (QMSGRG) nuclear database.
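One of the per-cell quantities mentioned above, the Poisson distribution of particle traversals for a specified cellular area, reduces to a one-line formula; the fluence and area values in the sketch are illustrative, not NSRL beam parameters.

    import math

    def traversal_probability(fluence_per_cm2, area_um2, k):
        """Poisson probability that a nucleus of the given cross-sectional
        area is traversed exactly k times at the given particle fluence."""
        mean_hits = fluence_per_cm2 * area_um2 * 1e-8  # 1 um^2 = 1e-8 cm^2
        return math.exp(-mean_hits) * mean_hits ** k / math.factorial(k)

    # e.g. fluence 1e7 particles/cm^2 through a 100 um^2 nucleus:
    print(traversal_probability(1e7, 100.0, 0))  # P(no hit) = exp(-10)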
Small passenger car transmission test-Chevrolet 200 transmission
NASA Technical Reports Server (NTRS)
Bujold, M. P.
1980-01-01
The small passenger car transmission was tested to supply electric vehicle manufacturers with technical information regarding the performance of commercially available transmissions, which would enable them to design a more energy efficient vehicle. With this information the manufacturers could estimate vehicle driving range as well as speed and torque requirements for specific road load performance characteristics. A 1979 Chevrolet Model 200 automatic transmission was tested per a passenger car automatic transmission test code (SAE J651b), which required drive performance, coast performance, and no-load test conditions. The transmission attained maximum efficiencies in the mid-eighty percent range for both drive performance tests and coast performance tests. Torque, speed and efficiency curves map the complete performance characteristics of the Chevrolet Model 200 transmission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casarini, L.; Bonometto, S.A.; Tessarotto, E.
2016-08-01
We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale-factor-dependent equation of state of the form w = w_0 + (1 - a)w_a. The extension is based on the mapping rule between non-linear spectra of DE models with constant equation of state and those with a time-varying one originally introduced in ref. [40]. Using a series of N-body simulations we show that the spectral equivalence is accurate to sub-percent level across the same range of modes and redshift covered by the Coyote suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with the w_0-w_a parametrization. According to the same criteria we have developed a numerical code that we have implemented in a dedicated module for the CAMB code, which can be used in combination with the Coyote Emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.
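For reference, the w_0-w_a (Chevallier-Polarski-Linder) parametrization used by the extended emulator has a closed-form dark-energy density evolution; the two functions below state the standard textbook relations in Python (they are not code from pkequal).

    import math

    def w_cpl(a, w0, wa):
        """Equation of state w(a) = w0 + (1 - a) * wa."""
        return w0 + (1.0 - a) * wa

    def rho_de_ratio(a, w0, wa):
        """rho_DE(a) / rho_DE(1) for the CPL parametrization:
        a**(-3 * (1 + w0 + wa)) * exp(-3 * wa * (1 - a))."""
        return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))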
Numerical Study of HHFW Heating in FRC Plasmas
NASA Astrophysics Data System (ADS)
Ceccherini, Francesco; Galeotti, Laura; Brambilla, Marco; Dettrick, Sean; Yang, Xiaokang; TAE Team
2017-10-01
The Tri Alpha Energy (TAE) code RF-Pisa is a Finite Larmor Radius (FLR) full wave code developed over the years to study RF heating in the Field Reversed Configuration (FRC) in both the ion and electron cyclotron regimes. The FLR approximation is perfectly adequate to address RF propagation and absorption at the fundamental and second harmonic frequencies (as in the minority heating scheme), but it is not able to describe higher order processes such as high-harmonic fast waves (HHFW). The latter have frequencies lying between the ion cyclotron and lower hybrid resonances, and they may represent a viable path toward an efficient method to deposit energy inside the FRC separatrix, as suggested by recent results obtained at NSTX. A significant upgrade of RF-Pisa to include HHFW has been undertaken. In particular, the so-called "quasi local approximation" originally proposed for toroidal geometries has been re-derived for cylindrical geometry, and a new HHFW version of RF-Pisa concurrent with the FLR version has been developed. Here we present the first results of applying the new code to FRC equilibria and discuss the features of the dispersion relations and the absorption processes which characterize this novel regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poirier, M.; Gaufridy de Dortan, F. de
A collisional-radiative model describing nonlocal-thermodynamic-equilibrium plasmas is developed. It is based on the HULLAC (Hebrew University Lawrence Livermore Atomic Code) suite for the transition rates, under the zero-temperature radiation field hypothesis. Two variants of the model are presented: the first is configuration-averaged, while the second is a detailed-level version. Comparisons are made between them in the case of a carbon plasma; they show that the configuration-averaged code gives correct results for an electron temperature T_e = 10 eV (or higher) but fails at lower temperatures such as T_e = 1 eV. The validity of the configuration-averaged approximation is discussed: the intuitive criterion requiring that the average configuration-energy dispersion be less than the electron thermal energy turns out to be a necessary but far from sufficient condition. Another condition, based on the resolution of a modified rate-equation system, is proposed. Its efficiency is demonstrated in the case of low-temperature plasmas. Finally, it is shown that near-threshold autoionization cascade processes may induce a severe failure of the configuration-average formalism.
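For readers unfamiliar with the numerical core of such models, the toy sketch below solves a steady-state rate-equation system R n = 0 under a normalization constraint. The three-level rate matrix is invented for illustration and has no connection to the HULLAC rates.

```python
# A toy steady-state collisional-radiative solve: populations n satisfy
# R n = 0 with sum(n) = 1, where R collects collisional and radiative
# rates. The 3-level rates below are invented for illustration.
import numpy as np

# off-diagonal R[i, j] = rate from level j to level i (arbitrary units)
R = np.array([[0.0, 2.0, 0.5],
              [1.0, 0.0, 3.0],
              [0.2, 0.8, 0.0]])
np.fill_diagonal(R, -R.sum(axis=0))  # diagonals balance total loss

# replace one (redundant) balance equation with the normalization sum(n) = 1
A = R.copy()
A[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
populations = np.linalg.solve(A, b)
print(populations, populations.sum())
```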
An Extreme Learning Machine-Based Neuromorphic Tactile Sensing System for Texture Recognition.
Rasouli, Mahdi; Chen, Yi; Basu, Arindam; Kukreja, Sunil L; Thakor, Nitish V
2018-04-01
Despite significant advances in computational algorithms and the development of tactile sensors, artificial tactile sensing remains strikingly less efficient and capable than human tactile perception. Inspired by the efficiency of biological systems, we aim to develop a neuromorphic system for tactile pattern recognition. We particularly target texture recognition, as it is one of the most necessary and challenging tasks for artificial sensory systems. Our system consists of a piezoresistive fabric material as the sensor to emulate skin, an interface that produces spike patterns to mimic neural signals from mechanoreceptors, and an extreme learning machine (ELM) chip to analyze spiking activity. Benefiting from the intrinsic advantages of biologically inspired event-driven systems and the massively parallel, energy-efficient processing capabilities of the ELM chip, the proposed architecture offers a fast and energy-efficient alternative for processing tactile information. Moreover, it provides the opportunity to develop low-cost tactile modules for large-area applications by integrating sensors and processing circuits. We demonstrate the recognition capability of our system in a texture discrimination task, where it achieves a classification accuracy of 92% for categorization of ten graded textures. Our results confirm that there exists a tradeoff between response time and classification accuracy (and information transfer rate): a faster decision can be achieved at early time steps or by using a shorter time window, but this results in deterioration of the classification accuracy and information transfer rate. We further observe a tradeoff between classification accuracy and input spike rate (and thus energy consumption). Our work substantiates the importance of developing efficient sparse codes for encoding sensory data to improve energy efficiency. These results have significance for a wide range of wearable, robotic, prosthetic, and industrial applications.
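The abstract's sensor-to-spike interface can be illustrated with a simple delta-modulation encoder that emits ON/OFF events when the signal crosses a threshold relative to the last event level. This is only a generic sketch of such an interface; the threshold and the toy signal are assumptions, not the authors' parameters.

```python
# A minimal sketch of a delta-modulation spike encoder of the kind the
# abstract describes (sensor samples -> spike events). The threshold and
# toy signal are illustrative assumptions.
import numpy as np

def delta_spike_encode(signal, threshold=0.05):
    """Emit +1/-1 spike events whenever the signal moves by `threshold`
    relative to the last event level (ON/OFF mechanoreceptor analogue)."""
    events, level = [], signal[0]
    for t, x in enumerate(signal):
        while x - level >= threshold:   # ON spikes
            level += threshold
            events.append((t, +1))
        while level - x >= threshold:   # OFF spikes
            level -= threshold
            events.append((t, -1))
    return events

t = np.linspace(0, 1, 500)
pressure = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))  # toy texture signal
print(len(delta_spike_encode(pressure)))
```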
Everyone wins - a program to upgrade energy efficiency in manufactured housing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, A.D.; Onisko, S.A.; Sandahl, L.J.
1994-03-01
Other regions might well benefit from this case history, which illustrates how a region marshalled its resources to bring manufactured housing--a significant share of its new residential sector--into the modern era of energy efficiency. Everyone was a winner. In the Pacific Northwest, as in many parts of the country, a significant proportion of new homes are HUD-code manufactured (so-called mobile) homes. About 25% of new single-family houses in the Pacific Northwest are manufactured homes. They represent an even larger share - nearly 40% - of new electrically heated housing in the region, and this share has been growing. When Congress enacted the Pacific Northwest Power Planning Act of 1980, it also permitted the four Northwest states to establish an interstate compact body - the Northwest Power Planning Council - and required the Council to produce an integrated resource plan for the region served by the Bonneville Power Administration, the federal power marketing and transmission agency that operates the region's major transmission grid and sells most of its bulk power. Both the law and the plan charge Bonneville with developing cost-effective programs to save electricity in all end-use sectors through improved energy efficiency.
Numerical studies on alpha production from high energy proton beam interaction with Boron
NASA Astrophysics Data System (ADS)
Moustaizis, S. D.; Lalousis, P.; Hora, H.; Korn, G.
2017-05-01
Numerical investigations of high-energy proton beam interaction with a high-density boron plasma allow us to simulate conditions relevant to the alpha production observed in recent experimental measurements. The experiments measure the alpha production due to p-11B nuclear fusion reactions when a laser-driven high-energy proton beam interacts with a boron plasma produced by laser beam interaction with solid boron. The alpha production, and consequently the efficiency of the process, depends on the initial proton beam energy, the proton beam density, the boron plasma density and temperature, and their temporal evolution. The main advantage of the p-11B nuclear fusion reaction is the production of three alphas with a total energy of 8.9 MeV, which could enhance the alpha heating effect and improve the alpha production; this effect is termed in the international literature the alpha avalanche effect. Numerical results using a multi-fluid, global particle and energy balance code show the alpha production efficiency as a function of the initial proton beam energy, the boron plasma density, the initial boron plasma temperature, and the temporal evolution of the plasma parameters. The simulations enable us to determine the interaction conditions (proton beam-boron plasma) for which the alpha heating effect becomes important.
NASA Astrophysics Data System (ADS)
Ofori-Boadu, Andrea N. Y. A.
High energy consumption in the United States has been influenced by population, climate, income, and other contextual factors. In past decades, U.S. energy policies have pursued energy efficiency as a national strategy for reducing U.S. environmental degradation and dependence on foreign oil. The quest for improved energy efficiency has led to the development of energy-efficient technologies and programs. The implementation of energy programs in the complex U.S. socio-technical environment is believed to promote the diffusion of energy efficiency technologies. However, opponents doubt that these programs have the capacity to significantly reduce U.S. energy consumption. To contribute to the ongoing discussion, this quantitative study investigated the relationships among electricity consumption/intensity, energy programs, and contextual factors in the U.S. buildings sector. Specifically, this study sought to identify the significant predictors of electricity consumption and intensity, and to estimate the overall impact of selected energy programs on electricity consumption and intensity. Using state-level secondary data for 51 U.S. states from 2006 to 2009, seven random-effects panel data regression models confirmed the existence of significant relationships among some energy programs, contextual factors, and electricity consumption/intensity. The most significant predictors of improved electricity efficiency included the price of electricity, the public benefits funds program, the building energy codes program, the financial and informational incentives program, and the Leadership in Energy and Environmental Design (LEED) program. Consistently, the Southern region of the U.S. was associated with high electricity consumption and intensity, while the U.S. commercial sector was the greatest beneficiary of energy programs. On average, energy programs were responsible for approximately 7% of the variation observed in electricity consumption and intensity, over and above the variation associated with the contextual factors. This study also has implications for program implementation theory, revealing that resource availability, stringency, and adherence had significant impacts on program outcomes. Using seven classification tables, this study categorized and matched the predictors of electricity consumption and intensity with the specific energy sectors in which they demonstrated significance. Project developers, energy advocates, policy makers, program administrators, building occupants, and other stakeholders could use these findings, in conjunction with other empirical findings, to make informed decisions regarding the adoption, continuation, or discontinuation of energy programs, while taking contextual factors into consideration. The adoption and efficient implementation of the most significant programs could reduce U.S. electricity consumption and, in the long term, probably reduce U.S. energy waste, environmental degradation, energy imports, energy prices, and demands for expanding energy generation and distribution infrastructure.
Performance analysis of parallel gravitational N-body codes on large GPU clusters
NASA Astrophysics Data System (ADS)
Huang, Si-Yi; Spurzem, Rainer; Berczik, Peter
2016-01-01
We compare the performance of two very different parallel gravitational N-body codes for astrophysical simulations on large Graphics Processing Unit (GPU) clusters, NBODY6++ and Bonsai, each a pioneer in its own field and overlapping on certain mutual scales. We carry out benchmarks of the two codes by analyzing their performance, accuracy, and efficiency through modeling of structure decomposition and timing measurements. We find that both codes are heavily optimized to leverage the computational potential of GPUs, as their performance approaches half of the maximum single-precision performance of the underlying GPU cards. With such performance, we predict that a speed-up of 200-300 can be achieved when up to 1k processors and GPUs are employed simultaneously. We discuss quantitative comparisons of the two codes, finding that in identical cases Bonsai adopts larger time steps as well as larger relative energy errors than NBODY6++, typically 10-50 times larger, depending on the chosen code parameters. Although the two codes are built for different astrophysical applications, under specified conditions they may overlap in performance at certain physical scales, allowing the user to choose either one by fine-tuning parameters accordingly.
DCT based interpolation filter for motion compensation in HEVC
NASA Astrophysics Data System (ADS)
Alshin, Alexander; Alshina, Elena; Park, Jeong Hoon; Han, Woo-Jin
2012-10-01
The High Efficiency Video Coding (HEVC) draft standard has the challenging goal of doubling coding efficiency compared to H.264/AVC. Many aspects of the traditional hybrid coding framework were improved during development of the new standard. Motion-compensated prediction, in particular the interpolation filter, is one area that was improved significantly over H.264/AVC. This paper presents the details of the interpolation filter design in the draft HEVC standard. The coding efficiency improvements over the H.264/AVC interpolation filter are studied, and experimental results are presented, which show a 4.0% average bitrate reduction for the luma component and an 11.3% average bitrate reduction for the chroma component. The coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
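The core idea of a DCT-based interpolation filter (DCT-IF) can be sketched as follows: the filter taps for a fractional sample position are obtained by composing a forward DCT with an inverse DCT evaluated at a non-integer index. The sketch below illustrates the principle only; the filters in the HEVC draft are additionally scaled, rounded, and truncated to fixed-length integer taps.

```python
# Sketch of the DCT-based interpolation-filter (DCT-IF) principle: derive
# filter taps for a fractional sample position from the DCT basis
# (forward DCT-II followed by an inverse DCT evaluated at a non-integer
# index). Illustrative only; not the normative HEVC integer filters.
import numpy as np

def dct_if_taps(N, frac):
    """Taps h[n] interpolating at position p = (N/2 - 1) + frac
    from N surrounding integer samples."""
    p = (N // 2 - 1) + frac
    n = np.arange(N)
    h = np.ones(N) / N                    # k = 0 (DC) term
    for k in range(1, N):
        h += (2.0 / N) * np.cos(np.pi * k * (2 * n + 1) / (2 * N)) \
                       * np.cos(np.pi * k * (2 * p + 1) / (2 * N))
    return h

taps = dct_if_taps(8, 0.5)                # 8-tap half-sample filter
print(np.round(taps, 4), taps.sum())      # taps sum to ~1
```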
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1989-01-01
The performance of bandwidth-efficient trellis codes on channels with phase jitter, or channels disturbed by jamming and impulse noise, is analyzed. A heuristic algorithm was developed for the construction of bandwidth-efficient trellis codes with any constraint length up to about 30, any signal constellation, and any code rate. The construction of trellis codes with good distance profiles for sequential decoding, and a comparison of random coding bounds for trellis-coded modulation schemes, are also discussed.
NASA Astrophysics Data System (ADS)
Jia, Weile; Wang, Jue; Chi, Xuebin; Wang, Lin-Wang
2017-02-01
LS3DF, the linear scaling three-dimensional fragment method, is an efficient linear-scaling ab initio total-energy electronic structure calculation code based on a divide-and-conquer strategy. In this paper, we present our GPU implementation of the LS3DF code. Our test results show that the GPU code can calculate systems of about ten thousand atoms fully self-consistently on the order of 10 minutes using thousands of computing nodes, making electronic structure calculations of 10,000-atom nanosystems routine work. This is 4.5-6 times faster than CPU calculations using the same number of nodes on the Titan machine at the Oak Ridge Leadership Computing Facility (OLCF). Such speedup is achieved by (a) carefully redesigning the computationally heavy kernels and (b) redesigning the communication pattern for heterogeneous supercomputers.
Neutron skyshine from intense 14-MeV neutron source facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakamura, T.; Hayashi, K.; Takahashi, A.
1985-07-01
The dose distribution and the spectrum variation of neutrons due to the skyshine effect have been measured with a high-efficiency rem counter, a multisphere spectrometer, and an NE-213 scintillator in the environment surrounding an intense 14-MeV neutron source facility. The dose distribution and the energy spectra of neutrons around the facility used as a skyshine source have also been measured to enable absolute evaluation of the skyshine effect. The skyshine effect was analyzed by two multigroup Monte Carlo codes, NIMSAC and MMCR-2, by two discrete ordinates S_n codes, ANISN and DOT3.5, and by the shield structure design code for skyshine, SKYSHINE-II. The calculated results show good agreement with the measured results in absolute values. These experimental results should be useful as benchmark data for skyshine analysis and for shielding design of fusion facilities.
QOS-aware error recovery in wireless body sensor networks using adaptive network coding.
Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah
2014-12-29
Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time, life-critical infrastructures that require a strict guarantee of quality of service (QoS) in terms of latency, error rate, and reliability. Given the criticality of healthcare and medical applications, WBSNs need to fulfill both users'/applications' and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. Network coding-based error recovery is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory, and hardware cost. However, under dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both the network and the user/application perspective. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to adapt in both contexts, and thus provides improved QoS support in terms of reliability, energy efficiency, and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery, and network lifetime compared to its counterparts.
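A minimal sketch of the error-recovery principle behind network coding is shown below: data packets are sent together with one XOR-coded packet, so a single lost packet can be rebuilt without retransmission. The adaptive mechanism proposed in the paper would, in addition, vary the amount of redundancy with channel state and QoS needs; the payloads here are invented.

```python
# Minimal sketch of XOR-based network coding for error recovery: send k
# data packets plus one XOR parity packet; any single lost data packet
# can be rebuilt without retransmission. Payloads are toy examples.
from functools import reduce

def xor_packets(packets):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*packets))

data = [b"hr=72 ", b"spo2=98", b"temp=37"]          # toy sensor payloads
padded = [p.ljust(7) for p in data]                 # equal-length packets
parity = xor_packets(padded)

received = [padded[0], None, padded[2]]             # packet 1 lost
lost_idx = received.index(None)
recovered = xor_packets([p for p in received if p is not None] + [parity])
assert recovered == padded[lost_idx]
print(recovered)
```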
NASA Technical Reports Server (NTRS)
Jaffe, Richard L.; Pattengill, Merle D.; Schwenke, David W.
1989-01-01
Strategies for constructing global potential energy surfaces from a limited number of accurate ab initio electronic energy calculations are discussed. Generally, these data are concentrated in small regions of configuration space (e.g., in the vicinity of saddle points and energy minima), and difficulties arise in generating a potential function that is globally well behaved. Efficient computer codes for carrying out classical trajectory calculations on vector and parallel processors are also described. Illustrations are given from recent work on the following chemical systems: Ca + HF yields CaF + H, H + H + H2 yields H2 + H2, N + O2 yields NO + O, and O + N2 yields NO + N. The dynamics and kinetics of metathesis, dissociation, recombination, energy transfer, and complex formation processes are discussed.
Energy-efficient ovens for unpolluted balady bread
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gadalla, M.A.; Mansour, M.S.; Mahdy, E.
A new bread oven for local balady bread has been developed, tested, and presented in this work. The design has the advantage of being efficient and producing unpolluted bread. An extensive study of conventional and available designs was carried out to help develop the new design. Evaluation of the conventional design is based on numerous tests and measurements. A computer code utilizing the indirect method has been developed to evaluate the thermal performance of the tested ovens. The present design achieves a thermal efficiency about 50% higher than the conventional ones, and its capital cost is much lower than that of other imported designs. Thus, the present design achieves higher efficiency, pollutant-free products, and lower cost. Moreover, it may be modified for different types of bread-baking systems.
Surface passivation of InP solar cells with InAlAs layers
NASA Technical Reports Server (NTRS)
Jain, Raj K.; Flood, Dennis J.; Landis, Geoffrey A.
1993-01-01
The efficiency of indium phosphide solar cells is limited by high surface recombination. The effect of a lattice-matched In(0.52)Al(0.48)As window layer on InP solar cells is investigated using the numerical code PC-1D. It was found that the InAlAs layer significantly enhances p(+)n cell efficiency, while no appreciable improvement is seen for n(+)p cells. The conduction-band energy discontinuity at the heterojunction helps mitigate surface recombination. The efficiency of an optimally designed InP cell improves from 15.4 percent to 23 percent AM0 for a 10-nm-thick InAlAs layer. The efficiency improvement diminishes with increasing InAlAs layer thickness, due to light absorption in the window layer.
Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen
2014-01-01
This paper proposes a novel model for intra coding in High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes spatial statistical correlation for optimal prediction based on 2-D contexts, in addition to formulating data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks with the joint distribution of succeeding discrete cosine transform coefficients. As the sample size grows, the prediction error is asymptotically upper-bounded by the training error under a decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection in rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to HEVC intra coding. PMID:25505829
Guide to Operating and Maintaining EnergySmart Schools
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Through a commitment to high performance, school districts are discovering that smart energy choices can create lasting benefits for students, communities, and the environment. For example, an energy-efficient school district with 4,000 students can save as much as $160,000 a year in energy costs. Over 10 years, those savings can reach $1.6 million, translating into the ability to hire more teachers, purchase more textbooks and computers, or invest in additional high performance facilities. Beyond these bottom-line benefits, schools can better foster student health, decrease absenteeism, and serve as centers of community life. The U.S. Department of Energy's EnergySmart Schools Program promotes a 30 percent improvement in existing school energy use. It also encourages the building of new schools that exceed code (ASHRAE 90.1-1999) by 50 percent or more. The program provides resources like this Guide to Operating and Maintaining EnergySmart Schools to assist school decision-makers in planning, financing, operating, and maintaining energy-efficient, high performance schools. It also offers education and training for building industry professionals. Operations and maintenance refers to all scheduled and unscheduled actions for preventing equipment failure or decline, with the goal of increasing efficiency, reliability, and safety. A preventative maintenance program is the organized and planned performance of maintenance activities in order to prevent system or production problems or failures from occurring. In contrast, deferred maintenance or reactive maintenance (also called diagnostic or corrective maintenance) is conducted to address an existing problem. This guide is a primary resource for developing and implementing a district- or school-wide operations and maintenance (O&M) program that focuses on energy efficiency. The EnergySmart Schools Solutions companion CD contains additional supporting information for design, renovation, and retrofit projects. The objective of this guide is to provide organizational and technical information for integrating energy and high performance facility management into existing O&M practices. The guide allows users to adapt and implement suggested O&M strategies to address specific energy efficiency goals. It recognizes and expands on existing tools and resources that are widely used throughout the high performance school industry. External resources are referenced throughout the guide and are also listed in the EnergySmart Schools O&M Resource List (Appendix J). While this guide emphasizes the impact of the energy efficiency component of O&M, it encourages taking a holistic approach to maintaining a high performance school, including consideration of environmental factors where energy plays a direct or indirect role, such as indoor air quality, site selection, building orientation, and water efficiency. Resources to support these overlapping aspects are cited throughout the guide.
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has previously been proposed. To facilitate implementation of the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given, to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
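A hedged sketch of the calibration arithmetic implied by the representative point method: the efficiency for a volume sample is obtained from a point-source efficiency measured at the representative point, multiplied by an energy-dependent self-absorption correction. The numbers below are illustrative placeholders, not CREPT-MCNP output.

```python
# Hedged sketch of the representative-point calibration: volume-sample
# efficiency = point efficiency at the representative point x
# self-absorption correction. All numbers are illustrative placeholders.
point_efficiency = {59.5: 0.052, 661.7: 0.012, 1332.5: 0.007}  # per energy (keV)
self_absorption = {59.5: 0.78, 661.7: 0.95, 1332.5: 0.97}      # correction factor

def volume_sample_efficiency(energy_keV):
    """Efficiency for the volume sample at a given gamma energy."""
    return point_efficiency[energy_keV] * self_absorption[energy_keV]

for e in point_efficiency:
    print(f"{e:7.1f} keV: {volume_sample_efficiency(e):.4f}")
```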
NASA Astrophysics Data System (ADS)
Feng, Bing
Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are readily available for parallel computation. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors, because there is a limit to the domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied to the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation among processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10^2 processors and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement from the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models: the beam space charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied with an eye toward guiding the design of the upgrade and validating the code. Moderate emittance growth is observed when the bunch population is increased fivefold, but the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth. The enhanced QuickPIC is then used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac (ERL), motivated by the extremely small emittance and high peak currents anticipated in that machine. A tune shift is found in the simulation; however, emittance growth of the electron beam in the electron cloud is not observed for ERL parameters.
A preliminary design of the collinear dielectric wakefield accelerator
NASA Astrophysics Data System (ADS)
Zholents, A.; Gai, W.; Doran, S.; Lindberg, R.; Power, J. G.; Strelnikov, N.; Sun, Y.; Trakhtenberg, E.; Vasserman, I.; Jing, C.; Kanareykin, A.; Li, Y.; Gao, Q.; Shchegolkov, D. Y.; Simakov, E. I.
2016-09-01
A preliminary design of a multi-meter-long collinear dielectric wakefield accelerator that achieves highly efficient transfer of the drive bunch energy to the wakefields and to the witness bunch is considered. It is made from 0.5-m-long accelerator modules containing a vacuum chamber with dielectric-lined walls, a quadrupole wiggler, an rf coupler, and a BPM assembly. The single-bunch breakup instability is a major limiting factor for accelerator efficiency, and BNS damping is applied to obtain stable multi-meter-long propagation of a drive bunch. Numerical simulations using a 6D particle tracking computer code are performed, and tolerances to various errors are defined.
Evaluation of a Stirling engine heater bypass with the NASA Lewis nodal-analysis performance code
NASA Technical Reports Server (NTRS)
Sullivan, T. J.
1986-01-01
In support of the U.S. Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Research Center investigated whether bypassing the P-40 Stirling engine heater during regenerative cooling would improve engine performance. The Lewis nodal-analysis Stirling engine computer simulation was used for this investigation. Results for the heater-bypass concept showed no significant improvement in the indicated thermal efficiency for the P-40 Stirling engine operating at full-power and part-power conditions. Optimizing the heater tube length produced a small increase in the indicated thermal efficiency with the heater-bypass concept.
Efficient parallel simulation of CO2 geologic sequestration in saline aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Doughty, Christine; Wu, Yu-Shu
2007-01-01
An efficient parallel simulator for large-scale, long-term CO2 geologic sequestration in saline aquifers has been developed. The parallel simulator is a three-dimensional, fully implicit model that solves large, sparse linear systems arising from discretization of the partial differential equations for mass and energy balance in porous and fractured media. The simulator is based on the ECO2N module of the TOUGH2 code and inherits all the process capabilities of the single-CPU TOUGH2 code, including a comprehensive description of the thermodynamics and thermophysical properties of H2O-NaCl-CO2 mixtures, modeling single- and/or two-phase isothermal or non-isothermal flow processes, two-phase mixtures, fluid phases appearing or disappearing, as well as salt precipitation or dissolution. The new parallel simulator uses MPI for parallel implementation, the METIS software package for simulation domain partitioning, and the iterative parallel linear solver package Aztec for solving linear equations on multiple processors. In addition, the parallel simulator has been implemented with an efficient communication scheme. Test examples show that a linear or super-linear speedup can be obtained on Linux clusters as well as on supercomputers. Because of the significant improvement in both simulation time and memory requirements, the new simulator provides a powerful tool for tackling larger-scale and more complex problems than can be solved by single-CPU codes. A high-resolution simulation example is presented that models buoyant convection, induced by a small increase in brine density caused by dissolution of CO2.
Silencing of the pentose phosphate pathway genes influences DNA replication in human fibroblasts.
Fornalewicz, Karolina; Wieczorek, Aneta; Węgrzyn, Grzegorz; Łyżeń, Robert
2017-11-30
Previous reports and our recently published data indicated that some enzymes of glycolysis and the tricarboxylic acid cycle can affect the genome replication process by changing either the efficiency or the timing of DNA synthesis in normal human cells. Both of these pathways are connected with the pentose phosphate pathway (PPP). The PPP supports cell growth by generating energy and precursors for nucleotides and amino acids. We therefore asked whether silencing of genes coding for enzymes of the pentose phosphate pathway may also affect the control of DNA replication in human fibroblasts. Particular genes coding for PPP enzymes were partially silenced with specific siRNAs; such cells remained viable. We found that silencing of the H6PD, PRPS1, and RPE genes caused less efficient entry into the S phase and a decrease in the efficiency of DNA synthesis. On the other hand, in cells treated with siRNA against the G6PD, RBKS, and TALDO genes, the fraction of cells entering the S phase was increased; however, only in the case of G6PD and TALDO was the ratio of BrdU incorporation into DNA significantly changed. The presented results, together with our previously published studies, illustrate the complexity of the influence of genes coding for central carbon metabolism on the control of DNA replication in human fibroblasts, and indicate which of them are especially important in this process.
An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes
Vincenti, H.; Lobet, M.; Lehe, R.; ...
2016-09-19
In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to take full advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines, which are, along with the field-gathering routines, among the most time-consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit-wide data registers). Results show a factor of ×2 to ×2.5 speed-up in double precision for particle shape factors of orders 1-3. The new algorithm can be applied as-is on future KNL (Knights Landing) architectures that will include AVX-512 instruction sets with 512-bit register lengths (8 doubles/16 singles). Program summary: Program Title: vec_deposition. Program Files doi: http://dx.doi.org/10.17632/nh77fv9k8c.1. Licensing provisions: BSD 3-Clause. Programming language: Fortran 90. External routines/libraries: OpenMP > 4.0. Nature of problem: Exascale architectures will have many-core processors per node with long vector data registers capable of performing one single instruction on multiple data during one clock cycle. Data register lengths are expected to double every four years, which pushes for new portable solutions for efficiently vectorizing Particle-In-Cell codes on these future many-core architectures. One of the main hotspot routines of the PIC algorithm is current/charge deposition, for which there has been no efficient and portable vector algorithm. Solution method: We provide an efficient and portable vector algorithm for current/charge deposition routines that uses a new data structure, which significantly reduces gather/scatter operations. Vectorization is controlled using OpenMP 4.0 compiler directives, which ensures portability across different architectures. Restrictions: We do not provide the full PIC algorithm with an executable, but only vector routines for current/charge deposition. These scalar/vector routines can be used as library routines in a 3D Particle-In-Cell code. However, to get the best performance out of the vector routines, two requirements must be satisfied: (1) the code should implement particle tiling (as explained in the manuscript) to maximize cache reuse and reduce memory accesses that can hinder vector performance; the routines can be used directly on each particle tile; and (2) the code should be compiled with a Fortran 90 compiler (e.g., Intel, GNU, or Cray) with proper alignment flags and compiler alignment directives (more details in the README file).
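A rough NumPy analogue of the deposition problem conveys why naive vectorization is unsafe: several particles may deposit to the same grid node, so the scatter-add must be collision-safe. The actual PICSAR routines are Fortran 90 with OpenMP SIMD directives and a tiled data structure; the sketch below is illustrative only.

```python
# Illustrative NumPy analogue of charge deposition (the actual PICSAR
# routines are Fortran 90). np.add.at performs the unordered scatter-add
# that makes naive vectorization of deposition unsafe on SIMD hardware.
import numpy as np

def deposit_cic_1d(positions, weights, n_cells, dx):
    """Order-1 (cloud-in-cell) charge deposition on a periodic 1-D grid."""
    rho = np.zeros(n_cells)
    cell = np.floor(positions / dx).astype(int)
    frac = positions / dx - cell                          # offset in cell
    np.add.at(rho, cell, weights * (1.0 - frac))          # left node share
    np.add.at(rho, (cell + 1) % n_cells, weights * frac)  # right node share
    return rho

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 100_000)
rho = deposit_cic_1d(x, np.full(x.size, 1e-3), n_cells=64, dx=1.0 / 64)
print(rho.sum())  # total deposited charge is conserved
```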
Concatenated Coding Using Trellis-Coded Modulation
NASA Technical Reports Server (NTRS)
Thompson, Michael W.
1997-01-01
In the late seventies and early eighties, a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK), or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted to developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep-space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results indicate that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes that use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
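The bandwidth-expansion claim can be checked with back-of-the-envelope arithmetic: with TCM the redundancy is absorbed by the larger constellation, so the only rate loss is the RS outer code, whereas a rate-1/2 convolutional inner code doubles the symbol count before the RS overhead is counted. A minimal sketch, consistent with (but not reproducing) the quoted ranges:

```python
# Back-of-the-envelope check of the bandwidth-expansion comparison: TCM
# adds redundancy in the constellation, so only the RS outer code costs
# bandwidth; a rate-1/2 convolutional inner code doubles the symbols.
n, k = 255, 223                      # CCSDS Reed-Solomon outer code
rs_expansion = n / k                 # ~1.14x from the outer code alone
tcm_concat = rs_expansion            # TCM inner code: no extra bandwidth
conv_concat = rs_expansion * 2.0     # rate-1/2 convolutional inner code
print(f"TCM+RS:  {100 * (tcm_concat - 1):.0f}% expansion")
print(f"conv+RS: {100 * (conv_concat - 1):.0f}% expansion")
```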
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.
2011-02-01
This best practices guide is the twelfth in a series of guides for builders produced by PNNL for the U.S. Department of Energy's Building America program. The guide is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in cold and very cold climates can build homes that achieve whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America's research teams. Building America brings together the nation's leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased cost to homeowners. Currently, Building America homes achieve energy savings 40% greater than the Building America benchmark home (a home built to mid-1990s building practices, roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC, and those requirements are highlighted in the text. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth-efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8), with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
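A minimal sketch of the depth-4 block symbol interleaver described above: codewords are written row-wise and transmitted column-wise, so an inner-decoder error burst is spread across four RS codewords. Toy codeword lengths are used for brevity; real CCSDS codewords carry 255 symbols.

```python
# Minimal depth-4 block symbol interleaver: write codewords as rows,
# transmit column by column, so a channel burst touches each codeword
# at most once per burst of length <= depth. Toy codeword length of 8.
def interleave(codewords):
    depth, length = len(codewords), len(codewords[0])
    return [codewords[r][c] for c in range(length) for r in range(depth)]

def deinterleave(stream, depth):
    length = len(stream) // depth
    return [[stream[c * depth + r] for c in range(length)] for r in range(depth)]

# Four toy "RS codewords" of length 8
cws = [[f"w{r}s{c}" for c in range(8)] for r in range(4)]
tx = interleave(cws)
burst = tx[10:14]                       # burst of 4 consecutive symbols
assert len({s[:2] for s in burst}) == 4  # hits all four codewords once
assert deinterleave(tx, 4) == cws
print(burst)
```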
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vine, E.
Based on an evaluation of 10 residential new construction programs, primarily sponsored by investor-owned utilities in the United States, we find that many of these programs are in dire straits and in danger of being discontinued, because evaluations that include only direct program effects conclude that they are not cost-effective. We believe that the cost-effectiveness of residential new construction programs can be improved by: (1) promoting technologies and advanced building design practices that significantly exceed state and federal standards; (2) reducing program marketing costs and developing more effective marketing strategies; (3) recognizing the role of these programs in increasing compliance with existing state building codes; and (4) allowing utilities to obtain an "energy-savings credit" from utility regulators for program spillover (market transformation) impacts. Utilities can also leverage their resources in seizing these opportunities by forming strong, trusting partnerships with the building community and with local and state government.
The employment impacts of economy-wide investments in renewable energy and energy efficiency
NASA Astrophysics Data System (ADS)
Garrett-Peltier, Heidi
This dissertation examines the employment impacts of investments in renewable energy and energy efficiency in the U.S. A broad expansion of the use of renewable energy in place of carbon-based energy, together with investments in energy efficiency, constitutes a prominent strategy to slow or reverse the effects of anthropogenic climate change. This study first explores the literature on the employment impacts of these investments, which to date consists mainly of input-output (I-O) studies or case studies of renewable energy and energy efficiency (REEE). Researchers are constrained, however, in their ability to use the I-O model to study REEE, since current industrial classification codes do not recognize the industry as such. I develop and present two methods to overcome this constraint within the I-O framework: the synthetic and integrated approaches. In the former, I proxy the REEE industry by creating a vector of final demand based on the industrial spending patterns of REEE firms as found in the secondary literature. In the integrated approach, I collect primary data through a nationwide survey of REEE firms and integrate these data into the existing I-O tables to explicitly identify the REEE industry and estimate the employment impacts resulting from both upstream and downstream linkages with other industries. The size of the REEE employment multiplier is sensitive to the choice of method, and is higher using the synthetic approach than the integrated approach. I find that, using both methods, the employment level per $1 million of demand is approximately three times greater for the REEE industry than for fossil fuel (FF) industries. This implies that a shift to clean energy will have positive net employment impacts. The positive effects stem mainly from the higher labor intensity of REEE relative to FF, as well as from higher domestic content and lower average wages. The findings suggest that as we transition away from a carbon-based energy system to more sustainable and low-carbon energy sources, approximately three jobs will be created in clean energy sectors for each job lost in the fossil fuel sector.
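The I-O employment calculation underlying both approaches can be sketched with the standard Leontief identity, jobs = e^T (I - A)^(-1) y, where A holds inter-industry technical coefficients, e jobs per dollar of gross output, and y a final-demand vector. All numbers below are invented for illustration; they are not the dissertation's data.

```python
# Toy Leontief input-output employment calculation:
# jobs = e^T (I - A)^(-1) y. All coefficients are hypothetical.
import numpy as np

A = np.array([[0.10, 0.05, 0.02],     # technical coefficients; rows/cols:
              [0.20, 0.15, 0.10],     # stand-in REEE, manufacturing,
              [0.05, 0.10, 0.05]])    # services (invented values)
e = np.array([8.0, 4.0, 6.0]) * 1e-6  # jobs per dollar of gross output

y = np.array([1e6, 0.0, 0.0])         # $1M final demand on the REEE sector
total_output = np.linalg.solve(np.eye(3) - A, y)  # (I - A)^(-1) y
print(f"jobs per $1M of REEE demand: {e @ total_output:.1f}")
```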
Optimized atom position and coefficient coding for matching pursuit-based image compression.
Shoa, Alireza; Shirani, Shahram
2009-12-01
In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding, and we optimize the encoder parameters. Our proposed algorithm outperforms existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm achieves better rate-distortion performance than JPEG 2000 at low bit rates.
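To make the encoded objects concrete, the sketch below runs plain matching pursuit over a random dictionary: each iteration records an (atom position, atom coefficient) pair, precisely the two correlated streams the proposed encoder compresses jointly. The dictionary and signal are synthetic.

```python
# Plain matching pursuit over a synthetic dictionary: each iteration
# picks the best-matching atom and records (position, coefficient) --
# the two streams whose joint statistics the encoder exploits.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    residual, picks = signal.copy(), []
    for _ in range(n_atoms):
        corr = dictionary.T @ residual          # match against all atoms
        idx = int(np.argmax(np.abs(corr)))      # atom "position"
        coef = corr[idx]                        # atom coefficient
        residual = residual - coef * dictionary[:, idx]
        picks.append((idx, coef))
    return picks, residual

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x = rng.standard_normal(64)
picks, r = matching_pursuit(x, D, 10)
print(np.linalg.norm(x), np.linalg.norm(r))     # residual energy shrinks
```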
Methodology for Evaluating Cost-effectiveness of Commercial Energy Code Changes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Liu, Bing
This document lays out the U.S. Department of Energy's (DOE's) method for evaluating the cost-effectiveness of energy code proposals and editions. The evaluation is applied to provisions or editions of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.1 and the International Energy Conservation Code (IECC). The method follows standard life-cycle cost (LCC) economic analysis procedures. Cost-effectiveness evaluation requires three steps: 1) evaluating the energy and energy cost savings of code changes, 2) evaluating the incremental and replacement costs related to the changes, and 3) determining the cost-effectiveness of energy code changes based on those costs and savings over time.
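A hedged sketch of the step-3 comparison: a code change is judged cost-effective when the present value of its energy cost savings over the analysis period exceeds its incremental first cost plus replacement costs. The discount rate, analysis period, and dollar figures below are placeholders, not DOE inputs.

```python
# Sketch of the LCC cost-effectiveness test: present value of energy
# cost savings vs. incremental + replacement costs. Inputs are
# placeholder assumptions, not DOE figures.
def present_value(annual_saving, rate, years):
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

annual_energy_saving = 450.0   # $/yr from step 1 (assumed)
incremental_cost = 3000.0      # $ first cost from step 2 (assumed)
replacement_pv = 400.0         # $ present value of replacements (assumed)
pv_savings = present_value(annual_energy_saving, rate=0.03, years=30)

net_lcc_saving = pv_savings - incremental_cost - replacement_pv
verdict = "cost-effective" if net_lcc_saving > 0 else "not cost-effective"
print(f"PV savings ${pv_savings:,.0f}; net LCC ${net_lcc_saving:,.0f} -> {verdict}")
```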
NASA Astrophysics Data System (ADS)
Sikder, Somali; Ghosh, Shila
2018-02-01
This paper presents the construction of a unipolar transposed modified Walsh code (TMWC) and an analysis of its performance in optical code-division multiple-access (OCDMA) systems. Specifically, the signal-to-noise ratio, bit error rate (BER), cardinality, and spectral efficiency were investigated. The theoretical analysis demonstrated that a wavelength-hopping time-spreading system using TMWC is robust against multiple-access interference and more spectrally efficient than systems using other existing OCDMA codes; in particular, the spectral efficiency was calculated to be 1.0370 when TMWC of weight 3 was employed. The BER and eye pattern for the designed TMWC were also successfully obtained using OptiSystem simulation software. The results indicate that the proposed code design is promising for enhancing network capacity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Taylor, Zachary T.
Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One challenge in developing residential building models to characterize new residential building stock is allowing flexibility to address variability in house features such as geometry, configuration, and HVAC systems. Researchers solved this problem in a novel way by creating a simulation structure capable of generating fully functional EnergyPlus batch runs from a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multi-family buildings, four common foundation types, and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent the majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.
Methodology for the preliminary design of high performance schools in hot and humid climates
NASA Astrophysics Data System (ADS)
Im, Piljae
A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates is presented. The toolkit proposed in this research allows decision makers without simulation knowledge to evaluate energy-efficient measures for K-5 schools easily and accurately, which would contribute to the accelerated dissemination of energy-efficient design. For the development of the toolkit, a survey was first performed to identify high performance measures available today in new K-5 school buildings. An existing case-study school building in a hot and humid climate was then selected and analyzed to understand the energy use patterns of a school building and to support development of a calibrated simulation. Based on this information, an as-built, calibrated simulation was developed. To accomplish this, five calibration steps were performed to match the simulation results with the measured energy use: (1) using an actual 2006 weather file with measured solar radiation; (2) modifying the lighting and equipment schedules using ASHRAE's RP-1093 methods; (3) using actual equipment performance curves (i.e., scroll chiller); (4) using Winkelmann's method for underground floor heat transfer; and (5) modifying the HVAC and room setpoint temperatures based on measured field data. Next, the calibrated simulation of the case-study K-5 school was compared to an ASHRAE Standard 90.1-1999 code-compliant school, and the energy savings potential from applying several high performance measures to an equivalent code-compliant school was estimated. The high performance measures applied included the recommendations of the ASHRAE Advanced Energy Design Guide (AEDG) for K-12 and other high performance measures from the literature review, as well as a daylighting strategy and solar PV and thermal systems. The results show that the net energy consumption of the final high performance school with the solar thermal and solar PV systems would be 1,162.1 MMBtu, corresponding to an EUI of 14.9 kBtu/sqft-yr. The calculated final energy and cost savings over the code-compliant school are 68.2% and 69.9%, respectively. As a final step, specifications for a simplified, easy-to-use toolkit were developed, along with a prototype screenshot of the toolkit. The toolkit is expected to be used by non-technical decision makers to select and evaluate high performance measures for a new school building in terms of energy and cost savings in a quick and easy way.
Taheri-Garavand, Amin; Karimi, Fatemeh; Karimi, Mahmoud; Lotfi, Valiullah; Khoobbakht, Golmohammad
2018-06-01
The aim of this study is to fit predictive models using response surface methodology and an artificial neural network, and to optimize a hot-air drying process of banana slices for maximum acceptability using the desirability function methodology. The drying air temperature, air velocity, and drying time were chosen as independent factors, and moisture content, drying rate, energy efficiency, and exergy efficiency were the dependent variables (responses). A rotatable central composite design was used to develop models for the responses in the response surface methodology. Moreover, isoresponse contour plots were useful for predicting results while performing only a limited set of experiments. The optimum operating conditions obtained from the artificial neural network models were a moisture content of 0.14 g/g, a drying rate of 1.03 g water/g h, an energy efficiency of 0.61, and an exergy efficiency of 0.91, when the air temperature, air velocity, and drying time were equal to -0.42 (74.2 ℃), 1.00 (1.50 m/s), and -0.17 (2.50 h) in coded units, respectively.
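The coded units mentioned above follow the usual central-composite-design convention, actual = center + coded * half_range. A minimal sketch with a hypothetical design center and step size (consistent with, but not taken from, the reported pair -0.42 -> 74.2 ℃):

```python
# Coded-unit convention in central composite designs:
# actual = center + coded * half_range. The center and half_range below
# are hypothetical illustration values, not the paper's design settings.
def decode(coded, center, half_range):
    return center + coded * half_range

# hypothetical design: temperature centered at 75 C with a 2 C step
print(decode(-0.42, center=75.0, half_range=2.0))  # ~74.2 C
```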
Implementation of Energy Code Controls Requirements in New Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.; Hatten, Mike
Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement, and verification is beyond the expertise of most building code officials; yet studies that measure the savings from energy codes assume that the requirements are implemented and working correctly. The objective of the current research is to evaluate the degree to which high-impact controls requirements included in commercial energy codes are properly designed, commissioned, and implemented in new buildings, and the degree to which these control requirements realize their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to better understand their activities as they relate to energy-code-required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code-required control measures are being designed, commissioned, correctly implemented, and functioning in new buildings. The third step comprised compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.
Energy Efficiency Maximization of Practical Wireless Communication Systems
NASA Astrophysics Data System (ADS)
Eraslan, Eren
Energy consumption of modern wireless communication systems is rapidly growing due to the ever-increasing data demand and the advanced solutions employed to address this demand, such as multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) techniques. These MIMO systems are power hungry; however, they are capable of changing their transmission parameters, such as the number of spatial streams, number of transmitter/receiver antennas, modulation, code rate, and transmit power, and can thus choose the best mode out of possibly thousands of modes in order to optimize an objective function. This problem is referred to as the link adaptation problem. In this work, we focus on link adaptation for energy efficiency maximization, defined as choosing the optimal transmission mode to maximize the number of successfully transmitted bits per unit energy consumed by the link. We model the energy consumption and throughput performance of a MIMO-OFDM link and develop a practical link adaptation protocol, which senses the channel conditions and changes its transmission mode in real time. It turns out that the brute force search usually assumed in previous works is prohibitively complex, especially when there are large numbers of transmit power levels to choose from. We analyze the relationship between energy efficiency and transmit power, and prove that the energy efficiency of a link is a single-peaked quasiconcave function of transmit power. This leads us to develop a low-complexity algorithm that finds a near-optimal transmit power and takes this dimension out of the search space. We further prune the search space by analyzing the singular value decomposition of the channel and excluding the modes that use a higher number of spatial streams than the channel can support. These algorithms and our novel formulations provide simpler computations and limit the search space to a much smaller set, reducing the computational complexity by orders of magnitude without sacrificing performance. The result of this work is a highly practical link adaptation protocol for maximizing the energy efficiency of modern wireless communication systems. Simulation results show orders of magnitude gain in the energy efficiency of the link. We also implemented the link adaptation protocol on real-time MIMO-OFDM radios and report on the experimental results. To the best of our knowledge, this is the first reported testbed capable of performing energy-efficient fast link adaptation using PHY layer information.
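A minimal sketch of the single-peaked property the abstract proves: if EE(p) is quasiconcave in transmit power p, a near-optimal power can be found without brute force, for example by ternary search. The EE model below is a hypothetical stand-in (Shannon throughput over total consumed power), not the paper's link model:

import math

def ee(p, bandwidth=1.0, gain=10.0, noise=1.0, circuit_power=0.5):
    """Toy energy efficiency: throughput per unit total consumed power [bits/J]."""
    throughput = bandwidth * math.log2(1.0 + gain * p / noise)
    return throughput / (p + circuit_power)

def ternary_search_max(f, lo, hi, tol=1e-6):
    """Maximize a single-peaked (quasiconcave) function f on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

p_star = ternary_search_max(ee, 1e-6, 10.0)
print(f"near-optimal power: {p_star:.4f} W, EE: {ee(p_star):.4f} bits/J")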
High efficiency inductive output tubes with intense annular electron beams
NASA Astrophysics Data System (ADS)
Appanam Karakkad, J.; Matthew, D.; Ray, R.; Beaudoin, B. L.; Narayan, A.; Nusinovich, G. S.; Ting, A.; Antonsen, T. M.
2017-10-01
For mobile ionospheric heaters, it is necessary to develop highly efficient RF sources capable of delivering radiation in the frequency range from 3 to 10 MHz with an average power at the megawatt level. A promising source capable of offering these parameters is a grid-less version of the inductive output tube (IOT), also known as a klystrode. In this paper, studies analyzing the efficiency of grid-less IOTs are described. The basic trade-offs needed to reach high efficiency are investigated; in particular, the trade-off between the peak current and the duration of the current micro-pulse is analyzed. A particle-in-cell code is used to self-consistently calculate the distribution in axial and transverse momentum and in total electron energy from the cathode to the collector. The efficiency of IOTs with collectors of various configurations is examined. It is shown that the efficiency of IOTs can be in the 90% range even without using depressed collectors.
An efficient coding algorithm for the compression of ECG signals using the wavelet transform.
Rajoub, Bashar A
2002-04-01
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals was investigated; the results were obtained by compressing and decompressing the test signals. The proposed algorithm was compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
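A minimal sketch (Python/NumPy) of two of the steps described above: picking a threshold that retains a desired energy packing efficiency, and run-length encoding the binary significance map. This is an illustration of the idea, not the paper's exact bit allocation:

import numpy as np

def threshold_for_epe(coeffs, epe=0.99):
    """Magnitude threshold such that the kept coefficients hold `epe` of the total energy."""
    mags = np.sort(np.abs(coeffs))[::-1]
    energy = np.cumsum(mags ** 2)
    k = int(np.searchsorted(energy, epe * energy[-1])) + 1
    return mags[k - 1]  # keep coefficients with |c| >= this value

def run_length_encode(bits):
    """Encode a 0/1 significance map as (value, run-length) pairs."""
    runs, count = [], 1
    for prev, cur in zip(bits[:-1], bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((int(prev), count))
            count = 1
    runs.append((int(bits[-1]), count))
    return runs

coeffs = np.random.laplace(scale=1.0, size=256)  # stand-in for DWT coefficients
t = threshold_for_epe(coeffs, epe=0.98)
sig_map = (np.abs(coeffs) >= t).astype(np.uint8)
print(run_length_encode(sig_map)[:5])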
Wan, Jan; Xiong, Naixue; Zhang, Wei; Zhang, Qinchao; Wan, Zheng
2012-01-01
The reliability of wireless sensor networks (WSNs) can be greatly affected by failures of sensor nodes due to energy exhaustion or the influence of harsh external environmental conditions. Such failures seriously affect data persistence and collection efficiency. Strategies based on network coding technology for WSNs, such as LTCDS, can improve data persistence without massive redundancy. However, due to the poor intermediate performance of LTCDS, a serious ‘cliff effect’ may appear during the decoding period, and source data are hard to recover from sink nodes before sufficient encoded packets are collected. In this paper, the influence of the coding degree distribution strategy on the ‘cliff effect’ is observed, and the prioritized data storage and dissemination algorithm PLTD-ALPHA is presented to achieve better data persistence and recovery performance. With PLTD-ALPHA, the data in sensor network nodes present a trend in which their degree distribution increases along with the predefined degree level, and persistent data packets can be submitted to the sink node in order of degree. Finally, the performance of PLTD-ALPHA is evaluated, and experiment results show that PLTD-ALPHA can greatly improve data collection performance and decoding efficiency, while data persistence is not notably affected. PMID:23235451
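The tunable knob here is the coding degree distribution. As a point of reference, the sketch below (Python) samples from the ideal soliton distribution used by classic LT codes; PLTD-ALPHA's prioritized, level-dependent distribution is not reproduced here:

import random

def ideal_soliton(k):
    """Ideal soliton distribution: p(1) = 1/k, p(d) = 1/(d*(d-1)) for d = 2..k."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def sample_degree(k):
    """Draw an encoding degree by inverse-CDF sampling."""
    probs = ideal_soliton(k)
    u, acc = random.random(), 0.0
    for d, p in enumerate(probs, start=1):
        acc += p
        if u <= acc:
            return d
    return k

print([sample_degree(100) for _ in range(10)])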
Flow Analysis of a Gas Turbine Low- Pressure Subsystem
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1997-01-01
The NASA Lewis Research Center is coordinating a project to numerically simulate aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The numerical model solves the three-dimensional Navier-Stokes flow equations through all components within the low-pressure subsystem as well as the external flow around the engine nacelle. The Advanced Ducted Propfan Analysis Code (ADPAC), which is being developed jointly by Allison Engine Company and NASA, is the Navier-Stokes flow code being used for LPS simulation. The majority of the LPS project is being done under a NASA Lewis contract with Allison. Other contributors to the project are NYMA and the University of Toledo. For this project, the Energy Efficient Engine designed by GE Aircraft Engines is being modeled. This engine includes a low-pressure system and a high-pressure system. An inlet, a fan, a booster stage, a bypass duct, a lobed mixer, a low-pressure turbine, and a jet nozzle comprise the low-pressure subsystem within this engine. The tightly coupled flow analysis evaluates aerodynamic interactions between all components of the LPS. The engine's high-pressure core is simulated with a one-dimensional thermodynamic cycle code in order to provide boundary conditions to the detailed LPS model. This core consists of a high-pressure compressor, a combustor, and a high-pressure turbine. The three-dimensional LPS flow model is coupled to the one-dimensional core engine model to provide a "hybrid" flow model of the complete gas turbine Energy Efficient Engine. The resulting hybrid engine model evaluates the detailed interaction between the LPS components at design and off-design engine operating conditions while considering the lumped-parameter performance of the core engine.
Meher, J K; Meher, P K; Dash, G N; Raval, M K
2012-01-01
The first step in the gene identification problem based on genomic signal processing is to convert character strings into numerical sequences. These numerical sequences are then analysed spectrally, or with digital filtering techniques, for period-3 peaks, which are present in exons (coding regions) and absent in introns (non-coding regions). In this paper, we show that single-indicator sequences can be generated by encoding schemes based on physico-chemical properties. Two new methods are proposed for generating single-indicator sequences based on hydration energy and dipole moments. The proposed methods produce a high peak at exon locations and effectively suppress false exons (intron regions having a higher peak than exon regions), resulting in a high discriminating factor, sensitivity and specificity.
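A minimal sketch of the indicator-sequence idea: map each base to a numeric property value and measure the spectral power at frequency 1/3 in a sliding window. The property table below is a hypothetical placeholder; the paper's hydration-energy and dipole-moment values are not reproduced here:

import numpy as np

PROPERTY = {"A": 0.5, "C": 1.2, "G": 0.9, "T": 0.3}  # hypothetical per-base values

def period3_power(dna, window=351):
    """Sliding-window DFT power at the period-3 frequency (bin window // 3)."""
    x = np.array([PROPERTY[b] for b in dna.upper()], dtype=float)
    powers = []
    for start in range(0, len(x) - window + 1, 3):
        w = x[start:start + window]
        w = w - w.mean()                          # remove the DC component before the DFT
        X = np.fft.rfft(w)
        powers.append(abs(X[window // 3]) ** 2)   # 117/351 is exactly 1/3
    return np.array(powers)

demo = "".join(np.random.choice(list("ACGT"), size=1200))
print(period3_power(demo)[:5])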
Probabilistic Amplitude Shaping With Hard Decision Decoding and Staircase Codes
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi; Steiner, Fabian
2018-05-01
We consider probabilistic amplitude shaping (PAS) as a means of increasing the spectral efficiency of fiber-optic communication systems. In contrast to previous works in the literature, we consider probabilistic shaping with hard decision decoding (HDD). In particular, we apply the PAS recently introduced by Böcherer et al. to a coded modulation (CM) scheme with bit-wise HDD that uses a staircase code as the forward error correction code. We show that the CM scheme with PAS and staircase codes yields significant gains in spectral efficiency with respect to the baseline scheme using a staircase code and a standard constellation with uniformly distributed signal points. Using a single staircase code, the proposed scheme achieves performance within 0.57–1.44 dB of the corresponding achievable information rate for a wide range of spectral efficiencies.
The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics
NASA Astrophysics Data System (ADS)
Ganander, Hans
2003-10-01
For many reasons, the size of wind turbines on the rapidly growing wind energy market is increasing, and the relations between the aeroelastic properties of these new large turbines are changing. Modifications of turbine designs and control concepts are also influenced by the growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues: the code itself and design optimization. This technique can be used for the rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen and as interest in design optimization grows.
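A minimal sketch of the same workflow using open tools: derive an equation of motion from a Lagrangian symbolically and emit Fortran automatically. The single pendulum below is a toy stand-in for a turbine model, and SymPy replaces Mathematica:

import sympy as sp
from sympy.utilities.codegen import codegen

t = sp.symbols("t")
m, g, l = sp.symbols("m g l", positive=True)
theta = sp.Function("theta")(t)

# Lagrangian L = T - V for a planar pendulum
L = sp.Rational(1, 2) * m * l**2 * sp.diff(theta, t)**2 + m * g * l * sp.cos(theta)

# Lagrange's equation: d/dt(dL/d(theta_dot)) - dL/d(theta) = 0
eom = sp.diff(sp.diff(L, sp.diff(theta, t)), t) - sp.diff(L, theta)
theta_ddot = sp.solve(sp.Eq(eom, 0), sp.diff(theta, t, 2))[0]  # -> -g*sin(theta)/l

# Emit Fortran source for the angular acceleration
expr = theta_ddot.subs({theta: sp.symbols("th")})
results = codegen(("theta_ddot", expr), "F95", header=False)
print(results[0][1])  # contents of the generated .f90 file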
Space shuttle main engine numerical modeling code modifications and analysis
NASA Technical Reports Server (NTRS)
Ziebarth, John P.
1988-01-01
The user of computational fluid dynamics (CFD) codes must be concerned with the accuracy and efficiency of the codes if they are to be used for timely design and analysis of complicated three-dimensional fluid flow configurations. A brief discussion of how accuracy and efficiency affect the CFD solution process is given. A more detailed discussion of how efficiency can be enhanced by using a few Cray Research Inc. utilities to address vectorization is presented, and these utilities are applied to a three-dimensional Navier-Stokes CFD code (INS3D).
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
NASA Astrophysics Data System (ADS)
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2013-01-01
The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-19
...: Notice. SUMMARY: The DOE participates in the code development process of the International Code Council... notice outlines the process by which DOE produces code change proposals, and participates in the ICC code development process. FOR FURTHER INFORMATION CONTACT: Jeremiah Williams, U.S. Department of Energy, Office of...
Strategies for Energy Efficient Resource Management of Hybrid Programming Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dong; Supinski, Bronis de; Schulz, Martin
2013-01-01
Many scientific applications are programmed using hybrid programming models that use both message-passing and shared-memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared-memory or message-passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74% on average and up to 13.8%) with some performance gain (up to 7.5%) or negligible performance loss.
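A minimal sketch of the selection step that such schemes perform: given predicted time and power for each (concurrency, frequency) configuration, pick the one minimizing energy within a performance-loss budget. The prediction table is hypothetical; the paper derives its predictions from statistical models:

def pick_config(predictions, baseline_time, max_slowdown=1.05):
    """predictions: {(threads, ghz): (time_s, power_w)}; returns the lowest-energy feasible config."""
    feasible = {
        cfg: t * p  # energy = time * average power
        for cfg, (t, p) in predictions.items()
        if t <= baseline_time * max_slowdown
    }
    return min(feasible, key=feasible.get)

preds = {
    (16, 2.4): (10.0, 200.0),  # all cores at top frequency (baseline)
    (16, 1.8): (10.4, 150.0),  # DVFS only
    (8, 2.4):  (10.2, 140.0),  # DCT only
    (8, 1.8):  (11.5, 110.0),  # both, too slow for the budget here
}
print(pick_config(preds, baseline_time=10.0))  # -> (8, 2.4)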
Green's function methods in heavy ion shielding
NASA Technical Reports Server (NTRS)
Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.
1993-01-01
An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.
Enhancing Scalability and Efficiency of the TOUGH2_MP for LinuxClusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu
2006-04-17
TOUGH2_MP, the parallel version of the TOUGH2 code, has been enhanced by implementing more efficient communication schemes. This enhancement is achieved by reducing the number of small messages and the volume of large messages. The message exchange speed is further improved by using non-blocking communications for both linear and nonlinear iterations. In addition, we have modified the AZTEC parallel linear-equation solver to use non-blocking communication. Through improved code structuring and bug fixing, the new version of the code is now more stable, while demonstrating similar or even better nonlinear iteration convergence speed than the original TOUGH2 code. As a result, the new version of TOUGH2_MP is significantly more efficient. In this paper, the scalability and efficiency of the parallel code are demonstrated by solving two large-scale problems. The testing results indicate that the speedup of the code may depend on both problem size and complexity. In general, the code has excellent scalability in memory requirement as well as computing time.
Mechanism on brain information processing: Energy coding
NASA Astrophysics Data System (ADS)
Wang, Rubin; Zhang, Zhikang; Jiao, Xianfa
2006-09-01
According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, the authors present a new scientific theory that offers a unique mechanism for brain information processing. They demonstrate that the neural coding produced by the activity of the brain is well described by the theory of energy coding. Because the energy coding model can reveal mechanisms of brain information processing based upon known biophysical properties, they can not only reproduce various experimental results of neuroelectrophysiology but also quantitatively explain the recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, they estimate that the theory has very important consequences for quantitative research of cognitive function.
Energy coding in biological neural networks
Zhang, Zhikang
2007-01-01
According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, we present a new scientific theory that offers a unique mechanism for brain information processing. We demonstrate that the neural coding produced by the activity of the brain is well described by our theory of energy coding. Because the energy coding model can reveal mechanisms of brain information processing based upon known biophysical properties, we can not only reproduce various experimental results of neuro-electrophysiology, but also quantitatively explain the recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, we estimate that the theory has very important consequences for quantitative research of cognitive function. PMID:19003513
2016-01-01
The Annual Energy Outlook 2016 (AEO2016) Extended Policies case includes selected policies that go beyond current laws and regulations. Existing tax credits that have scheduled reductions and sunset dates are assumed to remain unchanged through 2040. Other efficiency policies, including corporate average fuel economy standards, appliance standards, and building codes, are expanded beyond current provisions; and the U.S. Environmental Protection Agency (EPA) Clean Power Plan (CPP) regulations that reduce carbon dioxide emissions from electric power generation are tightened after 2030.
Embedded wavelet packet transform technique for texture compression
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-09-01
A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kemp, G. E., E-mail: kemp10@llnl.gov; Colvin, J. D.; Fournier, K. B.
2015-05-15
Tailored, high-flux, multi-keV x-ray sources are desirable for studying x-ray interactions with matter for various civilian, space and military applications. For this study, we focus on designing an efficient laser-driven non-local thermodynamic equilibrium 3–5 keV x-ray source from photon-energy-matched Ar K-shell and Ag L-shell targets at sub-critical densities (∼n_c/10) to ensure supersonic, volumetric laser heating with minimal losses to kinetic energy, thermal x rays and laser-plasma instabilities. Using HYDRA, a multi-dimensional, arbitrary Lagrangian-Eulerian, radiation-hydrodynamics code, we performed a parameter study by varying initial target density and laser parameters for each material using conditions readily achievable on the National Ignition Facility (NIF) laser. We employ a model, benchmarked against Kr data collected on the NIF, that uses flux-limited Lee-More thermal conductivity and multi-group implicit Monte-Carlo photonics with non-local thermodynamic equilibrium, detailed super-configuration accounting opacities from CRETIN, an atomic-kinetics code. While the highest power laser configurations produced the largest x-ray yields, we report that the peak simulated laser to 3–5 keV x-ray conversion efficiencies of 17.7% and 36.4% for Ar and Ag, respectively, occurred at lower powers between ∼100–150 TW. For identical initial target densities and laser illumination, the Ag L-shell is observed to have ≳10× higher emissivity per ion per deposited laser energy than the Ar K-shell. Although such low-density Ag targets have not yet been demonstrated, simulations of targets fabricated using atomic layer deposition of Ag on silica aerogels (∼20% by atomic fraction) suggest similar performance to atomically pure metal foams and that either fabrication technique may be worth pursuing for an efficient 3–5 keV x-ray source on NIF.
Misdaq, M A; Aitnouh, F; Khajmi, H; Ezzahery, H; Berrazzouk, S
2001-08-01
A Monte Carlo computer code for determining detection efficiencies of the CR-39 and LR-115 II solid-state nuclear track detectors (SSNTD) for alpha-particles emitted by the uranium and thorium series inside different natural material samples was developed. The influence of the alpha-particle initial energy on the SSNTD detection efficiencies was investigated. Radon (222Rn) and thoron (220Rn) alpha-activities per unit volume were evaluated inside and outside the natural material samples by exploiting data obtained for the detection efficiencies of the SSNTD utilized for the emitted alpha-particles, and measuring the resulting track densities. Results obtained were compared to those obtained by other methods. Radon emanation coefficients have been determined for some of the considered material samples.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
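A minimal sketch of the ARQ bookkeeping analyzed above. The per-block probabilities are hypothetical inputs, not values derived from a specific inner/outer code pair:

def selective_repeat_throughput(p_inner_fail, p_outer_detect, code_rate):
    """A block is retransmitted if the inner decoder fails OR the outer code
    detects residual errors afterwards; with selective-repeat ARQ the
    throughput efficiency is roughly code_rate * P(block accepted)."""
    p_retransmit = p_inner_fail + (1.0 - p_inner_fail) * p_outer_detect
    return code_rate * (1.0 - p_retransmit)

# Example: inner decoding fails on 1% of blocks, the outer code flags 0.5%
# of the remainder, and the overall concatenated code rate is 0.8.
print(selective_repeat_throughput(0.01, 0.005, 0.8))  # ≈ 0.788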
High-current fast electron beam propagation in a dielectric target.
Klimo, Ondrej; Tikhonchuk, V T; Debayle, A
2007-01-01
Recent experiments demonstrate an efficient transformation of a high intensity laser pulse into a relativistic electron beam with a very high current density exceeding 10¹² A cm⁻². The propagation of such a beam inside the target is possible if its current is neutralized. This phenomenon is not well understood, especially in dielectric targets. In this paper, we study the propagation of a high current density electron beam in a plastic target using a particle-in-cell simulation code. The code includes both ionization of the plastic and collisions of newborn electrons. The numerical results are compared with a relatively simple analytical model and a reasonable agreement is found. The temporal evolution of the beam velocity distribution, the spatial density profile, and the propagation velocity of the ionization front are analyzed and their dependencies on the beam density and energy are discussed. The beam energy losses are mainly due to the target ionization induced by the self-generated electric field and the return current. For the highest beam density, a two-stream instability is observed to develop in the plasma behind the ionization front and it contributes to the beam energy losses.
NASA Astrophysics Data System (ADS)
Solodov, A. A.; Rosenberg, M. J.; Myatt, J. F.; Shaw, J. G.; Seka, W.; Epstein, R.; Short, R. W.; Follett, R. K.; Regan, S. P.; Froula, D. H.; Radha, P. B.; Michel, P.; Chapman, T.; Hohenberger, M.
2017-10-01
Laser-plasma interaction (LPI) instabilities, such as stimulated Raman scattering (SRS) and two-plasmon decay, can be detrimental for direct-drive inertial confinement fusion because of target preheat by the high-energy electrons they generate. The radiation-hydrodynamic code DRACO was used to design planar-target experiments at the National Ignition Facility that generated plasma and interaction conditions relevant to ignition direct-drive designs (I_L ∼ 10¹⁵ W/cm², T_e > 3 keV, density gradient scale lengths of L_n ∼ 600 μm). Laser-energy conversion efficiency to hot electrons of 0.5% to 2.5% with temperatures of 45 to 60 keV was inferred from the experiment when the laser intensity at the quarter-critical surface increased from 6 to 15 × 10¹⁴ W/cm². LPI was dominated by SRS, as indicated by the measured scattered-light spectra. Simulations of SRS using the LPI code LPSE have been performed and compared with predictions of theoretical models. Implications for ignition-scale direct-drive experiments will be discussed. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
Realistic radiative MHD simulation of a solar flare
NASA Astrophysics Data System (ADS)
Rempel, Matthias D.; Cheung, Mark; Chintzoglou, Georgios; Chen, Feng; Testa, Paola; Martinez-Sykora, Juan; Sainz Dalda, Alberto; DeRosa, Marc L.; Viktorovna Malanushenko, Anna; Hansteen, Viggo H.; De Pontieu, Bart; Carlsson, Mats; Gudiksen, Boris; McIntosh, Scott W.
2017-08-01
We present a recently developed version of the MURaM radiative MHD code that includes coronal physics in terms of optically thin radiative loss and field aligned heat conduction. The code employs the "Boris correction" (semi-relativistic MHD with a reduced speed of light) and a hyperbolic treatment of heat conduction, which allow for efficient simulations of the photosphere/corona system by avoiding the severe time-step constraints arising from Alfven wave propagation and heat conduction. We demonstrate that this approach can be used even in dynamic phases such as a flare. We consider a setup in which a flare is triggered by flux emergence into a pre-existing bipolar active region. After the coronal energy release, efficient transport of energy along field lines leads to the formation of flare ribbons within seconds. In the flare ribbons we find downflows for temperatures lower than ~5 MK and upflows at higher temperatures. The resulting soft X-ray emission shows a fast rise and slow decay, reaching a peak corresponding to a mid C-class flare. The post-reconnection energy release in the corona leads to average particle energies reaching 50 keV (500 MK under the assumption of a thermal plasma). We show that hard X-ray emission from the corona computed under the assumption of thermal bremsstrahlung can produce a power-law spectrum due to the multi-thermal nature of the plasma. The electron energy flux into the flare ribbons (classic heat conduction with free streaming limit) is highly inhomogeneous and reaches peak values of about 3×10¹¹ erg/cm²/s in a small fraction of the ribbons, indicating regions that could potentially produce hard X-ray footpoint sources. We demonstrate that these findings are robust by comparing simulations computed with different values of the saturation heat flux as well as the "reduced speed of light".
Selecting a Control Strategy for Plug and Process Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobato, C.; Sheppy, M.; Brackney, L.
2012-09-01
Plug and Process Loads (PPLs) are building loads that are not related to general lighting, heating, ventilation, cooling, and water heating, and typically do not provide comfort to the building occupants. PPLs in commercial buildings account for almost 5% of U.S. primary energy consumption. On an individual building level, they account for approximately 25% of the total electrical load in a minimally code-compliant commercial building, and can exceed 50% in an ultra-high efficiency building such as the National Renewable Energy Laboratory's (NREL) Research Support Facility (RSF) (Lobato et al. 2010). Minimizing these loads is a primary challenge in the design and operation of an energy-efficient building. A complex array of technologies that measure and manage PPLs has emerged in the marketplace. Some fall short of manufacturer performance claims, however. NREL has been actively engaged in developing an evaluation and selection process for PPLs control, and is using this process to evaluate a range of technologies for active PPLs management that will cap RSF plug loads. Using a control strategy to match plug load use to users' required job functions is a huge untapped potential for energy savings.
Design and development of a chopping and deflecting system for the high current injector at IUAC
NASA Astrophysics Data System (ADS)
Kedia, Sanjay Kumar; Mehta, R.
2018-05-01
The Low Energy Beam Transport (LEBT) section of the High Current Injector (HCI) incorporates a Chopping cum Deflecting System (CDS). The CDS comprises a deflecting system and a pair of slits that remove dark current and produce a time-bunched beam of 60 ns at repetition rates of 4, 2, 1, 0.5, 0.25 and 0.125 MHz. The distinguishing feature of the design is the use of a multi-plate deflecting structure with low capacitance to optimize the electric field, which in turn results in higher efficiency in terms of achievable ion current. To maximize the effective electric field and its uniformity, the gap between the deflecting plates has been varied and a semi-circular contour has been incorporated on the deflecting plates. As a result, the electric field variation is less than ±0.5% within the plate length. The length of the deflecting plates was chosen to maximize the transmission efficiency. Since the velocity of the charged particles in the LEBT section is constant, the separation between two successive sets of deflecting plates has been kept constant to match the ions' transit time within the gap, which is nearly 32 ns. A square pulse was chosen, instead of a sinusoidal one, to increase the transmission efficiency and to decrease the tailing effect. The loaded capacitance of the structure was kept <10 pF to achieve a fast rise/fall time of the applied voltage signal. A Python code has been developed to verify the various design parameters. The simulation also shows that one can achieve efficient deflection of undesired particles, resulting in >90% transmission efficiency within the bunch length. Various simulation codes, including SolidWorks, TRACE 3D, CST MWS and homebrew Python codes, were used to validate the design.
Pressure Mapping and Efficiency Analysis of an EPPLER 857 Hydrokinetic Turbine
NASA Astrophysics Data System (ADS)
Clark, Tristan
A conceptual energy ship is presented to provide renewable energy. The ship, driven by the wind, drags a hydrokinetic turbine through the water. The power generated is used to run electrolysis on board, and the resultant hydrogen is taken back to shore to be used as an energy source. The basin efficiency (power/(thrust × velocity)) of the hydrokinetic turbine plays a vital role in this process. In order to extract the maximum allowable power from the flow, the blades need to be optimized. The structural analysis of the blade is important, as the blade will undergo high pressure loads from the water. A procedure for the analysis of a preliminary hydrokinetic turbine blade design is developed. The blade was designed with a non-optimized Blade Element Momentum Theory (BEMT) code. Six simulations were run, with varying mesh resolution, turbulence models, and flow region size. The procedure provides a detailed explanation of the entire process, from geometry and mesh generation to post-processing analysis tools. The efficiency results from the simulations are used to study the mesh resolution, flow region size, and turbulence models. The results are compared to the BEMT model design targets. Static pressure maps are created that can be used for structural analysis of the blades.
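A minimal sketch of the figure of merit quoted above, basin efficiency = power / (thrust × velocity); the numbers are illustrative, not from the thesis:

def basin_efficiency(power_out_w, thrust_n, velocity_m_s):
    """Fraction of the towing power (thrust * velocity) recovered as electrical power."""
    return power_out_w / (thrust_n * velocity_m_s)

print(basin_efficiency(power_out_w=50e3, thrust_n=20e3, velocity_m_s=5.0))  # 0.5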
NASA Astrophysics Data System (ADS)
Pei, Youbin; Xiang, Nong; Shen, Wei; Hu, Youjun; Todo, Y.; Zhou, Deng; Huang, Juan
2018-05-01
Kinetic-MagnetoHydroDynamic (MHD) hybrid simulations are carried out to study fast ion driven toroidal Alfvén eigenmodes (TAEs) on the Experimental Advanced Superconducting Tokamak (EAST). The first part of this article presents the linear benchmark between two kinetic-MHD codes, namely MEGA and M3D-K, based on a realistic EAST equilibrium. Parameter scans show that the frequency and the growth rate of the TAE given by the two codes agree with each other. The second part of this article discusses the resonance interaction between the TAE and fast ions simulated by the MEGA code. The results show that the TAE exchanges energy with the co-current passing particles with the parallel velocity |v∥| ≈ V_A0/3 or |v∥| ≈ V_A0/5, where V_A0 is the Alfvén speed on the magnetic axis. The TAE destabilized by the counter-current passing ions is also analyzed and found to have a much smaller growth rate than the co-current ion driven TAE. One of the reasons for this is found to be that the overlapping region of the TAE spatial location and the counter-current ion orbits is narrow, and thus the wave-particle energy exchange is not efficient.
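A minimal sketch of the resonance bookkeeping above: compute the on-axis Alfvén speed and the parallel velocities V_A0/3 and V_A0/5 at which the co-passing ions exchange energy with the TAE. The plasma parameters are illustrative, not EAST values:

import math

MU0 = 4e-7 * math.pi   # vacuum permeability [H/m]
M_D = 3.344e-27        # deuteron mass [kg]

def alfven_speed(b_tesla, n_per_m3, ion_mass=M_D):
    """V_A = B / sqrt(mu0 * rho), with mass density rho = n * m_i."""
    return b_tesla / math.sqrt(MU0 * n_per_m3 * ion_mass)

va0 = alfven_speed(b_tesla=2.0, n_per_m3=4e19)
print(f"V_A0 = {va0:.3e} m/s; resonances near {va0 / 3:.3e} and {va0 / 5:.3e} m/s")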
NASA Astrophysics Data System (ADS)
Bagheri, Zahra; Davoudifar, Pantea; Rastegarzadeh, Gohar; Shayan, Milad
2017-03-01
In this paper, we used the CORSIKA code to understand the characteristics of cosmic ray induced showers at extremely high energy as a function of energy, detector distance to the shower axis, number and density of secondary charged particles, and the nature of the particle producing the shower. Based on the standard properties of the atmosphere, the lateral and longitudinal development of the shower has been investigated for photons and electrons. The fluorescent light collected by the detector has been computed for proton, helium, oxygen, silicon, calcium and iron primary cosmic rays at different energies. We have obtained the number of electrons per unit area, the distance to the shower axis, the shape function of particle density, the percentage of fluorescent light, the lateral distribution of energy dissipated in the atmosphere, and the visual field angle of the detector as well as the size of the shower image. We have also shown that the location of the highest percentage of fluorescence light is directly proportional to the atomic number of the element, and that the shape function of particle density falls off sharply as the distance from the shower axis increases. At the first stages of development, the shower axis is far from the detector and the visual field angle is small; then, as the shower moves toward the Earth, the angle increases. Overall, at higher energies the fluorescent light method is more efficient. The paper provides standard calibration lines for high energy showers which can be used to determine the nature of the particles.
NASA Astrophysics Data System (ADS)
Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi
2017-01-01
Weighted prediction (WP) is an efficient video coding tool introduced with the H.264/AVC video coding standard to compensate for temporal illumination changes in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters cause extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead, so WP parameter prediction is crucial to research works and applications related to WP. Prior work has sought to further improve WP parameter prediction through implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms (enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter) to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
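A minimal sketch of what a WP parameter pair does to a reference sample, following the H.264/HEVC-style fixed-point formulation; the parameter values are illustrative:

def weighted_pred(ref_sample, weight, offset, log_wd=6, bit_depth=8):
    """pred = clip(0, 2^bit_depth - 1, ((ref * weight + round) >> log_wd) + offset)."""
    rounding = 1 << (log_wd - 1)
    pred = ((ref_sample * weight + rounding) >> log_wd) + offset
    return max(0, min((1 << bit_depth) - 1, pred))

# A fade-to-black frame: scale luma by ~0.75 (weight 48 with log_wd = 6), offset -2.
print(weighted_pred(ref_sample=120, weight=48, offset=-2))  # 88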
NASA Astrophysics Data System (ADS)
Kumar, Nitin; Singh, Udaybir; Kumar, Anil; Bhattacharya, Ranajoy; Singh, T. P.; Sinha, A. K.
2013-02-01
The design of a 120 GHz, 1 MW gyrotron for plasma fusion applications is presented in this paper. The mode selection is carried out with the aims of minimum mode competition, minimum cavity wall heating, etc. On the basis of the selected operating mode, the interaction cavity design and beam-wave interaction computation are carried out using a PIC code. The design of a triode-type Magnetron Injection Gun (MIG) is also presented. The trajectory code EGUN, the synthesis code MIGSYN and the data analysis code MIGANS are used in the MIG design. Further, the design of the MIG is validated using another trajectory code, TRAK. The design results for the beam dumping system (collector) and RF window are also presented. A depressed collector is designed to enhance the overall tube efficiency. The design study confirms >1 MW output power with a tube efficiency of around 50% (with collector efficiency).
Subgroup A : nuclear model codes report to the Sixteenth Meeting of the WPEC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talou, P.; Chadwick, M. B.; Dietrich, F. S.
2004-01-01
The Subgroup A activities focus on the development of nuclear reaction models and codes used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institutions' codes, and progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage the growing lines of existing codes efficiently, and make code intercomparison much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which is constituted of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.
2011-09-01
This best practices guide is the 15th in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the hot-humid climate can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased costs to the homeowners. Currently, Building America homes achieve energy savings of 40% greater than the Building America benchmark home (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC and those requirements are highlighted in the text. Requirements of the 2012 IECC and 2012 IRC are also noted in text and tables throughout the guide. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.
2011-09-01
This best practices guide is the 16th in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the mixed-humid climate can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased costs to the homeowners. Currently, Building America homes achieve energy savings of 40% greater than the Building America benchmark home (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC and those requirements are highlighted in the text. Requirements of the 2012 IECC and 2012 IRC are also noted in text and tables throughout the guide. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.
The energetics of relativistic jets in active galactic nuclei with various kinetic powers
NASA Astrophysics Data System (ADS)
Musoke, Gibwa Rebecca; Young, Andrew; Molnar, Sandor; Birkinshaw, Mark
2018-01-01
Numerical simulations are an important tool in understanding the physical processes behind relativistic jets in active galactic nuclei. In such simulations, different combinations of intrinsic jet parameters can be used to obtain the same jet kinetic power. We present a numerical investigation of the effects of varying the jet power on the dynamic and energetic characteristics of the jets for two kinetic power regimes; in the first regime we change the jet density whilst maintaining a fixed velocity, while in the second the jet density is held constant and the velocity is varied. We conduct 2D axisymmetric hydrodynamic simulations of bipolar jets propagating through an isothermal cluster atmosphere using the FLASH MHD code in pure hydrodynamics mode. The jets are simulated with kinetic powers ranging between 10⁴⁵ and 10⁴⁶ erg/s and internal Mach numbers ranging from 5.6 to 21.5. As the jets begin to propagate into the intracluster medium (ICM), the injected jet energy is converted into the thermal, kinetic and gravitational potential energy components of the jet cocoon and ICM. We explore the temporal evolution of the partitioning of the injected jet energy into the cocoon and the ICM and quantify the importance of the entrainment process in the energy partitioning. We investigate the fraction of injected energy transferred to the thermal energy component of the jet-ICM system in the context of heating the cluster environments, noting that the simulated jets display peak thermalisation efficiencies of at least 65% and a marked dependence on the jet density. We compare the efficiencies of the energy partitioning between the cocoon and ICM for the two kinetic power regimes and discuss the resulting efficiency-power scaling relations of each regime.
Energy Savings Analysis of the Proposed Revision of the Washington D.C. Non-Residential Energy Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Athalye, Rahul A.; Hart, Philip R.
This report presents the results of an assessment of savings for the proposed Washington D.C. energy code relative to ASHRAE Standard 90.1-2010. It includes annual and life cycle savings for site energy, source energy, energy cost, and carbon dioxide emissions that would result from adoption and enforcement of the proposed code for newly constructed buildings in Washington D.C. over a five year period.
Rosen, Lisa M.; Liu, Tao; Merchant, Roland C.
2016-01-01
BACKGROUND Blood and body fluid exposures are frequently evaluated in emergency departments (EDs). However, efficient and effective methods for estimating their incidence are not yet established. OBJECTIVE Evaluate the efficiency and accuracy of estimating statewide ED visits for blood or body fluid exposures using International Classification of Diseases, Ninth Revision (ICD-9), code searches. DESIGN Secondary analysis of a database of ED visits for blood or body fluid exposure. SETTING EDs of 11 civilian hospitals throughout Rhode Island from January 1, 1995, through June 30, 2001. PATIENTS Patients presenting to the ED for possible blood or body fluid exposure were included, as determined by prespecified ICD-9 codes. METHODS Positive predictive values (PPVs) were estimated to determine the ability of 10 ICD-9 codes to distinguish ED visits for blood or body fluid exposure from ED visits that were not for blood or body fluid exposure. Recursive partitioning was used to identify an optimal subset of ICD-9 codes for this purpose. Random-effects logistic regression modeling was used to examine variations in ICD-9 coding practices and styles across hospitals. Cluster analysis was used to assess whether the choice of ICD-9 codes was similar across hospitals. RESULTS The PPV for the original 10 ICD-9 codes was 74.4% (95% confidence interval [CI], 73.2%–75.7%), whereas the recursive partitioning analysis identified a subset of 5 ICD-9 codes with a PPV of 89.9% (95% CI, 88.9%–90.8%) and a misclassification rate of 10.1%. The ability, efficiency, and use of the ICD-9 codes to distinguish types of ED visits varied across hospitals. CONCLUSIONS Although an accurate subset of ICD-9 codes could be identified, variations across hospitals related to hospital coding style, efficiency, and accuracy greatly affected estimates of the number of ED visits for blood or body fluid exposure. PMID:22561713
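A minimal sketch of the positive predictive value used to rate each code set, PPV = true positives / all code-flagged visits; the counts below are illustrative, chosen only to reproduce the scale of the reported estimates:

def ppv(true_positives, false_positives):
    """Fraction of code-flagged ED visits that were true blood/body fluid exposures."""
    return true_positives / (true_positives + false_positives)

print(f"{ppv(744, 256):.1%}")  # 74.4%, like the full 10-code set
print(f"{ppv(899, 101):.1%}")  # 89.9%, like the 5-code recursive-partitioning subset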
Research on pre-processing of QR Code
NASA Astrophysics Data System (ADS)
Sun, Haixing; Xia, Haojie; Dong, Ning
2013-10-01
QR codes can encode many kinds of information thanks to their advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size, highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper researches pre-processing methods for QR code (Quick Response Code) recognition and presents algorithms and results of image pre-processing. The conventional method is improved by modifying Sauvola's adaptive binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
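A minimal sketch (Python/NumPy) of Sauvola's local threshold, T = m * (1 + k * (s/R - 1)) with local mean m and standard deviation s, which the binarization step above builds on. This naive version loops over pixels; production code would use integral images:

import numpy as np

def sauvola_binarize(img, window=15, k=0.2, R=128.0):
    """Return a 0/1 image thresholded with Sauvola's local formula."""
    h, w = img.shape
    r = window // 2
    padded = np.pad(img.astype(float), r, mode="reflect")
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]  # window centered on (y, x)
            m, s = patch.mean(), patch.std()
            threshold = m * (1.0 + k * (s / R - 1.0))
            out[y, x] = 1 if img[y, x] > threshold else 0
    return out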
Energy-efficient building design in cold climates: Schools as a case study
NASA Astrophysics Data System (ADS)
Rangel Ruiz, Rocio
Buildings account for great amounts of greenhouse gas emissions. In terms of energy, buildings account for one third of the total amount of energy used in the country every year, and schools account for 14 percent of the energy used annually in commercial and institutional buildings. Further, schools are one of the most commonly constructed building types in Canada, and spaces such as classrooms are often duplicated. This makes them preferred candidates for the research that was undertaken, in which energy-efficient solutions that can be transferred to different school designs were derived. Throughout the study, the Commercial Building Incentive Program (CBIP) was used as a benchmark. The objectives of the study were to demonstrate energy-efficient concepts, provide a case study to evaluate solutions, develop typological models and provide an understanding of the innovation process. Both the technological and societal aspects of energy-efficient design were addressed. With respect to the technological aspects, the first step was the analysis of conventional design using a school in Calgary as a case study. The optimization of conventional design was undertaken using computer modeling to identify best practice solutions. Aspects included in the studies were lighting design, envelope characteristics, HVAC systems and building plant systems. The inclusion of passive design covered the analysis of daylighting and natural ventilation. Computer modeling was used to assess daylighting in classrooms with unilateral and bilateral daylighting; illuminance levels, glare and light distribution were evaluated. The study of natural ventilation was undertaken through literature review, focusing on airflow and outdoor temperatures to identify solutions that could be incorporated into the design of classrooms. It was concluded that excellence in energy efficiency in schools could be achieved using readily available technologies. Energy savings of up to 63 percent better than Canada's Model National Energy Code for Buildings (MNECB) reference case and utility cost savings of $30,000 (on a $50,000 annual cost) were achieved through conventional design optimization. Additional energy savings of three percent and utility cost savings of $7,000 were seen when passive strategies were included in the design. With respect to the societal aspects, an exploratory research study was undertaken to examine innovation. Architects and energy consultants were interviewed; all design professionals included in the study had participated in projects approved for a grant under CBIP. The purpose of the study was to identify drivers of and barriers to energy efficiency. The study demonstrated that external and internal innovation pressures have a significant effect on whether or not a technology is adopted. Suggestions for reducing barriers and further promoting energy efficiency are discussed in this thesis. It is expected that the research will not only aid designers in assessing projects with regard to local priorities, but will also provide building guidelines that serve as tools for the development of Canadian energy compliance for CO2 emissions.
Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.
2008-01-01
Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.
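For the single-subsystem statistical energy analysis model mentioned above, the steady-state power balance P_in = ω·η·M·⟨v²⟩ yields the band-averaged mean square velocity directly. A minimal sketch of that balance, with input values that are illustrative rather than taken from the experiment:

```python
import numpy as np

def sea_mean_square_velocity(P_in, freq_hz, eta, mass_kg):
    """Single-subsystem SEA power balance: P_in = omega * eta * M * <v^2>,
    so <v^2> = P_in / (omega * eta * M).

    P_in    -- time-averaged input power from the shaker [W]
    freq_hz -- band center frequency [Hz]
    eta     -- damping loss factor of the plate [-]
    mass_kg -- plate mass [kg]
    """
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    return P_in / (omega * eta * mass_kg)

# Example: 1 W input at 1 kHz, loss factor 0.01, 5 kg plate
print(sea_mean_square_velocity(1.0, 1000.0, 0.01, 5.0))  # [m^2/s^2]
```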
Optimization of the multi-turn injection efficiency for a medical synchrotron
NASA Astrophysics Data System (ADS)
Kim, J.; Yoon, M.; Yim, H.
2016-09-01
We present a method for optimizing the multi-turn injection efficiency for a medical synchrotron. We show that for a given injection energy, the injection efficiency can be greatly enhanced by choosing transverse tunes appropriately and by optimizing the injection bump and the number of turns required for beam injection. We verify our study by applying the method to the Korea Heavy Ion Medical Accelerator (KHIMA) synchrotron which is currently being built at the campus of Dongnam Institute of Radiological and Medical Sciences (DIRAMS) in Busan, Korea. First the frequency map analysis was performed with the help of the ELEGANT and the ACCSIM codes. The tunes that yielded good injection efficiency were then selected. With these tunes, the injection bump and the number of turns required for injection were then optimized by tracking a number of particles for up to one thousand turns after injection, beyond which no further beam loss occurred. Results for the optimization of the injection efficiency for proton ions are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audin, L.
1994-12-31
EPAct covers a vast territory beyond lighting and, like all legislation, also contains numerous "favors," compromises, and even some sleight-of-hand. Tucked away under Title XIX, for example, is an increase from 20% to 28% in the tax on gambling winnings, effective January 1, 1993 - apparently a way to help pay for new spending listed elsewhere in the bill. Overall, it is a landmark piece of legislation, about a decade overdue. It remains to be seen how the Federal Government will enforce upgrading of state (or even its own) energy codes; there is no mention of funding for "energy police" in EPAct. Merely creating such a national standard, however, provides a target for those who sincerely wish to create an energy-efficient future.
Ledermüller, Katrin; Schütz, Martin
2014-04-28
A multistate local CC2 response method for the calculation of analytic energy gradients with respect to nuclear displacements is presented for ground and electronically excited states. The gradient enables the search for equilibrium geometries of extended molecular systems. Laplace transform is used to partition the eigenvalue problem in order to obtain an effective singles eigenvalue problem and adaptive, state-specific local approximations. This leads to an approximation in the energy Lagrangian, which however is shown (by comparison with the corresponding gradient method without Laplace transform) to be of no concern for geometry optimizations. The accuracy of the local approximation is tested and the efficiency of the new code is demonstrated by application calculations devoted to a photocatalytic decarboxylation process of present interest.
Electron acceleration in the Solar corona - 3D PiC code simulations of guide field reconnection
NASA Astrophysics Data System (ADS)
Alejandro Munoz Sepulveda, Patricio
2017-04-01
The efficient acceleration of electrons in the solar corona, detected by means of hard X-ray emission, is still not well understood. Magnetic reconnection through current sheets is one of the proposed production mechanisms for non-thermal electrons in solar flares. Previous works in this direction were based mostly on test-particle calculations or 2D fully kinetic PiC simulations. We have now studied the consequences of self-generated current-aligned instabilities on the electron acceleration mechanisms of 3D magnetic reconnection. To this end, we carried out 3D Particle-in-Cell (PiC) numerical simulations of force-free reconnecting current sheets, appropriate for the description of solar coronal plasmas. We find efficient electron energization, evidenced by the formation of a non-thermal power-law tail with a hard spectral index smaller than -2 in the electron energy distribution function. We discuss and compare the influence of the parallel electric field versus the curvature and gradient drifts in the guiding-center approximation on the overall acceleration, and their dependence on different plasma parameters.
PODIO: An Event-Data-Model Toolkit for High Energy Physics Experiments
NASA Astrophysics Data System (ADS)
Gaede, F.; Hegner, B.; Mato, P.
2017-10-01
PODIO is a C++ library that supports the automatic creation of event data models (EDMs) and efficient I/O code for HEP experiments. It is developed as a new EDM toolkit for future particle physics experiments in the context of the AIDA2020 EU programme. Experience from the LHC and the linear collider community shows that existing solutions partly suffer from overly complex data models with deep object hierarchies or unfavorable I/O performance. The PODIO project was created in order to address these problems. PODIO is based on the idea of employing plain-old-data (POD) structures wherever possible, while avoiding deep object hierarchies and virtual inheritance. At the same time, it provides the necessary high-level interface for the developer physicist, such as support for inter-object relations and automatic memory management, as well as a Python interface. To simplify the creation of efficient data models, PODIO employs code generation from a simple YAML-based markup language. In addition, it was developed with concurrency in mind in order to support modern CPU features, for example offering basic support for vectorization techniques.
Heat Transfer and Fluid Dynamics Measurements in the Expansion Space of a Stirling Cycle Engine
NASA Technical Reports Server (NTRS)
Jiang, Nan; Simon, Terrence W.
2006-01-01
The heater (or acceptor) of a Stirling engine, where most of the thermal energy is accepted into the engine by heat transfer, is the hottest part of the engine. Almost as hot is the adjacent expansion space. In the expansion space, the flow is oscillatory, impinging on a two-dimensional concavely curved surface. Knowing the heat transfer on the inside surface of the engine head is critical to designing the engine for efficiency and reliability. However, the flow in this region is not well understood, and support is required to develop the CFD codes needed to design modern Stirling engines of high efficiency and power output. The present project experimentally investigates the flow and heat transfer in the heater head region. Flow fields and heat transfer coefficients are measured to characterize the oscillatory flow as well as to supply experimental validation for the CFD Stirling engine design codes. Also presented is a discussion of how these results might be used for heater head and acceptor region design calculations.
NASA Astrophysics Data System (ADS)
Sanchez, Gustavo; Marcon, César; Agostini, Luciano Volcan
2018-01-01
3D high efficiency video coding (3D-HEVC) introduces tools to obtain higher efficiency in 3-D video coding, most of them related to depth map coding. Among these tools, depth modeling mode-1 (DMM-1) focuses on better encoding the edge regions of depth maps. The large memory required for storing all wedgelet patterns is one of the bottlenecks in DMM-1 hardware design for both the encoder and decoder, since many patterns must be stored. Three algorithms to reduce the DMM-1 memory requirements, and a hardware design targeting the most efficient of these algorithms, are presented. Experimental results demonstrate that the proposed solutions surpass related works, reducing the wedgelet memory by up to 78.8% without degrading encoding efficiency. Synthesis results demonstrate that the proposed algorithm reduces power dissipation by almost 75% compared to the standard approach.
Genetic code expansion for multiprotein complex engineering.
Koehler, Christine; Sauter, Paul F; Wawryszyn, Mirella; Girona, Gemma Estrada; Gupta, Kapil; Landry, Jonathan J M; Fritz, Markus Hsi-Yang; Radic, Ksenija; Hoffmann, Jan-Erik; Chen, Zhuo A; Zou, Juan; Tan, Piau Siong; Galik, Bence; Junttila, Sini; Stolt-Bergner, Peggy; Pruneri, Giancarlo; Gyenesei, Attila; Schultz, Carsten; Biskup, Moritz Bosse; Besir, Hueseyin; Benes, Vladimir; Rappsilber, Juri; Jechlinger, Martin; Korbel, Jan O; Berger, Imre; Braese, Stefan; Lemke, Edward A
2016-12-01
We present a baculovirus-based protein engineering method that enables site-specific introduction of unique functionalities in a eukaryotic protein complex recombinantly produced in insect cells. We demonstrate the versatility of this efficient and robust protein production platform, 'MultiBacTAG', (i) for the fluorescent labeling of target proteins and biologics using click chemistries, (ii) for glycoengineering of antibodies, and (iii) for structure-function studies of novel eukaryotic complexes using single-molecule Förster resonance energy transfer as well as site-specific crosslinking strategies.
A perspective of laminar-flow control. [aircraft energy efficiency program
NASA Technical Reports Server (NTRS)
Braslow, A. L.; Muraca, R. J.
1978-01-01
A historical review of the development of laminar flow control technology is presented with reference to active laminar boundary-layer control through suction, the use of multiple suction slots, wind-tunnel tests, continuous suction, and spanwise contamination. The ACEE laminar flow control program is outlined noting the development of three-dimensional boundary-layer codes, cruise-noise prediction techniques, airfoil development, and leading-edge region cleaning. Attention is given to glove flight tests and the fabrication and testing of wing box designs.
Classification of breast tissue in mammograms using efficient coding.
Costa, Daniel D; Campos, Lúcio F; Barros, Allan K
2011-06-24
Female breast cancer is the major cause of death by cancer in western countries. Efforts in computer vision have been made to improve the diagnostic accuracy of radiologists. Some methods of lesion diagnosis in mammogram images were developed based on principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, used for computer vision applications and for modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass in 5090 regions of interest from mammograms. The results show that the best success rates reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the efficient-coding model presented here reached up to 90.07%. Altogether, the results demonstrate that independent component analysis successfully performed the efficient coding needed to discriminate mass from non-mass tissue. In addition, we observed that LDA with ICA bases showed high predictive performance for some datasets, providing significant support for a more detailed clinical investigation.
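As a rough sketch of the pipeline described above (ICA-learned efficient-coding features followed by linear discriminant analysis), assuming scikit-learn is available and using random stand-in data rather than the mammogram ROIs from the paper:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# X: flattened ROI patches (n_samples, n_pixels); y: 1 = mass, 0 = non-mass.
# Random data stands in for the 5090 mammogram ROIs used in the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 256))
y = rng.integers(0, 2, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ica = FastICA(n_components=40, random_state=0)  # learn efficient-coding basis
Z_tr = ica.fit_transform(X_tr)                  # ICA coefficients as features
Z_te = ica.transform(X_te)

clf = LinearDiscriminantAnalysis().fit(Z_tr, y_tr)
print("accuracy:", clf.score(Z_te, y_te))       # ~0.5 on random data
```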
NASA Astrophysics Data System (ADS)
Basiri, H.; Tavakoli-Anbaran, H.
2018-01-01
The Am-Be neutron source is based on the (α, n) reaction and generates neutrons in the energy range of 0-11 MeV. Since thermal neutrons are widely used in different fields, in this work we investigate how to improve the source configuration in order to increase the thermal flux. The suggested changes include a spherical moderator instead of the common cylindrical geometry, a reflector layer, and an appropriate selection of materials to achieve the maximum thermal flux. All calculations were done using the MCNP Monte Carlo code. Our final results indicate that a spherical paraffin moderator with a beryllium reflector layer can efficiently increase the thermal neutron flux of an Am-Be source.
Revisiting Molecular Dynamics on a CPU/GPU system: Water Kernel and SHAKE Parallelization.
Ruymgaart, A Peter; Elber, Ron
2012-11-13
We report Graphics Processing Unit (GPU) and OpenMP parallel implementations of water-specific force calculations and of bond constraints for use in Molecular Dynamics simulations. We focus on a typical laboratory computing environment in which a CPU with a few cores is attached to a GPU. We discuss the design of the code in detail and illustrate performance comparable to highly optimized codes such as GROMACS. Besides speed, our code shows excellent energy conservation. Utilization of water-specific lists allows the efficient calculation of non-bonded interactions that include water molecules and results in a speed-up factor of more than 40 on the GPU compared to code optimized on a single CPU core for systems larger than 20,000 atoms. This is up four-fold from the factor of 10 reported in our initial GPU implementation, which did not include water-specific code. Another optimization is the implementation of constrained dynamics entirely on the GPU. The routine, which enforces constraints on all bonds, runs in parallel on multiple OpenMP cores or entirely on the GPU. It is based on a Conjugate Gradient solution for the Lagrange multipliers (CG SHAKE). The GPU implementation is partially in double precision and requires no communication with the CPU during execution of the SHAKE algorithm. The (parallel) implementation of SHAKE allows an increase of the time step to 2.0 fs while maintaining excellent energy conservation. Interestingly, CG SHAKE is faster than the usual bond relaxation algorithm even on a single core if high accuracy is expected. The significant speedup of the optimized components transfers the computational bottleneck of the MD calculation to the reciprocal part of Particle Mesh Ewald (PME).
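The CG SHAKE routine described above solves for the Lagrange multipliers of all bond constraints at once; the single-constraint toy below shows only the basic SHAKE correction that such solvers generalize, and is not the paper's parallel implementation:

```python
import numpy as np

def shake_bond(r1, r2, m1, m2, d0, tol=1e-10, max_iter=50):
    """Iteratively enforce |r1 - r2| = d0 for one bond (basic SHAKE).

    A single-constraint toy, not the parallel CG SHAKE of the paper;
    it shows the Lagrange-multiplier correction applied each iteration.
    """
    for _ in range(max_iter):
        d = r1 - r2
        diff = d @ d - d0 * d0            # constraint violation
        if abs(diff) < tol:
            break
        # first-order Lagrange multiplier for this constraint
        g = diff / (2.0 * (1.0 / m1 + 1.0 / m2) * (d @ d))
        r1 -= g * d / m1                  # mass-weighted corrections
        r2 += g * d / m2
    return r1, r2

r1 = np.array([0.0, 0.0, 0.0]); r2 = np.array([1.2, 0.0, 0.0])
r1, r2 = shake_bond(r1, r2, 1.0, 1.0, 1.0)
print(np.linalg.norm(r1 - r2))            # converges to ~1.0
```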
A generalized chemistry version of SPARK
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.
1988-01-01
An extension of the reacting H2-air computer code SPARK is presented, which enables the code to be used on any reacting flow problem. Routines are developed that calculate, in a general fashion, the reaction rates and chemical Jacobians of any reacting system. In addition, an equilibrium routine is added so that the code has frozen, finite-rate, and equilibrium capabilities. The reaction rate for each species is determined from the law of mass action using Arrhenius expressions for the rate constants. The Jacobian routines are obtained by numerically or analytically differentiating the law of mass action for each species. The equilibrium routine is based on Gibbs free-energy minimization. The routines are written in FORTRAN 77, with special consideration given to vectorization. Run times for the generalized routines are generally 20 percent longer than for reaction-specific routines. The numerical efficiency of the generalized analytical Jacobian, however, is nearly 300 percent better than the reaction-specific numerical Jacobian used in SPARK.
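The rate evaluation described above can be illustrated with a short sketch: a modified Arrhenius rate constant combined with the law of mass action for one irreversible reaction. The coefficients below are made up for illustration and are not from SPARK's mechanism:

```python
import numpy as np

R_U = 8.314462618  # universal gas constant [J/(mol K)]

def arrhenius(A, n, Ea, T):
    """Modified Arrhenius rate constant k = A * T^n * exp(-Ea / (Ru*T))."""
    return A * T**n * np.exp(-Ea / (R_U * T))

def species_rates(conc, nu_react, nu_prod, k_f):
    """Law of mass action for a single irreversible reaction.

    conc     -- species molar concentrations
    nu_react -- reactant stoichiometric coefficients nu'
    nu_prod  -- product stoichiometric coefficients nu''
    Returns the production rate of each species.
    """
    q = k_f * np.prod(conc ** nu_react)   # rate of progress
    return (nu_prod - nu_react) * q

# Illustrative H2/O2-like step with made-up coefficients
conc = np.array([1.0, 0.5, 0.0])          # [H2, O2, H2O]
nu_r = np.array([2.0, 1.0, 0.0])
nu_p = np.array([0.0, 0.0, 2.0])
print(species_rates(conc, nu_r, nu_p, arrhenius(1e8, 0.0, 8.0e4, 1500.0)))
```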
Particle-in-cell simulations with charge-conserving current deposition on graphic processing units
NASA Astrophysics Data System (ADS)
Ren, Chuang; Kong, Xianglong; Huang, Michael; Decyk, Viktor; Mori, Warren
2011-10-01
Recently, using CUDA, we have developed an electromagnetic Particle-in-Cell (PIC) code with charge-conserving current deposition for Nvidia graphics processing units (GPUs) (Kong et al., Journal of Computational Physics 230, 1676 (2011)). On a Tesla M2050 (Fermi) card, the GPU PIC code can achieve a one-particle-step process time of 1.2 - 3.2 ns in 2D and 2.3 - 7.2 ns in 3D, depending on plasma temperatures. In this talk we will discuss novel algorithms for GPU-PIC, including a charge-conserving current deposition scheme with little branching, and parallel particle sorting. These algorithms make efficient use of the GPU shared memory. We will also discuss how to replace the computation kernels of existing parallel CPU codes while keeping their parallel structures. This work was supported by the U.S. Department of Energy under Grant Nos. DE-FG02-06ER54879 and DE-FC02-04ER54789 and by NSF under Grant Nos. PHY-0903797 and CCF-0747324.
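The deposition step mentioned above is a scatter-add from particles to grid, which is what forces GPU implementations to use atomics or particle sorting. The sketch below shows only plain cloud-in-cell charge deposition in 1-D (not the charge-conserving current scheme of the talk), with numpy's add.at standing in for atomic adds:

```python
import numpy as np

def deposit_charge_cic(x, q, nx, dx):
    """Cloud-in-cell charge deposition on a 1-D periodic grid.

    Plain linear weighting, not the full charge-conserving current
    scheme; it illustrates the scatter-add pattern that GPU codes
    must resolve with atomics or sorting.
    """
    rho = np.zeros(nx)
    xg = x / dx                              # position in grid units
    i = np.floor(xg).astype(int)             # left grid index
    w = xg - i                               # weight toward right node
    np.add.at(rho, i % nx, q * (1.0 - w))    # stand-in for atomicAdd
    np.add.at(rho, (i + 1) % nx, q * w)
    return rho / dx

x = np.array([0.4, 1.2, 3.7]); q = np.ones(3)
print(deposit_charge_cic(x, q, nx=8, dx=1.0))
```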
Validation of a Computational Fluid Dynamics (CFD) Code for Supersonic Axisymmetric Base Flow
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin
1993-01-01
The ability to accurately and efficiently calculate the flow structure in the base region of bodies of revolution in supersonic flight is a significant step in CFD code validation for applications ranging from base heating for rockets to drag for projectiles. The FDNS code is used to compute such a flow, and the results are compared to benchmark-quality experimental data. Flowfield calculations are presented for a cylindrical afterbody at M = 2.46 and angle of attack α = 0. Grid-independent solutions are compared to mean velocity profiles in the separated wake area and downstream of the reattachment point. Additionally, quantities such as turbulent kinetic energy and shear layer growth rates are compared to the data. Finally, the computed base pressures are compared to the measured values. An effort is made to elucidate the role of turbulence models in the flowfield predictions. The level of turbulent eddy viscosity, and its origin, are used to contrast the various turbulence models and compare the results to the experimental data.
NASA Lewis Stirling engine computer code evaluation
NASA Technical Reports Server (NTRS)
Sullivan, Timothy J.
1989-01-01
In support of the U.S. Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions, without engine-specific calibration factors, to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions, and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than the power predictions did. Across all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines, mainly as a result of inaccuracy in predicting the pressure phase angle. Analysis of this pressure phase angle prediction error suggested that improvements to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions.
3DHZETRN: Inhomogeneous Geometry Issues
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.
2017-01-01
Historical methods for assessing radiation exposure inside complicated geometries for space applications were limited by computational constraints and lack of knowledge associated with nuclear processes occurring over a broad range of particles and energies. Various methods were developed and utilized to simplify geometric representations and enable coupling with simplified but efficient particle transport codes. Recent transport code development efforts, leading to 3DHZETRN, now enable such approximate methods to be carefully assessed to determine if past exposure analyses and validation efforts based on those approximate methods need to be revisited. In this work, historical methods of representing inhomogeneous spacecraft geometry for radiation protection analysis are first reviewed. Two inhomogeneous geometry cases, previously studied with 3DHZETRN and Monte Carlo codes, are considered with various levels of geometric approximation. Fluence, dose, and dose equivalent values are computed in all cases and compared. It is found that although these historical geometry approximations can induce large errors in neutron fluences up to 100 MeV, errors on dose and dose equivalent are modest (<10%) for the cases studied here.
NASA Technical Reports Server (NTRS)
Divsalar, D.; Naderi, F.
1982-01-01
The nature of the optical/microwave interface aboard the relay satellite is considered. To allow maximum system flexibility without overburdening either the optical or the RF channel, demodulating the optical signal on board the relay satellite while leaving the optical channel decoding to be performed at the ground station is examined. The occurrence of erasures in the optical channel is treated. A hard decision on each erasure (i.e., the relay selecting a symbol at random when an erasure occurs) seriously degrades the performance of the overall system. Coding the erasure occurrences at the relay and transmitting this information via an extra bit to the ground station, where it can be used by the decoder, is suggested. Many examples with varying bit/photon energy efficiency, for both the noisy and the noiseless optical channel, are considered. It is shown that coding the erasure occurrences dramatically improves the performance of the cascaded channel relative to a hard decision on the erasure by the relay.
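The benefit of coding the erasure occurrences can be seen with a toy numeric experiment: a random guess converts roughly half the erasures into undetected errors, while a flag bit lets the ground decoder treat them as erasures, which for codes such as Reed-Solomon cost about half the redundancy of errors. A schematic sketch, not the paper's channel model:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_erase = 100_000, 0.1
bits = rng.integers(0, 2, size=n)
erased = rng.random(n) < p_erase

# Hard decision at the relay: a random guess replaces each erased
# symbol, so roughly half of the erasures become undetected errors.
guess = np.where(erased, rng.integers(0, 2, size=n), bits)
print("hard-decision error rate:", np.mean(guess != bits))  # ~ p_erase/2

# Flagged erasures: one extra bit per symbol marks erased positions,
# so the ground decoder sees known erasures instead of hidden errors.
print("erasure rate seen by decoder:", erased.mean(), "error rate: 0.0")
```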
NASA Astrophysics Data System (ADS)
Bencheikh, Mohamed; Maghnouj, Abdelmajid; Tajmouati, Jaouad
2017-11-01
The Monte Carlo method is considered the most accurate method for dose calculation in radiotherapy and for beam characterization studies. In this study, the Varian Clinac 2100 medical linear accelerator was modelled with and without a flattening filter (FF). The objective was to determine the flattening filter's impact on particle energy properties at the phantom surface in terms of energy fluence, mean energy, and energy fluence distribution. The Monte Carlo codes used were BEAMnrc for simulating the linac head, DOSXYZnrc for simulating the absorbed dose in a water phantom, and BEAMDP for extracting energy properties. The field size was 10 × 10 cm2, the simulated photon beam energy was 6 MV, and the SSD was 100 cm. The Monte Carlo geometry was validated by a gamma index acceptance rate of 99% in PDD and 98% in dose profiles; the gamma criteria were 3% for dose difference and 3 mm for distance to agreement. Without the FF, the energy properties changed as follows: the electron contribution increased by more than 300% in energy fluence, almost 14% in mean energy, and 1900% in energy fluence distribution, while the photon contribution increased 50% in energy fluence, almost 18% in mean energy, and almost 35% in energy fluence distribution. Removing the flattening filter increases electron contamination energy relative to photon energy; this study can contribute to the evolution of flattening-filter-free configurations in future linacs.
Evaluation of Savings in Energy-Efficient Public Housing in the Pacific Northwest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, A.; Lubliner, M.; Howard, L.
2013-10-01
This report presents the results of an energy performance and cost-effectiveness analysis. The Salishan phase 7 and demonstration homes were compared to Salishan phase 6 homes built to 2006 Washington State Energy Code specifications. Predicted annual energy savings (over Salishan phase 6) were 19% for Salishan phase 7 and between 19-24% for the demonstration homes (depending on ventilation strategy). Approximately two-thirds of the savings are attributable to the DHP. Working with the electric utility provider, Tacoma Public Utilities, researchers conducted a billing analysis for Salishan phase 7. Median energy use for the development is 11,000 kWh; annual energy costs are $780, with a fair amount of variation depending on home size. Preliminary analysis of savings between Salishan 7 and previous phases (4 through 6) suggests savings of between 20 and 30 percent. A more comprehensive comparison between Salishan 7 and previous phases will take place in year two of this project.
High-efficiency reconciliation for continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, Zengliang; Yang, Shenshen; Li, Yongmin
2017-04-01
Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between the two legitimate parties. We analyze and compare various construction methods for low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
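Reconciliation efficiency is conventionally the ratio of the extracted rate to the AWGN channel capacity at the operating signal-to-noise ratio. A minimal sketch of that ratio with illustrative numbers, not the paper's measured rates:

```python
import numpy as np

def reconciliation_efficiency(rate, snr):
    """beta = R / C, with C = 0.5*log2(1 + SNR) the AWGN capacity per
    (real) channel use. rate is the net rate of the slice/LDPC scheme
    in bits per symbol; the values below are illustrative."""
    capacity = 0.5 * np.log2(1.0 + snr)
    return rate / capacity

# Example: SNR = 1 gives C = 0.5 bit/symbol; a scheme extracting
# 0.475 bit/symbol achieves 95% efficiency.
print(reconciliation_efficiency(0.475, 1.0))  # 0.95
```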
Energy Referencing in LANL HE-EOS Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leiding, Jeffery Allen; Coe, Joshua Damon
2017-10-19
Here, we briefly describe the choice of energy referencing in LANL's HE-EOS codes, HEOS and MAGPIE. Understanding this is essential for comparing energies produced by different EOS codes, as well as for the correct calculation of shock Hugoniots of HEs and other materials. In all equations after (3) throughout this report, all energies, enthalpies, and volumes are assumed to be molar quantities.
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
1991-01-01
Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparing the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.
Green buildings for Egypt: a call for an integrated policy
NASA Astrophysics Data System (ADS)
Bampou, P.
2017-11-01
As global warming reaches the threshold of every country worldwide, the Middle East and North Africa (MENA) region has already adopted energy efficiency (EE) policies in several consuming sectors. The present paper evaluates the impact of temperature increase on the residential building sector of Egypt, the most integrated example of the 7 out of 20 MENA countries that have started greening their built environment. Based on a literature review of socio-economic characteristics, the existing building stock, and the existing legal and institutional framework, it elaborates a quantitative evaluation of Egypt's energy-saving potential, outlining the basic constraints on energy conservation, so that Egypt can handle the high energy needs arising from its warm climate. Finally, the paper proposes a policy pathway for the implementation of green building codes and concludes with the best available technologies to promote EE in the Egyptian building sector.
Environmental impacts of anaerobic digestion and the use of anaerobic residues as soil amendment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosey, F.E.
1996-01-01
This paper defines the environmental role of anaerobic digestion within the overall objective of recovering energy from renewable biomass resources. Examples of and opportunities for incorporating anaerobic digestion into biomass-to-energy schemes are discussed, together with environmental aspects of anaerobic digestion plants, including visual impact, public amenity, pathogens and public health, odor control, and gaseous emissions. Digestate disposal and the benefits of, and restrictions on, recycling organic wastes and biomass residues back to the land are discussed, particularly as they relate to American and European codes of practice and environmental legislation. The paper concludes that anaerobic digestion, if performed in purpose-designed reactors that efficiently recover and use biogas, is an environmentally benign process that can enhance energy recovery and aid the beneficial land use of plant residues in many biomass-to-energy schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Chih-Chieh; Lin, Hsin-Hon; Lin, Chang-Shiun
Multiple-photon emitters, such as In-111 or Se-75, have enormous potential in the field of nuclear medicine imaging. For example, Se-75 can be used to investigate bile acid malabsorption and to measure bile acid pool loss. The simulation system for emission tomography (SimSET) is a well-known Monte Carlo simulation (MCS) code in nuclear medicine, valued for its high computational efficiency. However, the current SimSET cannot simulate these isotopes because it lacks models for complex decay schemes and the time-dependent decay process. To extend the versatility of SimSET to such multi-photon emission isotopes, a time-resolved multiple-photon history generator based on the SimSET code is developed in the present study. The time-resolved SimSET (trSimSET) introduces new features, including decay-time information and photon time-of-flight information. The half-lives of energy states were tabulated from the Evaluated Nuclear Structure Data File (ENSDF) database. The MCS results indicate that the overall percent difference is less than 8.5% for all simulation trials compared to GATE. In summary, we demonstrated that the time-resolved SimSET multiple-photon history generator can achieve accuracy comparable to GATE while retaining better computational efficiency. The new MCS code is very useful for studying multi-photon imaging with novel isotopes that requires simulation of lifetimes and time-of-flight measurements. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westover, B.; Lawrence Livermore National Laboratory, Livermore, California 94550; Chen, C. D.
2014-03-15
Experiments on the Titan laser (∼150 J, 0.7 ps, 2 × 10^20 W cm^-2) at the Lawrence Livermore National Laboratory were carried out in order to study the properties of fast electrons produced by a high-intensity, short-pulse laser interacting with matter under conditions relevant to Fast Ignition. Bremsstrahlung x-rays produced by these fast electrons were measured by a set of compact filter-stack-based x-ray detectors placed at three angles with respect to the target. The measured bremsstrahlung signal allows a characterization of the fast electron beam spectrum, the conversion efficiency of laser energy into fast electron kinetic energy, and the angular distribution. The Monte Carlo code Integrated Tiger Series was used to model the bremsstrahlung signal and infer a laser-to-fast-electron conversion efficiency of 30%, an electron slope temperature of about 2.2 MeV, and a mean divergence angle of 39°. Simulations were also performed with the hybrid transport code ZUMA, which includes fields in the target. In this case, a conversion efficiency of laser energy to fast electron energy of 34% and a slope temperature between 1.5 MeV and 4 MeV, depending on the angle between the target normal direction and the measuring spectrometer, are found. The observed temperature of the bremsstrahlung spectrum, and therefore the inferred electron spectrum, is found to be angle dependent.
Design and spectrum calculation of 4H-SiC thermal neutron detectors using FLUKA and TCAD
NASA Astrophysics Data System (ADS)
Huang, Haili; Tang, Xiaoyan; Guo, Hui; Zhang, Yimen; Zhang, Yimeng; Zhang, Yuming
2016-10-01
SiC is a promising material for neutron detection in harsh environments due to its wide band gap, high displacement threshold energy, and high thermal conductivity. To increase the detection efficiency of SiC, a converter such as 6LiF or 10B is introduced. In this paper, pulse-height spectra of a PIN diode with a 6LiF conversion layer exposed to thermal neutrons (0.026 eV) are calculated using TCAD and Monte Carlo simulations. First, the conversion efficiency for thermal neutrons with respect to the thickness of 6LiF was calculated using the FLUKA code, and a maximal efficiency of approximately 5% was achieved. Next, the energy distributions of both 3H and α particles induced by the 6Li reaction are analyzed for different ranges of emission angle. Subsequently, transient pulses generated by the bombardment of single 3H or α particles are calculated. Finally, pulse-height spectra are obtained with a detector efficiency of 4.53%. Comparisons of the simulated result with the experimental data are also presented, and the calculated spectrum shows acceptable similarity to the experimental data. This work would be useful for radiation-sensing applications, especially for SiC detector design.
A Computational Model for Predicting Gas Breakdown
NASA Astrophysics Data System (ADS)
Gill, Zachary
2017-10-01
Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regards to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
Design of 9.271-pressure-ratio 5-stage core compressor and overall performance for first 3 stages
NASA Technical Reports Server (NTRS)
Steinke, Ronald J.
1986-01-01
Overall aerodynamic design information is given for all five stages of an axial-flow core compressor (74A) having a 9.271 pressure ratio and 29.710 kg/sec flow. For the inlet stage group (first three stages), detailed blade element design information and experimental overall performance are given. At the rotor 1 inlet, tip speed was 430.291 m/sec and the hub-to-tip radius ratio was 0.488. A low number of blades per row was achieved by the use of low-aspect-ratio blading of moderate solidity. The high-reaction stages have about equal energy addition. Radial energy addition was varied to give constant total pressure at the rotor exit. The blade element profile and shock losses and the incidence and deviation angles were based on relevant experimental data. Blade shapes are mostly double circular arc. Analysis by a three-dimensional Euler code verified the experimentally measured high flow at design speed and IGV-stator setting angles. An optimization code gave an optimal IGV-stator reset schedule for higher measured efficiency at all speeds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Building simulations are increasingly used in various applications related to energy-efficient buildings. For individual buildings, applications include design of new buildings, prediction of retrofit savings, ratings, performance-path code compliance, and qualification for incentives. Beyond individual buildings, larger-scale applications (across the stock of buildings at national, regional, and state scales) include codes and standards development, utility program design, regional/state planning, and technology assessments. For these sorts of applications, a set of representative buildings is typically simulated to predict the performance of the entire population of buildings. Focusing on the U.S. single-family residential building stock, this paper will describe how multiple data sources for building characteristics are combined into a highly granular database that preserves the important interdependencies of the characteristics. We will present the sampling technique used to generate a representative set of thousands (up to hundreds of thousands) of building models. We will also present results of detailed calibrations against building stock consumption data.
History of one family of atmospheric radiative transfer codes
NASA Astrophysics Data System (ADS)
Anderson, Gail P.; Wang, Jinxue; Hoke, Michael L.; Kneizys, F. X.; Chetwynd, James H., Jr.; Rothman, Laurence S.; Kimball, L. M.; McClatchey, Robert A.; Shettle, Eric P.; Clough, Shepard A.; Gallery, William O.; Abreu, Leonard W.; Selby, John E. A.
1994-12-01
Beginning in the early 1970s, the then Air Force Cambridge Research Laboratory initiated a program to develop computer-based atmospheric radiative transfer algorithms. The first attempts were translations of graphical procedures described in a 1970 report on The Optical Properties of the Atmosphere, based on empirical transmission functions and effective absorption coefficients derived primarily from controlled laboratory transmittance measurements. The fact that spectrally averaged atmospheric transmittance T does not obey the Beer-Lambert law (T = exp(−σ·η), where σ is a species absorption cross section, independent of η, the species column amount along the path) at any but the finest spectral resolution was already well known. Band models to describe this gross behavior were developed in the 1950s and 60s. Thus began LOWTRAN, the Low Resolution Transmittance Code, first released in 1972. This limited initial effort has now progressed to a set of codes and related algorithms (including line-of-sight spectral geometry, direct and scattered radiance and irradiance, non-local thermodynamic equilibrium, etc.) that contain thousands of coding lines, hundreds of subroutines, and improved accuracy, efficiency, and, ultimately, accessibility. This review will include LOWTRAN, HITRAN (atlas of high-resolution molecular spectroscopic data), FASCODE (Fast Atmospheric Signature Code), and MODTRAN (Moderate Resolution Transmittance Code), their permutations, validations, and applications, particularly as related to passive remote sensing and energy deposition.
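The failure of the Beer-Lambert law under spectral averaging, noted above, is easy to reproduce: averaging exp(−σ·η) over many lines with different cross sections does not give an exponential in η. A toy demonstration with made-up line strengths:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)  # toy line strengths

def band_avg_T(eta):
    """Spectrally averaged transmittance over a band of many lines."""
    return np.exp(-sigma * eta).mean()

# Beer-Lambert would require T(2*eta) == T(eta)**2; the band average
# violates this, which is what band models were built to capture.
eta = 1.0
print(band_avg_T(2 * eta), band_avg_T(eta) ** 2)  # unequal
```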
Utilizing Spectrum Efficiently (USE)
2011-02-28
Space-Time Coded Asynchronous DS-CDMA with Decentralized MAI Suppression: Performance and Spectral Efficiency. In [60], the number of users that can be supported at a given signal-to-interference ratio in asynchronous direct-sequence code-division multiple-access (DS-CDMA) systems was examined.
Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.
2013-12-01
Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third-party library to provide hydrologic flow, energy transport, and biogeochemical capability to the community land model, CLM, part of the open-source community earth system model (CESM) for climate. In this presentation, the advantages and disadvantages of open source software development in support of geoscience research at government laboratories, universities, and the private sector are discussed. Since the code is open-source (i.e. it's transparent and readily available to competitors), the PFLOTRAN team's development strategy within a competitive research environment is presented. Finally, the developers discuss their approach to object-oriented programming and the leveraging of modern Fortran in support of collaborative geoscience research as the Fortran standard evolves among compiler vendors.
Bar Coding and Tracking in Pathology.
Hanna, Matthew G; Pantanowitz, Liron
2016-03-01
Bar coding and specimen tracking are intricately linked to pathology workflow and efficiency. In the pathology laboratory, bar coding facilitates many laboratory practices, including specimen tracking, automation, and quality management. Data obtained from bar coding can be used to identify, locate, standardize, and audit specimens to achieve maximal laboratory efficiency and patient safety. Variables that need to be considered when implementing and maintaining a bar coding and tracking system include assets to be labeled, bar code symbologies, hardware, software, workflow, and laboratory and information technology infrastructure as well as interoperability with the laboratory information system. This article addresses these issues, primarily focusing on surgical pathology. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kumar, Santosh; Chanderkanta; Amphawan, Angela
2016-04-01
Excess-3 code is one of the most important codes used for efficient data storage and transmission. It is a non-weighted, self-complementing code. In this paper, a four-bit optical Excess-3-to-BCD code converter is proposed using the electro-optic effect inside lithium-niobate-based Mach-Zehnder interferometers (MZIs). The MZI structure has a powerful capability to switch an optical input signal to a desired output port. The paper presents a mathematical description of the proposed device, followed by simulation in MATLAB. The study is verified using the beam propagation method (BPM).
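Since each Excess-3 nibble is the corresponding decimal digit plus three, the conversion the device performs optically amounts to subtracting 0b0011 from each 4-bit group. A software stand-in for reference (a hypothetical helper, not from the paper):

```python
def excess3_to_bcd(bits):
    """Convert an Excess-3 bit string (4 bits per digit) to BCD.

    Each Excess-3 nibble is the decimal digit plus 3, so subtracting
    3 per nibble recovers the BCD code.
    """
    assert len(bits) % 4 == 0
    out = []
    for i in range(0, len(bits), 4):
        digit = int(bits[i:i + 4], 2) - 3
        assert 0 <= digit <= 9, "invalid Excess-3 group"
        out.append(format(digit, "04b"))
    return "".join(out)

print(excess3_to_bcd("01011100"))  # Excess-3 for 29 -> "00101001" (BCD)
```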
Simulation of neutron production using MCNPX+MCUNED.
Erhard, M; Sauvan, P; Nolte, R
2014-10-01
In standard MCNPX, the production of neutrons by ions cannot be modelled efficiently. The MCUNED patch applied to MCNPX 2.7.0 makes it possible to model the production of neutrons by light ions down to energies of a few kiloelectron volts. This is crucial for the simulation of neutron reference fields. The influence of target properties, such as the diffusion of reactive isotopes into the target backing or the effect of energy and angular straggling, can be studied efficiently. In this work, MCNPX/MCUNED calculations are compared with results obtained with the TARGET code for simulating neutron production. Furthermore, MCUNED incorporates more effective variance reduction techniques and a coincidence counting tally. This allows the simulation of a TCAP experiment being developed at PTB, in which 14.7-MeV neutrons will be produced by the reaction T(d,n)4He and the neutron fluence is determined by counting alpha particles, independently of the reaction cross section. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
Rice, R. F.
1974-01-01
End-to-end system considerations involving channel coding and data compression which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft are presented.
NASA Astrophysics Data System (ADS)
KIM, Jong Woon; LEE, Young-Ouk
2017-09-01
As computing power improves, computer codes that use a deterministic method may seem less attractive than those using the Monte Carlo method; moreover, users prefer not to think about the space, angle, and energy discretization that deterministic codes require. However, a deterministic method is still powerful in that it yields a solution for the flux throughout the problem domain, particularly when particles can barely penetrate, as in a deep-penetration problem with small detection volumes. Recently, a new state-of-the-art discrete-ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capability to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete-ordinates code that uses an unstructured tetrahedral mesh, like ATTILA. For pre- and post-processing, Gmsh is used to generate the unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep-penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.
NASA Astrophysics Data System (ADS)
Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong
2017-07-01
The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity in order to improve coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Given the direct proportionality between encoding time and computational complexity, computational complexity is measured in terms of encoding time. First, the complexity target is mapped to a set of prediction modes. Then, an adaptive mode selection algorithm is applied in the mode decision process. Specifically, an optimal mode combination scheme, chosen through offline statistics, is used at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.
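As a schematic of budget-driven mode selection of the kind described above (not the paper's actual mode tables or controller), each coding unit can pick the richest mode set whose estimated cost still fits the remaining per-unit budget:

```python
def pick_mode_set(target_time, spent_time, ctus_left, mode_tiers):
    """Schematic per-CTU mode selection under a complexity budget.

    mode_tiers: list of (est_cost, modes) from cheapest to richest;
    the names and costs are illustrative, not the paper's tables.
    """
    if ctus_left == 0:
        return mode_tiers[0][1]
    per_ctu_budget = (target_time - spent_time) / ctus_left
    chosen = mode_tiers[0][1]
    for est_cost, modes in mode_tiers:
        if est_cost <= per_ctu_budget:
            chosen = modes          # richest tier still inside budget
    return chosen

tiers = [(1.0, ["MERGE"]),
         (2.5, ["MERGE", "2Nx2N"]),
         (6.0, ["MERGE", "2Nx2N", "NxN", "INTRA"])]
print(pick_mode_set(target_time=500.0, spent_time=200.0,
                    ctus_left=100, mode_tiers=tiers))
```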
Simulation of radiation energy release in air showers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glaser, Christian; Erdmann, Martin; Hörandel, Jörg R.
2016-09-01
A simulation study of the energy released by extensive air showers in the form of MHz radiation is performed using the CoREAS simulation code. We develop an efficient method to extract this radiation energy from air-shower simulations. We determine the longitudinal profile of the radiation energy release and compare it to the longitudinal profile of the energy deposit by the electromagnetic component of the air shower. We find that the radiation energy corrected for the geometric dependence of the geomagnetic emission scales quadratically with the energy in the electromagnetic component of the air shower, with a second-order dependence on the atmospheric density at the position of the maximum shower development X_max. In a measurement where X_max is not accessible, this second-order dependence can be approximated using the zenith angle of the incoming direction of the air shower with only a minor loss in accuracy. Our method results in an intrinsic uncertainty of 4% in the determination of the energy in the electromagnetic air-shower component, which is well below current experimental uncertainties.
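Given the quadratic scaling reported above, a radio measurement of the corrected radiation energy can be inverted for the electromagnetic shower energy. The sketch below uses the functional form E_rad = A·(sin α · E_em / 1 EeV)²; the constant A = 15.8 MeV is quoted in related Auger radio work and is used here purely as an illustrative value, not as a result of this paper:

```python
import numpy as np

def em_energy_from_radiation(E_rad_MeV, sin_alpha, A=15.8, E0_EeV=1.0):
    """Invert E_rad = A * (sin(alpha) * E_em / E0)^2 for E_em [EeV].

    A (in MeV) and the reference energy E0 follow the quadratic form
    described in the abstract; A = 15.8 MeV is an illustrative value
    taken from related Auger radio work, not from this paper.
    """
    return E0_EeV * np.sqrt(E_rad_MeV / A) / sin_alpha

print(em_energy_from_radiation(15.8, 1.0))  # 1 EeV by construction
```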
Theoretical Thermal Evaluation of Energy Recovery Incinerators
1985-12-01
Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M
2004-10-01
The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the role of documentation and coding as factors in economic success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301), to find operative strategies to improve efficiency, and to identify strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16% of cases. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow for medical documentation, coding, and data control was developed. A workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.
NASA Technical Reports Server (NTRS)
Lin, Shu (Principal Investigator); Uehara, Gregory T.; Nakamura, Eric; Chu, Cecilia W. P.
1996-01-01
The (64, 40, 8) subcode of the third-order Reed-Muller (RM) code is proposed for high-speed satellite communications. The RM subcode can be used either alone or as the inner code of a concatenated coding system, with the NASA standard (255, 223, 33) Reed-Solomon (RS) code as the outer code, to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth-efficient coded modulation system to achieve reliable bandwidth-efficient data transmission. The progress made toward achieving the goal of implementing a decoder system based upon this code is summarized. The development of the integrated circuit prototype sub-trellis IC is addressed, particularly focusing on the design methodology.
Sun, Xiaojing; Brown, Marilyn A.; Cox, Matt; ...
2015-03-11
This paper provides a global overview of the design, implementation, and evolution of building energy codes. Reflecting alternative policy goals, building energy codes differ significantly across the United States, the European Union, and China. This review uncovers numerous innovative practices, including greenhouse gas emissions caps per square meter of building space, energy performance certificates with retrofit recommendations, and inclusion of renewable energy to achieve "nearly zero-energy buildings". These innovations motivated an assessment of an aggressive commercial building code applied to all US states, requiring both new construction and buildings with major modifications to comply with the latest version of the ASHRAE 90.1 Standards. Using the National Energy Modeling System (NEMS), we estimate that by 2035, such building codes in the United States could reduce energy for space heating, cooling, water heating, and lighting in commercial buildings by 16%, 15%, 20%, and 5%, respectively. Impacts on different fuels and building types, energy rates and bills, as well as pollution emission reductions are also examined.
Fusion energy with lasers, direct drive targets, and dry wall chambers
NASA Astrophysics Data System (ADS)
Sethian, J. D.; Friedman, M.; Lehmberg, R. H.; Myers, M.; Obenschain, S. P.; Giuliani, J.; Kepple, P.; Schmitt, A. J.; Colombant, D.; Gardner, J.; Hegeler, F.; Wolford, M.; Swanekamp, S. B.; Weidenheimer, D.; Welch, D.; Rose, D.; Payne, S.; Bibeau, C.; Baraymian, A.; Beach, R.; Schaffers, K.; Freitas, B.; Skulina, K.; Meier, W.; Latkowski, J.; Perkins, L. J.; Goodin, D.; Petzoldt, R.; Stephens, E.; Najmabadi, F.; Tillack, M.; Raffray, R.; Dragojlovic, Z.; Haynes, D.; Peterson, R.; Kulcinski, G.; Hoffer, J.; Geller, D.; Schroen, D.; Streit, J.; Olson, C.; Tanaka, T.; Renk, T.; Rochau, G.; Snead, L.; Ghoneim, N.; Lucas, G.
2003-12-01
A coordinated, focused effort is underway to develop Laser Inertial Fusion Energy. The key components are developed in concert with one another and the science and engineering issues are addressed concurrently. Recent advances include target designs showing that it could be possible to achieve the high gains (>100) needed for a practical fusion system. These designs feature a low-density CH foam that is wicked with solid DT and over-coated with a thin high-Z layer. These results have been verified with three independent one-dimensional codes and are now being evaluated with two- and three-dimensional codes. Two types of lasers are under development: Krypton Fluoride (KrF) gas lasers and Diode-Pumped Solid State Lasers (DPSSL). Both have recently achieved repetitive 'first light', and both have made progress in meeting the fusion energy requirements for durability, efficiency, and cost. This paper also presents advances in the development of chamber operating windows (target survival plus no wall erosion), final optics (aluminium at grazing incidence has high reflectivity and exceeds the required laser damage threshold), target fabrication (demonstration of smooth DT ice layers grown over foams, batch production of foam shells, and appropriate high-Z overcoats), and target injection (a new facility for target injection and tracking studies).
NASA Astrophysics Data System (ADS)
Andre, R.; Carlsson, J.; Gorelenkova, M.; Jardin, S.; Kaye, S.; Poli, F.; Yuan, X.
2016-10-01
TRANSP is an integrated interpretive and predictive transport analysis tool that incorporates state-of-the-art heating/current drive sources and transport models. The treatments and transport solvers are becoming increasingly sophisticated and comprehensive. For instance, the ISOLVER component provides a free-boundary equilibrium solution, while the PT-SOLVER transport solver is especially suited for stiff transport models such as TGLF. TRANSP incorporates high-fidelity heating and current drive source models, such as NUBEAM for neutral beam injection, the beam tracing code TORBEAM and the ray tracing codes TORAY and GENRAY for EC, and TORIC for ICRF. The implementation of selected components makes efficient use of MPI to speed up code calculations. Recently the GENRAY-CQL3D solver for modeling LH heating and current drive has been implemented and is currently being extended to multiple antennas, to allow modeling of EAST discharges. GENRAY+CQL3D is also being extended to EC/EBW and to HHFW for NSTX-U. This poster will describe present uses of the code worldwide, as well as plans for upgrading the physics modules and code framework. Work supported by the US Department of Energy under DE-AC02-09CH11466.
Addressing the challenges of standalone multi-core simulations in molecular dynamics
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Terblans, J. J.
2017-07-01
Computational modelling in materials science involves mathematical abstractions of force fields between particles with the aim to postulate, develop and understand materials by simulation. The aggregated pairwise interactions of the material's particles lead to a deduction of its macroscopic behaviours. For practically meaningful macroscopic scales, a large amount of data is generated, leading to vast execution times. Simulation times of hours, days or weeks for moderately sized problems are not uncommon. The reduction of simulation times, improved result accuracy and the associated software and hardware engineering challenges are the main motivations for much of the ongoing research in the computational sciences. This contribution is concerned mainly with simulations that can be done on a "standalone" computer using Message Passing Interface (MPI) parallel code running on hardware platforms with a wide range of specifications, such as single/multi-processor, multi-core machines, with minimal reconfiguration for upward scaling of computational power. The widely available, documented and standardized MPI library provides this functionality through the MPI_Comm_size(), MPI_Comm_rank() and MPI_Reduce() functions. A survey of the literature shows that relatively little is written on the efficient extraction of the inherent computational power in a cluster. In this work, we discuss the main avenues available to tap into this extra power without compromising computational accuracy. We also present methods to overcome the high inertia encountered in single-node-based computational molecular dynamics. We begin by surveying the current state of the art and discuss what it takes to achieve parallelism, efficiency and enhanced computational accuracy through program threads and message passing interfaces. Several code illustrations are given. The pros and cons of writing raw code as opposed to using heuristic, third-party code are also discussed. The growing trend towards graphics processing units and virtual computing clouds for high-performance computing is also discussed. Finally, we present comparative results of vacancy formation energy calculations using our own parallelized standalone code, called Verlet-Stormer velocity (VSV), operating on 30,000 copper atoms. The code is based on the Sutton-Chen implementation of the Finnis-Sinclair pairwise embedded atom potential. A link to the code is also given.
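As a concrete illustration of the MPI pattern the abstract names, here is a minimal sketch that splits a pairwise energy sum across ranks and combines the partial results with MPI_Reduce(). The pair_energy() function is a hypothetical stand-in for a real potential such as the Sutton-Chen form used in the authors' VSV code; everything else uses only standard MPI calls.

```c
/* Minimal sketch: distributing a pairwise-energy sum over MPI ranks.
   Illustrative only; pair_energy() is a toy stand-in for a real potential. */
#include <mpi.h>
#include <stdio.h>

#define N_ATOMS 30000

static double pair_energy(int i, int j) {
    /* Stand-in for a real pair potential (e.g., Sutton-Chen). */
    return 1.0 / (1.0 + (double)(i + j));
}

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

    /* Each rank sums a strided slice of the i<j pair loop. */
    double local = 0.0;
    for (int i = rank; i < N_ATOMS; i += size)
        for (int j = i + 1; j < N_ATOMS; j++)
            local += pair_energy(i, j);

    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("total pair energy = %g\n", total);
    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and launched with mpirun -np N, each rank handles a strided slice of the pair loop, so the work per process shrinks roughly linearly with the process count.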
NASA Astrophysics Data System (ADS)
da Silva, Thaísa Leal; Agostini, Luciano Volcan; da Silva Cruz, Luis A.
2014-05-01
Intra prediction is a very important tool in current video coding standards. High-efficiency video coding (HEVC) intra prediction delivers relevant gains in encoding efficiency compared to previous standards, but at a substantial increase in computational complexity, since 33 directional angular modes must be evaluated. Motivated by this high complexity, this article presents an algorithm developed to reduce the complexity of the HEVC intra mode decision, targeting multiview videos. The proposed algorithm provides an efficient fast intra prediction compliant with both single-view and multiview video encoding. This fast solution defines a reduced subset of intra directions according to the video texture and exploits the relationship between prediction units (PUs) of neighboring depth levels of the coding tree. This fast intra coding procedure is used to develop an inter-view prediction method, which exploits the relationship between the intra mode directions of adjacent views to further accelerate the intra prediction process in multiview video encoding applications. When compared to HEVC simulcast, our method achieves a complexity reduction of up to 47.77%, at the cost of an average BD-PSNR loss of 0.08 dB.
Evaluation of the efficiency and fault density of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1993-01-01
Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. Software development requires the generation of a considerable amount of code. The engineers who write the code make mistakes, and generating a large body of code with high reliability takes considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while others allow checking of individual modules and combined sets of modules as well. Considering NASA's reliability requirements, in-house verification of these claims is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed; in-house verification of this claim is likewise warranted.
SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.
Liu, T; Ding, A; Xu, X
2012-06-01
To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used, containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the GPU MC code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
The random energy model in a magnetic field and joint source channel coding
NASA Astrophysics Data System (ADS)
Merhav, Neri
2008-09-01
We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.
NASA Astrophysics Data System (ADS)
Wang, Yue; Shen, Xiao-Liang; Zheng, Rui-Lin; Guo, Hai-Tao; Lv, Peng; Liu, Chun-Xiao
2018-01-01
Ion implantation has been demonstrated to be an efficient and reliable technique for the fabrication of optical waveguides in a diversity of transparent materials. Photo-thermal-refractive (PTR) glass is considered to be a durable and stable holographic recording medium. Optical planar waveguide structures in PTR glasses were formed, for the first time to our knowledge, by C3+-ion implantation at a single energy (6.0 MeV) and at double energies (5.5 + 6.0 MeV), respectively. The carbon ion implantation process was simulated by the Stopping and Range of Ions in Matter (SRIM) code. The morphologies of the waveguides were recorded by a microscope operating in transmission mode. The guided beam distributions of the waveguides were measured by the end-face coupling technique. Compared with the single-energy implantation, the double-energy implantation improves the light confinement of the dark-mode spectrum. The guiding properties suggest that the carbon-implanted PTR glass waveguides have potential for the manufacture of photonic devices.
Elastic and inelastic scattering of neutrons from 56Fe
NASA Astrophysics Data System (ADS)
Ramirez, Anthony Paul; McEllistrem, M. T.; Liu, S. H.; Mukhopadhyay, S.; Peters, E. E.; Yates, S. W.; Vanhoy, J. R.; Harrison, T. D.; Rice, B. G.; Thompson, B. K.; Hicks, S. F.; Howard, T. J.; Jackson, D. T.; Lenzen, P. D.; Nguyen, T. D.; Pecha, R. L.
2015-10-01
The differential cross sections for elastically and inelastically scattered neutrons from 56Fe have been measured at the University of Kentucky Accelerator Laboratory (www.pa.uky.edu/accelerator) for incident neutron energies between 2.0 and 8.0 MeV and for the angular range 30° to 150°. Time-of-flight techniques and pulse-shape discrimination were employed to enhance the neutron energy spectra and to reduce background. An overview of the experimental procedures and of the data analysis for the conversion of neutron yields to differential cross sections will be presented, including the determination of the energy-dependent detection efficiencies, the normalization of the measured differential cross sections, and the attenuation and multiple-scattering corrections. Our results will also be compared to evaluated cross section databases and to reaction model calculations using the TALYS code. This work is supported by grants from the U.S. Department of Energy-Nuclear Energy Universities Program: NU-12-KY-UK-0201-05, and the Donald A. Cowan Physics Institute at the University of Dallas.
NASA Astrophysics Data System (ADS)
Gorelenkov, Nikolai; Duarte, Vinicius; Podesta, Mario
2017-10-01
The performance of a burning plasma can be limited by the requirement to confine the super-Alfvénic fusion products, which are capable of resonating with Alfvénic eigenmodes (AEs). The effect of AEs on fast ions is evaluated using the quasi-linear approach [Berk et al., Phys. Plasmas '96] generalized for this problem recently [Duarte et al., Ph.D.'17]. The generalization involves resonance-line-broadened interaction regions with a prescribed diffusion coefficient used to find the evolution of the velocity distribution function. The baseline eigenmode structures are found perturbatively using the NOVA-K code [Gorelenkov et al., Phys. Plasmas '99]. An RBQ1D code allowing diffusion in the radial direction is presented here. The wave-particle interaction can be reduced to one-dimensional dynamics where, for Alfvénic modes, the particle kinetic energy is typically nearly constant. Hence, to a good approximation, the quasi-linear (QL) diffusion equation only contains derivatives in the angular momentum. The diffusion equation is then one-dimensional and is solved efficiently and simultaneously for all particles, together with the equation for the evolution of the wave angular momentum. RBQ1D is validated against recent DIII-D results [Collins et al., PRL '16]. Supported by the US Department of Energy under DE-AC02-09CH11466.
Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability
NASA Astrophysics Data System (ADS)
Guruvareddiar, Palanivel; Joseph, Biju K.
2014-03-01
Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multiview coding. Though these prediction structures, along with QP cascading schemes, provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptation and improved error resilience, but lacks compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream can be realized with minimal reduction in compression efficiency when compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BD-PSNR reduction of only 0.34 dB. A novel method is also proposed for identifying the temporal identifier of legacy H.264/AVC base-layer packets. Simulation results also show that this enables scenarios where the enhancement views are extracted at a lower frame rate (1/2 or 1/4 of the base view), with an average extraction time per view component of only 0.38 ms.
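For readers unfamiliar with temporal identifiers, the sketch below shows the standard dyadic assignment for a hierarchical GOP. It is our illustration of the general idea, not the paper's specific identification method for legacy H.264/AVC packets.

```c
/* Sketch: deriving a temporal identifier for a dyadic hierarchical GOP.
   This is the standard dyadic rule, shown for illustration only. */
#include <stdio.h>

/* gop_size must be a power of two (e.g., 8 -> temporal levels 0..3). */
static int temporal_id(int poc, int gop_size) {
    if (poc % gop_size == 0) return 0;      /* key pictures: lowest layer */
    int tid = 0, step = gop_size;
    while (poc % step != 0) { step >>= 1; tid++; }
    return tid;
}

int main(void) {
    for (int poc = 0; poc < 9; poc++)       /* POC = picture order count */
        printf("POC %d -> T%d\n", poc, temporal_id(poc, 8));
    return 0;
}
```

Dropping every frame in the highest temporal layer halves the frame rate, and dropping the top two layers quarters it, which is exactly the kind of extraction a fully scalable bit stream enables.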
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-10
.... The IgCC is intended to provide green model building code provisions for new and existing commercial... DEPARTMENT OF ENERGY 10 CFR Part 430 [Docket No. EERE-2011-BT-BC-0009] Building Energy Codes Program: Presenting and Receiving Comments to DOE Proposed Changes to the International Green Construction...
Development of new methodologies for evaluating the energy performance of new commercial buildings
NASA Astrophysics Data System (ADS)
Song, Suwon
The concept of Measurement and Verification (M&V) of a new building continues to become more important because efficient design alone is often not sufficient to deliver an efficient building. Simulation models that are calibrated to measured data can be used to evaluate the energy performance of new buildings if they are compared to energy baselines such as similar buildings, energy codes, and design standards. Unfortunately, there is a lack of detailed M&V methods and analysis methods to measure energy savings from new buildings that would have hypothetical energy baselines. Therefore, this study developed and demonstrated several new methodologies for evaluating the energy performance of new commercial buildings using a case-study building in Austin, Texas. First, three new M&V methods were developed to enhance the previous generic M&V framework for new buildings, including: (1) The development of a method to synthesize weather-normalized cooling energy use from a correlation of Motor Control Center (MCC) electricity use when chilled water use is unavailable, (2) The development of an improved method to analyze measured solar transmittance against incidence angle for sample glazing using different solar sensor types, including Eppley PSP and Li-Cor sensors, and (3) The development of an improved method to analyze chiller efficiency and operation at part-load conditions. Second, three new calibration methods were developed and analyzed, including: (1) A new percentile analysis added to the previous signature method for use with a DOE-2 calibration, (2) A new analysis to account for undocumented exhaust air in DOE-2 calibration, and (3) An analysis of the impact of synthesized direct normal solar radiation using the Erbs correlation on DOE-2 simulation. Third, an analysis of the actual energy savings compared to three different energy baselines was performed, including: (1) Energy Use Index (EUI) comparisons with sub-metered data, (2) New comparisons against Standards 90.1-1989 and 90.1-2001, and (3) A new evaluation of the performance of selected Energy Conservation Design Measures (ECDMs). Finally, potential energy savings were also simulated from selected improvements, including: minimum supply air flow, undocumented exhaust air, and daylighting.
Massively parallel multicanonical simulations
NASA Astrophysics Data System (ADS)
Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard
2018-03-01
Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, however, they are inherently serial. It was demonstrated recently that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
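To make the parallel scheme concrete, here is a minimal sketch (ours, with toy histograms standing in for real sampling) of the merge-then-update step: independent walkers accumulate energy histograms, the histograms are summed, and the standard multicanonical iteration ln W ← ln W − ln H is applied to the combined counts.

```c
/* Sketch: merging per-walker energy histograms before a multicanonical
   weight update. Binning and walker counts are toy values. */
#include <stdio.h>
#include <math.h>

#define NBINS 16
#define NWALKERS 4

int main(void) {
    double logW[NBINS] = {0};                /* current log-weights */
    long   H[NWALKERS][NBINS], Hsum[NBINS];

    /* Stand-in histograms; a real run fills these from MC sampling. */
    for (int w = 0; w < NWALKERS; w++)
        for (int b = 0; b < NBINS; b++)
            H[w][b] = 1 + (w + b) % 5;

    /* Merge across walkers (on a GPU this is a parallel reduction). */
    for (int b = 0; b < NBINS; b++) {
        Hsum[b] = 0;
        for (int w = 0; w < NWALKERS; w++) Hsum[b] += H[w][b];
    }

    /* Multicanonical update: flatten the combined histogram. */
    for (int b = 0; b < NBINS; b++)
        if (Hsum[b] > 0) logW[b] -= log((double)Hsum[b]);

    for (int b = 0; b < NBINS; b++) printf("bin %d: %.3f\n", b, logW[b]);
    return 0;
}
```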
Software Tools for Stochastic Simulations of Turbulence
2015-08-28
... client interface to FTI. Specific client programs using this interface include the weather forecasting code WRF; the high-energy physics code FLASH; and two locally constructed fluid codes.
Cai, Yao; Hu, Huasi; Pan, Ziheng; Hu, Guang; Zhang, Tao
2018-05-17
To optimize compact and lightweight shielding against neutrons and gamma rays, a method combining the structure and components together was established, employing genetic algorithms and the MCNP code. As a typical case, the fission energy spectrum of 235U, which contains both neutrons and gamma rays, was adopted in this study. Six types of materials were presented and optimized by the method. Spherical geometry was adopted in the optimization after checking the geometry effect. Simulations were performed to verify the reliability of the optimization method and the efficiency of the optimized materials. To compare the materials visually and conveniently, the volume and weight needed to build a shield are employed. The results showed that the composite multilayer material has the best performance.
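As an illustration of the optimization loop described above, the sketch below runs a small genetic algorithm over layer thicknesses. The fitness function is a toy attenuation-minus-weight proxy standing in for the MCNP transport runs the authors couple to; all constants are illustrative.

```c
/* Sketch: a genetic algorithm over shield layer thicknesses with a toy
   fitness. A real evaluation would launch an MCNP run per candidate. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define POP 20
#define NLAYERS 6
#define GENS 200

static double frand(void) { return rand() / (double)RAND_MAX; }

/* Toy fitness: reward attenuation (sum of mu_i * t_i), penalize weight. */
static double fitness(const double t[NLAYERS]) {
    static const double mu[NLAYERS]  = {0.8, 0.5, 1.1, 0.3, 0.9, 0.6};
    static const double rho[NLAYERS] = {2.3, 7.8, 1.0, 11.3, 2.7, 1.9};
    double atten = 0, weight = 0;
    for (int i = 0; i < NLAYERS; i++) { atten += mu[i]*t[i]; weight += rho[i]*t[i]; }
    return atten - 0.2 * weight;
}

int main(void) {
    double pop[POP][NLAYERS];
    for (int p = 0; p < POP; p++)
        for (int i = 0; i < NLAYERS; i++) pop[p][i] = 5.0 * frand();

    for (int g = 0; g < GENS; g++) {
        double next[POP][NLAYERS];
        for (int p = 0; p < POP; p++) {
            /* Tournament selection of two parents. */
            int a = rand()%POP, b = rand()%POP, c = rand()%POP, d = rand()%POP;
            const double *p1 = fitness(pop[a]) > fitness(pop[b]) ? pop[a] : pop[b];
            const double *p2 = fitness(pop[c]) > fitness(pop[d]) ? pop[c] : pop[d];
            /* Single-point crossover plus small, bounded mutation. */
            int cut = rand() % NLAYERS;
            for (int i = 0; i < NLAYERS; i++) {
                next[p][i] = (i < cut ? p1[i] : p2[i]);
                if (frand() < 0.05)
                    next[p][i] = fmin(5.0, fmax(0.0, next[p][i] + 0.5*(frand()-0.5)));
            }
        }
        for (int p = 0; p < POP; p++)
            for (int i = 0; i < NLAYERS; i++) pop[p][i] = next[p][i];
    }

    int best = 0;
    for (int p = 1; p < POP; p++) if (fitness(pop[p]) > fitness(pop[best])) best = p;
    printf("best fitness %.3f\n", fitness(pop[best]));
    return 0;
}
```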
A digital communications system for manned spaceflight applications.
NASA Technical Reports Server (NTRS)
Batson, B. H.; Moorehead, R. W.
1973-01-01
A highly efficient, all-digital communications signal design employing convolutional coding and PN spectrum spreading is described for two-way transmission of voice and data between a manned spacecraft and ground. Variable-slope delta modulation is selected for analog/digital conversion of the voice signal, and a convolutional decoder utilizing the Viterbi decoding algorithm is selected for use at each receiving terminal. A PN spread spectrum technique is implemented to protect against multipath effects and to reduce the energy density (per unit bandwidth) impinging on the earth's surface to a value within the guidelines adopted by international agreement. Performance predictions are presented for transmission via a TDRS (tracking and data relay satellite) system and for direct transmission between the spacecraft and earth. Hardware estimates are provided for a flight-qualified communications system employing the coded digital signal design.
WEC3: Wave Energy Converter Code Comparison Project: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien
This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases: Phase I consists of code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency-domain modelling tools were not included in the WEC3 project.
Coding efficiency of AVS 2.0 for CBAC and CABAC engines
NASA Astrophysics Data System (ADS)
Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik
2015-12-01
In this paper we compare the coding efficiency of the Context-based Binary Arithmetic Coding (CBAC)[2] engine in AVS 2.0[1] and the Context-Adaptive Binary Arithmetic Coder (CABAC)[3] engine in HEVC[4]. For a fair comparison, the CABAC is embedded in the AVS 2.0 reference code RD10.1, since the CBAC was embedded in the HEVC reference code in our previous work[5]. In the RD code, the rate estimation table is employed only for RDOQ. To reduce the computational complexity of the video encoder, we therefore modified the RD code so that the rate estimation table is employed for all RDO decisions. Furthermore, we simplify the rate estimation table by reducing the bit depth of its fractional part from 8 to 2. The simulation results show that the CABAC has a BD-rate loss of about 0.7% compared to the CBAC, suggesting that the CBAC is slightly more efficient than the CABAC in AVS 2.0.
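To show what a reduced-precision rate table looks like in practice, here is a minimal sketch with a 2-bit fractional part, i.e., rates stored in quarter-bit units as the paper describes. The table values themselves are hypothetical; a real encoder derives them from the arithmetic coder's probability states.

```c
/* Sketch: fixed-point rate lookup with a 2-bit fractional part.
   Table values are hypothetical stand-ins. */
#include <stdio.h>
#include <stdint.h>

#define FRAC_BITS 2                 /* rates stored in units of 2^-2 bit */

/* Hypothetical per-state rate costs, quantized to quarter bits. */
static const uint16_t rate_q2[4] = { 2 /*0.50*/, 3 /*0.75*/, 5 /*1.25*/, 9 /*2.25*/ };

/* Accumulate the cost of a string of bins entirely in integers; convert
   to bits only once, at the end of the RD decision. */
static double rd_bits(const int *states, int n) {
    uint32_t acc = 0;
    for (int i = 0; i < n; i++) acc += rate_q2[states[i]];
    return (double)acc / (1 << FRAC_BITS);
}

int main(void) {
    int states[] = { 0, 1, 3, 2, 1 };
    printf("estimated rate = %.2f bits\n", rd_bits(states, 5));
    return 0;
}
```

Keeping the accumulation in integers is what makes the coarser table cheap: the precision loss per bin is at most an eighth of a bit, which is why the observed BD-rate impact stays small.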
Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks
NASA Astrophysics Data System (ADS)
Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu
Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is actually a complex NP-complete problem. To address this limitation, we then propose an efficient heuristic algorithm for finding an approximately optimal solution of this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we then further propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
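The basic XOR retransmission idea the scheme builds on can be shown in a few lines. In this sketch (ours; packet contents are toy values), two receivers have each lost a different packet, and a single coded retransmission repairs both.

```c
/* Sketch: XOR-combining lost packets for a single multicast retransmission.
   Each receiver cancels the packets it already holds to recover its loss. */
#include <stdio.h>
#include <stdint.h>

#define PKT_LEN 8

/* XOR packets a and b into out. */
static void xor_pkt(uint8_t *out, const uint8_t *a, const uint8_t *b) {
    for (int i = 0; i < PKT_LEN; i++) out[i] = a[i] ^ b[i];
}

int main(void) {
    /* Two original packets; receiver A lost p1, receiver B lost p2. */
    uint8_t p1[PKT_LEN] = "hello.1", p2[PKT_LEN] = "hello.2", coded[PKT_LEN];

    xor_pkt(coded, p1, p2);          /* sender: one retransmission, p1^p2 */

    uint8_t rec[PKT_LEN];
    xor_pkt(rec, coded, p2);         /* receiver A holds p2: recovers p1  */
    printf("A recovered: %s\n", (char *)rec);
    xor_pkt(rec, coded, p1);         /* receiver B holds p1: recovers p2  */
    printf("B recovered: %s\n", (char *)rec);
    return 0;
}
```

The optimization problem the paper proves NP-complete is choosing which lost packets to combine so that every receiver can still cancel all but its own missing packet.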
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.; Athalye, Rahul A.
The US Department of Energy’s most recent commercial energy code compliance evaluation efforts focused on determining a percent compliance rating for states to help them meet requirements under the American Recovery and Reinvestment Act (ARRA) of 2009. That approach included a checklist of code requirements, each of which was graded pass or fail. Percent compliance for any given building was simply the percent of individual requirements that passed. With its binary approach to compliance determination, the previous methodology failed to answer some important questions. In particular, how much energy cost could be saved by better compliance with the commercial energy code, and what are the relative priorities of code requirements from an energy cost savings perspective? This paper explores an analytical approach and pilot study using a single building type and climate zone to answer those questions.
Extending Moore's Law via Computationally Error Tolerant Computing.
Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...
2018-03-01
Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Simulation results show that this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
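The closure property that makes RRNS attractive is easy to demonstrate. The sketch below (our illustration, with toy moduli) encodes integers as residues, performs addition and multiplication componentwise, and notes how a redundant modulus exposes a corrupted residue channel.

```c
/* Sketch: residue number system arithmetic with one redundant modulus.
   Moduli are toy choices; real RRNS designs pick them for range and cost. */
#include <stdio.h>

#define K 4
static const int m[K] = { 3, 5, 7, 11 };   /* last modulus is redundant */

static void encode(int x, int r[K]) {
    for (int i = 0; i < K; i++) r[i] = x % m[i];
}

/* Componentwise ops: no carries cross between residue channels. */
static void add(const int a[K], const int b[K], int c[K]) {
    for (int i = 0; i < K; i++) c[i] = (a[i] + b[i]) % m[i];
}
static void mul(const int a[K], const int b[K], int c[K]) {
    for (int i = 0; i < K; i++) c[i] = (a[i] * b[i]) % m[i];
}

int main(void) {
    int a[K], b[K], s[K], p[K], chk[K];
    encode(17, a); encode(6, b);
    add(a, b, s);  mul(a, b, p);
    encode(17 + 6, chk);               /* reference residues of the sum */
    for (int i = 0; i < K; i++)
        printf("mod %2d: sum %d (expect %d), product %d\n",
               m[i], s[i], chk[i], p[i]);
    /* Flipping one residue of s would now disagree with a re-encoding of
       23, which is how redundant residues expose an erroneous channel. */
    return 0;
}
```

Because each channel computes independently, a fault in one residue unit corrupts only that channel, and the redundant residues localize it, which is the basis of the computational error correction described above.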
A Lightweight Protocol for Secure Video Streaming.
Venčkauskas, Algimantas; Morkevicius, Nerijus; Bagdonas, Kazimieras; Damaševičius, Robertas; Maskeliūnas, Rytis
2018-05-14
The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point to point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing "Fog Node-End Device" layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard.
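As a sketch of the per-packet authentication step, the code below tags a payload with a truncated HMAC-SHA256 using OpenSSL's HMAC(). The frame layout (payload followed by an 8-byte tag) and the key handling are our assumptions for illustration, not the paper's exact packet format.

```c
/* Sketch: appending a truncated HMAC-SHA256 tag to a UDP-style payload.
   Frame layout and key handling are illustrative assumptions. */
#include <stdio.h>
#include <string.h>
#include <openssl/hmac.h>

#define TAG_LEN 8   /* truncated tag keeps per-packet overhead small */

static size_t tag_frame(const unsigned char *key, int keylen,
                        const unsigned char *payload, size_t n,
                        unsigned char *frame) {
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int maclen = 0;
    memcpy(frame, payload, n);
    HMAC(EVP_sha256(), key, keylen, payload, n, mac, &maclen);
    memcpy(frame + n, mac, TAG_LEN);        /* append truncated tag */
    return n + TAG_LEN;
}

int main(void) {
    const unsigned char key[] = "pre-shared-session-key";
    const unsigned char payload[] = "video-chunk-0001";
    unsigned char frame[256];
    size_t len = tag_frame(key, sizeof key - 1, payload,
                           sizeof payload - 1, frame);
    printf("frame of %zu bytes, first tag byte = %02x\n",
           len, frame[len - TAG_LEN]);
    return 0;
}
```

Link against -lcrypto; a real deployment would also verify the tag on receipt and establish the key per session, as the protocol's security model requires.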
Dynamic Geospatial Modeling of the Building Stock to Project Urban Energy Demand.
Breunig, Hanna Marie; Huntington, Tyler; Jin, Ling; Robinson, Alastair; Scown, Corinne Donahue
2018-06-26
In the United States, buildings account for more than 40 percent of total energy consumption, and the evolution of the urban form will impact the effectiveness of strategies to reduce energy use and mitigate emissions. This paper presents a broadly applicable approach for modeling future commercial, residential, and industrial floorspace, thermal consumption (heating and cooling), and associated GHG emissions at the tax assessor land parcel level. The approach accounts for changing building standards and retrofitting, climate change, and trends in housing and industry. We demonstrate the automated workflow for California, and project building stock, thermal energy consumption, and associated GHG emissions out to 2050. Our results suggest that if buildings in California have long lifespans and minimal energy efficiency improvements compared to building codes reflective of 2008, then the state will face a 20% or higher increase in thermal energy consumption by 2050. Baseline annual GHG emissions associated with thermal energy consumption in the modeled building stock in 2016 are 34% below 1990 levels (110 Mt CO2eq/y). While the 2020 targets for the reduction of GHG emissions set by California Senate Bill 350 have already been met, none of our scenarios achieve >80% reduction from 1990 levels by 2050, despite assuming an 86% reduction in electricity carbon intensity in our "Low Carbon" scenario. The results highlight the challenge California faces in meeting its new energy efficiency targets unless the State's building stock undergoes timely and strategic turnover, paired with deep retrofitting of existing buildings and natural gas equipment.
Application of multigrid methods to the solution of liquid crystal equations on a SIMD computer
NASA Technical Reports Server (NTRS)
Farrell, Paul A.; Ruttan, Arden; Zeller, Reinhardt R.
1993-01-01
We describe a finite difference code for computing the equilibrium configurations of the order-parameter tensor field for nematic liquid crystals in rectangular regions by minimization of the Landau-de Gennes free energy functional. The implementation of the free energy functional described here includes magnetic fields, quadratic gradient terms, and scalar bulk terms through the fourth order. Boundary conditions include the effects of strong surface anchoring. The target architectures for our implementation are SIMD machines with interconnection networks which can be configured as two- or three-dimensional grids, such as the Wavetracer DTC. We also discuss the relative efficiency of a number of iterative methods for the solution of the linear systems arising from this discretization on such architectures.
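The abstract does not write the functional out; for orientation, a standard Landau-de Gennes form consistent with its description (quadratic gradient term, bulk terms through fourth order, and a magnetic field coupling) is, in our notation rather than the paper's:

$$
F[Q]=\int_{\Omega}\Big[\tfrac{L}{2}\,|\nabla Q|^{2}
+\tfrac{A}{2}\,\operatorname{tr}(Q^{2})
-\tfrac{B}{3}\,\operatorname{tr}(Q^{3})
+\tfrac{C}{4}\,\big(\operatorname{tr}(Q^{2})\big)^{2}
-\tfrac{\Delta\chi}{2}\,\mathbf{H}\cdot Q\,\mathbf{H}\Big]\,d\mathbf{x},
$$

where Q is the order-parameter tensor, L the elastic constant, A, B, C the bulk coefficients, and Δχ the anisotropic magnetic susceptibility.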
DQE simulation of a-Se x-ray detectors using ARTEMIS
NASA Astrophysics Data System (ADS)
Fang, Yuan; Badano, Aldo
2016-03-01
Detective Quantum Efficiency (DQE) is one of the most important image quality metrics for evaluating the spatial resolution performance of flat-panel x-ray detectors. In this work, we simulate the DQE of amorphous selenium (a-Se) x-ray detectors with a detailed Monte Carlo transport code (ARTEMIS) for modeling semiconductor-based direct x-ray detectors. The transport of electron-hole pairs is achieved with a spatiotemporal model that accounts for recombination and trapping of carriers and Coulombic effects of space charge and the external applied electric field. X-ray energies from 10 to 100 keV have been simulated. The DQE results can be used to study the spatial resolution characteristics of detectors at different energies.
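For reference, the standard frequency-dependent definition of the DQE (our addition; the abstract does not restate it) is

$$
\mathrm{DQE}(f)=\frac{\mathrm{SNR}^{2}_{\mathrm{out}}(f)}{\mathrm{SNR}^{2}_{\mathrm{in}}(f)}
=\frac{\bar{d}^{\,2}\,\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NPS}(f)},
$$

where q̄ is the incident photon fluence, d̄ the mean detector signal, MTF the modulation transfer function, and NPS the noise power spectrum.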
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Papadakis, Michael
2005-01-01
Collection efficiency and ice accretion calculations have been made for a series of business jet horizontal tail configurations using a three-dimensional panel code, an adaptive grid code, and the NASA Glenn LEWICE3D grid based ice accretion code. The horizontal tail models included two full scale wing tips and a 25 percent scale model. Flow solutions for the horizontal tails were generated using the PMARC panel code. Grids used in the ice accretion calculations were generated using the adaptive grid code ICEGRID. The LEWICE3D grid based ice accretion program was used to calculate impingement efficiency and ice shapes. Ice shapes typifying rime and mixed icing conditions were generated for a 30 minute hold condition. All calculations were performed on an SGI Octane computer. The results have been compared to experimental flow and impingement data. In general, the calculated flow and collection efficiencies compared well with experiment, and the ice shapes appeared representative of the rime and mixed icing conditions for which they were calculated.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
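The accept/retransmit decision described above reduces to a short control flow. In this sketch (ours; both decoders are stubs standing in for a real inner error-correcting decoder and an outer error-detecting check), a block is delivered only if the inner decoding succeeds and the outer code detects no residual errors.

```c
/* Sketch of the concatenated ARQ decision: inner code corrects and
   detects, outer code only detects. Decoders are hypothetical stubs. */
#include <stdio.h>
#include <stdbool.h>

typedef struct { unsigned char data[64]; } Block;

/* Stubs: a real system would run, e.g., a trellis inner decoder and a
   CRC-like outer check. */
static bool inner_decode(Block *b) { (void)b; return true;  }
static bool outer_detects_error(const Block *b) { (void)b; return false; }

/* Returns true if the block is accepted, false if ARQ must retransmit. */
static bool accept_block(Block *b) {
    if (!inner_decode(b))       return false;   /* inner decoding failed */
    if (outer_detects_error(b)) return false;   /* residual errors found */
    return true;                                /* deliver to user       */
}

int main(void) {
    Block b = {{0}};
    printf(accept_block(&b) ? "accepted\n" : "retransmit requested\n");
    return 0;
}
```

Paired with selective-repeat ARQ, only the blocks that fail either check are retransmitted, which is what the throughput analysis in the paper quantifies.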
Iowa State University – Final Report for SciDAC3/NUCLEI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vary, James P
The Iowa State University (ISU) contributions to the NUCLEI project are focused on developing, implementing and running an efficient and scalable configuration interaction code (Many-Fermion Dynamics – nuclear or MFDn) for leadership class supercomputers addressing forefront research problems in low-energy nuclear physics. We investigate nuclear structure and reactions with realistic nucleon-nucleon (NN) and three-nucleon (3N) interactions. We select a few highlights from our work that has produced a total of more than 82 refereed publications and more than 109 invited talks under SciDAC3/NUCLEI.
Volkov basis for simulation of interaction of strong laser pulses and solids
NASA Astrophysics Data System (ADS)
Kidd, Daniel; Covington, Cody; Li, Yonghui; Varga, Kálmán
2018-01-01
An efficient and accurate basis comprised of Volkov states is implemented and tested for time-dependent simulations of interactions between strong laser pulses and crystalline solids. The Volkov states are eigenstates of the free electron Hamiltonian in an electromagnetic field and analytically represent the rapidly oscillating time-dependence of the orbitals, allowing significantly faster time propagation than conventional approaches. The Volkov approach can be readily implemented in plane-wave codes by multiplying the potential energy matrix elements with a simple time-dependent phase factor.
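For orientation, the rapidly oscillating phase that the Volkov basis captures analytically has the standard free-electron form (atomic units, velocity gauge; our notation, not the paper's):

$$
\psi^{V}_{\mathbf{k}}(\mathbf{r},t)=\exp\!\left(i\,\mathbf{k}\cdot\mathbf{r}
-\frac{i}{2}\int^{t}\!\big[\mathbf{k}+\mathbf{A}(t')\big]^{2}\,dt'\right),
$$

so expanding the orbitals in these states absorbs the fast time dependence into the basis itself, which is what permits the larger propagation steps described above.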
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Wornom, Stephen F.
1991-01-01
Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.
Building Standards and Codes for Energy Conservation
ERIC Educational Resources Information Center
Gross, James G.; Pierlert, James H.
1977-01-01
Current activity intended to lead to energy conservation measures in building codes and standards is reviewed by members of the Office of Building Standards and Codes Services of the National Bureau of Standards. For journal availability see HE 508 931. (LBH)
WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruehl, Kelley; Michelen, Carlos; Bosma, Bret
2016-08-01
The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation is necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makwana, K. D., E-mail: kirit.makwana@gmx.com; Cattaneo, F.; Zhdankin, V.
Simulations of decaying magnetohydrodynamic (MHD) turbulence are performed with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both the codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^(−1.3). The kinetic code shows a spectral slope of k⊥^(−1.5) for the smaller simulation domain, and k⊥^(−1.3) for the larger domain. We estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin-depth, irrespective of the grid resolution. This work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
Synaptic E-I Balance Underlies Efficient Neural Coding.
Zhou, Shanglin; Yu, Yuguo
2018-01-01
Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.