Opportunities for Computational Discovery in Basic Energy Sciences
NASA Astrophysics Data System (ADS)
Pederson, Mark
2011-03-01
An overview of the broad-ranging support of computational physics and computational science within the Department of Energy Office of Science will be provided. Computation as the third branch of physics is supported by all six offices (Advanced Scientific Computing, Basic Energy, Biological and Environmental, Fusion Energy, High-Energy Physics, and Nuclear Physics). Support focuses on hardware, software and applications. Most opportunities within the fields of condensed-matter physics, chemical physics and materials sciences are supported by the Office of Basic Energy Sciences (BES) or through partnerships between BES and the Office of Advanced Scientific Computing Research. Activities include radiation sciences, catalysis, combustion, materials in extreme environments, energy-storage materials, light-harvesting and photovoltaics, solid-state lighting and superconductivity. A summary of two recent reports by the computational materials and chemical communities on the role of computation during the next decade will be provided. In addition to materials and chemistry challenges specific to energy sciences, issues identified include a focus on the role of the domain scientist in integrating, expanding and sustaining applications-oriented capabilities on evolving high-performance computing platforms and on the role of computation in accelerating the development of innovative technologies.
Plotnikov, Nikolay V
2014-08-12
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at the reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed, but at a lower level of accuracy, from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and free-energy perturbation methods, which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of the high-accuracy free-energy surface are computed locally at the selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with the multistep linear response approximation method. This method is analytically shown to reproduce the results of the thermodynamic integration and free-energy interpolation methods, while being extremely simple to implement. Incorporating metadynamics sampling into the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing the full potential of mean force.
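The two averages at the heart of this positioning step, the Zwanzig free-energy perturbation formula and the linear response approximation (LRA), have standard textbook forms. The following is a minimal illustrative sketch in Python, not the authors' implementation; the synthetic energy gaps, temperature, and sample sizes are assumptions made purely for demonstration.

```python
import numpy as np

def fep_exponential_average(dE, kT=0.596):
    """Zwanzig free-energy perturbation: dG = -kT ln <exp(-dE/kT)>_ref,
    where dE are fine-minus-coarse energy gaps evaluated on reference-potential samples.
    kT defaults to roughly 0.596 kcal/mol (about 300 K)."""
    dE = np.asarray(dE)
    return -kT * np.log(np.mean(np.exp(-dE / kT)))

def lra_estimate(dE_ref, dE_target):
    """Linear response approximation: dG ~ 0.5*(<dE>_ref + <dE>_target),
    averaging the energy gap sampled on both end-state potentials."""
    return 0.5 * (np.mean(dE_ref) + np.mean(dE_target))

# Hypothetical energy gaps (kcal/mol) from coarse- and fine-physics sampling.
rng = np.random.default_rng(0)
gaps_coarse = rng.normal(2.0, 0.8, size=2000)  # sampled on a PM6/MM-like reference
gaps_fine = rng.normal(1.2, 0.8, size=2000)    # sampled on an ab initio-like target
print("FEP estimate:", fep_exponential_average(gaps_coarse))
print("LRA estimate:", lra_estimate(gaps_coarse, gaps_fine))
```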
Computing in high-energy physics
Mount, Richard P.
2016-05-31
I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.
Computing in high-energy physics
NASA Astrophysics Data System (ADS)
Mount, Richard P.
2016-04-01
I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.
America COMPETES Act and the FY2010 Budget
2009-06-29
Outstanding Junior Investigator, Fusion Energy Sciences Plasma Physics Junior Faculty Development; Advanced Scientific Computing Research Early Career...the Fusion Energy Sciences Graduate Fellowships.2 If members of Congress agree with this contention, these America COMPETES Act programs were...Physics Outstanding Junior Investigator, Fusion Energy Sciences Plasma Physics Junior Faculty Development; Advanced Scientific Computing Research Early
On-line computer system for use with low-energy nuclear physics experiments is reported
NASA Technical Reports Server (NTRS)
Gemmell, D. S.
1969-01-01
Computer program handles data from low-energy nuclear physics experiments which utilize the ND-160 pulse-height analyzer and the PHYLIS computing system. The program allows experimenters to choose from about 50 different basic data-handling functions and to prescribe the order in which these functions will be performed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hules, John
This 1998 annual report from the National Energy Research Scientific Computing Center (NERSC) presents the year in review of the following categories: Computational Science; Computer Science and Applied Mathematics; and Systems and Services. Also presented are science highlights in the following categories: Basic Energy Sciences; Biological and Environmental Research; Fusion Energy Sciences; High Energy and Nuclear Physics; and Advanced Scientific Computing Research and Other Projects.
Parallel Computing: Some Activities in High Energy Physics
NASA Astrophysics Data System (ADS)
Willers, Ian
This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.
Scholarly literature and the press: scientific impact and social perception of physics computing
NASA Astrophysics Data System (ADS)
Pia, M. G.; Basaglia, T.; Bell, Z. W.; Dressendorfer, P. V.
2014-06-01
The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the relationship between the scientific impact and the social perception of HEP research versus that of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing via press releases from the major HEP laboratories would be beneficial to the high energy physics community.
America COMPETES Act and the FY2010 Budget
2009-06-15
Outstanding Junior Investigator, Nuclear Physics Outstanding Junior Investigator, Fusion Energy Sciences Plasma Physics Junior Faculty Development...Spallation Neutron Source Instrumentation Fellowships, and the Fusion Energy Sciences Graduate Fellowships.2 If members of Congress agree with this...Nuclear Physics Outstanding Junior Investigator, Fusion Energy Sciences Plasma Physics Junior Faculty Development; Advanced Scientific Computing
Reversibility and energy dissipation in adiabatic superconductor logic.
Takeuchi, Naoki; Yamanashi, Yuki; Yoshikawa, Nobuyuki
2017-03-06
Reversible computing is considered to be a key technology to achieve an extremely high energy efficiency in future computers. In this study, we investigated the relationship between reversibility and energy dissipation in adiabatic superconductor logic. We analyzed the evolution of phase differences of Josephson junctions in the reversible quantum-flux-parametron (RQFP) gate and confirmed that the phase differences can change time reversibly, which indicates that the RQFP gate is physically, as well as logically, reversible. We calculated the energy dissipation required for the RQFP gate to perform a logic operation and numerically demonstrated that the energy dissipation can fall below the thermal limit, or the Landauer bound, by lowering operation frequencies. We also investigated the 1-bit-erasure gate as a logically irreversible gate and the quasi-RQFP gate as a physically irreversible gate. We calculated the energy dissipation of these irreversible gates and showed that the energy dissipation of these gates is dominated by non-adiabatic state changes, which are induced by unwanted interactions between gates due to logical or physical irreversibility. Our results show that, in reversible computing using adiabatic superconductor logic, logical and physical reversibility are required to achieve energy dissipation smaller than the Landauer bound without non-adiabatic processes caused by gate interactions.
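For orientation, the Landauer bound mentioned in this abstract is k_B·T·ln 2 of dissipation per erased bit. The short calculation below evaluates it at an assumed 4.2 K operating temperature (typical for superconductor logic, but not stated in the abstract) to show the energy scale that adiabatic reversible gates aim to undercut.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 4.2             # assumed operating temperature (liquid helium), K

landauer_bound = k_B * T * math.log(2)  # minimum dissipation per erased bit
print(f"Landauer bound at {T} K: {landauer_bound:.2e} J per bit")  # roughly 4e-23 J
```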
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farbin, Amir
2015-07-15
This is the final report for the DOE Early Career Research Program grant titled "Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics".
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnstad, H.
The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.
SciDAC GSEP: Gyrokinetic Simulation of Energetic Particle Turbulence and Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhihong
Energetic particle (EP) confinement is a key physics issue for the burning plasma experiment ITER, the crucial next step in the quest for clean and abundant energy, since ignition relies on self-heating by energetic fusion products (α-particles). Due to the strong coupling of EP with burning thermal plasmas, plasma confinement in the ignition regime is one of the most uncertain factors when extrapolating from existing fusion devices to the ITER tokamak. EP populations in current tokamaks are mostly produced by auxiliary heating such as neutral beam injection (NBI) and radio frequency (RF) heating. Remarkable progress in developing comprehensive EP simulation codes and understanding basic EP physics has been made by two concurrent SciDAC EP projects (GSEP) funded by the Department of Energy (DOE) Office of Fusion Energy Science (OFES), which have successfully established gyrokinetic turbulence simulation as a necessary paradigm shift for studying EP confinement in burning plasmas. Verification and validation have rapidly advanced through close collaborations between simulation, theory, and experiment. Furthermore, productive collaborations with computational scientists have enabled EP simulation codes to effectively utilize current petascale computers and emerging exascale computers. We review here key physics progress in the GSEP projects regarding verification and validation of gyrokinetic simulations, nonlinear EP physics, EP coupling with thermal plasmas, and reduced EP transport models. Advances in high performance computing through collaborations with computational scientists that enable these large scale electromagnetic simulations are also highlighted. These results have been widely disseminated in numerous peer-reviewed publications, including many Phys. Rev. Lett. papers, and in many invited presentations at prominent fusion conferences such as the biennial International Atomic Energy Agency (IAEA) Fusion Energy Conference and the annual meeting of the American Physical Society, Division of Plasma Physics (APS-DPP).
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2014-11-23
This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.
Lattice QCD Calculations in Nuclear Physics towards the Exascale
NASA Astrophysics Data System (ADS)
Joo, Balint
2017-01-01
The combination of algorithmic advances and new highly parallel computing architectures are enabling lattice QCD calculations to tackle ever more complex problems in nuclear physics. In this talk I will review some computational challenges that are encountered in large scale cold nuclear physics campaigns such as those in hadron spectroscopy calculations. I will discuss progress in addressing these with algorithmic improvements such as multi-grid solvers and software for recent hardware architectures such as GPUs and Intel Xeon Phi, Knights Landing. Finally, I will highlight some current topics for research and development as we head towards the Exascale era. This material is funded by the U.S. Department of Energy, Office of Science, Offices of Nuclear Physics, High Energy Physics and Advanced Scientific Computing Research, as well as the Office of Nuclear Physics under contract DE-AC05-06OR23177.
Computational Accelerator Physics. Proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisognano, J.J.; Mondelli, A.A.
1997-04-01
The sixty-two papers appearing in this volume were presented at CAP96, the Computational Accelerator Physics Conference held in Williamsburg, Virginia from September 24-27, 1996. Science Applications International Corporation (SAIC) and the Thomas Jefferson National Accelerator Facility (Jefferson Lab) jointly hosted CAP96, with financial support from the U.S. Department of Energy's Office of Energy Research and the Office of Naval Research. Topics ranged from descriptions of specific codes to advanced computing techniques and numerical methods. Update talks were presented on nearly all of the accelerator community's major electromagnetic and particle tracking codes. Among all papers, thirty are abstracted for the Energy Science and Technology database. (AIP)
High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics
Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis
2014-07-28
The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.
NASA Astrophysics Data System (ADS)
Chacón, L.; Chen, G.; Barnes, D. C.
2013-01-01
We describe the extension of the recent charge- and energy-conserving one-dimensional electrostatic particle-in-cell algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036] to mapped (body-fitted) computational meshes. The approach maintains exact charge and energy conservation properties. Key to the algorithm is a hybrid push, where particle positions are updated in logical space, while velocities are updated in physical space. The effectiveness of the approach is demonstrated with a challenging numerical test case, the ion acoustic shock wave. The generalization of the approach to multiple dimensions is outlined.
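The hybrid push described above, with positions advanced in logical space and velocities in physical space, can be caricatured in one dimension as follows. The mesh mapping, field value, and explicit time stepping are illustrative assumptions only; they do not reproduce the implicit, charge- and energy-conserving scheme of the paper.

```python
import numpy as np

def x_of_xi(xi):
    """Assumed smooth map from the logical coordinate xi in [0, 1) to physical x."""
    return xi + 0.1 * np.sin(2 * np.pi * xi)

def jacobian(xi, eps=1e-6):
    """dx/dxi by central differences (an analytic Jacobian would normally be used)."""
    return (x_of_xi(xi + eps) - x_of_xi(xi - eps)) / (2 * eps)

def hybrid_push(xi, v, E_phys, q_over_m, dt):
    """Toy hybrid push: velocity updated in physical space from the field E_phys,
    position updated in logical space via dxi/dt = v / (dx/dxi)."""
    v_new = v + q_over_m * E_phys * dt       # physical-space velocity update
    xi_new = xi + v_new / jacobian(xi) * dt  # logical-space position update
    return xi_new % 1.0, v_new               # periodic logical domain assumed

xi, v = 0.3, 0.05
for _ in range(10):
    xi, v = hybrid_push(xi, v, E_phys=-0.02, q_over_m=1.0, dt=0.1)
print(xi, v)
```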
BigData and computing challenges in high energy and nuclear physics
NASA Astrophysics Data System (ADS)
Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.
2017-06-01
In this contribution we discuss the various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. This will evolve in the future when moving from the LHC to the HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success, and the inclusion of new super-computing facilities, cloud computing and volunteer computing in the future is a big challenge, which we are successfully mastering with a considerable contribution from many super-computing centres around the world, academic and commercial cloud providers. We also discuss R&D computing projects started recently in the National Research Center "Kurchatov Institute".
Hamiltonian lattice field theory: Computer calculations using variational methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zako, Robert L.
1991-12-03
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.
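The Rayleigh-Ritz step of such a calculation, diagonalizing the Hamiltonian in a truncated basis so that the resulting eigenvalues bound the true energies from above, is easy to illustrate on a simple quantum-mechanical system. The anharmonic oscillator and basis sizes below are assumptions chosen for demonstration; they stand in for, and are much simpler than, the lattice field theories of the report.

```python
import numpy as np

def anharmonic_hamiltonian(n_basis, lam=0.1):
    """H = p^2/2 + x^2/2 + lam*x^4 in the harmonic-oscillator number basis,
    built from the ladder operator a with x = (a + a^dagger)/sqrt(2)."""
    a = np.diag(np.sqrt(np.arange(1, n_basis)), k=1)  # annihilation operator
    x = (a + a.T) / np.sqrt(2.0)
    h0 = np.diag(np.arange(n_basis) + 0.5)            # harmonic part: n + 1/2
    return h0 + lam * np.linalg.matrix_power(x, 4)

# Rayleigh-Ritz: eigenvalues of the truncated Hamiltonian are upper bounds on the
# true energies and converge from above as the basis is enlarged.
for n in (10, 20, 40):
    e0 = np.linalg.eigvalsh(anharmonic_hamiltonian(n))[0]
    print(f"basis size {n:3d}: ground-state estimate {e0:.8f}")
```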
Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo
2018-06-08
Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.
Outcomes from the DOE Workshop on Turbulent Flow Simulation at the Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael; Boldyrev, Stanislav; Chang, Choong-Seock
This paper summarizes the outcomes from the Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop, which was held 4-5 August 2015, and was sponsored by the U.S. Department of Energy Office of Advanced Scientific Computing Research. The workshop objective was to define and describe the challenges and opportunities that computing at the exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the U.S. Department of Energy applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.
Nuclear Physics Laboratory 1979 annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adelberger, E.G.
1979-07-01
Research progress is reported in the following areas: astrophysics and cosmology, fundamental symmetries, nuclear structure, radiative capture, medium energy physics, heavy ion reactions, research by users and visitors, accelerator and ion source development, instrumentation and experimental techniques, and computers and computing. Publications are listed. (WHK)
Performance Modeling of Experimental Laser Lightcrafts
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Chen, Yen-Sen; Liu, Jiwen; Myrabo, Leik N.; Mead, Franklin B., Jr.; Turner, Jim (Technical Monitor)
2001-01-01
A computational plasma aerodynamics model is developed to study the performance of a laser propelled Lightcraft. The computational methodology is based on a time-accurate, three-dimensional, finite-difference, chemically reacting, unstructured grid, pressure-based formulation. The underlying physics are added and tested systematically using a building-block approach. The physics modeled include non-equilibrium thermodynamics, non-equilibrium air-plasma finite-rate kinetics, specular ray tracing, laser beam energy absorption and refraction by plasma, non-equilibrium plasma radiation, and plasma resonance. A series of transient computations are performed at several laser pulse energy levels and the simulated physics are discussed and compared with those of tests and the literature. The predicted coupling coefficients for the Lightcraft compared reasonably well with those of tests conducted on a pendulum apparatus.
Validating an operational physical method to compute surface radiation from geostationary satellites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Manajit; Dhere, Neelkanth G.; Wohlgemuth, John H.
Models to compute global horizontal irradiance (GHI) and direct normal irradiance (DNI) have been developed over the last three decades. These models can be classified as empirical or physical based on the approach. Empirical models relate ground-based observations with satellite measurements and use these relations to compute surface radiation. Physical models consider the physics behind the radiation received at the satellite and create retrievals to estimate surface radiation. Furthermore, while empirical methods have traditionally been used for computing surface radiation for the solar energy industry, the advent of faster computing has made operational physical models viable. The Global Solar Insolation Project (GSIP) is a physical model that computes DNI and GHI using the visible and infrared channel measurements from a weather satellite. GSIP uses a two-stage scheme that first retrieves cloud properties and uses those properties in a radiative transfer model to calculate GHI and DNI. Developed for polar orbiting satellites, GSIP has been adapted to NOAA's Geostationary Operational Environmental Satellite series and can run operationally at high spatial resolutions. Our method holds the possibility of creating high-quality datasets of GHI and DNI for use by the solar energy industry. We present an outline of the methodology and results from running the model, as well as a validation study using ground-based instruments.
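The two retrieved quantities are tied together by the usual closure relation GHI = DNI·cos(θz) + DHI, where θz is the solar zenith angle and DHI the diffuse horizontal irradiance. The snippet below is a generic consistency check under assumed clear-sky values; it is not part of the GSIP retrieval itself.

```python
import math

def ghi_from_components(dni, dhi, zenith_deg):
    """Closure relation: global horizontal = direct normal projected onto the
    horizontal plane plus diffuse horizontal irradiance (all in W/m^2)."""
    return dni * math.cos(math.radians(zenith_deg)) + dhi

# Assumed clear-sky values, for illustration only.
dni, dhi, zenith = 850.0, 90.0, 35.0
print(f"GHI ~ {ghi_from_components(dni, dhi, zenith):.0f} W/m^2")
```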
Beam and Plasma Physics Research
1990-06-01
... in high power microwave computations and theory and high energy plasma computations and theory. The HPM computations concentrated on... 2.1 REPORT INDEX; 2.2 TASK AREA 2: HIGH-POWER RF EMISSION AND CHARGED-PARTICLE BEAM PHYSICS COMPUTATION, MODELING AND THEORY; 2.2.1 Subtask 02-01...Vulnerability of Space Assets; 2.2.6 Subtask 02-06, Microwave Computer Program Enhancements; 2.2.7 Subtask 02-07, High-Power Microwave Transvertron Design
NASA Astrophysics Data System (ADS)
Wang, Jianxiong
2014-06-01
This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013) which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in Quantum Field Theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open source, knowledge sharing and scientific collaboration stimulated us to think over these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all the activities of the workshop. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF.
Structures and Statistics of Citation Networks
2011-05-01
the citations among them. The papers are in the field of high-energy physics, and they were added to the online library between 1992-2003. Each paper... energy, physics:astrophysics, mathematics, computer science, statistics and many others. The value of the setSpec field can be any of these. However...the value of the categories field might contain multiple set names listed. For instance, a paper can primarily be considered as a high-energy physics
Computational and Physical Analysis of Catalytic Compounds
NASA Astrophysics Data System (ADS)
Wu, Richard; Sohn, Jung Jae; Kyung, Richard
2015-03-01
Nanoparticles exhibit unique physical and chemical properties depending on their geometrical properties. For this reason, synthesis of nanoparticles with controlled shape and size is important to use their unique properties. Catalyst supports are usually made of high-surface-area porous oxides or carbon nanomaterials. These support materials stabilize metal catalysts against sintering at high reaction temperatures. Many studies have demonstrated large enhancements of catalytic behavior due to the role of the oxide-metal interface. In this paper, the catalyzing ability of supported nano metal oxides, such as silicon oxide and titanium oxide compounds, has been analyzed using computational chemistry methods. Computational programs such as Gamess and Chemcraft have been used in an effort to compute the efficiencies of catalytic compounds and the bonding energy changes during the optimization convergence. The results illustrate how the metal oxides stabilize and the steps this takes. The graph of energy computation step (N) versus energy (kcal/mol) shows that the energy of the titania converges faster, at the 7th iteration of the calculation, whereas the silica converges at the 9th iteration.
Nuclear Computational Low Energy Initiative (NUCLEI)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, Sanjay K.
This is the final report for the University of Washington for the NUCLEI SciDAC-3 project. The NUCLEI project, as defined by the scope of work, will develop, implement, and run codes for large-scale computations of many topics in low-energy nuclear physics. Physics to be studied includes the properties of nuclei and nuclear decays, nuclear structure and reactions, and the properties of nuclear matter. The computational techniques to be used include Quantum Monte Carlo, Configuration Interaction, Coupled Cluster, and Density Functional methods. The research program will emphasize areas of high interest to current and possible future DOE nuclear physics facilities, including ATLAS and FRIB (nuclear structure and reactions, and nuclear astrophysics), TJNAF (neutron distributions in nuclei, few body systems, and electroweak processes), NIF (thermonuclear reactions), MAJORANA and FNPB (neutrino-less double-beta decay and physics beyond the Standard Model), and LANSCE (fission studies).
Performance Modeling of an Experimental Laser Propelled Lightcraft
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Chen, Yen-Sen; Liu, Jiwen; Myrabo, Leik N.; Mead, Franklin B., Jr.
2000-01-01
A computational plasma aerodynamics model is developed to study the performance of an experimental laser propelled lightcraft. The computational methodology is based on a time-accurate, three-dimensional, finite-difference, chemically reacting, unstructured grid, pressure-based formulation. The underlying physics are added and tested systematically using a building-block approach. The physics modeled include non-equilibrium thermodynamics, non-equilibrium air-plasma finite-rate kinetics, specular ray tracing, laser beam energy absorption and refraction by plasma, non-equilibrium plasma radiation, and plasma resonance. A series of transient computations are performed at several laser pulse energy levels and the simulated physics are discussed and compared with those of tests and the literature. The predicted coupling coefficients for the lightcraft compared reasonably well with those of tests conducted on a pendulum apparatus.
PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP'07)
NASA Astrophysics Data System (ADS)
Sobie, Randall; Tafirout, Reda; Thomson, Jana
2007-07-01
The 2007 International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held on 2-7 September 2007 in Victoria, British Columbia, Canada. CHEP is a major series of international conferences for physicists and computing professionals from the High Energy and Nuclear Physics community, Computer Science and Information Technology. The CHEP conference provides an international forum to exchange information on computing experience and needs for the community, and to review recent, ongoing, and future activities. The CHEP'07 conference had close to 500 attendees with a program that included plenary sessions of invited oral presentations, a number of parallel sessions comprising oral and poster presentations, and an industrial exhibition. Conference tracks covered topics in Online Computing, Event Processing, Software Components, Tools and Databases, Software Tools and Information Systems, Computing Facilities, Production Grids and Networking, Grid Middleware and Tools, Distributed Data Analysis and Information Management and Collaborative Tools. The conference included a successful whale-watching excursion involving over 200 participants and a banquet at the Royal British Columbia Museum. The next CHEP conference will be held in Prague in March 2009. We would like to thank the sponsors of the conference and the staff at the TRIUMF Laboratory and the University of Victoria who made CHEP'07 a success. Randall Sobie and Reda Tafirout, CHEP'07 Conference Chairs
Computer Model Of Fragmentation Of Atomic Nuclei
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; Khan, Ferdous; Badavi, Francis F.
1995-01-01
High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program developed to be computationally efficient, user-friendly, physics-based program for generating data bases on fragmentation of atomic nuclei. Data bases generated used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. Provides cross sections for production of individual elements and isotopes in breakups of high-energy heavy ions by combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.
Analyzing high energy physics data using database computing: Preliminary report
NASA Technical Reports Server (NTRS)
Baden, Andrew; Day, Chris; Grossman, Robert; Lifka, Dave; Lusk, Ewing; May, Edward; Price, Larry
1991-01-01
A proof of concept system is described for analyzing high energy physics (HEP) data using data base computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting SuperCollider (SSC) lab. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year during proton colliding beam collisions. Each 'event' consists of a set of vectors with a total length of approx. one megabyte. This represents an increase of approx. 2 to 3 orders of magnitude in the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is completed, and can produce analysis of HEP experimental data approx. an order of magnitude faster than current production software on data sets of approx. 1 GB. The Mark 1 HEPDBC System is currently undergoing testing and is designed to analyze data sets 10 to 100 times larger.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quirk, W.J.; Canada, J.; de Vore, L.
1994-04-01
This issue highlights the Lawrence Livermore National Laboratory's 1993 accomplishments in our mission areas and core programs: economic competitiveness, national security, energy, the environment, lasers, biology and biotechnology, engineering, physics, chemistry, materials science, computers and computing, and science and math education. Secondary topics include: nonproliferation, arms control, international security, environmental remediation, and waste management.
QUARTERLY PROGRESS REPORT NO. 83,
Topics included are: microwave spectroscopy; radio astronomy; solid-state microwave electronics; optical and infrared spectroscopy; physical electronics and surface physics; physical acoustics; plasma physics; gaseous electronics; plasmas and controlled nuclear fusion; energy conversion research; statistical communication theory; linguistics; cognitive information processing; communications biophysics; neurophysiology; computation research.
Computer implemented empirical mode decomposition method, apparatus and article of manufacture
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
1999-01-01
A computer implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
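The second step of the method, the Hilbert transform of each intrinsic mode function to obtain instantaneous amplitude and frequency, has a standard numerical form. The sketch below applies SciPy's analytic-signal routine to a synthetic chirp standing in for an IMF; the signal and sampling rate are assumptions for illustration, and the EMD sifting step itself is not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                       # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
imf = np.sin(2 * np.pi * (5.0 * t + 2.0 * t**2))  # synthetic chirp: 5 Hz sweeping up to ~13 Hz

analytic = hilbert(imf)                           # analytic signal x + i*H[x]
amplitude = np.abs(analytic)                      # instantaneous amplitude envelope
phase = np.unwrap(np.angle(analytic))             # instantaneous phase
inst_freq = np.diff(phase) / (2 * np.pi) * fs     # instantaneous frequency, Hz

print("frequency sweep:", inst_freq[10], "->", inst_freq[-10], "Hz")
```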
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefer, Bastian; Goedecker, Stefan, E-mail: stefan.goedecker@unibas.ch
2016-07-21
An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distances of the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure for the energy needed for their interconversion. This can be used to obtain a first qualitative idea on important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to make a decision if it is worthwhile or not to invest computational resources for an exact computation of the transition states and the reaction pathways. Furthermore it is demonstrated that the here presented method can be used for finding physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.
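The central heuristic, replacing explicit transition-state searches with the observation that uphill barriers tend to grow with the structural distance between educt and product minima, can be caricatured as below. The linear distance-to-barrier proxy, the cutoff, and all numbers are purely illustrative assumptions and are not the measure used in the paper.

```python
import numpy as np

def approximate_barrier(e_i, e_j, dist, slope=0.5):
    """Toy proxy: interconversion cost grows with structural distance,
    measured uphill from the lower of the two minima."""
    return max(e_i, e_j) + slope * dist - min(e_i, e_j)

# Hypothetical minima energies (eV) and pairwise structural distances (arbitrary units).
energies = np.array([0.00, 0.15, 0.40, 0.10])
dists = np.array([[0.0, 1.2, 3.5, 2.0],
                  [1.2, 0.0, 2.4, 1.1],
                  [3.5, 2.4, 0.0, 2.8],
                  [2.0, 1.1, 2.8, 0.0]])

# Approximate connectivity network: keep edges whose proxy barrier is below a cutoff.
cutoff = 1.5
edges = [(i, j, approximate_barrier(energies[i], energies[j], dists[i, j]))
         for i in range(len(energies)) for j in range(i + 1, len(energies))]
network = [(i, j, round(b, 2)) for i, j, b in edges if b < cutoff]
print(network)
```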
Epstein, Leonard H; Roemmich, James N; Robinson, Jodie L; Paluch, Rocco A; Winiewicz, Dana D; Fuerch, Janene H; Robinson, Thomas N
2008-03-01
To assess the effects of reducing television viewing and computer use on children's body mass index (BMI) as a risk factor for the development of overweight in young children. Randomized controlled clinical trial. University children's hospital. Seventy children aged 4 to 7 years whose BMI was at or above the 75th BMI percentile for age and sex. Children were randomized to an intervention to reduce their television viewing and computer use by 50% vs a monitoring control group that did not reduce television viewing or computer use. Age- and sex-standardized BMI (zBMI), television viewing, energy intake, and physical activity were monitored every 6 months during 2 years. Children randomized to the intervention group showed greater reductions in targeted sedentary behavior (P < .001), zBMI (P < .05), and energy intake (P < .05) compared with the monitoring control group. Socioeconomic status moderated zBMI change (P = .01), with the experimental intervention working better among families of low socioeconomic status. Changes in targeted sedentary behavior mediated changes in zBMI (P < .05). The change in television viewing was related to the change in energy intake (P < .001) but not to the change in physical activity (P =.37). Reducing television viewing and computer use may have an important role in preventing obesity and in lowering BMI in young children, and these changes may be related more to changes in energy intake than to changes in physical activity.
HEPMath 1.4: A mathematica package for semi-automatic computations in high energy physics
NASA Astrophysics Data System (ADS)
Wiebusch, Martin
2015-10-01
This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.
Cheung, Kei Long; Schwabe, Inga; Walthouwer, Michel J. L.; Oenema, Anke; de Vries, Hein
2017-01-01
Computer-tailored programs may help to prevent overweight and obesity, which are worldwide public health problems. This study investigated (1) the 12-month effectiveness of a video- and text-based computer-tailored intervention on energy intake, physical activity, and body mass index (BMI), and (2) the role of educational level in intervention effects. A randomized controlled trial in The Netherlands was conducted, in which adults were allocated to a video-based condition, text-based condition, or control condition, with baseline, 6 months, and 12 months follow-up. Outcome variables were self-reported BMI, physical activity, and energy intake. Mixed-effects modelling was used to investigate intervention effects and potential interaction effects. Compared to the control group, the video intervention group was effective regarding energy intake after 6 months (least squares means (LSM) difference = −205.40, p = 0.00) and 12 months (LSM difference = −128.14, p = 0.03). Only video intervention resulted in lower average daily energy intake after one year (d = 0.12). Educational level and BMI did not seem to interact with this effect. No intervention effects on BMI and physical activity were found. The video computer-tailored intervention was effective on energy intake after one year. This effect was not dependent on educational levels or BMI categories, suggesting that video tailoring can be effective for a broad range of risk groups and may be preferred over text tailoring. PMID:29065545
Bryce, Richard A
2011-04-01
The ability to accurately predict the interaction of a ligand with its receptor is a key limitation in computer-aided drug design approaches such as virtual screening and de novo design. In this article, we examine current strategies for a physics-based approach to scoring of protein-ligand affinity, as well as outlining recent developments in force fields and quantum chemical techniques. We also consider advances in the development and application of simulation-based free energy methods to study protein-ligand interactions. Fuelled by recent advances in computational algorithms and hardware, there is the opportunity for increased integration of physics-based scoring approaches at earlier stages in computationally guided drug discovery. Specifically, we envisage increased use of implicit solvent models and simulation-based scoring methods as tools for computing the affinities of large virtual ligand libraries. Approaches based on end point simulations and reference potentials allow the application of more advanced potential energy functions to prediction of protein-ligand binding affinities. Comprehensive evaluation of polarizable force fields and quantum mechanical (QM)/molecular mechanical and QM methods in scoring of protein-ligand interactions is required, particularly in their ability to address challenging targets such as metalloproteins and other proteins that make highly polar interactions. Finally, we anticipate increasingly quantitative free energy perturbation and thermodynamic integration methods that are practical for optimization of hits obtained from screened ligand libraries.
NASA Astrophysics Data System (ADS)
Frew, E.; Argrow, B. M.; Houston, A. L.; Weiss, C.
2014-12-01
The energy-aware airborne dynamic, data-driven application system (EA-DDDAS) performs persistent sampling in complex atmospheric conditions by exploiting wind energy using the dynamic data-driven application system paradigm. The main challenge for future airborne sampling missions is operation with tight integration of physical and computational resources over wireless communication networks, in complex atmospheric conditions. The physical resources considered here include sensor platforms, particularly mobile Doppler radar and unmanned aircraft, the complex conditions in which they operate, and the region of interest. Autonomous operation requires distributed computational effort connected by layered wireless communication. Onboard decision-making and coordination algorithms can be enhanced by atmospheric models that assimilate input from physics-based models and wind fields derived from multiple sources. These models are generally too complex to be run onboard the aircraft, so they need to be executed in ground vehicles in the field, and connected over broadband or other wireless links back to the field. Finally, the wind field environment drives strong interaction between the computational and physical systems, both as a challenge to autonomous path planning algorithms and as a novel energy source that can be exploited to improve system range and endurance. Implementation details of a complete EA-DDDAS will be provided, along with preliminary flight test results targeting coherent boundary-layer structures.
Proceedings of the nineteenth LAMPF Users Group meeting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradbury, J.N.
1986-02-01
Separate abstracts were prepared for eight invited talks on various aspects of nuclear and particle physics as well as status reports on LAMPF and discussions of upgrade options. Also included in these proceedings are the minutes of the working groups for: energetic pion channel and spectrometer; high resolution spectrometer; high energy pion channel; neutron facilities; low-energy pion work; nucleon physics laboratory; stopped muon physics; solid state physics and material science; nuclear chemistry; and computing facilities. Recent LAMPF proposals are also briefly summarized. (LEW)
ERIC Educational Resources Information Center
School Science Review, 1990
1990-01-01
Included are 30 science activities that include computer monitoring, fieldwork, enzyme activity, pH, drugs, calorimeters, Raoult's Law, food content, solubility, electrochemistry, titration, physical properties of materials, gel filtration, energy, concepts in physics, and electricity. (KR)
Deng, Nanjie; Cui, Di; Zhang, Bin W; Xia, Junchao; Cruz, Jeffrey; Levy, Ronald
2018-06-13
Accurately predicting absolute binding free energies of protein-ligand complexes is important as a fundamental problem in both computational biophysics and pharmaceutical discovery. Calculating binding free energies for charged ligands is generally considered to be challenging because of the strong electrostatic interactions between the ligand and its environment in aqueous solution. In this work, we compare the performance of the potential of mean force (PMF) method and the double decoupling method (DDM) for computing absolute binding free energies for charged ligands. We first clarify an unresolved issue concerning the explicit use of the binding site volume to define the complexed state in DDM together with the use of harmonic restraints. We also provide an alternative derivation for the formula for absolute binding free energy using the PMF approach. We use these formulas to compute the binding free energy of charged ligands at an allosteric site of HIV-1 integrase, which has emerged in recent years as a promising target for developing antiviral therapy. As compared with the experimental results, the absolute binding free energies obtained by using the PMF approach show unsigned errors of 1.5-3.4 kcal mol-1, which are somewhat better than the results from DDM (unsigned errors of 1.6-4.3 kcal mol-1) using the same amount of CPU time. According to the DDM decomposition of the binding free energy, the ligand binding appears to be dominated by nonpolar interactions despite the presence of very large and favorable intermolecular ligand-receptor electrostatic interactions, which are almost completely cancelled out by the equally large free energy cost of desolvation of the charged moiety of the ligands in solution. We discuss the relative strengths of computing absolute binding free energies using the alchemical and physical pathway methods.
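To make the double-decoupling bookkeeping concrete, the sketch below assembles an absolute binding free energy from its component legs, including the standard-state (binding-site volume) correction discussed above. This is a minimal sketch under stated assumptions: the function name, the omission of restraint legs, the sign convention, and the 1660 cubic-angstrom standard-state volume are illustrative choices, not the authors' implementation.

```python
import math

KT = 0.593        # kcal/mol at ~298 K
V_STD = 1660.0    # standard-state volume per molecule, in cubic angstroms

def ddm_binding_free_energy(dG_decouple_site, dG_decouple_solv, v_site):
    """Assemble an absolute binding free energy from double-decoupling legs.

    dG_decouple_site : free energy of decoupling the ligand inside the binding site
    dG_decouple_solv : free energy of decoupling the ligand in bulk solution
    v_site           : effective binding-site volume defining the bound state (A^3)
    Restraint legs and implementation-specific sign conventions are omitted here.
    """
    # Standard-state (site-volume) correction: entropic cost of confining the
    # ligand from the standard-state volume to the binding-site volume.
    dG_volume = -KT * math.log(v_site / V_STD)
    return dG_decouple_solv - dG_decouple_site + dG_volume

# Illustrative numbers only (kcal/mol and A^3), not values from the paper.
print(ddm_binding_free_energy(dG_decouple_site=35.0, dG_decouple_solv=28.0, v_site=150.0))
```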
[Physical activity in a probabilistic sample in the city of Rio de Janeiro].
Gomes, V B; Siqueira, K S; Sichieri, R
2001-01-01
This study evaluated physical activity in a probabilistic sample of 4,331 individuals 12 years of age and older residing in the city of Rio de Janeiro, who participated in a household survey in 1996. Occupation and leisure activity were grouped according to categories of energy expenditure. The study also evaluated number of hours watching TV, using the computer, or playing video-games. Only 3.6% of males and 0.3% of females reported heavy occupational work. A full 59.8% of males and 77.8% of females reported never performing recreational physical activity, and there was an increase in this prevalence with age, especially for men. Women's leisure activities involved less energy expenditure and had a lower median duration than those of men. Mean daily TV/video/computer time was greater for women than for men. The greater the level of schooling, the higher the frequency of physical activity for both sexes. Analyzed jointly, these data show the low energy expenditure through physical activity by the population of the city of Rio de Janeiro. Women, the middle-aged, the elderly, and low-income individuals were at greatest risk of not performing recreational physical activity.
Exascale computing and what it means for shock physics
NASA Astrophysics Data System (ADS)
Germann, Timothy
2015-06-01
The U.S. Department of Energy is preparing to launch an Exascale Computing Initiative, to address the myriad challenges required to deploy and effectively utilize an exascale-class supercomputer (i.e., one capable of performing 10^18 operations per second) in the 2023 timeframe. Since physical (power dissipation) requirements limit clock rates to at most a few GHz, this will necessitate the coordination of on the order of a billion concurrent operations, requiring sophisticated system and application software, and underlying mathematical algorithms, that may differ radically from traditional approaches. Even at the smaller workstation or cluster level of computation, the massive concurrency and heterogeneity within each processor will impact computational scientists. Through the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), we have initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. In my talk, I will discuss these challenges, and what it will mean for exascale-era electronic structure, molecular dynamics, and engineering-scale simulations of shock-compressed condensed matter. In particular, we anticipate that the emerging hierarchical, heterogeneous architectures can be exploited to achieve higher physical fidelity simulations using adaptive physics refinement. This work is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert
Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document, and are presented along with introductory material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; LeCompte, Tom
2015-10-29
Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document, and are presented along with introductory material.
NASA Technical Reports Server (NTRS)
Shen, Zheng (Inventor); Huang, Norden Eh (Inventor)
2003-01-01
A computer-implemented physical signal analysis method includes two essential steps and the associated presentation techniques for the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer-implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMFs) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in terms of the IMFs, the signals have well-behaved Hilbert transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert transform. The final result is the Hilbert spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMFs. Then, these IMFs, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMFs through the Hilbert transform give a full energy-frequency-time distribution of the data, which is designated as the Hilbert spectrum.
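As a rough illustration of the two steps described above, the following sketch peels off intrinsic mode functions by envelope-mean sifting and then extracts instantaneous amplitude and frequency from the analytic signal. It is a minimal toy under stated assumptions (no standard stopping criteria, no end-point handling, helper names invented here), not the patented implementation.

```python
import numpy as np
from scipy.signal import hilbert, argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting pass: subtract the mean of the upper and lower envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 3 or len(minima) < 3:
        return None  # too few extrema to build spline envelopes
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - 0.5 * (upper + lower)

def emd(x, t, n_imfs=5, n_sift=10):
    """Crude EMD: repeatedly sift the residual to peel off IMFs."""
    imfs, residual = [], x.copy()
    for _ in range(n_imfs):
        h = residual.copy()
        for _ in range(n_sift):
            h_new = sift_once(h, t)
            if h_new is None:
                return imfs, residual
            h = h_new
        imfs.append(h)
        residual = residual - h
    return imfs, residual

def hilbert_spectrum(imf, dt):
    """Instantaneous amplitude and frequency of one IMF via the analytic signal."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    inst_freq = np.gradient(np.unwrap(np.angle(analytic)), dt) / (2 * np.pi)
    return amplitude, inst_freq
```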
PREFACE: IUPAP C20 Conference on Computational Physics (CCP 2011)
NASA Astrophysics Data System (ADS)
Troparevsky, Claudia; Stocks, George Malcolm
2012-12-01
Increasingly, computational physics stands alongside experiment and theory as an integral part of the modern approach to solving the great scientific challenges of the day on all scales - from cosmology and astrophysics, through climate science, to materials physics, and the fundamental structure of matter. Computational physics touches aspects of science and technology with direct relevance to our everyday lives, such as communication technologies and securing a clean and efficient energy future. This volume of Journal of Physics: Conference Series contains the proceedings of the scientific contributions presented at the 23rd Conference on Computational Physics held in Gatlinburg, Tennessee, USA, in November 2011. The annual Conferences on Computational Physics (CCP) are dedicated to presenting an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas and from around the world. The CCP series has been in existence for more than 20 years, serving as a lively forum for computational physicists. The topics covered by this conference were: Materials/Condensed Matter Theory and Nanoscience, Strongly Correlated Systems and Quantum Phase Transitions, Quantum Chemistry and Atomic Physics, Quantum Chromodynamics, Astrophysics, Plasma Physics, Nuclear and High Energy Physics, Complex Systems: Chaos and Statistical Physics, Macroscopic Transport and Mesoscopic Methods, Biological Physics and Soft Materials, Supercomputing and Computational Physics Teaching, Computational Physics and Sustainable Energy. We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), IUPAP Commission on Computational Physics (C20), American Physical Society Division of Computational Physics (APS-DCOMP), Oak Ridge National Laboratory (ORNL), Center for Defect Physics (CDP), the University of Tennessee (UT)/ORNL Joint Institute for Computational Sciences (JICS) and Cray, Inc. We are grateful to the committees that helped put the conference together, especially the local organizing committee. Particular thanks are also due to a number of ORNL staff who spent long hours with the administrative details. We are pleased to express our thanks to the conference administrator Ann Strange (ORNL/CDP) for her responsive and efficient day-to-day handling of this event, Sherry Samples, Assistant Conference Administrator (ORNL), Angie Beach and the ORNL Conference Office, and Shirley Shugart (ORNL) and Fern Stooksbury (ORNL) who created and maintained the conference website. 
Editors: G Malcolm Stocks (ORNL) and M Claudia Troparevsky (UT) http://ccp2011.ornl.gov Chair: Dr Malcolm Stocks (ORNL) Vice Chairs: Adriana Moreo (ORNL/UT) James Guberrnatis (LANL) Local Program Committee: Don Batchelor (ORNL) Jack Dongarra (UTK/ORNL) James Hack (ORNL) Robert Harrison (ORNL) Paul Kent (ORNL) Anthony Mezzacappa (ORNL) Adriana Moreo (ORNL) Witold Nazarewicz (UT) Loukas Petridis (ORNL) David Schultz (ORNL) Bill Shelton (ORNL) Claudia Troparevsky (ORNL) Mina Yoon (ORNL) International Advisory Board Members: Joan Adler (Israel Institute of Technology, Israel) Constantia Alexandrou (University of Cyprus, Cyprus) Claudia Ambrosch-Draxl (University of Leoben, Austria) Amanda Barnard (CSIRO, Australia) Peter Borcherds (University of Birmingham, UK) Klaus Cappelle (UFABC, Brazil) Giovanni Ciccotti (Università degli Studi di Roma 'La Sapienza', Italy) Nithaya Chetty (University of Pretoria, South Africa) Charlotte Froese-Fischer (NIST, US) Giulia A. Galli (University of California, Davis, US) Gillian Gehring (University of Sheffield, UK) Guang-Yu Guo (National Taiwan University, Taiwan) Sharon Hammes-Schiffer (Penn State, US) Alex Hansen (Norweigan UST) Duane D. Johnson (University of Illinois at Urbana-Champaign, US) David Landau (University of Georgia, US) Joaquin Marro (University of Granada, Spain) Richard Martin (UIUC, US) Todd Martinez (Stanford University, US) Bill McCurdy (Lawrence Berkeley National Laboratory, US) Ingrid Mertig (Martin Luther University, Germany) Alejandro Muramatsu (Universitat Stuttgart, Germany) Richard Needs (Cavendish Laboratory, UK) Giuseppina Orlandini (University of Trento, Italy) Martin Savage (University of Washington, US) Thomas Schulthess (ETH, Switzerland) Dzidka Szotek (Daresbury Laboratory, UK) Hideaki Takabe (Osaka University, Japan) William M. Tang (Princeton University, US) James Vary (Iowa State, US) Enge Wang (Chinese Academy of Science, China) Jian-Guo Wang (Institute of Applied Physics and Computational Mathematics, China) Jian-Sheng Wang (National University, Singapore) Dan Wei (Tsinghua University, China) Tony Williams (University of Adelaide, Australia) Rudy Zeller (Julich, Germany) Conference Administrator: Ann Strange (ORNL)
Cyber physical systems based on cloud computing and internet of things for energy efficiency
NASA Astrophysics Data System (ADS)
Suciu, George; Butca, Cristina; Suciu, Victor; Cretu, Alexandru; Fratu, Octavian
2016-12-01
Cyber-Physical Systems (CPS) and energy efficiency play a major role in the context of industry expansion. Management practices for improving energy-consumption efficiency have become a priority for many major industries that are inefficient in terms of operating costs. Adopting energy-management measures in an organization is challenging due to a lack of resources and expertise; one major problem is the lack of knowledge about energy-management practices. This paper presents the authors' concept for a Cyber Physical Energy System (CPES) that changes how organizations consume energy by making them aware of their usage. The presented concept considers the security of the whole system and easy integration with the existing electric network infrastructure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-03-01
Abstracts of papers published during the previous calendar year, arranged in accordance with the project titles used in the USDOE Schedule 189 Budget Proposals, are presented. The collection of abstracts supplements the listing of papers published in the Schedule 189. The following subject areas are represented: high-energy physics; nuclear physics; basic energy sciences (nuclear science, materials sciences, solid state physics, materials chemistry); molecular, mathematical, and earth sciences (fundamental interactions, processes and techniques, mathematical and computer sciences); environmental research and development; physical and technological studies (characterization, measurement and monitoring); and nuclear research and applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taleei, R; Qin, N; Jiang, S
2016-06-15
Purpose: Biological treatment plan optimization is of great interest for proton therapy. It requires extensive Monte Carlo (MC) simulations to compute physical dose and biological quantities. Recently, a gPMC package was developed for rapid MC dose calculations on a GPU platform. This work investigated its suitability for proton therapy biological optimization in terms of accuracy and efficiency. Methods: We performed simulations of a proton pencil beam with energies of 75, 150 and 225 MeV in a homogeneous water phantom using gPMC and FLUKA. Physical dose and energy spectra for each ion type on the central beam axis were scored. Relative Biological Effectiveness (RBE) was calculated using the repair-misrepair-fixation model. Microdosimetry calculations were performed using Monte Carlo Damage Simulation (MCDS). Results: Ranges computed by the two codes agreed within 1 mm. Physical dose difference was less than 2.5 % at the Bragg peak. RBE-weighted dose agreed within 5 % at the Bragg peak. Differences in microdosimetric quantities such as dose average lineal energy transfer and specific energy were < 10%. The simulation time per source particle with FLUKA was 0.0018 sec, while gPMC was ∼ 600 times faster. Conclusion: Physical dose computed by FLUKA and gPMC were in good agreement. The RBE differences along the central axis were small, and the RBE-weighted dose difference was found to be acceptable. The combined accuracy and efficiency makes gPMC suitable for proton therapy biological optimization.
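The comparison bookkeeping in this abstract reduces to scaling physical dose by a model RBE and taking relative differences and timing ratios. A minimal sketch follows; the arrays and numbers are illustrative assumptions, not gPMC or FLUKA output.

```python
import numpy as np

def rbe_weighted_dose(physical_dose, rbe):
    """RBE-weighted dose: physical dose scaled point-by-point by the model RBE."""
    return np.asarray(physical_dose, float) * np.asarray(rbe, float)

def percent_difference(reference, test):
    """Relative difference of a test profile against a reference profile, in percent."""
    reference, test = np.asarray(reference, float), np.asarray(test, float)
    return 100.0 * (test - reference) / reference

# Illustrative numbers only: dose at the Bragg peak and per-particle run times.
print(percent_difference(reference=[2.10], test=[2.15]))  # ~2.4 % dose difference
print(0.0018 / 3.0e-6)  # timing ratio consistent with the ~600x speedup quoted above
```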
The energy expenditure of using a "walk-and-work" desk for office workers with obesity.
Levine, James A; Miller, Jennifer M
2007-09-01
For many people, most of the working day is spent sitting in front of a computer screen. Approaches for obesity treatment and prevention are being sought to increase workplace physical activity because low levels of physical activity are associated with obesity. Our hypothesis was that a vertical workstation that allows an obese individual to work while walking would be associated with significant and substantial increases in energy expenditure over seated work. The vertical workstation is a workstation that allows an office worker to use a standard personal computer while walking on a treadmill at a self-selected velocity. Fifteen sedentary individuals with obesity (14 women, one man; 43 (7.5) years, 86 (9.6) kg; body mass index 32 (2.6) kg/m²) underwent measurements of energy expenditure at rest, seated working in an office chair, standing and while walking at a self-selected speed using the vertical workstation. Body composition was measured using dual X-ray absorptiometry. The mean (SD) energy expenditure while seated at work in an office chair was 72 (10) kcal/h, whereas the energy expenditure while walking and working at a self-selected velocity of 1.1 (0.4) mph was 191 (29) kcal/h. The mean (SD) increase in energy expenditure for walking-and-working over sitting was 119 (25) kcal/h. If sitting computer-time were replaced by walking-and-working, energy expenditure could increase by 100 kcal/h. Thus, if obese individuals were to replace 2-3 h/day of seated computer time with walking-and-working, and if other components of energy balance remained constant, a weight loss of 20-30 kg/year could occur.
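A back-of-the-envelope check of the arithmetic above can be written in a few lines. The 7700 kcal/kg conversion for body fat is an assumption of this sketch, not a value taken from the study, so the rough mass estimate it prints is only meant to show how the extra expenditure scales.

```python
KCAL_PER_KG_FAT = 7700.0  # assumed energy density of adipose tissue (not from the study)

def extra_annual_expenditure(delta_kcal_per_h, hours_per_day, days=365):
    """Additional energy expended per year from walking-and-working instead of sitting."""
    return delta_kcal_per_h * hours_per_day * days

extra = extra_annual_expenditure(delta_kcal_per_h=119, hours_per_day=2.5)
print(f"extra expenditure: {extra:.0f} kcal/yr "
      f"(~{extra / KCAL_PER_KG_FAT:.0f} kg of fat, all else held constant)")
```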
How Data Becomes Physics: Inside the RACF
Ernst, Michael; Rind, Ofer; Rajagopalan, Srini; Lauret, Jerome; Pinkenburg, Chris
2018-06-22
The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF serve these scientists to turn petabytes of raw data into physics discoveries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael A.; Boldyrev, Stanislav; Fischer, Paul
This report details the impact exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the DOE applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.
NREL Computational Science: challenges in fields ranging from condensed matter physics and nonlinear dynamics to computational fluid dynamics. NREL is also home to the most energy-efficient data center in the world, featuring Peregrine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lizcano, D., E-mail: david.lizcano@udima.es, E-mail: mariaaurora.martinez@udima.es; Martínez, A. María, E-mail: david.lizcano@udima.es, E-mail: mariaaurora.martinez@udima.es
Edward Fredkin was an enthusiastic advocate of information-based theoretical physics, who, in the early 1980s, proposed a new theory of physics based on the idea that the universe is ultimately composed of software. According to Fredkin, reality should be considered as being composed not of particles, matter and forces or energy but of bits of data or information modified according to computational rules. Fredkin went on to demonstrate that, while energy is necessary for storing and retrieving information, it can be arbitrarily reduced in order to carry out any particular instance of information processing, and this operation does not have a lower bound. This implies that it is information rather than matter or energy that should be considered as the ultimate fundamental constituent of reality. This possibility had already been suggested by other scientists. Norbert Wiener heralded a fundamental shift from energy to information and suggested that the universe was founded essentially on the transformation of information, not energy. However, Konrad Zuse was the first, back in 1967, to defend the idea that a digital computer is computing the universe. Richard P. Feynman showed this possibility in a similar light in his reflections on how information related to matter and energy. Other pioneering research on the theory of digital physics was published by Kantor in 1977 and more recently by Stephen Wolfram in 2002, who thereby joined the host of voices upholding that it is patterns of information, not matter and energy, that constitute the cornerstones of reality. In this paper, we introduce the use of knowledge management tools for the purpose of analysing this topic.
Siegel, Marilyn J; Kaza, Ravi K; Bolus, David N; Boll, Daniel T; Rofsky, Neil M; De Cecco, Carlo N; Foley, W Dennis; Morgan, Desiree E; Schoepf, U Joseph; Sahani, Dushyant V; Shuman, William P; Vrtiska, Terri J; Yeh, Benjamin M; Berland, Lincoln L
This is the first of a series of 4 white papers that represent Expert Consensus Documents developed by the Society of Computed Body Tomography and Magnetic Resonance through its task force on dual-energy computed tomography (DECT). This article, part 1, describes the fundamentals of the physical basis for DECT and the technology of DECT and proposes uniform nomenclature to account for differences in proprietary terms among manufacturers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.; Yu, G.; Wang, K.
The physical designs of new-concept reactors, which have complex structures, varied materials, and neutron energy spectra, have greatly increased the demands on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computations in reactor physics. Because of its natural parallelism, the CPU-FPGA architecture is often used to accelerate numerical computation. This paper studies, through practical examples, the application and features of this kind of heterogeneous platform in numerical calculations of reactor physics. The neutron diffusion module designed on the CPU-FPGA architecture achieves an 11.2 speedup factor, demonstrating that it is feasible to apply this kind of heterogeneous platform to reactor physics. (authors)
NASA Technical Reports Server (NTRS)
Larson, V. H.
1982-01-01
The basic equations that are used to describe the physical phenomena in a Stirling cycle engine are the general energy equations and equations for the conservation of mass and conservation of momentum. These equations, together with the equation of state, an analytical expression for the gas velocity, and an equation for mesh temperature, are used in this computer study of Stirling cycle characteristics. The partial differential equations describing the physical phenomena that occur in a Stirling cycle engine are of the hyperbolic type. The hyperbolic equations have real characteristic lines. By utilizing appropriate points along these curved lines, the partial differential equations can be reduced to ordinary differential equations. These equations are solved numerically using a fourth-fifth order Runge-Kutta integration technique.
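As a minimal illustration of the numerical machinery described above (characteristics reducing hyperbolic PDEs to ODEs, then a fourth/fifth-order Runge-Kutta integration), the sketch below advances a placeholder ODE along a characteristic with SciPy's adaptive RK45. The right-hand side is an assumed toy model, not the report's Stirling-engine equations.

```python
from scipy.integrate import solve_ivp

def rhs_along_characteristic(s, y):
    """dy/ds along a characteristic; y = [temperature-like, velocity-like] state."""
    T, u = y
    return [-0.1 * (T - 300.0) + 0.5 * u,   # relaxation toward a wall temperature
            -0.05 * u]                       # momentum decay

sol = solve_ivp(rhs_along_characteristic, t_span=(0.0, 10.0), y0=[800.0, 2.0],
                method="RK45", rtol=1e-6, atol=1e-9)
print(sol.y[:, -1])  # state at the end of the characteristic
```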
PRaVDA: High Energy Physics towards proton Computed Tomography
NASA Astrophysics Data System (ADS)
Price, T.; PRaVDA Consortium
2016-07-01
Proton radiotherapy is an increasingly popular modality for treating cancers of the head and neck, and in paediatrics. To maximise the potential of proton radiotherapy it is essential to know the distribution, and more importantly the proton stopping powers, of the body tissues between the proton beam and the tumour. A stopping power map could be measured directly, and uncertainties in the treatment vastly reduced, if the patient were imaged with protons instead of conventional x-rays. Here we outline the application of technologies developed for High Energy Physics to provide clinical-quality proton Computed Tomography, thereby reducing range uncertainties and enhancing the treatment of cancer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, R. Navarro; Schunck, N.; Lasseri, R.
2017-03-09
HFBTHO is a physics computer code that is used to model the structure of the nucleus. It is an implementation of the nuclear energy Density Functional Theory (DFT), where the energy of the nucleus is obtained by integration over space of some phenomenological energy density, which is itself a functional of the neutron and proton densities. In HFBTHO, the energy density derives either from the zero-range Skyrme or the finite-range Gogny effective two-body interaction between nucleons. Nuclear superfluidity is treated at the Hartree-Fock-Bogoliubov (HFB) approximation, and axial symmetry of the nuclear shape is assumed. This version is the 3rd release of the program; the two previous versions were published in Computer Physics Communications [1,2]. The previous version was released at LLNL under the GPL 3 Open Source License and was given release code LLNL-CODE-573953.
Analytical study of laser-supported combustion waves in hydrogen
NASA Technical Reports Server (NTRS)
Kemp, N. H.; Root, R. G.
1978-01-01
Laser supported combustion (LSC) waves are an important ingredient in the fluid mechanics of CW laser propulsion using a hydrogen propellant and 10.6 micron lasers. Therefore, a computer model has been constructed to solve the one-dimensional energy equation with constant pressure and area. Physical processes considered include convection, conduction, absorption of laser energy, radiation energy loss, and accurate properties of equilibrium hydrogen. Calculations for 1, 3, 10 and 30 atm were made for intensities of 10^4 to 10^6 W/cm^2, which gave temperature profiles, wave speed, etc. To pursue the propulsion application, a second computer model was developed to describe the acceleration of the gas emerging from the LSC wave into a variable-pressure, converging streamtube, still including all the above-mentioned physical processes. The results show very high temperatures in LSC waves which absorb all the laser energy, and high radiative losses.
Spin-neurons: A possible path to energy-efficient neuromorphic computers
NASA Astrophysics Data System (ADS)
Sharad, Mrigank; Fan, Deliang; Roy, Kaushik
2013-12-01
Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.
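A behavioral sketch of the two primitives discussed above: the summing-and-thresholding operation that a spin-torque switch is said to mimic, and the energy-delay product used as the figure of merit. The function names and numbers are illustrative assumptions, not device models or values from the paper.

```python
import numpy as np

def threshold_neuron(inputs, weights, threshold=0.0):
    """Current-mode weighted sum followed by a hard threshold (magnetization flip)."""
    return 1 if float(np.dot(inputs, weights)) > threshold else 0

def energy_delay_product(energy_joules, delay_seconds):
    """Figure of merit compared between spin-based and CMOS neuron circuits."""
    return energy_joules * delay_seconds

# Illustrative comparison only: ratio of an assumed CMOS EDP to an assumed spin EDP.
print(threshold_neuron(inputs=[0.4, -0.2, 0.9], weights=[1.0, 0.5, 1.2]))
print(energy_delay_product(1e-14, 1e-9) / energy_delay_product(1e-16, 2e-10))
```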
Spin-neurons: A possible path to energy-efficient neuromorphic computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharad, Mrigank; Fan, Deliang; Roy, Kaushik
Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.
High energy physics at UC Riverside
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-07-01
This report discusses progress made for the following two tasks: experimental high energy physics, Task A, and theoretical high energy physics, Task B. Task A1 covers hadron collider physics. Information for Task A1 includes: personnel/talks/publications; D0: proton-antiproton interactions at 2 TeV; SDC: proton-proton interactions at 40 TeV; computing facilities; equipment needs; and budget notes. The physics program of Task A2 has been the systematic study of leptons and hadrons. Information covered for Task A2 includes: personnel/talks/publications; OPAL at LEP; OPAL at LEP200; CMS at LHC; the RD5 experiment; LSND at LAMPF; and budget notes. The research activities of the Theory Group are briefly discussed and a list of completed or published papers for this period is given.
UTDallas Offline Computing System for B Physics with the Babar Experiment at SLAC
NASA Astrophysics Data System (ADS)
Benninger, Tracy L.
1998-10-01
The University of Texas at Dallas High Energy Physics group is building a high performance, large storage computing system for B physics research with the BaBar experiment ("factory") at the Stanford Linear Accelerator Center. The goal of this system is to analyze one terabyte of complex Event Store data from BaBar in one to two days. The foundation of the computing system is a Sun E6000 Enterprise multiprocessor system, with additions of a Sun StorEdge L1800 Tape Library, a Sun Workstation for processing batch jobs, staging disks and interface cards. The design considerations, current status, projects underway, and possible upgrade paths will be discussed.
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1983-01-01
Adequate computer methods, based on interactions between discrete particles, provide information leading to an atomic level understanding of various physical processes. The success of these simulation methods, however, is related to the accuracy of the potential energy function representing the interactions among the particles. The development of a potential energy function for crystalline SiO2 forms that can be employed in lengthy computer modelling procedures was investigated. In many of the simulation methods which deal with discrete particles, semiempirical two body potentials were employed to analyze energy and structure related properties of the system. Many body interactions are required for a proper representation of the total energy for many systems. Many body interactions for simulations based on discrete particles are discussed.
Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.
Sakaino, Hidetomo
2016-09-01
Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifact, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy-balancer model that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost.
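The advection step at the heart of the extrapolation can be illustrated with a simple first-order upwind update of the transport equation; this is the plain finite-difference baseline the paper improves upon with the higher-order CIP discretization, not CIP itself, and the function name is an assumption.

```python
import numpy as np

def advect_upwind(I, u, v, dt, dx=1.0, dy=1.0):
    """One explicit first-order upwind step of dI/dt + u dI/dx + v dI/dy = 0.

    I, u, v are 2D arrays of the same shape (image intensity and velocity field).
    """
    Ix_b = (I - np.roll(I, 1, axis=1)) / dx   # backward difference in x
    Ix_f = (np.roll(I, -1, axis=1) - I) / dx  # forward difference in x
    Iy_b = (I - np.roll(I, 1, axis=0)) / dy
    Iy_f = (np.roll(I, -1, axis=0) - I) / dy
    dIdx = np.where(u > 0, Ix_b, Ix_f)        # upwind choice per pixel
    dIdy = np.where(v > 0, Iy_b, Iy_f)
    return I - dt * (u * dIdx + v * dIdy)
```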
Computing the universe: how large-scale simulations illuminate galaxies and dark energy
NASA Astrophysics Data System (ADS)
O'Shea, Brian
2015-04-01
High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defies analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.
DOE pushes for useful quantum computing
NASA Astrophysics Data System (ADS)
Cho, Adrian
2018-01-01
The U.S. Department of Energy (DOE) is joining the quest to develop quantum computers, devices that would exploit quantum mechanics to crack problems that overwhelm conventional computers. The initiative comes as Google and other companies race to build a quantum computer that can demonstrate "quantum supremacy" by beating classical computers on a test problem. But reaching that milestone will not mean practical uses are at hand, and the new $40 million DOE effort is intended to spur the development of useful quantum computing algorithms for its work in chemistry, materials science, nuclear physics, and particle physics. With the resources at its 17 national laboratories, DOE could play a key role in developing the machines, researchers say, although finding problems with which quantum computers can help isn't so easy.
Evolution of a designless nanoparticle network into reconfigurable Boolean logic
NASA Astrophysics Data System (ADS)
Bose, S. K.; Lawrence, C. P.; Liu, Z.; Makarenko, K. S.; van Damme, R. M. J.; Broersma, H. J.; van der Wiel, W. G.
2015-12-01
Natural computers exploit the emergent properties and massive parallelism of interconnected networks of locally active components. Evolution has resulted in systems that compute quickly and that use energy efficiently, utilizing whatever physical properties are exploitable. Man-made computers, on the other hand, are based on circuits of functional units that follow given design rules. Hence, potentially exploitable physical processes, such as capacitive crosstalk, to solve a problem are left out. Until now, designless nanoscale networks of inanimate matter that exhibit robust computational functionality had not been realized. Here we artificially evolve the electrical properties of a disordered nanomaterials system (by optimizing the values of control voltages using a genetic algorithm) to perform computational tasks reconfigurably. We exploit the rich behaviour that emerges from interconnected metal nanoparticles, which act as strongly nonlinear single-electron transistors, and find that this nanoscale architecture can be configured in situ into any Boolean logic gate. This universal, reconfigurable gate would require about ten transistors in a conventional circuit. Our system meets the criteria for the physical realization of (cellular) neural networks: universality (arbitrary Boolean functions), compactness, robustness and evolvability, which implies scalability to perform more advanced tasks. Our evolutionary approach works around device-to-device variations and the accompanying uncertainties in performance. Moreover, it bears a great potential for more energy-efficient computation, and for solving problems that are very hard to tackle in conventional architectures.
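A compact sketch of the evolutionary loop described above: a genetic-style algorithm mutates candidate control-voltage vectors and selects those whose output best matches a target truth table. The device_response function stands in for the physical nanoparticle network (which in the experiment is measured, not simulated), so the model, parameters, and function names here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def device_response(control_voltages, input_bits):
    """Placeholder for the measured nanoparticle-network output; in the experiment
    this is a physical measurement, not a closed-form function."""
    x = np.dot(control_voltages, np.arange(1, len(control_voltages) + 1))
    return (np.sin(x + 2.0 * input_bits[0] - 1.3 * input_bits[1]) > 0).astype(int)

def fitness(control_voltages, truth_table):
    """Fraction of input patterns for which the device realizes the target gate."""
    return np.mean([device_response(control_voltages, np.array(inp)) == out
                    for inp, out in truth_table])

def evolve_gate(truth_table, n_controls=6, pop=40, generations=200, sigma=0.1):
    population = rng.uniform(-1.0, 1.0, size=(pop, n_controls))
    for _ in range(generations):
        scores = np.array([fitness(ind, truth_table) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]          # selection
        children = parents + sigma * rng.normal(size=parents.shape)   # mutation
        population = np.vstack([parents, children])
    best = population[np.argmax([fitness(ind, truth_table) for ind in population])]
    return best

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
voltages = evolve_gate(XOR)  # control voltages configuring the toy "device" as XOR
```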
Event parallelism: Distributed memory parallel computing for high energy physics experiments
NASA Astrophysics Data System (ADS)
Nash, Thomas
1989-12-01
This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXes. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on a powerful RISC system, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point-to-point, rather than bussed, communication will be required. Developments in this direction are described.
On propagation of energy flux in de Sitter spacetime
NASA Astrophysics Data System (ADS)
Hoque, Sk Jahanur; Virmani, Amitabh
2018-04-01
In this paper, we explore propagation of energy flux in the future Poincaré patch of de Sitter spacetime. We present two results. First, we compute the flux integral of energy using the symplectic current density of the covariant phase space approach on hypersurfaces of constant radial physical distance. Using this computation we show that in the tt-projection, the integrand in the energy flux expression on the cosmological horizon is the same as that on future null infinity. This suggests that propagation of energy flux in de Sitter spacetime is sharp. Second, we relate our energy flux expression in the tt-projection to a previously obtained expression using the Isaacson stress-tensor approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, A.; Sengupta, M.; Wilcox, S.
Models to compute Global Horizontal Irradiance (GHI) and Direct Normal Irradiance (DNI) have been in development over the last 3 decades. These models can be classified as empirical or physical, based on the approach. Empirical models relate ground based observations with satellite measurements and use these relations to compute surface radiation. Physical models consider the radiation received from the earth at the satellite and create retrievals to estimate surface radiation. While empirical methods have been traditionally used for computing surface radiation for the solar energy industry the advent of faster computing has made operational physical models viable. The Global Solar Insolation Project (GSIP) is an operational physical model from NOAA that computes GHI using the visible and infrared channel measurements from the GOES satellites. GSIP uses a two-stage scheme that first retrieves cloud properties and uses those properties in a radiative transfer model to calculate surface radiation. NREL, University of Wisconsin and NOAA have recently collaborated to adapt GSIP to create a 4 km GHI and DNI product every 30 minutes. This paper presents an outline of the methodology and a comprehensive validation using high quality ground based solar data from the National Oceanic and Atmospheric Administration (NOAA) Surface Radiation (SURFRAD) (http://www.srrb.noaa.gov/surfrad/sitepage.html) and Integrated Surface Insolation Study (ISIS) (http://www.srrb.noaa.gov/isis/isissites.html), the Solar Radiation Research Laboratory (SRRL) at National Renewable Energy Laboratory (NREL), and Sun Spot One (SS1) stations.
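For readers unfamiliar with the two irradiance components, they are tied together by the standard geometric relation GHI = DHI + DNI * cos(solar zenith angle). The snippet below only encodes that textbook relation to fix the meaning of the quantities; it is not part of the GSIP retrieval, and the numbers are placeholders.

```python
import math

def global_horizontal(dni, dhi, zenith_deg):
    """Combine direct-normal and diffuse-horizontal irradiance (W/m^2) into GHI."""
    return dhi + dni * math.cos(math.radians(zenith_deg))

print(global_horizontal(dni=800.0, dhi=120.0, zenith_deg=30.0))  # illustrative values
```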
A Taxonomy on Accountability and Privacy Issues in Smart Grids
NASA Astrophysics Data System (ADS)
Naik, Ameya; Shahnasser, Hamid
2017-07-01
Cyber-Physical Systems (CPS) are combinations of computation, networking, and physical processes. Embedded computers and networks monitor and control the physical processes, which affect computations and vice versa. Two applications of cyber-physical systems are health care and the smart grid. In this paper, we consider privacy aspects of cyber-physical systems applicable to the smart grid. The smart grid, in collaboration with different stakeholders, can help improve power generation, communication, distribution, and consumption. Proper management and monitoring of energy usage by customers and utilities can be achieved through proper transmission and electricity flow; however, increased assimilation and linkage could also increase cyber vulnerability. This paper discusses various frameworks and architectures proposed for achieving accountability in smart grids by addressing privacy issues in the Advanced Metering Infrastructure (AMI). This paper also highlights additional work needed for accountability in more precise terms such as uncertainty or ambiguity, indistinctness, unmanageability, and undetectability.
Physics through the 1990s: Scientific interfaces and technological applications
NASA Technical Reports Server (NTRS)
1986-01-01
The volume examines the scientific interfaces and technological applications of physics. Twelve areas are dealt with: biological physics-biophysics, the brain, and theoretical biology; the physics-chemistry interface-instrumentation, surfaces, neutron and synchrotron radiation, polymers, organic electronic materials; materials science; geophysics-tectonics, the atmosphere and oceans, planets, drilling and seismic exploration, and remote sensing; computational physics-complex systems and applications in basic research; mathematics-field theory and chaos; microelectronics-integrated circuits, miniaturization, future trends; optical information technologies-fiber optics and photonics; instrumentation; physics applications to energy needs and the environment; national security-devices, weapons, and arms control; medical physics-radiology, ultrasonics, NMR, and photonics. An executive summary and many chapters contain recommendations regarding funding, education, industry participation, small-group university research and large facility programs, government agency programs, and computer database needs.
Nonlinear structural crack growth monitoring
Welch, Donald E.; Hively, Lee M.; Holdaway, Ray F.
2002-01-01
A method and apparatus are provided for the detection, through nonlinear manipulation of data, of an indicator of imminent failure due to crack growth in structural elements. The method is a process of determining energy consumption due to crack growth and correlating the energy consumption with physical phenomena indicative of a failure event. The apparatus includes sensors for sensing physical data factors, processors or the like for computing a relationship between the physical data factors and phenomena indicative of the failure event, and apparatus for providing notification of the characteristics and extent of such phenomena.
NASA Astrophysics Data System (ADS)
Eisenbach, Markus
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5PFlop/s and a speedup of 8.6 compared to the CPU only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calhoon, E.C.; Starring, P.W. eds.
1959-08-01
Lectures given at the Ernest O. Lawrence Radiation Laboratory on physics, biophysics, and chemistry for high school science teachers are presented. Topics covered include a mathematics review, atomic physics, nuclear physics, solid-state physics, elementary particles, antiparticles, design of experiments, high-energy particle accelerators, survey of particle detectors, emulsion as a particle detector, counters used in high-energy physics, bubble chambers, computer programming, chromatography, the transuranium elements, health physics, photosynthesis, the chemistry and physics of viruses, the biology of viruses, lipoproteins and heart disease, origin and evolution of the solar system, the role of space satellites in gathering astronomical data, and radiation and life in space. (M.C.G.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael A.
Enabled by petascale supercomputing, the next generation of computer models for wind energy will simulate a vast range of scales and physics, spanning from turbine structural dynamics and blade-scale turbulence to mesoscale atmospheric flow. A single model covering all scales and physics is not feasible. Thus, these simulations will require the coupling of different models/codes, each for different physics, interacting at their domain boundaries.
ERIC Educational Resources Information Center
Gates, David M.
These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. This report introduces two models of the thermal energy budget of a leaf. Typical values for…
NASA Technical Reports Server (NTRS)
1981-01-01
The development of a coal gasification system design and mass and energy balance simulation program for the TVA and other similar facilities is described. The materials-process-product model (MPPM) and the advanced system for process engineering (ASPEN) computer program were selected from available steady state and dynamic models. The MPPM was selected to serve as the basis for development of system level design model structure because it provided the capability for process block material and energy balance and high-level systems sizing and costing. The ASPEN simulation serves as the basis for assessing detailed component models for the system design modeling program. The ASPEN components were analyzed to identify particular process blocks and data packages (physical properties) which could be extracted and used in the system design modeling program. While ASPEN physical properties calculation routines are capable of generating physical properties required for process simulation, not all required physical property data are available, and must be user-entered.
Finite Element Analysis in Concurrent Processing: Computational Issues
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett
2004-01-01
The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
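A toy sketch of option (a), direct minimization of the total potential energy Pi(u) = 1/2 u^T K u - f^T u instead of factorizing K. The 3x3 stiffness matrix and load vector below are placeholders chosen only to make the example runnable; they are not from the study.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder stiffness matrix and load vector for a tiny 3-DOF system.
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
f = np.array([1.0, 0.0, 2.0])

energy = lambda u: 0.5 * u @ K @ u - f @ u   # total potential energy Pi(u)
grad   = lambda u: K @ u - f                 # gradient = residual K u - f

res = minimize(energy, x0=np.zeros(3), jac=grad, method="CG")
print(res.x)                     # displacements found by energy minimization
print(np.linalg.solve(K, f))     # agrees with the direct (factorization) solve
```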
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlburg, Jill; Corones, James; Batchelor, Donald
Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range in temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality which makes it impossible to parallelize over time, makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individual features of magnetically confined plasmas and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world's energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general. This science-based predictive capability, which was cited in the FESAC integrated planning document (IPPA, 2000), represents a significant opportunity for the DOE Office of Science to further the understanding of fusion plasmas to a level unparalleled worldwide.
Physical properties of biological entities: an introduction to the ontology of physics for biology.
Cook, Daniel L; Bookstein, Fred L; Gennari, John H
2011-01-01
As biomedical investigators strive to integrate data and analyses across spatiotemporal scales and biomedical domains, they have recognized the benefits of formalizing languages and terminologies via computational ontologies. Although ontologies for biological entities-molecules, cells, organs-are well-established, there are no principled ontologies of physical properties-energies, volumes, flow rates-of those entities. In this paper, we introduce the Ontology of Physics for Biology (OPB), a reference ontology of classical physics designed for annotating biophysical content of growing repositories of biomedical datasets and analytical models. The OPB's semantic framework, traceable to James Clerk Maxwell, encompasses modern theories of system dynamics and thermodynamics, and is implemented as a computational ontology that references available upper ontologies. In this paper we focus on the OPB classes that are designed for annotating physical properties encoded in biomedical datasets and computational models, and we discuss how the OPB framework will facilitate biomedical knowledge integration. © 2011 Cook et al.
Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad
2015-05-01
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
Join the Center for Applied Scientific Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd; Bremer, Timo; Van Essen, Brian
The Center for Applied Scientific Computing serves as Livermore Lab’s window to the broader computer science, computational physics, applied mathematics, and data science research communities. In collaboration with academic, industrial, and other government laboratory partners, we conduct world-class scientific research and development on problems critical to national security. CASC applies the power of high-performance computing and the efficiency of modern computational methods to the realms of stockpile stewardship, cyber and energy security, and knowledge discovery for intelligence applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cramer, Christopher J.
Charge transfer and charge transport in photoactivated systems are fundamental processes that underlie solar energy capture, solar energy conversion, and photoactivated catalysis, both organometallic and enzymatic. We developed methods, algorithms, and software tools needed for reliable treatment of the underlying physics for charge transfer and charge transport, an undertaking with broad applicability to the goals of the fundamental-interaction component of the Department of Energy Office of Basic Energy Sciences and the exascale initiative of the Office of Advanced Scientific Computing Research.
Study of the physical properties of Ge-S-Ga glassy alloy
NASA Astrophysics Data System (ADS)
Rana, Anjli; Sharma, Raman
2018-05-01
In the present work, we have studied the effect of Ga doping on the physical properties of Ge20S80-xGax glassy alloys. The basic physical parameters that play an important role in determining the structure and strength of the material, viz. average coordination number, lone-pair electrons, mean bond energy, glass transition temperature, electronegativity, bond-distribution probabilities, and cohesive energy, have been computed theoretically for the Ge-S-Ga glassy alloy. The glass transition temperature and mean bond energy have been investigated using the Tichy-Ticha approach, and the cohesive energy has been calculated using the chemical bond approach (CBA). It is found that while the average coordination number increases, all the other parameters decrease with increasing Ga content in the Ge-S-Ga system.
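As an illustration of the first of these parameters, the sketch below computes the average coordination number for Ge20S80-xGax assuming the usual 8-N coordination numbers (Ge = 4, S = 2, Ga = 3). The Ga coordination value and the compositions are illustrative assumptions, not values taken from the paper.

# Sketch: average coordination number <r> for Ge20 S(80-x) Ga(x),
# assuming 8-N coordination numbers (Ge = 4, S = 2, Ga = 3). Illustrative only.

def average_coordination(x_ga, at_ge=20.0, n_ge=4, n_s=2, n_ga=3):
    """Return <r> for Ge20 S(80-x) Ga(x), with compositions in at.%."""
    at_s = 80.0 - x_ga
    return (at_ge * n_ge + at_s * n_s + x_ga * n_ga) / 100.0

for x in (0, 5, 10, 15):
    print(f"x = {x:2d} at.% Ga  ->  <r> = {average_coordination(x):.2f}")

Consistent with the abstract, <r> rises with Ga content (2.40 at x = 0 to 2.55 at x = 15) because threefold-coordinated Ga replaces twofold-coordinated S.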
Filter-fluorescer measurement of low-voltage simulator x-ray energy spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldwin, G.T.; Craven, R.E.
X-ray energy spectra of the Maxwell Laboratories MBS and the Physics International Pulserad 737 were measured using an eight-channel filter-fluorescer array. The PHOSCAT computer code was used to calculate the channel response functions, and the UFO code was used to unfold the spectrum.
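For orientation, a generic unfolding sketch is given below: given a channel response matrix (the role played by PHOSCAT's output) and measured channel signals, a non-negative least-squares fit recovers a discretized spectrum. The responses and data here are made-up placeholders, and this is not the algorithm used by the UFO code.

# Generic few-channel spectrum unfolding: find a non-negative spectrum phi
# such that R @ phi approximately reproduces the measured channel signals d.
import numpy as np
from scipy.optimize import nnls

n_channels, n_bins = 8, 40
rng = np.random.default_rng(0)

energies = np.linspace(1.0, 100.0, n_bins)            # keV grid, illustrative
centers = np.linspace(5.0, 80.0, n_channels)
# Hypothetical smooth channel response functions (stand-ins for PHOSCAT output).
R = np.exp(-0.5 * ((energies - centers[:, None]) / 8.0) ** 2)

true_phi = np.exp(-energies / 30.0)                   # assumed test spectrum
d = R @ true_phi + rng.normal(0.0, 1e-3, n_channels)  # noisy channel data

phi_unfolded, residual = nnls(R, d)                   # non-negative least squares
print("residual norm:", residual)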
Energy and technology review, July--August, 1990
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnham, A.K.
1990-01-01
This report highlights various research programs conducted at the Laboratory, including defense systems, laser research, fusion energy, biomedical and environmental sciences, engineering, physics, chemistry, materials science, and computational analysis. It also contains a statement on the state of the Laboratory and Laboratory administration. (JEF)
Cosmic Strings Stabilized by Quantum Fluctuations
NASA Astrophysics Data System (ADS)
Weigel, H.
2017-03-01
Fermion quantum corrections to the energy of cosmic strings are computed. A number of rather technical tools are needed to formulate this correction, and isospin and gauge invariance are employed to verify the consistency of these tools. These corrections must also be included when computing the energy of strings that are charged by populating fermion bound states in their background. It is found that charged strings are dynamically stabilized in theories similar to the standard model of particle physics.
1981-03-12
agriculture, raw materials, energy sources, computers, lasers, space and aeronautics, high energy physics, and genetics. The four modernizations will be...accomplished and the strong socialist country that is born at the end of the century will be a keyhole for the promotion of science and technology...Process (FNP). Its purpose is to connect with the Kiautsu University computer (model 108) and then to connect a data terminal. This will make a
NASA Astrophysics Data System (ADS)
Colvin, Jeff; Larsen, Jon
2013-11-01
Acknowledgements; 1. Extreme environments: what, where, how; 2. Properties of dense and classical plasmas; 3. Laser energy absorption in matter; 4. Hydrodynamic motion; 5. Shocks; 6. Equation of state; 7. Ionization; 8. Thermal energy transport; 9. Radiation energy transport; 10. Magnetohydrodynamics; 11. Considerations for constructing radiation-hydrodynamics computer codes; 12. Numerical simulations; Appendix: units and constants, glossary of symbols; References; Bibliography; Index.
Appropriate Use Policy | High-Performance Computing | NREL
Appropriate use policy for users of the National Renewable Energy Laboratory (NREL) High Performance Computing (HPC) resources, covering user eligibility (government agency, National Laboratory, University, or private entity), intellectual property terms, and multifactor authentication using an issued token, which may be a physical token or a virtual token used with a one-time password.
Computer program determines chemical composition of physical system at equilibrium
NASA Technical Reports Server (NTRS)
Kwong, S. S.
1966-01-01
A FORTRAN IV digital computer program calculates the equilibrium composition of complex, multiphase chemical systems. The method is free-energy minimization, with the solution of the problem reduced to mathematical operations without explicit concern for the chemistry involved. Certain thermodynamic properties are also determined as byproducts of the main calculations.
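A minimal modern sketch of the same idea (not the NASA FORTRAN IV program) is shown below: the ideal-gas Gibbs energy is minimized over species mole numbers subject to element-balance constraints. The species set, chemical potentials, and temperature are placeholder values for illustration only.

# Free-energy minimization sketch: minimize the ideal-gas Gibbs energy over
# species mole numbers n, subject to conservation of each element (A @ n = b).
import numpy as np
from scipy.optimize import minimize

R = 8.314            # J/(mol K)
T = 3000.0           # K, illustrative temperature
# Species: H2, H, H2O, O2, O (standard chemical potentials are hypothetical).
mu0 = np.array([-80e3, 150e3, -400e3, -100e3, 180e3])   # J/mol, placeholders
# Element matrix: rows = (H, O), columns = species.
A = np.array([[2, 1, 2, 0, 0],
              [0, 0, 1, 2, 1]], dtype=float)
b = np.array([2.0, 1.0])     # total moles of H and O atoms fed in

def gibbs(n):
    n = np.clip(n, 1e-12, None)                      # avoid log(0)
    return np.sum(n * (mu0 + R * T * np.log(n / n.sum())))

res = minimize(gibbs, x0=np.full(5, 0.2),
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               bounds=[(1e-12, None)] * 5, method="SLSQP")
print("equilibrium mole numbers:", res.x)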
Quo vadimus? - Much hard work is still needed
NASA Astrophysics Data System (ADS)
Toffoli, Tommaso
1998-09-01
Physical aspects of computation that just a few years ago appeared tentative and tenuous, such as energy recycling in computation and quantum computation, have now grown into full-fledged scientific businesses. Conversely, concepts born within physics, such as entropy and phase transitions, are now fully at home in computational contexts quite unrelated to physics. Countless symposia cannot exhaust the wealth of research that is turning up in these areas. The “Physics of Computation” workshops cannot and should not try to be an exhaustive forum for these more mature areas. I think it would be to everyone's advantage if the workshops tried to play a more specialized and more critical role; namely, to venture into uncharted territories and to do so with a sense of purpose and of direction. Here I briefly suggest a few possibilities; among these, the need to construct a general, model-independent concept of “amount of computation”, much as we already have one for “amount of information”. I suspect that, much as the inspiration and prototype for the latter was found in physical entropy, so the inspiration and prototype for the former will be found in physical action.
NASA Technical Reports Server (NTRS)
Biggerstaff, J. A. (Editor)
1985-01-01
Topics related to physics instrumentation are discussed, taking into account cryostat and electronic development associated with multidetector spectrometer systems, the influence of materials and counting-rate effects on He-3 neutron spectrometry, a data acquisition system for time-resolved muscle experiments, and a sensitive null detector for precise measurements of integral linearity. Other subjects explored are concerned with space instrumentation, computer applications, detectors, instrumentation for high energy physics, instrumentation for nuclear medicine, environmental monitoring and health physics instrumentation, nuclear safeguards and reactor instrumentation, and a 1984 symposium on nuclear power systems. Attention is given to the application of multiprocessors to scientific problems, a large-scale computer facility for computational aerodynamics, a single-board 32-bit computer for the Fastbus, the integration of detector arrays and readout electronics on a single chip, and three-dimensional Monte Carlo simulation of the electron avalanche in a proportional counter.
Gammaitoni, Luca; Chiuchiú, D; Madami, M; Carlotti, G
2015-06-05
Is it possible to operate a computing device with zero energy expenditure? This question, once considered just an academic dilemma, has recently become strategic for the future of information and communication technology. In fact, in the last forty years the semiconductor industry has been driven by its ability to scale down the size of the complementary metal-oxide semiconductor-field-effect transistor, the building block of present computing devices, and to increase computing capability density up to a point where the power dissipated in heat during computation has become a serious limitation. To overcome such a limitation, since 2004 the Nanoelectronics Research Initiative has launched a grand challenge to address the fundamental limits of the physics of switches. In Europe, the European Commission has recently funded a set of projects with the aim of minimizing the energy consumption of computing. In this article we briefly review state-of-the-art zero-power computing, with special attention paid to the aspects of energy dissipation at the micro- and nanoscales.
NASA Astrophysics Data System (ADS)
Gammaitoni, Luca; Chiuchiú, D.; Madami, M.; Carlotti, G.
2015-06-01
Is it possible to operate a computing device with zero energy expenditure? This question, once considered just an academic dilemma, has recently become strategic for the future of information and communication technology. In fact, in the last forty years the semiconductor industry has been driven by its ability to scale down the size of the complementary metal-oxide semiconductor-field-effect transistor, the building block of present computing devices, and to increase computing capability density up to a point where the power dissipated in heat during computation has become a serious limitation. To overcome such a limitation, since 2004 the Nanoelectronics Research Initiative has launched a grand challenge to address the fundamental limits of the physics of switches. In Europe, the European Commission has recently funded a set of projects with the aim of minimizing the energy consumption of computing. In this article we briefly review state-of-the-art zero-power computing, with special attention paid to the aspects of energy dissipation at the micro- and nanoscales.
Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...
2015-05-22
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
The Physical Basis of the Ionosphere in the Solar-Terrestrial System.
1981-02-01
future. Another problem is related to the energy budget of the upper atmosphere. If the energy loss by airglow is neglected and if all heat sources...a result of detailed computations, i.e., not via an irretrievable loss of detailed known aspects within the computations. J. Forbes, US: Wouldn't the...assumptions about the loss rate, and then, so to say, expand the production rate into a series of functions of the kind shown in Fig. 1. The coefficients of
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; Gerber, Richard
The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greater — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
Grid-Enabled High Energy Physics Research using a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Mahmood, Akhtar
2005-04-01
At Edinboro University of Pennsylvania, we have built an 8-node, 25 Gflops Beowulf Cluster with 2.5 TB of disk storage space to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes. Once fully functional, the Cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process from the proton-proton collisions and the detector's response to the collision debris through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates the real physical collision event inside a particle detector. Grid is the new IT infrastructure for 21st-century science -- a new computing paradigm that is poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
Constraint methods that accelerate free-energy simulations of biomolecules.
Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
Arnold, Jeffrey
2018-05-14
Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
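One of the standard summation techniques alluded to above is compensated (Kahan) summation; a minimal sketch follows. The test data are illustrative, and math.fsum would give a correctly rounded reference value.

# Kahan (compensated) summation: carry a correction term for the low-order
# bits that are rounded away at each addition.
def kahan_sum(values):
    total = 0.0
    c = 0.0                      # running compensation for lost low-order bits
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y      # recovers the part of y that was rounded away
        total = t
    return total

data = [1.0, 1e-16, -1e-16] * 1_000_000   # exact sum is 1.0e6
print(sum(data))         # plain left-to-right sum accumulates rounding drift
print(kahan_sum(data))   # compensated sum stays much closer to 1.0e6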
Electronic Structure Theory | Materials Science | NREL
Electronic structure theory is used to design and discover materials for energy applications, including detailed studies of physical properties using high-performance computing. Key research areas include Materials by Design: NREL leads the U.S. Department of Energy's Center for Next Generation of Materials by Design, which incorporates metastability and synthesizability.
ERIC Educational Resources Information Center
Stevenson, R. D.
This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. This module describes heat transfer processes involved in the exchange of heat…
ERIC Educational Resources Information Center
Stevenson, R. D.
These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. Several modules in the thermodynamic series considered the application of the First Law to…
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2004-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
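A highly simplified sketch of one sifting step of the Empirical Mode Decomposition described above follows: the mean of cubic-spline envelopes through the local maxima and minima is subtracted from the signal. Real EMD iterates the sift to a stopping criterion, extracts successive IMFs from the residual, and treats envelope end effects; this illustration, with a made-up test signal, ignores those details.

# One sifting step of EMD plus a Hilbert-transform instantaneous frequency.
import numpy as np
from scipy.signal import argrelextrema, hilbert
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    # Locate local extrema and build upper/lower cubic-spline envelopes
    # (end effects are ignored in this simplified sketch).
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - 0.5 * (upper + lower)          # candidate intrinsic mode function

t = np.linspace(0.0, 10.0, 2000)
signal = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 0.15 * t)
imf_candidate = sift_once(t, signal)

# Instantaneous frequency of the candidate via the Hilbert transform,
# corresponding to the second step described in the abstract.
phase = np.unwrap(np.angle(hilbert(imf_candidate)))
instantaneous_frequency = np.gradient(phase, t) / (2 * np.pi)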
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2002-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
Correlation energy functional within the GW -RPA: Exact forms, approximate forms, and challenges
NASA Astrophysics Data System (ADS)
Ismail-Beigi, Sohrab
2010-05-01
In principle, the Luttinger-Ward Green’s-function formalism allows one to compute simultaneously the total energy and the quasiparticle band structure of a many-body electronic system from first principles. We present approximate and exact expressions for the correlation energy within the GW -random-phase approximation that are more amenable to computation and allow for developing efficient approximations to the self-energy operator and correlation energy. The exact form is a sum over differences between plasmon and interband energies. The approximate forms are based on summing over screened interband transitions. We also demonstrate that blind extremization of such functionals leads to unphysical results: imposing physical constraints on the allowed solutions (Green’s functions) is necessary. Finally, we present some relevant numerical results for atomic systems.
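Schematically, the "sum over differences between plasmon and interband energies" referred to above can be written in LaTeX as a hedged sketch (the normalization and any first-order corrections depend on the particular GW-RPA formulation):

E_c^{\mathrm{RPA}} \;=\; \tfrac{1}{2} \sum_n \left( \Omega_n - \Omega_n^{0} \right),

where \Omega_n are the RPA (plasmon) excitation energies, i.e. the poles of the screened response, and \Omega_n^{0} are the corresponding independent-particle interband transition energies.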
The Mark III Hypercube-Ensemble Computers
NASA Technical Reports Server (NTRS)
Peterson, John C.; Tuazon, Jesus O.; Lieberman, Don; Pniel, Moshe
1988-01-01
Mark III Hypercube concept applied in development of series of increasingly powerful computers. Processor of each node of Mark III Hypercube ensemble is specialized computer containing three subprocessors and shared main memory. Solves problem quickly by simultaneously processing part of problem at each such node and passing combined results to host computer. Disciplines benefitting from speed and memory capacity include astrophysics, geophysics, chemistry, weather, high-energy physics, applied mechanics, image processing, oil exploration, aircraft design, and microcircuit design.
Large Scale Computing and Storage Requirements for High Energy Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard A.; Wasserman, Harvey
2010-11-24
The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.
Physical Analytics: An emerging field with real-world applications and impact
NASA Astrophysics Data System (ADS)
Hamann, Hendrik
2015-03-01
In the past, most information on the internet originated from humans or computers. However, with the emergence of cyber-physical systems, vast amounts of data are now being created by sensors on devices, machines, etc., digitizing the physical world. While cyber-physical systems are the subject of active research around the world, the vast amount of actual data generated from the physical world has so far attracted little attention from the engineering and physics community. In this presentation we use examples to highlight the opportunities in this new subject of ``Physical Analytics'' for highly interdisciplinary research (including physics, engineering and computer science), which aims at understanding real-world physical systems by leveraging cyber-physical technologies. More specifically, the convergence of the physical world with the digital domain allows physical principles to be applied to everyday problems in a much more effective and informed way than was possible in the past. Very much as traditional applied physics and engineering have made enormous advances and changed our lives by making detailed measurements to understand the physics of an engineered device, we can now apply the same rigor and principles to understand large-scale physical systems. In the talk we first present a set of ``configurable'' enabling technologies for Physical Analytics, including ultralow-power sensing and communication technologies, physical big data management technologies, numerical modeling for physical systems, machine-learning-based physical model blending, and physical-analytics-based automation and control. Then we discuss in detail several concrete applications of Physical Analytics, ranging from energy management in buildings and data centers, environmental sensing and controls, and precision agriculture to renewable energy forecasting and management.
BES-III distributed computing status
NASA Astrophysics Data System (ADS)
Belov, S. D.; Deng, Z. Y.; Korenkov, V. V.; Li, W. D.; Lin, T.; Ma, Z. T.; Nicholson, C.; Pelevanyuk, I. S.; Suo, B.; Trofimov, V. V.; Tsaregorodtsev, A. U.; Uzhinskiy, A. V.; Yan, T.; Yan, X. F.; Zhang, X. M.; Zhemchugov, A. S.
2016-09-01
The BES-III experiment at the Institute of High Energy Physics (Beijing, China) is aimed at precision measurements of e+e- annihilation in the energy range from 2.0 to 4.6 GeV. The world's largest samples of J/psi and psi' events and unique samples of XYZ data have already been collected. The expected increase in data volume in the coming years required a significant evolution of the computing model, namely a shift from centralized data processing to a distributed one. This report summarizes the current design of the BES-III distributed computing system, some key decisions, and experience gained during two years of operations.
The Modeling of Vibration Damping in SMA Wires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, D R; Kloucek, P; Seidman, T I
Through a mathematical and computational model of the physical behavior of shape memory alloy wires, this study shows that localized heating and cooling of such materials provides an effective means of damping vibrational energy. The thermally induced pseudo-elastic behavior of a shape memory wire is modeled using a continuum thermodynamic model and solved computationally as described by the authors in [23]. Computational experiments confirm that up to 80% of an initial shock of vibrational energy can be eliminated at the onset of a thermally-induced phase transformation through the use of spatially-distributed transformation regions along the length of a shape memory alloy wire.
Rock climbing: A local-global algorithm to compute minimum energy and minimum free energy pathways.
Templeton, Clark; Chen, Szu-Hua; Fathizadeh, Arman; Elber, Ron
2017-10-21
The calculation of minimum energy or minimum free energy paths is an important step in the quantitative and qualitative studies of chemical and physical processes. The computations of these coordinates present a significant challenge and have attracted considerable theoretical and computational interest. Here we present a new local-global approach to study reaction coordinates, based on a gradual optimization of an action. Like other global algorithms, it provides a path between known reactants and products, but it uses a local algorithm to extend the current path in small steps. The local-global approach does not require an initial guess to the path, a major challenge for global pathway finders. Finally, it provides an exact answer (the steepest descent path) at the end of the calculations. Numerical examples are provided for the Mueller potential and for a conformational transition in a solvated ring system.
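Since the abstract notes that the method converges to the steepest descent path and uses the Mueller potential as a test case, the sketch below simply follows the steepest-descent (gradient) path on the Müller-Brown test surface. This is plain gradient following for illustration, not the local-global action-optimization algorithm described above; the step size and starting point are arbitrary choices.

# Steepest-descent path following on the Mueller-Brown potential.
import numpy as np

A  = np.array([-200.0, -100.0, -170.0, 15.0])
a  = np.array([-1.0, -1.0, -6.5, 0.7])
b  = np.array([0.0, 0.0, 11.0, 0.6])
c  = np.array([-10.0, -10.0, -6.5, 0.7])
x0 = np.array([1.0, 0.0, -0.5, -1.0])
y0 = np.array([0.0, 0.5, 1.5, 1.0])

def grad(p):
    """Gradient of the Mueller-Brown potential at point p = (x, y)."""
    x, y = p
    e = A * np.exp(a*(x - x0)**2 + b*(x - x0)*(y - y0) + c*(y - y0)**2)
    gx = np.sum(e * (2*a*(x - x0) + b*(y - y0)))
    gy = np.sum(e * (b*(x - x0) + 2*c*(y - y0)))
    return np.array([gx, gy])

def descend(p, step=1e-4, n_steps=200_000, tol=1e-6):
    """Follow -grad V in small steps until the gradient (nearly) vanishes."""
    path = [p.copy()]
    for _ in range(n_steps):
        g = grad(p)
        if np.linalg.norm(g) < tol:
            break
        p = p - step * g
        path.append(p.copy())
    return np.array(path)

# Start slightly displaced from the saddle region and descend toward a minimum.
path = descend(np.array([-0.8, 0.6]))
print("path end point (a local minimum):", path[-1])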
[Computational chemistry in structure-based drug design].
Cao, Ran; Li, Wei; Sun, Han-Zi; Zhou, Yu; Huang, Niu
2013-07-01
Today, the understanding of the sequence and structure of biologically relevant targets is growing rapidly and researchers from many disciplines, physics and computational science in particular, are making significant contributions to modern biology and drug discovery. However, it remains challenging to rationally design small molecular ligands with desired biological characteristics based on the structural information of the drug targets, which demands more accurate calculation of ligand binding free-energy. With the rapid advances in computer power and extensive efforts in algorithm development, physics-based computational chemistry approaches have played more important roles in structure-based drug design. Here we reviewed the newly developed computational chemistry methods in structure-based drug design as well as the elegant applications, including binding-site druggability assessment, large scale virtual screening of chemical database, and lead compound optimization. Importantly, here we address the current bottlenecks and propose practical solutions.
Performance profiling for brachytherapy applications
NASA Astrophysics Data System (ADS)
Choi, Wonqook; Cho, Kihyeon; Yeo, Insung
2018-05-01
In many physics applications, a significant amount of software (e.g. R, ROOT and Geant4) is developed on novel computing architectures, and much effort is expended to ensure the software is efficient in terms of central processing unit (CPU) time and memory usage. Profiling tools are used to evaluate this efficiency; however, few such tools are able to accommodate low-energy physics regions. To address this limitation, we developed a low-energy physics profiling system in Geant4 to profile the CPU time and memory of software in brachytherapy applications. This paper describes and evaluates specific models that are applied to brachytherapy applications in Geant4, such as QGSP_BIC_LIV, QGSP_BIC_EMZ, and QGSP_BIC_EMY. The physics range of this tool allows it to generate low-energy profiles in brachytherapy applications. This was a limitation in previous studies, which led us to develop a new profiling tool that supports profiling in the MeV range, in contrast to the TeV range supported by existing high-energy profiling tools. In order to easily compare profiling results between low-energy and high-energy modes, we employed the same software architecture as that in the SimpliCarlo tool developed at the Fermi National Accelerator Laboratory (Fermilab) for the Large Hadron Collider (LHC). The results show that the newly developed profiling system for low-energy physics (less than MeV) complements the current profiling system used for high-energy physics (greater than TeV) applications.
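A generic illustration of the kind of CPU-time and peak-memory measurement involved, using only the Python standard library (not the Geant4-based tool described above):

# Wrap any callable and report its CPU time and peak traced memory allocation.
import time
import tracemalloc

def profile(func, *args, **kwargs):
    tracemalloc.start()
    t0 = time.process_time()                 # CPU time, not wall-clock time
    result = func(*args, **kwargs)
    cpu_seconds = time.process_time() - t0
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, cpu_seconds, peak_bytes

_, cpu, peak = profile(sum, range(10_000_000))
print(f"CPU time: {cpu:.3f} s, peak traced memory: {peak / 1024:.1f} KiB")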
Oklahoma Center for High Energy Physics (OCHEP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, S; Strauss, M J; Snow, J
2012-02-29
The DOE EPSCoR implementation grant, with support from the State of Oklahoma and from the three universities, Oklahoma State University, University of Oklahoma and Langston University, resulted in the establishment of the Oklahoma Center for High Energy Physics (OCHEP) in 2004. Currently, OCHEP continues to flourish as a vibrant hub for research in experimental and theoretical particle physics and an educational center in the State of Oklahoma. All goals of the original proposal were successfully accomplished. These include the foundation of a new experimental particle physics group at OSU, the establishment of a Tier 2 computing facility for Large Hadron Collider (LHC) and Tevatron data analysis at OU, and the organization of a vital particle physics research center in Oklahoma based on the resources of the three universities. OSU has hired two tenure-track faculty members with initial support from the grant funds. Now both positions are supported through the OSU budget. This new HEP Experimental Group at OSU has established itself as a full member of the Fermilab D0 Collaboration and the LHC ATLAS Experiment and has secured external funds from the DOE and the NSF. These funds currently support 2 graduate students, 1 postdoctoral fellow, and 1 part-time engineer. The grant initiated the creation of a Tier 2 computing facility at OU as part of the Southwest Tier 2 facility, and a permanent Research Scientist was hired at OU to maintain and run the facility. Permanent support for this position has now been provided through the OU university budget. OCHEP represents a successful model of cooperation among several universities, establishing a critical mass of manpower, computing, and hardware resources. This led to increasing Oklahoma's impact in all areas of HEP: theory, experiment, and computation. The Center personnel are involved in cutting-edge research in experimental, theoretical, and computational aspects of High Energy Physics, with research areas ranging from the search for new phenomena at the Fermilab Tevatron and the CERN Large Hadron Collider to theoretical modeling, computer simulation, detector development and testing, and physics analysis. OCHEP faculty members participating in the D0 collaboration at the Fermilab Tevatron and in the ATLAS collaboration at the CERN LHC have made a major impact on the Standard Model (SM) Higgs boson search, top quark studies, B physics studies, and measurements of Quantum Chromodynamics (QCD) phenomena. The OCHEP Grid computing facility consists of a large computer cluster which is playing a major role in data analysis and Monte Carlo production for both the D0 and ATLAS experiments. Theoretical efforts are devoted to new ideas in Higgs boson physics, extra dimensions, neutrino masses and oscillations, Grand Unified Theories, supersymmetric models, dark matter, and nonperturbative quantum field theory. Theory members are making major contributions to the understanding of phenomena being explored at the Tevatron and the LHC. They have proposed new models for Higgs bosons, and have suggested new signals for extra dimensions and for the search for supersymmetric particles. During the seven-year period when OCHEP was partially funded through the DOE EPSCoR implementation grant, OCHEP members published over 500 refereed journal articles and made over 200 invited presentations at major conferences.
The Center is also involved in education and outreach activities by offering summer research programs for high school teachers and college students, and organizing summer workshops for high school teachers, sometimes coordinating with the QuarkNet programs at OSU and OU. The details of the Center can be found at http://ochep.phy.okstate.edu.
Algorithm Development for the Multi-Fluid Plasma Model
2011-05-30
Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.
2016-01-01
Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878
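For reference, the PICCS-type objective that spectral PICCS builds on can be sketched in LaTeX as follows (a hedged form with the full-spectrum filtered-back-projection image x_p as the prior; the exact constrained formulation and parameter choices in the paper may differ):

\min_{x} \; \alpha \, \bigl\| \Psi_1 (x - x_{\mathrm{p}}) \bigr\|_1 + (1 - \alpha) \, \bigl\| \Psi_2 \, x \bigr\|_1 \quad \text{subject to} \quad A x = y_{\mathrm{bin}},

where A is the system matrix, y_bin the projection data of one energy bin, and \Psi_1, \Psi_2 are sparsifying transforms (e.g., discrete gradients).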
NASA Astrophysics Data System (ADS)
Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.
2016-09-01
Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.
Anderson, P. S. L.; Rayfield, E. J.
2012-01-01
Computational models such as finite-element analysis offer biologists a means of exploring the structural mechanics of biological systems that cannot be directly observed. Validated against experimental data, a model can be manipulated to perform virtual experiments, testing variables that are hard to control in physical experiments. The relationship between tooth form and the ability to break down prey is key to understanding the evolution of dentition. Recent experimental work has quantified how tooth shape promotes fracture in biological materials. We present a validated finite-element model derived from physical compression experiments. The model shows close agreement with strain patterns observed in photoelastic test materials and reaction forces measured during these experiments. We use the model to measure strain energy within the test material when different tooth shapes are used. Results show that notched blades deform materials for less strain energy cost than straight blades, giving insights into the energetic relationship between tooth form and prey materials. We identify a hypothetical ‘optimal’ blade angle that minimizes strain energy costs and test alternative prey materials via virtual experiments. Using experimental data and computational models offers an integrative approach to understand the mechanics of tooth morphology. PMID:22399789
University of Arizona High Energy Physics Program at the Cosmic Frontier 2014-2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
abate, alex; cheu, elliott
This is the final technical report from the University of Arizona High Energy Physics program at the Cosmic Frontier covering the period 2014-2016. The work aims to advance the understanding of dark energy using the Large Synoptic Survey Telescope (LSST). Progress on the engineering design of the power supplies for the LSST camera is discussed. A variety of contributions to photometric redshift measurement uncertainties were studied. The effect of the intergalactic medium on the photometric redshift of very distant galaxies was evaluated. Computer code was developed realizing the full chain of calculations needed to accurately and efficiently run large-scale simulations.
INTERNATIONAL CONFERENCE ON ULTRASHORT HIGH-ENERGY RADIATION AND MATTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wootton, A J
2004-01-15
The workshop is intended as a forum to discuss the latest experimental, theoretical and computational results related to the interaction of high energy radiation with matter. High energy is intended to mean soft x-ray and beyond, but important new results from visible systems will be incorporated. The workshop will be interdisciplinary amongst scientists from many fields, including: plasma physics; x-ray physics and optics; solid state physics and material science; biology; quantum optics. Topics will include, among other subjects: understanding damage thresholds for x-ray interactions with matter; developing ~5 keV x-ray sources to investigate damage; developing ~100 keV Thomson sources for material studies; developing short-pulse (100 fs and less) x-ray diagnostics; developing novel x-ray optics; and developing models for the response of biological samples to ultra-intense, sub-ps high-energy x-ray radiation.
Energy and time determine scaling in biological and computer designs.
Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-08-19
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).
A Program for Clinical Care in Physical Trauma--Combat Surgery and Bioengineering.
and energy exchange; Bone composition and fractures; Computer technology in intensive care; Mannitol toxicity; Liver blood flow transplantation; Infections and immunology--Candida infection and Pseudomonas immunity. (Author)
Future computing platforms for science in a power constrained era
Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; ...
2015-12-23
Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market, including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. In conclusion, we evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).
Photonic Design: From Fundamental Solar Cell Physics to Computational Inverse Design
NASA Astrophysics Data System (ADS)
Miller, Owen Dennis
Photonic innovation is becoming ever more important in the modern world. Optical systems are dominating shorter and shorter communications distances, LEDs are rapidly emerging for a variety of applications, and solar cells show potential to be a mainstream technology in the energy space. The need for novel, energy-efficient photonic and optoelectronic devices will only increase. This work unites fundamental physics and a novel computational inverse design approach towards such innovation. The first half of the dissertation is devoted to the physics of high-efficiency solar cells. As solar cells approach fundamental efficiency limits, their internal physics transforms. Photonic considerations, instead of electronic ones, are the key to reaching the highest voltages and efficiencies. Proper photon management led to Alta Devices' recent dramatic increase of the solar cell efficiency record to 28.3%. Moreover, approaching the Shockley-Queisser limit for any solar cell technology will require light extraction to become a part of all future designs. The second half of the dissertation introduces inverse design as a new computational paradigm in photonics. An assortment of techniques (FDTD, FEM, etc.) has enabled quick and accurate simulation of the "forward problem" of finding fields for a given geometry. However, scientists and engineers are typically more interested in the inverse problem: for a desired functionality, what geometry is needed? Answering this question breaks from the emphasis on the forward problem and forges a new path in computational photonics. The framework of shape calculus enables one to quickly find superior, non-intuitive designs. Novel designs for optical cloaking and sub-wavelength solar cell applications are presented.
Michael H. L. S. Wang; Cancelo, Gustavo; Green, Christopher; ...
2016-06-25
Here, we explore the Micron Automata Processor (AP) as a suitable commodity technology that can address the growing computational needs of pattern recognition in High Energy Physics (HEP) experiments. A toy detector model is developed for which an electron track confirmation trigger based on the Micron AP serves as a test case. Although primarily meant for high speed text-based searches, we demonstrate a proof of concept for the use of the Micron AP in a HEP trigger application.
A statistical physics viewpoint on the dynamics of the bouncing ball
NASA Astrophysics Data System (ADS)
Chastaing, Jean-Yonnel; Géminard, Jean-Christophe; Bertin, Eric
2016-06-01
We compute, in a statistical physics perspective, the dynamics of a bouncing ball maintained in a chaotic regime thanks to collisions with a plate experiencing an aperiodic vibration. We analyze in detail the energy exchanges between the bead and the vibrating plate, and show that the coupling between the bead and the plate can be modeled in terms of both a dissipative process and an injection mechanism by an energy reservoir. An analysis of the injection statistics in terms of a fluctuation relation is also provided.
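A minimal event-level sketch of the energy bookkeeping described above follows: a ball bounces on an aperiodically moving, infinitely massive plate with restitution coefficient e, and the kinetic-energy change at each collision is classified as injection or dissipation. All parameters are placeholders, not the experimental values, and air drag and finite plate mass are ignored.

# Event-driven bookkeeping of per-collision energy exchange (per unit mass).
import numpy as np

g, e = 9.81, 0.9
rng = np.random.default_rng(1)

v = 1.0                                       # takeoff speed after last bounce
injected, dissipated = [], []
for _ in range(10_000):
    v_impact = -v                             # ball returns with the same speed
    v_plate = rng.normal(0.0, 0.3)            # aperiodic plate velocity at impact
    v_new = (1 + e) * v_plate - e * v_impact  # restitution against a moving plate
    de = 0.5 * (v_new**2 - v_impact**2)       # kinetic-energy change per unit mass
    (injected if de > 0 else dissipated).append(de)
    v = abs(v_new)                            # ball leaves the plate upward

print("mean injected:", np.mean(injected), " mean dissipated:", np.mean(dissipated))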
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael H. L. S. Wang; Cancelo, Gustavo; Green, Christopher
Here, we explore the Micron Automata Processor (AP) as a suitable commodity technology that can address the growing computational needs of pattern recognition in High Energy Physics (HEP) experiments. A toy detector model is developed for which an electron track confirmation trigger based on the Micron AP serves as a test case. Although primarily meant for high speed text-based searches, we demonstrate a proof of concept for the use of the Micron AP in a HEP trigger application.
High-energy physics software parallelization using database techniques
NASA Astrophysics Data System (ADS)
Argante, E.; van der Stok, P. D. V.; Willers, I.
1997-02-01
A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is largely transparent to the programmer, resulting in a higher level of abstraction than the native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with that of native PVM and MPI.
BESIII Physics Analysis on Hadoop Platform
NASA Astrophysics Data System (ADS)
Huo, Jing; Zang, Dongsong; Lei, Xiaofeng; Li, Qiang; Sun, Gongxing
2014-06-01
In the past 20 years, computing clusters have been widely used for High Energy Physics data processing. Jobs running on a traditional cluster with a Data-to-Computing structure have to read large volumes of data over the network to the computing nodes for analysis, making I/O latency a bottleneck of the whole system. The new distributed computing technology based on the MapReduce programming model has many advantages, such as high concurrency, high scalability and high fault tolerance, and it can benefit the handling of Big Data. This paper introduces the idea of using the MapReduce model for BESIII physics analysis, and presents a new data analysis system structure based on the Hadoop platform, which not only greatly improves the efficiency of data analysis but also reduces the cost of system building. Moreover, this paper establishes an event pre-selection system based on the event-level metadata (TAGs) database to optimize the data analysis procedure.
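A plain-Python stand-in for the map/reduce workflow described above is sketched below: the map step applies a TAG-level pre-selection to events and emits (channel, value) pairs, and the reduce step aggregates them per channel. The event fields, cuts, and sample events are hypothetical, and this is not the BESIII analysis code itself.

# Minimal map/reduce-style event pre-selection and aggregation.
from collections import defaultdict

def map_event(event):
    """Emit (channel, invariant mass) for events passing a TAG-level cut."""
    if event["n_charged"] == 4 and event["total_energy"] > 3.0:
        yield event["channel"], event["inv_mass"]

def reduce_by_key(pairs):
    """Group mapped values by key, as the shuffle/reduce phase would."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {k: (len(v), sum(v) / len(v)) for k, v in grouped.items()}

events = [
    {"channel": "J/psi", "n_charged": 4, "total_energy": 3.1, "inv_mass": 3.097},
    {"channel": "psi'",  "n_charged": 2, "total_energy": 3.7, "inv_mass": 3.686},
]
mapped = (pair for ev in events for pair in map_event(ev))
print(reduce_by_key(mapped))   # {channel: (n_selected, mean invariant mass)}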
Students from Pueblo Triumph in Colorado Science Bowl
Students answered questions about physics, math, biology, astronomy, chemistry, computers and the earth sciences. The competition has evolved into one of the Energy Department's premier educational programs in science and math.
Students from Aurora Triumph in Denver Regional Science Bowl
Students answered questions about physics, math, biology, astronomy, chemistry, computers and the earth sciences. The competition has evolved into one of the Energy Department's premier educational programs in science and math.
ERIC Educational Resources Information Center
Science News, 1983
1983-01-01
Highlights important 1983 news stories reported in Science News. Stories are categorized under: anthropology/paleontology; behavior; biology; chemistry; earth sciences; energy; environment; medicine; physics; science and society; space sciences and astronomy; and technology and computers. (JN)
NASA Technical Reports Server (NTRS)
Schmit, Ryan
2010-01-01
To develop New Flow Control Techniques: a) Knowledge of the Flow Physics with and without control. b) How does Flow Control Affect Flow Physics (What Works to Optimize the Design?). c) Energy or Work Efficiency of the Control Technique (Cost - Risk - Benefit Analysis). d) Supportability, e.g. (size of equipment, computational power, power supply) (Allows Designer to include Flow Control in Plans).
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
Finding a roadmap to achieve large neuromorphic hardware systems
Hasler, Jennifer; Marr, Bo
2013-01-01
Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are reaching physical limits. These silicon systems mimic extremely energy-efficient neural computing structures, potentially both for solving engineering applications and for understanding neural computation. Toward this end, the authors provide a glimpse at what the technology evolution roadmap looks like for these systems, so that neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago. Scaling of energy efficiency, performance, and size will be discussed, as well as how the implementation and application space of neuromorphic systems is expected to evolve over time. PMID:24058330
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Hack, James; Riley, Katherine
The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today’s world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR’s mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today’s tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.
Potential implementation of reservoir computing models based on magnetic skyrmions
NASA Astrophysics Data System (ADS)
Bourianoff, George; Pinna, Daniele; Sitte, Matthias; Everschor-Sitte, Karin
2018-05-01
Reservoir Computing is a type of recurrent neural network commonly used for recognizing and predicting spatio-temporal events, relying on a complex hierarchy of nested feedback loops to generate a memory functionality. The Reservoir Computing paradigm does not require any knowledge of the reservoir topology or node weights for training purposes and can therefore utilize naturally existing networks formed by a wide variety of physical processes. Most prior efforts to implement reservoir computing have focused on memristor techniques for realizing recurrent neural networks. This paper examines the potential of magnetic skyrmion fabrics and the complex current patterns which form in them as an attractive physical instantiation for Reservoir Computing. We argue that their nonlinear dynamical interplay resulting from anisotropic magnetoresistance and spin-torque effects allows for an effective and energy-efficient nonlinear processing of spatio-temporal events with the aim of event recognition and prediction.
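The training-free nature of the reservoir emphasized above can be illustrated with a conventional echo state network standing in for the skyrmion fabric: the recurrent weights are fixed and random, and only a linear readout is fitted by ridge regression. All sizes, signals and parameters in this sketch are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, T = 1, 200, 2000

    # Fixed random reservoir: its internal weights are never adjusted during training.
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0.0, 1.0, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

    u = rng.uniform(-1, 1, (T, n_in))                  # input signal
    target = np.roll(u[:, 0], 3)                       # toy task: recall the input 3 steps back

    x = np.zeros(n_res)
    states = np.zeros((T, n_res))
    for t in range(T):
        x = np.tanh(W_in @ u[t] + W @ x)               # nonlinear reservoir dynamics
        states[t] = x

    # Train only the linear readout (ridge regression), as in reservoir computing.
    lam = 1e-6
    W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ target)
    pred = states @ W_out
    print("training NRMSE:", np.sqrt(np.mean((pred - target) ** 2)) / np.std(target))

In a skyrmion-fabric implementation the role of the state vector would be played by measured current or resistance patterns rather than simulated tanh units; the readout-only training step is the part the paradigm shares across physical substrates.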
Computational studies of physical properties of Nb-Si based alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, Lizhi
2015-04-16
The overall goal is to provide physical-property data supplementing experiments for thermodynamic modeling and other simulations, such as phase field simulations of microstructure and continuum simulations of mechanical properties. These predictive computational modeling and simulation efforts may yield insights that can be used to guide materials design, processing, and manufacture. Ultimately, they may lead to a usable Nb-Si based alloy, which could play an important role in the current push toward greener energy. The main objectives of the proposed projects are: (1) developing a first-principles, supercell-based approach for calculating thermodynamic and mechanical properties of ordered crystals and disordered lattices, including solid solutions; (2) applying the supercell approach to Nb-Si based alloys to compute physical-property data that can be used for thermodynamic modeling and other simulations to guide the optimal design of Nb-Si based alloys.
NASA Astrophysics Data System (ADS)
Fasel, Markus
2016-10-01
High-Performance Computing Systems are powerful tools tailored to support large-scale applications that rely on low-latency inter-process communications to run efficiently. By design, these systems often impose constraints on application workflows, such as limited external network connectivity and whole node scheduling, that make more general-purpose computing tasks, such as those commonly found in high-energy nuclear physics applications, more difficult to carry out. In this work, we present a tool designed to simplify access to such complicated environments by handling the common tasks of job submission, software management, and local data management, in a framework that is easily adaptable to the specific requirements of various computing systems. The tool, initially constructed to process stand-alone ALICE simulations for detector and software development, was successfully deployed on the NERSC computing systems, Carver, Hopper and Edison, and is being configured to provide access to the next-generation NERSC system, Cori. In this report, we describe the tool and discuss our experience running ALICE applications on NERSC HPC systems. The discussion will include our initial benchmarks of Cori compared to other systems and our attempts to leverage the new capabilities offered with Cori to support data-intensive applications, with a future goal of full integration of such systems into ALICE grid operations.
HEPData: a repository for high energy physics data
NASA Astrophysics Data System (ADS)
Maguire, Eamonn; Heinrich, Lukas; Watt, Graeme
2017-10-01
The Durham High Energy Physics Database (HEPData) has been built up over the past four decades as a unique open-access repository for scattering data from experimental particle physics papers. It comprises data points underlying several thousand publications. Over the last two years, the HEPData software has been completely rewritten using modern computing technologies as an overlay on the Invenio v3 digital library framework. The software is open source with the new site available at https://hepdata.net now replacing the previous site at http://hepdata.cedar.ac.uk. In this write-up, we describe the development of the new site and explain some of the advantages it offers over the previous platform.
Students From Highlands Ranch Triumph in Colorado Science Bowl
In the final round, students answered rapid-fire questions about physics, math, biology, astronomy, chemistry, computers and the earth sciences in a competition designed to stimulate interest in science and math. The competition has evolved into one of the Energy Department's premier educational events.
Partitioning a macroscopic system into independent subsystems
NASA Astrophysics Data System (ADS)
Delle Site, Luigi; Ciccotti, Giovanni; Hartmann, Carsten
2017-08-01
We discuss the problem of partitioning a macroscopic system into a collection of independent subsystems. The partitioning of a system into replica-like subsystems is nowadays a subject of major interest in several fields of theoretical and applied physics. The thermodynamic approach currently favoured by practitioners is based on a phenomenological definition of an interface energy associated with the partition, due to a lack of easily computable expressions for a microscopic (i.e. particle-based) interface energy. In this article, we outline a general approach to derive sharp and computable bounds for the interface free energy in terms of microscopic statistical quantities. We discuss potential applications in nanothermodynamics and outline possible future directions.
NASA Astrophysics Data System (ADS)
Law, Ka-Hei; Gordon, Karl D.; Misselt, Karl A.
2018-06-01
Understanding the properties of stellar populations and interstellar dust has important implications for galaxy evolution. In normal star-forming galaxies, stars and the interstellar medium dominate the radiation from ultraviolet (UV) to infrared (IR). In particular, interstellar dust absorbs and scatters UV and optical light, re-emitting the absorbed energy in the IR. This is a strongly nonlinear process that makes independent studies of the UV-optical and IR susceptible to large uncertainties and degeneracies. Over the years, UV to IR spectral energy distribution (SED) fitting utilizing varying approximations has revealed important results on the stellar and dust properties of galaxies. Yet the approximations limit the fidelity of the derived properties. There is now sufficient computing power available to remove these approximations and map out the landscape of galaxy SEDs using full dust radiative transfer. This improves upon previous work by directly connecting the UV, optical, and IR through dust grain physics. We present the DIRTYGrid, a grid of radiative transfer models of SEDs of dusty stellar populations in galactic environments designed to span the full range of physical parameters of galaxies. Using the stellar and gas radiation input from the stellar population synthesis model PEGASE, our radiative transfer model DIRTY self-consistently computes the UV to far-IR/sub-mm SEDs for each set of parameters in our grid. DIRTY computes the dust absorption, scattering, and emission from the local radiation field and a dust grain model, thereby physically connecting the UV-optical to the IR. We describe the computational method and explain the choices of parameters in DIRTYGrid. The computation took millions of CPU hours on supercomputers, and the SEDs produced are an invaluable tool for fitting multi-wavelength data sets. We provide the complete set of SEDs in an online table.
Computation of NLO processes involving heavy quarks using Loop-Tree Duality
NASA Astrophysics Data System (ADS)
Driencourt-Mangin, Félix
2017-03-01
We present a new method to compute higher-order corrections to physical cross-sections, at Next-to-Leading Order and beyond. This method, based on the Loop Tree Duality, leads to locally integrable expressions in four dimensions. By introducing a physically motivated momentum mapping between the momenta involved in the real and the virtual contributions, infrared singularities naturally cancel at integrand level, without the need to introduce subtraction counter-terms. Ultraviolet singularities are dealt with by using dual representations of suitable counter-terms, with some subtleties regarding the self-energy contributions. As an example, we apply this method to compute the 1 → 2 decay rate in the context of a scalar toy model with massive particles.
Jensen-Otsu, Elsbeth; Austin, Gregory L
2015-11-20
Antidepressants have been associated with weight gain, but the causes are unclear. The aims of this study were to assess the association of antidepressant use with energy intake, macronutrient diet composition, and physical activity. We used data on medication use, energy intake, diet composition, and physical activity for 3073 eligible adults from the 2005-2006 National Health and Nutrition Examination Survey (NHANES). Potential confounding variables, including depression symptoms, were included in the models assessing energy intake, physical activity, and sedentary behavior. Antidepressant users reported consuming an additional (mean ± S.E.) 215 ± 73 kcal/day compared to non-users (p = 0.01). There were no differences in percent calories from sugar, fat, or alcohol between the two groups. Antidepressant users had similar frequencies of walking or biking, engaging in muscle-strengthening activities, and engaging in moderate or vigorous physical activity. Antidepressant users were more likely to use a computer for ≥2 h/day (OR 1.77; 95% CI: 1.09-2.90), but TV watching was similar between the two groups. These results suggest increased energy intake and sedentary behavior may contribute to weight gain associated with antidepressant use. Focusing on limiting food intake and sedentary behaviors may be important in mitigating the weight gain associated with antidepressant use.
Jensen-Otsu, Elsbeth; Austin, Gregory L.
2015-01-01
Antidepressants have been associated with weight gain, but the causes are unclear. The aims of this study were to assess the association of antidepressant use with energy intake, macronutrient diet composition, and physical activity. We used data on medication use, energy intake, diet composition, and physical activity for 3073 eligible adults from the 2005–2006 National Health and Nutrition Examination Survey (NHANES). Potential confounding variables, including depression symptoms, were included in the models assessing energy intake, physical activity, and sedentary behavior. Antidepressant users reported consuming an additional (mean ± S.E.) 215 ± 73 kcal/day compared to non-users (p = 0.01). There were no differences in percent calories from sugar, fat, or alcohol between the two groups. Antidepressant users had similar frequencies of walking or biking, engaging in muscle-strengthening activities, and engaging in moderate or vigorous physical activity. Antidepressant users were more likely to use a computer for ≥2 h/day (OR 1.77; 95% CI: 1.09–2.90), but TV watching was similar between the two groups. These results suggest increased energy intake and sedentary behavior may contribute to weight gain associated with antidepressant use. Focusing on limiting food intake and sedentary behaviors may be important in mitigating the weight gain associated with antidepressant use. PMID:26610562
Computing Interactions Of Free-Space Radiation With Matter
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.; Townsend, L. W.; Badavi, F. F.; Tripathi, R. K.; Silberberg, R.; Tsao, C. H.; Badwar, G. D.
1995-01-01
The High Charge and Energy Transport (HZETRN) computer program is a computationally efficient, user-friendly software package addressing the problem of the transport of, and shielding against, radiation in free space. It is designed as a "black box" for design engineers who are not concerned with the physics of the underlying atomic and nuclear radiation processes in the free-space environment, but are primarily interested in obtaining fast and accurate dosimetric information for the design and construction of modules and devices for use in free space. Computational efficiency is achieved by a unique algorithm based on a deterministic approach to the solution of the Boltzmann equation rather than the computationally intensive statistical Monte Carlo method. Written in FORTRAN.
Computational nuclear quantum many-body problem: The UNEDF project
NASA Astrophysics Data System (ADS)
Bogner, S.; Bulgac, A.; Carlson, J.; Engel, J.; Fann, G.; Furnstahl, R. J.; Gandolfi, S.; Hagen, G.; Horoi, M.; Johnson, C.; Kortelainen, M.; Lusk, E.; Maris, P.; Nam, H.; Navratil, P.; Nazarewicz, W.; Ng, E.; Nobre, G. P. A.; Ormand, E.; Papenbrock, T.; Pei, J.; Pieper, S. C.; Quaglioni, S.; Roche, K. J.; Sarich, J.; Schunck, N.; Sosonkina, M.; Terasaki, J.; Thompson, I.; Vary, J. P.; Wild, S. M.
2013-10-01
The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.
UFMulti: A new parallel processing software system for HEP
NASA Astrophysics Data System (ADS)
Avery, Paul; White, Andrew
1989-12-01
UFMulti is a multiprocessing software package designed for general-purpose high energy physics applications, including physics and detector simulation, data reduction, and DST physics analysis. The system is particularly well suited for installations where several workstations or computers are connected through a local area network (LAN). The initial configuration of the software is currently running on VAX/VMS machines, with a planned extension to ULTRIX, using the new RISC CPUs from Digital, in the near future.
Surles, M C; Richardson, J S; Richardson, D C; Brooks, F P
1994-02-01
We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and (2) they help the user understand how different energy terms interact to stabilize a given conformation. The Sculpt paradigm combines many of the best features of interactive graphical modeling, energy minimization, and actual physical models, and we propose it as an especially productive way to use current and future increases in computer speed.
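The constrained-minimization core of this approach, rigid constraints handled by an augmented Lagrange-multiplier method while soft energy terms are minimized, can be sketched on a toy system. The two-particle energy, the single bond-length constraint and all step sizes below are illustrative assumptions, not the Sculpt force field or solver.

    import numpy as np

    L0 = 1.0                        # rigidly constrained "bond" length
    target = np.array([2.0, 0.5])   # a user "tug" pulling particle 1 toward this point

    def energy(x):
        p0, p1 = x[:2], x[2:]
        return 0.5 * np.sum((p1 - target) ** 2) + 0.5 * np.sum(p0 ** 2)  # soft terms

    def constraint(x):
        p0, p1 = x[:2], x[2:]
        return np.linalg.norm(p1 - p0) - L0             # must be driven to zero

    def grad(f, x, h=1e-6):
        # Simple central-difference gradient keeps the sketch short.
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    x = np.array([0.0, 0.0, 1.5, 0.0])
    lam, mu = 0.0, 10.0             # multiplier and penalty strength
    for outer in range(20):
        # Inner loop: minimize the augmented Lagrangian at fixed multiplier.
        aug = lambda z: energy(z) + lam * constraint(z) + 0.5 * mu * constraint(z) ** 2
        for _ in range(500):
            x -= 0.01 * grad(aug, x)
        lam += mu * constraint(x)   # multiplier update drives the constraint to zero
    print("constraint violation:", constraint(x), " energy:", energy(x))

The same pattern, soft terms minimized while the multiplier update enforces the rigid constraints exactly, is what permits interactive rates when the matrix of constraint gradients is sparse and banded, as the abstract notes.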
Autonomous perception and decision making in cyber-physical systems
NASA Astrophysics Data System (ADS)
Sarkar, Soumik
2011-07-01
The cyber-physical system (CPS) is a relatively new interdisciplinary technology area that includes the general class of embedded and hybrid systems. CPSs require integration of computation and physical processes that involves the aspects of physical quantities such as time, energy and space during information processing and control. The physical space is the source of information and the cyber space makes use of the generated information to make decisions. This dissertation proposes an overall architecture of autonomous perception-based decision & control of complex cyber-physical systems. Perception involves the recently developed framework of Symbolic Dynamic Filtering for abstraction of physical world in the cyber space. For example, under this framework, sensor observations from a physical entity are discretized temporally and spatially to generate blocks of symbols, also called words that form a language. A grammar of a language is the set of rules that determine the relationships among words to build sentences. Subsequently, a physical system is conjectured to be a linguistic source that is capable of generating a specific language. The proposed technology is validated on various (experimental and simulated) case studies that include health monitoring of aircraft gas turbine engines, detection and estimation of fatigue damage in polycrystalline alloys, and parameter identification. Control of complex cyber-physical systems involve distributed sensing, computation, control as well as complexity analysis. A novel statistical mechanics-inspired complexity analysis approach is proposed in this dissertation. In such a scenario of networked physical systems, the distribution of physical entities determines the underlying network topology and the interaction among the entities forms the abstract cyber space. It is envisioned that the general contributions, made in this dissertation, will be useful for potential application areas such as smart power grids and buildings, distributed energy systems, advanced health care procedures and future ground and air transportation systems.
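The symbolization step underlying the perception layer described above can be sketched directly: partition the range of a sensor signal into a small alphabet, convert the time series into a symbol string, and estimate the symbol transition probabilities that serve as a compact behavioural signature. The partitioning scheme, alphabet size and anomaly measure below are illustrative assumptions, not the dissertation's full Symbolic Dynamic Filtering implementation.

    import numpy as np

    def symbolize(signal, n_symbols=4):
        """Map a real-valued signal onto symbols via a uniform partition of its range."""
        edges = np.linspace(signal.min(), signal.max(), n_symbols + 1)[1:-1]
        return np.digitize(signal, edges)              # integers 0 .. n_symbols-1

    def transition_matrix(symbols, n_symbols=4):
        """Estimate first-order symbol transition probabilities (a simple 'grammar')."""
        counts = np.zeros((n_symbols, n_symbols))
        for a, b in zip(symbols[:-1], symbols[1:]):
            counts[a, b] += 1
        counts += 1e-12                                # avoid division by zero
        return counts / counts.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(1)
    t = np.linspace(0, 20 * np.pi, 5000)
    nominal = np.sin(t) + 0.1 * rng.normal(size=t.size)                       # healthy behaviour
    faulty = np.sin(t) + 0.4 * np.sin(3 * t) + 0.1 * rng.normal(size=t.size)  # degraded behaviour

    P_nominal = transition_matrix(symbolize(nominal))
    P_faulty = transition_matrix(symbolize(faulty))
    # A crude anomaly measure: the distance between the two estimated transition matrices.
    print("anomaly measure:", np.linalg.norm(P_faulty - P_nominal))

Growth of such an anomaly measure over successive observation windows is the kind of statistic a health-monitoring application, for example for gas turbine engines or fatigue damage, could track.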
High School Students Gear Up for Battle of the Brains
The question-and-answer tournament, which focuses on physics, math, biology, astronomy, chemistry, computers and the earth sciences, is designed to help stimulate interest in science and math. The competition has evolved into one of the Energy Department's premier educational events.
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
NASA Astrophysics Data System (ADS)
Beck, A.; Frederiksen, J. T.; Dérouillat, J.
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
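The load-imbalance problem identified above can be made concrete with a toy rebalancing step: given the particle count per cell along one dimension, recompute domain boundaries so each rank owns roughly the same number of particles. This illustrates the general idea only and is not the algorithm proposed in the paper.

    import numpy as np

    def rebalance(particles_per_cell, n_ranks):
        """Return cell indices splitting the domain into n_ranks chunks of ~equal particle count."""
        cumulative = np.cumsum(particles_per_cell)
        targets = cumulative[-1] * np.arange(1, n_ranks) / n_ranks
        # Place each boundary after the cell where the cumulative count reaches the target.
        return np.searchsorted(cumulative, targets) + 1

    rng = np.random.default_rng(2)
    # A strongly peaked particle distribution, as behind a laser wakefield driver.
    cells = 1024
    density = 1.0 + 50.0 * np.exp(-((np.arange(cells) - 700) / 30.0) ** 2)
    particles = rng.poisson(density)

    bounds = rebalance(particles, n_ranks=8)
    loads = [chunk.sum() for chunk in np.split(particles, bounds)]
    print("per-rank particle counts:", loads)
    print("imbalance (max/mean):", max(loads) / (sum(loads) / len(loads)))

A production PIC code must also weigh field-solver work and the cost of moving particles between ranks, which is where more sophisticated strategies such as the one proposed in the paper come in.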
A Validation Framework for the Long Term Preservation of High Energy Physics Data
NASA Astrophysics Data System (ADS)
Ozerov, Dmitri; South, David M.
2014-06-01
The study group on data preservation in high energy physics, DPHEP, is moving to a new collaboration structure, which will focus on the implementation of preservation projects, such as those described in the group's large-scale report published in 2012. One such project is the development of a validation framework, which checks the compatibility of evolving computing environments and technologies with the experiments' software for as long as possible, with the aim of substantially extending the lifetime of the analysis software, and hence the usability of the data. The framework is designed to automatically test and validate the software and data of an experiment against changes and upgrades to the computing environment, as well as changes to the experiment software itself. Technically, this is realised using a framework capable of hosting a number of virtual machine images, built with different configurations of operating systems and the relevant software, including any necessary external dependencies.
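A schematic of how such an automated validation cycle might be driven is sketched below: loop over a set of virtual machine images, run the experiment's test suite inside each, and record which environments still reproduce the reference results. The image names, the vm-run command and the test script are hypothetical placeholders, not the actual interface of the DPHEP framework.

    import json
    import subprocess
    from datetime import datetime, timezone

    # Hypothetical VM images spanning legacy and current computing environments.
    IMAGES = ["sl5-legacy-sw", "sl6-frozen-2012", "centos7-current", "centos7-new-compiler"]
    TEST_CMD = "run_experiment_tests.sh"   # placeholder for the experiment's validation suite

    report = {"timestamp": datetime.now(timezone.utc).isoformat(), "results": {}}
    for image in IMAGES:
        # Placeholder invocation; a real framework would boot the image and mount the software.
        proc = subprocess.run(["vm-run", image, TEST_CMD], capture_output=True, text=True)
        report["results"][image] = {
            "compatible": proc.returncode == 0,
            "log_tail": proc.stdout[-500:],
        }

    with open("validation_report.json", "w") as fh:
        json.dump(report, fh, indent=2)

Scheduling such a script after every change to the experiment software, or to any of the hosted environments, is what turns the collection of images into the automatic regression test the abstract describes.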
Energy and time determine scaling in biological and computer designs
Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-01-01
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431524
A Theoretical Investigation of the Input Characteristics of a Rectangular Cavity-Backed Slot Antenna
NASA Technical Reports Server (NTRS)
Cockrell, C. R.
1975-01-01
Equations which represent the magnetic and electric stored energies are derived for an infinite section of rectangular waveguide and a rectangular cavity. These representations, which are referred to as physically observable, are obtained by considering the difference in the volume integrals appearing in the complex Poynting theorem. It is shown that the physically observable stored energies are determined by the field components that vanish in a reference plane outside the aperture. These physically observable representations are used to compute the input admittance of a rectangular cavity-backed slot antenna in which a single propagating wave is assumed to exist in the cavity. The slot is excited by a voltage source connected across its center; a sinusoidal distribution is assumed in the slot. Input-admittance calculations are compared with measured data. In addition, input-admittance curves as a function of electrical slot length are presented for several cavity sizes. For the rectangular cavity-backed slot antenna, the quality factor and relative bandwidth were computed independently by using these energy relationships. It is shown that the asymptotic relationship which is usually assumed to exist between the quality factor and the reciprocal of the relative bandwidth is equally valid for the rectangular cavity-backed slot antenna.
Virtual gonio-spectrophotometer for validation of BRDF designs
NASA Astrophysics Data System (ADS)
Mihálik, Andrej; Ďurikovič, Roman
2011-10-01
Measurement of the appearance of an object consists of a group of measurements to characterize the color and surface finish of the object. This group of measurements involves the spectral energy distribution of propagated light measured in terms of reflectance and transmittance, and the spatial energy distribution of that light measured in terms of the bidirectional reflectance distribution function (BRDF). In this article we present the virtual gonio-spectrophotometer, a device that measures flux (power) as a function of illumination and observation. Virtual gonio-spectrophotometer measurements allow the determination of the scattering profile of specimens that can be used to verify the physical characteristics of the computer model used to simulate the scattering profile. Among the characteristics that we verify is the energy conservation of the computer model. A virtual gonio-spectrophotometer is utilized to find the correspondence between industrial measurements obtained from gloss meters and the parameters of a computer reflectance model.
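One of the checks mentioned, energy conservation of the reflectance model, amounts to verifying that the directional-hemispherical reflectance never exceeds one for any incident direction. The sketch below does this numerically for a normalized Phong-style BRDF; the model, its parameters and the quadrature are illustrative assumptions, not the reflectance model used in the article.

    import numpy as np

    def phong_brdf(wi, wo, n_normal, kd=0.5, ks=0.4, shininess=30):
        """Illustrative normalized Phong BRDF (diffuse term plus specular lobe)."""
        r = 2.0 * np.dot(wi, n_normal) * n_normal - wi          # mirror direction of wi
        cos_alpha = max(np.dot(r, wo), 0.0)
        return kd / np.pi + ks * (shininess + 2) / (2 * np.pi) * cos_alpha ** shininess

    def hemispherical_reflectance(wi, n_theta=128, n_phi=256):
        """Integrate f_r(wi, wo) cos(theta_o) over the outgoing hemisphere."""
        n_normal = np.array([0.0, 0.0, 1.0])
        thetas = (np.arange(n_theta) + 0.5) * (np.pi / 2) / n_theta
        phis = (np.arange(n_phi) + 0.5) * (2 * np.pi) / n_phi
        d_area = (np.pi / 2 / n_theta) * (2 * np.pi / n_phi)
        total = 0.0
        for th in thetas:
            for ph in phis:
                wo = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
                total += phong_brdf(wi, wo, n_normal) * np.cos(th) * np.sin(th) * d_area
        return total

    wi = np.array([np.sin(0.6), 0.0, np.cos(0.6)])    # incident direction, 0.6 rad off normal
    albedo = hemispherical_reflectance(wi)
    print("directional-hemispherical reflectance:", albedo)   # energy conserving if <= 1

Sweeping the incident angle and repeating the integral gives the same kind of conservation check that the virtual gonio-spectrophotometer performs when matching gloss-meter measurements to reflectance-model parameters.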
NASA Astrophysics Data System (ADS)
Puligheddu, Marcello; Gygi, Francois; Galli, Giulia
The prediction of the thermal properties of solids and liquids is central to numerous problems in condensed matter physics and materials science, including the study of thermal management of opto-electronic and energy conversion devices. We present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at non-equilibrium conditions. Our formulation is based on a generalization of the approach-to-equilibrium technique, using sinusoidal temperature gradients, and it only requires calculations of first principles trajectories and atomic forces. We discuss results and computational requirements for a representative, simple oxide, MgO, and compare with experiments and data obtained with classical potentials. This work was supported by MICCoM as part of the Computational Materials Science Program funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division under Grant DOE/BES 5J-30.
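A hedged sketch of the working relation behind such a sinusoidal approach-to-equilibrium measurement, assuming the imposed perturbation relaxes according to the macroscopic heat-diffusion equation (the analysis in the work summarized above may differ in detail): a sinusoidal temperature profile of wavevector k decays exponentially, and the fitted decay time gives the conductivity.

    \partial_t T = \frac{\kappa}{\rho c_V}\,\nabla^2 T
    \quad\Longrightarrow\quad
    T(x,t) - T_0 = \Delta T\, e^{-t/\tau}\,\sin(kx),
    \qquad
    \kappa = \frac{\rho c_V}{k^2\,\tau},

where rho is the mass density, c_V the specific heat per unit mass, and tau the decay time extracted from the molecular dynamics trajectory.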
Additions and improvements to the high energy density physics capabilities in the FLASH code
NASA Astrophysics Data System (ADS)
Lamb, D. Q.; Flocke, N.; Graziani, C.; Tzeferacos, P.; Weide, K.
2016-10-01
FLASH is an open source, finite-volume Eulerian, spatially adaptive radiation magnetohydrodynamics code that has the capabilities to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures, and has a broad user base. Extensive high energy density physics (HEDP) capabilities have been added to FLASH to make it an open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. In particular, we showcase the ability of FLASH to simulate the Faraday Rotation Measure produced by the presence of magnetic fields; and proton radiography, proton self-emission, and Thomson scattering diagnostics with and without the presence of magnetic fields. We also describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at the University of Chicago by the DOE NNSA ASC through the Argonne Institute for Computing in Science under field work proposal 57789; and the NSF under Grant PHY-0903997.
Error suppression for Hamiltonian quantum computing in Markovian environments
NASA Astrophysics Data System (ADS)
Marvian, Milad; Lidar, Daniel A.
2017-03-01
Hamiltonian quantum computing, such as the adiabatic and holonomic models, can be protected against decoherence using an encoding into stabilizer subspace codes for error detection and the addition of energy penalty terms. This method has been widely studied since it was first introduced by Jordan, Farhi, and Shor (JFS) in the context of adiabatic quantum computing. Here, we extend the original result to general Markovian environments, not necessarily in Lindblad form. We show that the main conclusion of the original JFS study holds under these general circumstances: Assuming a physically reasonable bath model, it is possible to suppress the initial decay out of the encoded ground state with an energy penalty strength that grows only logarithmically in the system size, at a fixed temperature.
A computational procedure for multibody systems including flexible beam dynamics
NASA Technical Reports Server (NTRS)
Downer, J. D.; Park, K. C.; Chiou, J. C.
1990-01-01
A computational procedure suitable for the solution of equations of motions for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.
[Physical activity, dietary habits and plasma lipoproteins in young men and women].
Malara, Marzena; Lutosławska, Grazyna
2010-01-01
There are studies suggesting that in young women strenuous physical activity and inadequate daily energy intake cause unfavorable changes in the lipoprotein profile. However, until now the data concerning this issue have been contradictory, possibly due to the small number of participants. This study aimed at evaluating the lipoprotein profile in young men and women with different weekly physical activity, together with their dietary habits. A total of 202 subjects volunteered to participate in the study: 54 female and 56 male students of physical education, and 46 female and 49 male students of other specializations. Daily energy and macronutrient intakes were assessed using the FOOD 2 computer program. Plasma TG, TC and HDL-C were assayed colorimetrically using Randox commercial kits (Great Britain). It was demonstrated that high physical activity adversely affects the lipoprotein profile in young women, who were characterized by higher TC and LDL-C in comparison with women with low physical activity and with men with high physical activity. The effect of high physical activity on plasma lipoproteins is equivocal. Active men are characterized by higher HDL-C, but also by a higher frequency of unfavorable plasma TC and a similar frequency of unfavorable plasma LDL-C compared with their less active counterparts. The mean daily energy intake in highly active men and women covered 82% and 72.2% of the recommended intake, respectively. It seems feasible that in both sexes high physical activity combined with inadequate energy intake brings about unfavorable changes in plasma lipoproteins.
Deep Wavelet Scattering for Quantum Energy Regression
NASA Astrophysics Data System (ADS)
Hirn, Matthew
Physical functionals are usually computed as solutions of variational problems or from solutions of partial differential equations, which may require huge computations for complex systems. Quantum chemistry calculation of ground state molecular energies is such an example. Indeed, if $x$ is a quantum molecular state, then the ground state energy $E_0(x)$ is the minimum eigenvalue solution of the time-independent Schrödinger equation, which is computationally intensive for large systems. Machine learning algorithms do not simulate the physical system but estimate solutions by interpolating values provided by a training set of known examples $\{(x_i, E_0(x_i))\}_{i \le n}$. However, precise interpolations may require a number of examples that is exponential in the system dimension, and are thus intractable. This curse of dimensionality may be circumvented by computing interpolations in smaller approximation spaces, which take advantage of physical invariants. Linear regressions of $E_0$ over a dictionary $\Phi = \{\phi_k\}_k$ compute an approximation $\tilde{E}_0$ as $\tilde{E}_0(x) = \sum_k w_k \phi_k(x)$, where the weights $\{w_k\}_k$ are selected to minimize the error between $E_0$ and $\tilde{E}_0$ on the training set. The key to such a regression approach then lies in the design of the dictionary $\Phi$. It must be intricate enough to capture the essential variability of $E_0(x)$ over the molecular states $x$ of interest, while simple enough so that evaluation of $\Phi(x)$ is significantly less intensive than a direct quantum mechanical computation (or approximation) of $E_0(x)$. In this talk we present a novel dictionary $\Phi$ for the regression of quantum mechanical energies based on the scattering transform of an intermediate, approximate electron density representation $\rho_x$ of the state $x$. The scattering transform has the architecture of a deep convolutional network, composed of an alternating sequence of linear filters and nonlinear maps. Whereas in many deep learning tasks the linear filters are learned from the training data, here the physical properties of $E_0$ (invariance to isometric transformations of the state $x$, stability to deformations of $x$) are leveraged to design a collection of linear filters $\rho_x * \psi_\lambda$ for an appropriate wavelet $\psi$. These linear filters are composed with the nonlinear modulus operator, and the process is iterated so that at each layer stable, invariant features are extracted: $\phi_k(x) = \| \, ||\rho_x * \psi_{\lambda_1}| * \psi_{\lambda_2}| * \cdots * \psi_{\lambda_m}| \, \|$, with $k = (\lambda_1, \ldots, \lambda_m)$ and $m = 1, 2, \ldots$ The scattering transform thus encodes not only interactions at multiple scales (in the first layer, $m = 1$), but also features that encode complex phenomena resulting from a cascade of interactions across scales (in subsequent layers, $m \ge 2$). Numerical experiments give state of the art accuracy over databases of organic molecules, while theoretical results guarantee performance for the component of the ground state energy resulting from Coulombic interactions. Supported by the ERC InvariantClass 320959 Grant.
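A minimal one-dimensional sketch of the scattering cascade described above, using Morlet-like band-pass filters and the modulus nonlinearity, is given below; the filters, scales and the final norm are illustrative choices, not the dictionary actually applied to the density representation rho_x.

    import numpy as np

    def morlet_filter(n, scale):
        """Crude Morlet-like band-pass filter at a given scale (illustrative only)."""
        t = np.arange(-n // 2, n // 2)
        return np.exp(-0.5 * (t / scale) ** 2) * np.cos(5.0 * t / scale)

    def scattering_features(x, scales=(2, 4, 8, 16)):
        """Two-layer scattering: norms of |x * psi_a| and of ||x * psi_a| * psi_b| for b > a."""
        n = x.size
        feats = []
        first_layer = {}
        for a in scales:
            u1 = np.abs(np.convolve(x, morlet_filter(n, a), mode="same"))
            first_layer[a] = u1
            feats.append(np.linalg.norm(u1))             # m = 1 features
        for a in scales:
            for b in scales:
                if b > a:                                 # cascade toward coarser scales
                    u2 = np.abs(np.convolve(first_layer[a], morlet_filter(n, b), mode="same"))
                    feats.append(np.linalg.norm(u2))      # m = 2 features
        return np.array(feats)

    rng = np.random.default_rng(3)
    rho = np.exp(-0.5 * np.linspace(-5, 5, 512) ** 2) + 0.05 * rng.normal(size=512)  # toy density
    phi = scattering_features(rho)
    print("feature vector length:", phi.size)   # these phi_k would feed the linear regression of E0

The modulus applied after every convolution is what makes each layer's output stable to small deformations of the input, which is the property the talk leverages in place of learned filters.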
DOE Office of Scientific and Technical Information (OSTI.GOV)
Windus, Theresa; Banda, Michael; Devereaux, Thomas
Computers have revolutionized every aspect of our lives. Yet in science, the most tantalizing applications of computing lie just beyond our reach. The current quest to build an exascale computer with one thousand times the capability of today’s fastest machines (and more than a million times that of a laptop) will take researchers over the next horizon. The field of materials, chemical reactions, and compounds is inherently complex. Imagine millions of new materials with new functionalities waiting to be discovered — while researchers also seek to extend those materials that are known to a dizzying number of new forms. We could translate massive amounts of data from high precision experiments into new understanding through data mining and analysis. We could have at our disposal the ability to predict the properties of these materials, to follow their transformations during reactions on an atom-by-atom basis, and to discover completely new chemical pathways or physical states of matter. Extending these predictions from the nanoscale to the mesoscale, from the ultrafast world of reactions to long-time simulations to predict the lifetime performance of materials, and to the discovery of new materials and processes will have a profound impact on energy technology. In addition, discovery of new materials is vital to move computing beyond Moore’s law. To realize this vision, more than hardware is needed. New algorithms to take advantage of the increase in computing power, new programming paradigms, and new ways of mining massive data sets are needed as well. This report summarizes the opportunities and the requisite computing ecosystem needed to realize the potential before us. In addition to pursuing new and more complete physical models and theoretical frameworks, this review found that the following broadly grouped areas relevant to the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR) would directly affect the Basic Energy Sciences (BES) mission need. Simulation, visualization, and data analysis are crucial for advances in energy science and technology. Revolutionary mathematical, software, and algorithm developments are required in all areas of BES science to take advantage of exascale computing architectures and to meet data analysis, management, and workflow needs. In partnership with ASCR, BES has an emerging and pressing need to develop new and disruptive capabilities in data science. More capable and larger high-performance computing (HPC) and data ecosystems are required to support priority research in BES. Continued success in BES research requires developing the next-generation workforce through education and training and by providing sustained career opportunities.
Optimization and Control of Cyber-Physical Vehicle Systems
Bradley, Justin M.; Atkins, Ella M.
2015-01-01
A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined. PMID:26378541
Optimization and Control of Cyber-Physical Vehicle Systems.
Bradley, Justin M; Atkins, Ella M
2015-09-11
A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined.
Properties of potential eco-friendly gas replacements for particle detectors in high-energy physics
NASA Astrophysics Data System (ADS)
Saviano, G.; Ferrini, M.; Benussi, L.; Bianco, S.; Piccolo, D.; Colafranceschi, S.; KjØlbro, J.; Sharma, A.; Yang, D.; Chen, G.; Ban, Y.; Li, Q.; Grassini, S.; Parvis, M.
2018-03-01
Gas detectors for elementary particles require F-based gases for optimal performance. Recent regulations demand that the use of environmentally unfriendly F-based gases be limited or banned. This work studies properties of potential eco-friendly gas replacements by computing the physical and chemical parameters relevant to their use as detector media, and suggests candidates to be considered for experimental investigation.
Proceedings of the workshop on B physics at hadron accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
McBride, P.; Mishra, C.S.
1993-12-31
This report contains papers on the following topics: Measurement of Angle α; Measurement of Angle β; Measurement of Angle γ; Other B Physics; Theory of Heavy Flavors; Charged Particle Tracking and Vertexing; e and γ Detection; Muon Detection; Hadron ID; Electronics, DAQ, and Computing; and Machine Detector Interface. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.
Physics through the 1990s: Gravitation, cosmology and cosmic-ray physics
NASA Technical Reports Server (NTRS)
1986-01-01
The volume contains recommendations for space-and ground-based programs in gravitational physics, cosmology, and cosmic-ray physics. The section on gravitation examines current and planned experimental tests of general relativity; the theory behind, and search for, gravitational waves, including sensitive laser-interferometric tests and other observations; and advances in gravitation theory (for example, incorporating quantum effects). The section on cosmology deals with the big-bang model, the standard model from elementary-particle theory, the inflationary model of the Universe. Computational needs are presented for both gravitation and cosmology. Finally, cosmic-ray physics theory (nucleosynthesis, acceleration models, high-energy physics) and experiment (ground and spaceborne detectors) are discussed.
Study of Solid State Drives performance in PROOF distributed analysis system
NASA Astrophysics Data System (ADS)
Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.
2010-04-01
Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility, PROOF, is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular, we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.
Physics-based distributed snow models in the operational arena: Current and future challenges
NASA Astrophysics Data System (ADS)
Winstral, A. H.; Jonas, T.; Schirmer, M.; Helbig, N.
2017-12-01
The demand for modeling tools robust to climate change and weather extremes, along with coincident increases in computational capabilities, has led to an increase in the use of physics-based snow models in operational applications. Current operational applications include those of the WSL-SLF across Switzerland, the ASO in California, and the USDA-ARS in Idaho. While the physics-based approaches offer many advantages, there remain limitations and modeling challenges. The most evident limitation remains computation times that often limit forecasters to a single, deterministic model run. Other limitations remain less conspicuous amidst the assumption that these models require little to no calibration because they are founded on physical principles. Yet all energy balance snow models seemingly contain parameterizations or simplifications of processes where validation data are scarce or present understanding is limited. At the research-basin scale where many of these models were developed, these modeling elements may prove adequate. However, when applied over large areas, spatially invariant parameterizations of snow albedo, roughness lengths and atmospheric exchange coefficients, all vital to determining the snowcover energy balance, become problematic. Moreover, as we apply models over larger grid cells, the representation of sub-grid variability, such as the snow-covered fraction, adds to the challenges. Here, we will demonstrate some of the major sensitivities of distributed energy balance snow models to particular model constructs, highlight the need for advanced and spatially flexible methods and parameterizations, and prompt the community toward open dialogue and future collaborations to further modeling capabilities.
A Bayesian approach for parameter estimation and prediction using a computationally intensive model
Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...
2015-02-05
Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where $\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When nonlinearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model $\eta(\cdot)$. This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
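The emulator-plus-MCMC strategy described above can be sketched for a one-parameter toy problem: fit a Gaussian-process response surface to a small ensemble of expensive-model runs, then run Metropolis sampling against the emulator instead of the model itself. The toy model, kernel, prior and step sizes are all illustrative assumptions, not the density functional theory calibration of the paper.

    import numpy as np

    rng = np.random.default_rng(4)

    def expensive_model(theta):
        """Stand-in for a costly physics code eta(theta)."""
        return np.sin(3.0 * theta) + 0.5 * theta

    # Ensemble of model runs (the design) used to build the emulator.
    design = np.linspace(-2.0, 2.0, 12)
    runs = np.array([expensive_model(t) for t in design])

    def rbf(a, b, length=0.5, var=1.0):
        return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

    K_inv = np.linalg.inv(rbf(design, design) + 1e-8 * np.eye(design.size))

    def emulator(theta):
        """Gaussian-process posterior mean, replacing calls to the expensive model."""
        return float(rbf(np.atleast_1d(theta), design) @ K_inv @ runs)

    # Synthetic measurement y = eta(theta_true) + eps.
    theta_true, sigma = 0.7, 0.05
    y = expensive_model(theta_true) + sigma * rng.normal()

    def log_post(theta):
        if abs(theta) > 2.0:                    # flat prior on [-2, 2]
            return -np.inf
        return -0.5 * ((y - emulator(theta)) / sigma) ** 2

    # Metropolis sampling against the cheap emulator.
    samples, theta, lp = [], 0.0, log_post(0.0)
    for _ in range(20000):
        prop = theta + 0.1 * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)

    samples = np.array(samples[5000:])
    print("posterior mean and std of theta:", samples.mean(), samples.std())

A fuller treatment would also propagate the emulator's own interpolation uncertainty into the posterior, which is one of the ingredients of the Bayesian model calibration framework the paper adapts.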
Selection of a computer code for Hanford low-level waste engineered-system performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGrail, B.P.; Mahoney, L.A.
Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code developed at PNL for the US Department of Energy for evaluation of land disposal sites.
NASA Astrophysics Data System (ADS)
Hartmann Siantar, Christine L.; Moses, Edward I.
1998-11-01
When using radiation to treat cancer, doctors rely on physics and computer technology to predict where the radiation dose will be deposited in the patient. The accuracy of computerized treatment planning plays a critical role in the ultimate success or failure of the radiation treatment. Inaccurate dose calculations can result in either insufficient radiation for cure, or excessive radiation to nearby healthy tissue, which can reduce the patient's quality of life. This paper describes how advanced physics, computer, and engineering techniques originally developed for nuclear weapons and high-energy physics research are being used to predict radiation dose in cancer patients. Results for radiation therapy planning achieved in the Lawrence Livermore National Laboratory (LLNL) program show that these tools can give doctors new insights into their patients' treatments by providing substantially more accurate dose distributions than have been available in the past. It is believed that greater accuracy in radiation therapy treatment planning will save lives by improving doctors' ability to target radiation to the tumour and reduce suffering by reducing the incidence of radiation-induced complications.
Physical stress, mass, and energy for non-relativistic matter
NASA Astrophysics Data System (ADS)
Geracie, Michael; Prabhu, Kartik; Roberts, Matthew M.
2017-06-01
For theories of relativistic matter fields there exist two possible definitions of the stress-energy tensor, one defined by a variation of the action with the coframes at fixed connection, and the other at fixed torsion. These two stress-energy tensors do not necessarily coincide and it is the latter that corresponds to the Cauchy stress measured in the lab. In this note we discuss the corresponding issue for non-relativistic matter theories. We point out that while the physical non-relativistic stress, momentum, and mass currents are defined by a variation of the action at fixed torsion, the energy current does not admit such a description and is naturally defined at fixed connection. Any attempt to define an energy current at fixed torsion results in an ambiguity which cannot be resolved from the background spacetime data or conservation laws. We also provide computations of these quantities for some simple non-relativistic actions.
Contributions to the NUCLEI SciDAC-3 Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogner, Scott; Nazarewicz, Witek
This is the Final Report for Michigan State University for the NUCLEI SciDAC-3 project. The NUCLEI project, as defined by the scope of work, has developed, implemented and run codes for large-scale computations of many topics in low-energy nuclear physics. Physics studied included the properties of nuclei and nuclear decays, nuclear structure and reactions, and the properties of nuclear matter. The computational techniques used included Configuration Interaction, Coupled Cluster, and Density Functional methods. The research program emphasized areas of high interest to current and possible future DOE nuclear physics facilities, including ATLAS at ANL and FRIB at MSU (nuclear structure and reactions, and nuclear astrophysics), TJNAF (neutron distributions in nuclei, few body systems, and electroweak processes), NIF (thermonuclear reactions), MAJORANA and FNPB (neutrinoless double-beta decay and physics beyond the Standard Model), and LANSCE (fission studies).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keasler, Samuel J., E-mail: samuel.keasler@vcsu.edu; Department of Science, Valley City State University, 101 College Street SW, Valley City, North Dakota 58072; Siepmann, J. Ilja
2015-10-28
Simulations are used to investigate the vapor-to-liquid nucleation of water for several different force fields at various sets of physical conditions. The nucleation free energy barrier is found to be extremely sensitive to the force field at the same absolute conditions. However, when the results are compared at the same supersaturation and reduced temperature or the same metastability parameter and reduced temperature, then the differences in the nucleation free energies of the different models are dramatically reduced. This finding suggests that comparisons of experimental data and computational predictions are most meaningful at the same relative conditions and emphasizes the importance of knowing the phase diagram of a given computational model, but such information is usually not available for models where the interaction energy is determined directly from electronic structure calculations.
Li-ion synaptic transistor for low power analog computing
Fuller, Elliot J.; Gabaly, Farid El; Leonard, Francois; ...
2016-11-22
Nonvolatile redox transistors (NVRTs) based upon Li-ion battery materials are demonstrated as memory elements for neuromorphic computer architectures with multi-level analog states, “write” linearity, low-voltage switching, and low power dissipation. Simulations of back propagation using the device properties reach ideal classification accuracy. Finally, physics-based simulations predict energy costs per “write” operation of <10 aJ when scaled to 200 nm × 200 nm.
ERIC Educational Resources Information Center
Rosengrant, David
2011-01-01
Multiple representations are a valuable tool to help students learn and understand physics concepts. Furthermore, representations help students learn how to think and act like real scientists. These representations include: pictures, free-body diagrams, energy bar charts, electrical circuits, and, more recently, computer simulations and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, O.B. Jr.; Berry, L.A.; Sheffield, J.
This annual report on fusion energy discusses the progress on work in the following main topics: toroidal confinement experiments; atomic physics and plasma diagnostics development; plasma theory and computing; plasma-materials interactions; plasma technology; superconducting magnet development; fusion engineering design center; materials research and development; and neutron transport. (LSP)
Grid Computing Environment using a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Alanis, Fransisco; Mahmood, Akhtar
2003-10-01
Custom-made Beowulf clusters using PCs are currently replacing expensive supercomputers to carry out complex scientific computations. At the University of Texas - Pan American, we built an 8 Gflops Beowulf Cluster for doing HEP research using RedHat Linux 7.3 and the LAM-MPI middleware. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes that were compiled in C on the cluster using the LAM/XMPI graphical user environment. We will demonstrate a "simple" prototype grid environment, where we will submit and run parallel jobs remotely across multiple cluster nodes over the internet from the presentation room at Texas Tech University. The Sphinx Beowulf Cluster will be used for Monte Carlo grid test-bed studies for the LHC-ATLAS high energy physics experiment. Grid is a new IT concept for the next generation of the "Super Internet" for high-performance computing. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
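The LAM-MPI middleware mentioned above lets parallel jobs coordinate across the cluster nodes through message passing. A minimal MPI example (shown here in Python via mpi4py rather than the C codes described in the abstract) illustrates the pattern of splitting work across ranks and reducing the result back to one node; the problem and the script name are illustrative assumptions.

```python
# Minimal MPI sketch (mpi4py): each rank sums a slice of 1..N, then the partial
# sums are reduced to rank 0. Run with e.g.:  mpirun -np 4 python sum_mpi.py
# Illustrative only; the cluster codes described above were written in C.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 1_000_000
# Split 1..N across ranks: rank r takes r+1, r+1+size, r+1+2*size, ...
partial = sum(range(rank + 1, N + 1, size))

total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of 1..{N} computed on {size} ranks: {total}")
```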
Relativistic Few-Body Hadronic Physics Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polyzou, Wayne
2016-06-20
The goal of this research proposal was to use "few-body" methods to understand the structure and reactions of systems of interacting hadrons (neutrons, protons, mesons, quarks) over a broad range of energy scales. Realistic mathematical models of few-hadron systems have the advantage that they are sufficiently simple that they can be solved with mathematically controlled errors. These systems are also simple enough that it is possible to perform complete, accurate experimental measurements on them. Comparison between theory and experiment puts strong constraints on the structure of the models. Even though these systems are "simple", both the experiments and computations push the limits of technology. The important property of "few-body" systems is that the "cluster property" implies that the interactions that appear in few-body systems are identical to the interactions that appear in complicated many-body systems. Of particular interest are models that correctly describe physics at distance scales that are sensitive to the internal structure of the individual nucleons. The Heisenberg uncertainty principle implies that in order to be sensitive to physics on distance scales that are a fraction of the proton or neutron radius, a relativistic treatment of quantum mechanics is necessary. The research supported by this grant involved 30 years of effort devoted to studying all aspects of interacting two- and three-body systems. Realistic interactions were used to compute bound states of two- and three-nucleon, and two- and three-quark systems. Scattering observables for these systems were computed for a broad range of energies - from zero-energy scattering to few-GeV scattering, where experimental evidence of sub-nucleon degrees of freedom is beginning to appear. Benchmark calculations were produced, which, when compared with calculations of other groups, provided an essential check on these complicated calculations. In addition to computing bound-state properties and scattering cross sections, we also computed electron scattering cross sections in few-nucleon and few-quark systems, which are sensitive to the electric currents in these systems. We produced the definitive review article on relativistic quantum mechanics, which has been used by many groups. In addition, we developed and tested many computational techniques that are used by other groups. Many of these techniques have applications in other areas of physics. The research benefited from collaborations with physicists from many different institutions and countries. It also involved working with seventeen undergraduate and graduate students.
Energy Efficient Digital Logic Using Nanoscale Magnetic Devices
NASA Astrophysics Data System (ADS)
Lambson, Brian James
Increasing demand for information processing in the last 50 years has been largely satisfied by the steadily declining price and improving performance of microelectronic devices. Much of this progress has been made by aggressively scaling the size of semiconductor transistors and metal interconnects that microprocessors are built from. As devices shrink to the size regime in which quantum effects pose significant challenges, new physics may be required in order to continue historical scaling trends. A variety of new devices and physics are currently under investigation throughout the scientific and engineering community to meet these challenges. One of the more drastic proposals on the table is to replace the electronic components of information processors with magnetic components. Magnetic components are already commonplace in computers for their information storage capability. Unlike most electronic devices, magnetic materials can store data in the absence of a power supply. Today's magnetic hard disk drives can routinely hold billions of bits of information and are in widespread commercial use. Their ability to function without a constant power source hints at an intrinsic energy efficiency. The question we investigate in this dissertation is whether or not this advantage can be extended from information storage to the notoriously energy intensive task of information processing. Several proof-of-concept magnetic logic devices were proposed and tested in the past decade. In this dissertation, we build on the prior work by answering fundamental questions about how magnetic devices achieve such high energy efficiency and how they can best function in digital logic applications. The results of this analysis are used to suggest and test improvements to nanomagnetic computing devices. Two of our results are seen as especially important to the field of nanomagnetic computing: (1) we show that it is possible to operate nanomagnetic computers at the fundamental thermodynamic limits of computation and (2) we develop a nanomagnet with a unique shape that is engineered to significantly improve the reliability of nanomagnetic logic.
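The "fundamental thermodynamic limits of computation" referred to above are usually identified with the Landauer bound of kT ln 2 per erased bit. A quick back-of-the-envelope check of that number at room temperature is sketched below; the temperature is an assumed value for illustration.

```python
# Minimal sketch: the Landauer limit k_B * T * ln(2) per bit erasure.
import math
from scipy.constants import Boltzmann  # k_B in J/K

T = 300.0  # kelvin, assumed room temperature
e_landauer = Boltzmann * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {e_landauer:.3e} J (~{e_landauer / 1e-21:.2f} zJ)")
```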
Tokunaga-Nakawatase, Yuri; Nishigaki, Masakazu; Taru, Chiemi; Miyawaki, Ikuko; Nishida, Junko; Kosaka, Shiho; Sanada, Hiromi; Kazuma, Keiko
2014-10-01
To investigate the effect of a computer-supported indirect-form lifestyle-modification program using Lifestyle Intervention Support Software for Diabetes Prevention (LISS-DP), as a clinically feasible strategy for primary prevention, on diet and physical activity habits in adults with a family history of type 2 diabetes. This was a two-arm, randomized controlled trial: (1) lifestyle intervention (LI) group (n=70); (2) control (n=71). Healthy adults aged 30-60 years with a history of type 2 diabetes among their first-degree relatives were recruited. The LI group received lifestyle intervention using LISS-DP three times, delivered by mail, during the six-month intervention period. The lifestyle intervention group showed a significantly greater decrease in energy intake six months after baseline compared to control (-118.31 and -24.79 kcal/day, respectively, p=0.0099, Cohen's d=0.22), though the difference disappeared one year after baseline. No difference was found in physical activity energy expenditure. A computer-based, non-face-to-face lifestyle intervention was effective on dietary habits only during the intervention period. Further examination of the long-term effects of such intervention and of physical activity is required. Copyright © 2014 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bateev, A. B.; Filippov, V. P.
2017-01-01
The article shows that, in principle, the computer program Univem MS for Mössbauer spectra fitting can be used as demonstration material when students study disciplines such as atomic and nuclear physics and numerical methods. The program works with nuclear-physics parameters such as the isomer (or chemical) shift of nuclear energy levels, the interaction of the nuclear quadrupole moment with the electric field, and that of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the least-squares method. The deviation of the experimental points of a spectrum from the theoretical dependence is determined using concrete examples; in numerical methods this quantity is characterized as the mean square deviation. The shape of the theoretical lines in the program is defined by Gaussian and Lorentzian distributions. The visualization of course material on atomic and nuclear physics can be improved by similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis, or X-ray diffraction analysis.
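As an illustration of the least-squares fitting described above, the sketch below fits a single Lorentzian line to synthetic spectrum data with scipy; it is a generic example, not the Univem MS algorithm itself, and the line parameters, noise model, and velocity range are invented.

```python
# Minimal sketch: least-squares fit of a single Lorentzian line to synthetic
# Mössbauer-like data. Parameters and noise level are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(v, amplitude, center, width, baseline):
    """Absorption line of half-width `width` centred at `center` (mm/s)."""
    return baseline - amplitude * width**2 / ((v - center)**2 + width**2)

rng = np.random.default_rng(0)
velocity = np.linspace(-4.0, 4.0, 256)                    # Doppler velocity, mm/s
truth = lorentzian(velocity, 2000.0, 0.35, 0.15, 1e5)     # assumed "true" line
counts = rng.poisson(truth).astype(float)                 # counting noise

popt, pcov = curve_fit(lorentzian, velocity, counts,
                       p0=[1000.0, 0.0, 0.2, counts.max()])
residual = counts - lorentzian(velocity, *popt)
print("fitted (amplitude, center, width, baseline):", popt)
print("mean square deviation:", float(np.mean(residual**2)))
```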
Low-energy effective field theory below the electroweak scale: operators and matching
NASA Astrophysics Data System (ADS)
Jenkins, Elizabeth E.; Manohar, Aneesh V.; Stoffer, Peter
2018-03-01
The gauge-invariant operators up to dimension six in the low-energy effective field theory below the electroweak scale are classified. There are 70 Hermitian dimension-five and 3631 Hermitian dimension-six operators that conserve baryon and lepton number, as well as ΔB = ±ΔL = ±1, ΔL = ±2, and ΔL = ±4 operators. The matching onto these operators from the Standard Model Effective Field Theory (SMEFT) up to order 1/Λ² is computed at tree level. SMEFT imposes constraints on the coefficients of the low-energy effective theory, which can be checked experimentally to determine whether the electroweak gauge symmetry is broken by a single fundamental scalar doublet as in SMEFT. Our results, when combined with the one-loop anomalous dimensions of the low-energy theory and the one-loop anomalous dimensions of SMEFT, allow one to compute the low-energy implications of new physics to leading-log accuracy, and combine them consistently with high-energy LHC constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brower, Richard C.
This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimizations of community codes for the next generation of leadership class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.
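The Krylov solvers mentioned above (and the multigrid methods built on top of them) are iterative schemes for the large sparse linear systems that arise when inverting the lattice Dirac operator. A generic conjugate-gradient sketch for a symmetric positive-definite system is given below; it is illustrative only and not the collaboration's production code.

```python
# Minimal sketch of the conjugate-gradient (Krylov) iteration for A x = b,
# with A symmetric positive definite. Illustrative only; production lattice-QCD
# solvers work on the Dirac operator with preconditioning / multigrid.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Usage on a small random symmetric positive-definite system.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)   # make it well-conditioned and SPD
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```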
Gallicchio, Emilio; Deng, Nanjie; He, Peng; Wickstrom, Lauren; Perryman, Alexander L.; Santiago, Daniel N.; Forli, Stefano; Olson, Arthur J.; Levy, Ronald M.
2014-01-01
As part of the SAMPL4 blind challenge, filtered AutoDock Vina ligand docking predictions and large-scale binding energy distribution analysis method binding free energy calculations have been applied to the virtual screening of a focused library of candidate binders to the LEDGF site of the HIV integrase protein. The computational protocol leveraged docking and high-level atomistic models to improve enrichment. The enrichment factor of our blind predictions ranked best among all of the computational submissions, and second best overall. This work represents, to our knowledge, the first example of the application of an all-atom physics-based binding free energy model to large-scale virtual screening. A total of 285 parallel Hamiltonian replica exchange molecular dynamics absolute protein-ligand binding free energy simulations were conducted starting from docked poses. The setup of the simulations was fully automated, calculations were distributed on multiple computing resources and were completed in a six-week period. The accuracy of the docked poses and the inclusion of intramolecular strain and entropic losses in the binding free energy estimates were the major factors behind the success of the method. Lack of sufficient time and computing resources to investigate additional protonation states of the ligands was a major cause of mispredictions. The experiment demonstrated the applicability of binding free energy modeling to improve hit rates in challenging virtual screening of focused ligand libraries during lead optimization. PMID:24504704
Petascale supercomputing to accelerate the design of high-temperature alloys
Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; ...
2017-10-25
Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
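The final step described above, predicting segregation energies from materials descriptors without new DFT calculations, is a standard supervised-regression task. A generic scikit-learn sketch is shown below; the descriptor names and the synthetic data are placeholders, not the actual dataset from this work.

```python
# Minimal sketch: fit a regression model mapping elemental descriptors to
# DFT segregation energies, then predict for an unseen solute.
# Descriptor names and data are illustrative placeholders only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_solutes = 34
# Hypothetical descriptors: atomic radius, electronegativity, cohesive energy.
X = rng.uniform(size=(n_solutes, 3))
# Hypothetical segregation energies (eV); stand-in for the DFT values.
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.standard_normal(n_solutes)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())

model.fit(X, y)
print("predicted segregation energy for one solute:", model.predict(X[:1]))
```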
Petascale supercomputing to accelerate the design of high-temperature alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Dongwon; Lee, Sangkeun; Shyam, Amit
Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
Petascale supercomputing to accelerate the design of high-temperature alloys
NASA Astrophysics Data System (ADS)
Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; Haynes, J. Allen
2017-12-01
Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. The approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
NASA Astrophysics Data System (ADS)
Prychynenko, Diana; Sitte, Matthias; Litzius, Kai; Krüger, Benjamin; Bourianoff, George; Kläui, Mathias; Sinova, Jairo; Everschor-Sitte, Karin
2018-01-01
Inspired by the human brain, there is a strong effort to find alternative models of information processing capable of imitating the high energy efficiency of neuromorphic information processing. One possible realization of cognitive computing involves reservoir computing networks. These networks are built out of nonlinear resistive elements which are recursively connected. We propose that a skyrmion network embedded in magnetic films may provide a suitable physical implementation for reservoir computing applications. The key ingredient of such a network is a two-terminal device with nonlinear voltage characteristics originating from magnetoresistive effects, such as the anisotropic magnetoresistance or the recently discovered noncollinear magnetoresistance. The most basic element of a reservoir computing network built from "skyrmion fabrics" is a single skyrmion embedded in a ferromagnetic ribbon. In order to pave the way towards reservoir computing systems based on skyrmion fabrics, we simulate and analyze (i) the current flow through a single magnetic skyrmion due to the anisotropic magnetoresistive effect and (ii) the combined physics of local pinning and the anisotropic magnetoresistive effect.
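In reservoir computing, only a linear readout on top of the fixed nonlinear reservoir is trained. The sketch below illustrates that training step with a small random echo-state-style reservoir standing in for the skyrmion fabric; the reservoir dynamics, sizes, and toy task are all assumptions made for illustration, not the simulations of this paper.

```python
# Minimal echo-state-network sketch of the reservoir-computing idea: a fixed,
# random, nonlinear reservoir (here a stand-in for the skyrmion fabric) is driven
# by an input signal, and only a linear readout is trained (ridge regression).
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)
n_res, n_steps = 200, 2000
u = rng.uniform(-1, 1, n_steps)                     # input signal
target = np.roll(u, 3)                              # toy task: recall input 3 steps back

W_in = rng.uniform(-0.5, 0.5, n_res)                # input weights (fixed)
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # scale spectral radius below 1

states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])                # fixed nonlinear reservoir update
    states[t] = x

# Train the linear readout by ridge regression on the collected states.
ridge = 1e-6
A = states.T @ states + ridge * np.eye(n_res)
w_out = solve(A, states.T @ target)
pred = states @ w_out
print("readout MSE:", float(np.mean((pred - target) ** 2)))
```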
NRV web knowledge base on low-energy nuclear physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpov, V., E-mail: karpov@jinr.ru; Denikin, A. S.; Alekseev, A. P.
Principles underlying the organization and operation of the NRV web knowledge base on low-energy nuclear physics (http://nrv.jinr.ru) are described. This base includes a vast body of digitized experimental data on the properties of nuclei and on cross sections for nuclear reactions that is combined with a wide set of interconnected computer programs for simulating complex nuclear dynamics, which work directly in the browser of a remote user. Also, the current situation in the realms of application of network information technologies in nuclear physics is surveyed. The potential of the NRV knowledge base is illustrated in detail by applying it to the example of an analysis of the fusion of nuclei that is followed by the decay of the excited compound nucleus formed.
Energy regeneration model of self-consistent field of electron beams into electric power
NASA Astrophysics Data System (ADS)
Kazmin, B. N.; Ryzhov, D. R.; Trifanov, I. V.; Snezhko, A. A.; Savelyeva, M. V.
2016-04-01
We consider physico-mathematical models of electric processes in electron beams, the conversion of beam parameters into electric power quantities, and their transfer into the user's electric power grid (the onboard spacecraft network). We perform computer simulations validating the high energy efficiency of the studied processes for application in electric power generation technology as well as in spacecraft electric power plants and propulsion installations.
Challenges to Software/Computing for Experimentation at the LHC
NASA Astrophysics Data System (ADS)
Banerjee, Sunanda
The software and computing demands of future high energy physics experiments have led the experiments to plan the related activities as full-fledged projects and to investigate new methodologies and languages to meet the challenges. The paths taken by the four LHC experiments ALICE, ATLAS, CMS and LHCb are coherently put together in an LHC-wide framework based on Grid technology. The current status and understanding are broadly outlined.
Physics Computing '92: Proceedings of the 4th International Conference
NASA Astrophysics Data System (ADS)
de Groot, Robert A.; Nadrchal, Jaroslav
1993-04-01
The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis * Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations * 
Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on Transputer Arrays * Distribution of Ions Reflected on Rough Surfaces * The Study of Step Density Distribution During Molecular Beam Epitaxy Growth: Monte Carlo Computer Simulation * Towards a Formal Approach to the Construction of Large-scale Scientific Applications Software * Correlated Random Walk and Discrete Modelling of Propagation through Inhomogeneous Media * Teaching Plasma Physics Simulation * A Theoretical Determination of the Au-Ni Phase Diagram * Boson and Fermion Kinetics in One-dimensional Lattices * Computational Physics Course on the Technical University * Symbolic Computations in Simulation Code Development and Femtosecond-pulse Laser-plasma Interaction Studies * Computer Algebra and Integrated Computing Systems in Education of Physical Sciences * Coordinated System of Programs for Undergraduate Physics Instruction * Program Package MIRIAM and Atomic Physics of Extreme Systems * High Energy Physics Simulation on the T_Node * The Chapman-Kolmogorov Equation as Representation of Huygens' Principle and the Monolithic Self-consistent Numerical Modelling of Lasers * Authoring System for Simulation Developments * Molecular Dynamics Study of Ion Charge Effects in the Structure of Ionic Crystals * A Computational Physics Introductory Course * Computer Calculation of Substrate Temperature Field in MBE System * Multimagnetical Simulation of the Ising Model in Two and Three Dimensions * Failure of the CTRW Treatment of the Quasicoherent Excitation Transfer * Implementation of a Parallel Conjugate Gradient Method for Simulation of Elastic Light Scattering * Algorithms for Study of Thin Film Growth * Algorithms and Programs for Physics Teaching in Romanian Technical Universities * Multicanonical Simulation of 1st order Transitions: Interface Tension of the 2D 7-State Potts Model * Two Numerical Methods for the Calculation of Periodic Orbits in Hamiltonian Systems * Chaotic Behavior in a Probabilistic Cellular Automata? 
* Wave Optics Computing by a Networked-based Vector Wave Automaton * Tensor Manipulation Package in REDUCE * Propagation of Electromagnetic Pulses in Stratified Media * The Simple Molecular Dynamics Model for the Study of Thermalization of the Hot Nucleon Gas * Electron Spin Polarization in PdCo Alloys Calculated by KKR-CPA-LSD Method * Simulation Studies of Microscopic Droplet Spreading * A Vectorizable Algorithm for the Multicolor Successive Overrelaxation Method * Tetragonality of the CuAu I Lattice and Its Relation to Electronic Specific Heat and Spin Susceptibility * Computer Simulation of the Formation of Metallic Aggregates Produced by Chemical Reactions in Aqueous Solution * Scaling in Growth Models with Diffusion: A Monte Carlo Study * The Nucleus as the Mesoscopic System * Neural Network Computation as Dynamic System Simulation * First-principles Theory of Surface Segregation in Binary Alloys * Data Smooth Approximation Algorithm for Estimating the Temperature Dependence of the Ice Nucleation Rate * Genetic Algorithms in Optical Design * Application of 2D-FFT in the Study of Molecular Exchange Processes by NMR * Advanced Mobility Model for Electron Transport in P-Si Inversion Layers * Computer Simulation for Film Surfaces and its Fractal Dimension * Parallel Computation Techniques and the Structure of Catalyst Surfaces * Educational SW to Teach Digital Electronics and the Corresponding Text Book * Primitive Trinomials (Mod 2) Whose Degree is a Mersenne Exponent * Stochastic Modelisation and Parallel Computing * Remarks on the Hybrid Monte Carlo Algorithm for the ∫4 Model * An Experimental Computer Assisted Workbench for Physics Teaching * A Fully Implicit Code to Model Tokamak Plasma Edge Transport * EXPFIT: An Interactive Program for Automatic Beam-foil Decay Curve Analysis * Mapping Technique for Solving General, 1-D Hamiltonian Systems * Freeway Traffic, Cellular Automata, and Some (Self-Organizing) Criticality * Photonuclear Yield Analysis by Dynamic Programming * Incremental Representation of the Simply Connected Planar Curves * Self-convergence in Monte Carlo Methods * Adaptive Mesh Technique for Shock Wave Propagation * Simulation of Supersonic Coronal Streams and Their Interaction with the Solar Wind * The Nature of Chaos in Two Systems of Ordinary Nonlinear Differential Equations * Considerations of a Window-shopper * Interpretation of Data Obtained by RTP 4-Channel Pulsed Radar Reflectometer Using a Multi Layer Perceptron * Statistics of Lattice Bosons for Finite Systems * Fractal Based Image Compression with Affine Transformations * Algorithmic Studies on Simulation Codes for Heavy-ion Reactions * An Energy-Wise Computer Simulation of DNA-Ion-Water Interactions Explains the Abnormal Structure of Poly[d(A)]:Poly[d(T)] * Computer Simulation Study of Kosterlitz-Thouless-Like Transitions * Problem-oriented Software Package GUN-EBT for Computer Simulation of Beam Formation and Transport in Technological Electron-Optical Systems * Parallelization of a Boundary Value Solver and its Application in Nonlinear Dynamics * The Symbolic Classification of Real Four-dimensional Lie Algebras * Short, Singular Pulses Generation by a Dye Laser at Two Wavelengths Simultaneously * Quantum Monte Carlo Simulations of the Apex-Oxygen-Model * Approximation Procedures for the Axial Symmetric Static Einstein-Maxwell-Higgs Theory * Crystallization on a Sphere: Parallel Simulation on a Transputer Network * FAMULUS: A Software Product (also) for Physics Education * MathCAD vs. 
FAMULUS -- A Brief Comparison * First-principles Dynamics Used to Study Dissociative Chemisorption * A Computer Controlled System for Crystal Growth from Melt * A Time Resolved Spectroscopic Method for Short Pulsed Particle Emission * Green's Function Computation in Radiative Transfer Theory * Random Search Optimization Technique for One-criteria and Multi-criteria Problems * Hartley Transform Applications to Thermal Drift Elimination in Scanning Tunneling Microscopy * Algorithms of Measuring, Processing and Interpretation of Experimental Data Obtained with Scanning Tunneling Microscope * Time-dependent Atom-surface Interactions * Local and Global Minima on Molecular Potential Energy Surfaces: An Example of N3 Radical * Computation of Bifurcation Surfaces * Symbolic Computations in Quantum Mechanics: Energies in Next-to-solvable Systems * A Tool for RTP Reactor and Lamp Field Design * Modelling of Particle Spectra for the Analysis of Solid State Surface * List of Participants
Computationally-Guided Synthetic Control over Pore Size in Isostructural Porous Organic Cages
Slater, Anna G.; Reiss, Paul S.; Pulido, Angeles; ...
2017-06-20
The physical properties of 3-D porous solids are defined by their molecular geometry. Hence, precise control of pore size, pore shape, and pore connectivity are needed to tailor them for specific applications. However, for porous molecular crystals, the modification of pore size by adding pore-blocking groups can also affect crystal packing in an unpredictable way. This precludes strategies adopted for isoreticular metal-organic frameworks, where addition of a small group, such as a methyl group, does not affect the basic framework topology. Here, we narrow the pore size of a cage molecule, CC3, in a systematic way by introducing methyl groups into the cage windows. Computational crystal structure prediction was used to anticipate the packing preferences of two homochiral methylated cages, CC14-R and CC15-R, and to assess the structure-energy landscape of a CC15-R/CC3-S cocrystal, designed such that both component cages could be directed to pack with a 3-D, interconnected pore structure. The experimental gas sorption properties of these three cage systems agree well with physical properties predicted by computational energy-structure-function maps.
Computationally-Guided Synthetic Control over Pore Size in Isostructural Porous Organic Cages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slater, Anna G.; Reiss, Paul S.; Pulido, Angeles
The physical properties of 3-D porous solids are defined by their molecular geometry. Hence, precise control of pore size, pore shape, and pore connectivity are needed to tailor them for specific applications. However, for porous molecular crystals, the modification of pore size by adding pore-blocking groups can also affect crystal packing in an unpredictable way. This precludes strategies adopted for isoreticular metal-organic frameworks, where addition of a small group, such as a methyl group, does not affect the basic framework topology. Here, we narrow the pore size of a cage molecule, CC3, in a systematic way by introducing methyl groups into the cage windows. Computational crystal structure prediction was used to anticipate the packing preferences of two homochiral methylated cages, CC14-R and CC15-R, and to assess the structure-energy landscape of a CC15-R/CC3-S cocrystal, designed such that both component cages could be directed to pack with a 3-D, interconnected pore structure. The experimental gas sorption properties of these three cage systems agree well with physical properties predicted by computational energy-structure-function maps.
System, method and computer-readable medium for locating physical phenomena
Weseman, Matthew T [Idaho Falls, ID; Rohrbaugh, David T [Idaho Falls, ID; Richardson, John G [Idaho Falls, ID
2008-02-26
A method, system and computer product for detecting the location of a deformation of a structure includes baselining a defined energy transmitting characteristic for each of the plurality of laterally adjacent conductors attached to the structure. Each of the plurality of conductors includes a plurality of segments coupled in series and having an associated unit value representative of the defined energy transmitting characteristic. The plurality of laterally adjacent conductors includes a plurality of identity groups with each identity group including at least one of the plurality of segments from each of the plurality of conductors. Each of the plurality of conductors are monitored for a difference in the defined energy transmitting characteristic when compared with a baseline energy transmitting characteristic for each of the plurality of conductors. When the difference exceeds a threshold value, a location of the deformation along the structure is calculated.
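Operationally, the localization scheme compares each conductor's measured energy-transmitting characteristic against its baseline and maps the per-conductor differences to a position along the structure. The sketch below is a loose, hypothetical illustration of that comparison logic only; the segment layout, units, numbers, and threshold are invented and do not come from the patent.

```python
# Hypothetical sketch of the baseline-comparison logic: each conductor is a series
# of segments, each with a unit value of an energy-transmitting characteristic
# (e.g. resistance). A deformation shows up as a difference from baseline; the
# pattern of affected conductors brackets its position along the structure.
SEGMENT_LENGTH_M = 0.5      # assumed segment length
THRESHOLD = 0.05            # assumed detection threshold (same units as the characteristic)

baseline = [1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00]   # values at installation
measured = [1.00, 1.02, 1.31, 1.33, 1.00, 1.01, 1.00, 1.00]   # values monitored now

def locate_deformation(baseline, measured):
    """Return the span (in metres) bracketed by conductors whose change exceeds the threshold."""
    hits = [i for i, (b, m) in enumerate(zip(baseline, measured))
            if abs(m - b) > THRESHOLD]
    if not hits:
        return None
    return min(hits) * SEGMENT_LENGTH_M, (max(hits) + 1) * SEGMENT_LENGTH_M

print("deformation located between (m):", locate_deformation(baseline, measured))
```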
Efficient free energy calculations of quantum systems through computer simulations
NASA Astrophysics Data System (ADS)
Antonelli, Alex; Ramirez, Rafael; Herrero, Carlos; Hernandez, Eduardo
2009-03-01
In general, the classical limit is assumed in computer simulation calculations of free energy. This approximation, however, is not justifiable for a class of systems in which quantum contributions to the free energy cannot be neglected. The inclusion of quantum effects is important for the determination of reliable phase diagrams of these systems. In this work, we present a new methodology to compute the free energy of many-body quantum systems [1]. This methodology results from the combination of the path integral formulation of statistical mechanics and efficient non-equilibrium methods to estimate free energy, namely, the adiabatic switching and reversible scaling methods. A quantum Einstein crystal is used as a model to show the accuracy and reliability of the methodology. This new method is applied to the calculation of solid-liquid coexistence properties of neon. Our findings indicate that quantum contributions to properties such as melting point, latent heat of fusion, entropy of fusion, and slope of the melting line can be up to 10% of the values calculated using the classical approximation. [1] R. M. Ramirez, C. P. Herrero, A. Antonelli, and E. R. Hernández, Journal of Chemical Physics 129, 064110 (2008)
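For reference, the non-equilibrium estimators mentioned above are based on the work accumulated while the Hamiltonian is switched slowly from a reference system to the system of interest. A schematic form of the adiabatic-switching estimate, in notation of my own and under the usual assumptions of that method (not reproduced from this paper), is:

```latex
% Schematic adiabatic-switching estimate: the Hamiltonian is switched from a
% reference system H_0 (known free energy F_0) to the system of interest H_1
% along lambda(t); in the quasi-static limit the switching work equals the
% free-energy difference, and slow switching (or averaging forward and backward
% switches) reduces the dissipated part.
\begin{equation}
  F_1 \simeq F_0 + W_{\mathrm{switch}}, \qquad
  W_{\mathrm{switch}} = \int_0^{t_s} \dot{\lambda}(t)\,
  \frac{\partial H\bigl(x(t);\lambda\bigr)}{\partial \lambda}\, dt, \qquad
  H(x;\lambda) = (1-\lambda)\,H_0(x) + \lambda\,H_1(x).
\end{equation}
```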
John, Temitope M; Badejo, Joke A; Popoola, Segun I; Omole, David O; Odukoya, Jonathan A; Ajayi, Priscilla O; Aboyade, Mary; Atayero, Aderemi A
2018-06-01
This data article presents data on the academic performance of undergraduate students in Science, Technology, Engineering and Mathematics (STEM) disciplines in Covenant University, Nigeria. The data show the academic performance of male and female students who graduated from 2010 to 2014. The total population of samples in the observation is 3046 undergraduates drawn from Biochemistry (BCH), Building Technology (BLD), Computer Engineering (CEN), Chemical Engineering (CHE), Industrial Chemistry (CHM), Computer Science (CIS), Civil Engineering (CVE), Electrical and Electronics Engineering (EEE), Information and Communication Engineering (ICE), Mathematics (MAT), Microbiology (MCB), Mechanical Engineering (MCE), Management and Information System (MIS), Petroleum Engineering (PET), Industrial Physics-Electronics and IT Applications (PHYE), Industrial Physics-Applied Geophysics (PHYG) and Industrial Physics-Renewable Energy (PHYR). The detailed dataset is made available in the form of a Microsoft Excel spreadsheet in the supplementary material of this article.
HEP Community White Paper on Software Trigger and Event Reconstruction: Executive Summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albrecht, Johannes; et al.
Realizing the physics programs of the planned and upgraded high-energy physics (HEP) experiments over the next 10 years will require the HEP community to address a number of challenges in the area of software and computing. For this reason, the HEP software community has engaged in a planning process over the past two years, with the objective of identifying and prioritizing the research and development required to enable the next generation of HEP detectors to fulfill their full physics potential. The aim is to produce a Community White Paper which will describe the community strategy and a roadmap for software and computing research and development in HEP for the 2020s. The topics of event reconstruction and software triggers were considered by a joint working group and are summarized together in this document.
HEP Community White Paper on Software Trigger and Event Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albrecht, Johannes; et al.
Realizing the physics programs of the planned and upgraded high-energy physics (HEP) experiments over the next 10 years will require the HEP community to address a number of challenges in the area of software and computing. For this reason, the HEP software community has engaged in a planning process over the past two years, with the objective of identifying and prioritizing the research and development required to enable the next generation of HEP detectors to fulfill their full physics potential. The aim is to produce a Community White Paper which will describe the community strategy and a roadmap for software and computing research and development in HEP for the 2020s. The topics of event reconstruction and software triggers were considered by a joint working group and are summarized together in this document.
NASA Astrophysics Data System (ADS)
Khan, Imad; Shafquatullah; Malik, M. Y.; Hussain, Arif; Khan, Mair
Current work highlights the computational aspects of MHD Carreau nanofluid flow over an inclined stretching cylinder with convective boundary conditions and Joule heating. The mathematical modeling of the physical problem yields a nonlinear set of partial differential equations. A suitable scaling group of variables is employed on the modeled equations to convert them into non-dimensional form. The Runge-Kutta-Fehlberg integration scheme, combined with a shooting technique, is utilized to solve the resulting set of equations. The interesting aspects of the physical problem (linear momentum, energy, and nanoparticle concentration) are elaborated under different parametric conditions in graphical and tabular form. Additionally, the quantities that characterize the physical phenomena in the vicinity of the stretched surface (local skin friction coefficient, local Nusselt number, and local Sherwood number) are computed and delineated for varying controlling flow parameters.
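The shooting approach paired with an adaptive Runge-Kutta integrator, as used above, converts the boundary value problem into repeated initial value problems whose unknown initial slope is adjusted until the far boundary condition is met. The sketch below shows the idea on a simple generic boundary value problem (not the Carreau nanofluid equations), using scipy's adaptive Runge-Kutta integrator as a stand-in for Runge-Kutta-Fehlberg.

```python
# Minimal shooting-method sketch on a toy boundary value problem
#   y'' = -y,  y(0) = 0,  y(pi/2) = 1   (exact solution y = sin(x)),
# standing in for the similarity equations of the boundary-layer problem.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(x, state):
    y, yp = state
    return [yp, -y]

def boundary_mismatch(slope_guess):
    """Integrate from x=0 with y(0)=0, y'(0)=slope_guess; return y(pi/2) - 1."""
    sol = solve_ivp(rhs, (0.0, np.pi / 2), [0.0, slope_guess],
                    method="RK45", rtol=1e-8, atol=1e-10)
    return sol.y[0, -1] - 1.0

# Find the initial slope that satisfies the far boundary condition.
slope = brentq(boundary_mismatch, 0.1, 5.0)
print("shooting slope y'(0) =", slope, "(exact: 1.0)")
```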
Mock Data Challenge for the MPD/NICA Experiment on the HybriLIT Cluster
NASA Astrophysics Data System (ADS)
Gertsenberger, Konstantin; Rogachevsky, Oleg
2018-02-01
Simulation of data processing before the first experimental data are received is an important issue in high-energy physics experiments. This article presents the current Event Data Model and the Mock Data Challenge for the MPD experiment at the NICA accelerator complex, which uses ongoing simulation studies to exercise and stress-test the distributed computing infrastructure and the experiment software in the full production environment, from simulated data through to physics analysis.
Interactive Heat Transfer Simulations for Everyone
ERIC Educational Resources Information Center
Xie, Charles
2012-01-01
Heat transfer is widely taught in secondary Earth science and physics. Researchers have identified many misconceptions related to heat and temperature. These misconceptions primarily stem from hunches developed in everyday life (though the confusions in terminology often worsen them). Interactive computer simulations that visualize thermal energy,…
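Simulations of the kind described, which make thermal energy flow visible, typically solve the heat (diffusion) equation on a grid. A minimal one-dimensional explicit finite-difference sketch is given below; it is a generic classroom example with invented material values, not the software discussed in the article.

```python
# Minimal sketch: explicit finite-difference solution of the 1-D heat equation
#   dT/dt = alpha * d^2T/dx^2
# on a rod whose left end is held hot and right end cold. Values are illustrative.
import numpy as np

alpha = 1e-4                  # assumed thermal diffusivity, m^2/s
length, nx = 1.0, 41
dx = length / (nx - 1)
dt = 0.4 * dx**2 / alpha      # time step chosen below the stability limit (0.5)

T = np.zeros(nx)
T[0], T[-1] = 100.0, 0.0      # fixed boundary temperatures, degrees C

for _ in range(20000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0], T[-1] = 100.0, 0.0  # re-impose boundary conditions

print("steady temperatures at x = 0, 0.25, 0.5, 0.75, 1.0 m:")
print(T[::10])                # approaches the linear steady-state profile
```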
Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment
NASA Technical Reports Server (NTRS)
Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
A genetic algorithm procedure is developed and implemented for fitting parameters for many-body inter-atomic force field functions for simulating nanotechnology atomistic applications using portable Java on cycle-scavenged heterogeneous workstations. Given a physics-based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given either by experiments and/or by higher-accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tight-binding energies in the fitness function. The "new" parameters are significantly better suited for Si cluster energies and forces as compared to even the published S-W potential.
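The evolutionary loop described above repeatedly evaluates, selects, crosses over, and mutates candidate parameter vectors against a fitness function built from reference energies. A compact, generic genetic-algorithm sketch of that loop is shown below; the fitness function, parameter ranges, and GA settings are illustrative assumptions, not the JavaGenes implementation.

```python
# Generic genetic-algorithm sketch for fitting force-field parameters: candidates
# are parameter vectors, fitness is the (negative) error against reference data.
# Target function, ranges, and GA settings are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS, POP, GENERATIONS = 4, 60, 200
true_params = np.array([2.1, 0.6, 1.5, 3.3])         # stand-in for reference data

def fitness(p):
    """Negative squared error of the candidate against the reference parameters
    (in practice: error of predicted vs. reference cluster energies/forces)."""
    return -np.sum((p - true_params) ** 2)

pop = rng.uniform(0.0, 5.0, size=(POP, N_PARAMS))
for gen in range(GENERATIONS):
    scores = np.array([fitness(p) for p in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[: POP // 2]]                  # truncation selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(N_PARAMS) < 0.5             # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0.0, 0.05, N_PARAMS)      # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best parameters found:", np.round(best, 3))
```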
Overview of Particle and Heavy Ion Transport Code System PHITS
NASA Astrophysics Data System (ADS)
Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit
2014-06-01
A general purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran language and can be executed on almost all computers. All components of PHITS such as its source, executable and data-library files are assembled in one package and then distributed to many countries via the Research organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS, and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.
Improved transition path sampling methods for simulation of rare events
NASA Astrophysics Data System (ADS)
Chopra, Manan; Malshe, Rohit; Reddy, Allam S.; de Pablo, J. J.
2008-04-01
The free energy surfaces of a wide variety of systems encountered in physics, chemistry, and biology are characterized by the existence of deep minima separated by numerous barriers. One of the central aims of recent research in computational chemistry and physics has been to determine how transitions occur between deep local minima on rugged free energy landscapes, and transition path sampling (TPS) Monte-Carlo methods have emerged as an effective means for numerical investigation of such transitions. Many of the shortcomings of TPS-like approaches generally stem from their high computational demands. Two new algorithms are presented in this work that improve the efficiency of TPS simulations. The first algorithm uses biased shooting moves to render the sampling of reactive trajectories more efficient. The second algorithm is shown to substantially improve the accuracy of the transition state ensemble by introducing a subset of local transition path simulations in the transition state. The system considered in this work consists of a two-dimensional rough energy surface that is representative of numerous systems encountered in applications. When taken together, these algorithms provide gains in efficiency of over two orders of magnitude when compared to traditional TPS simulations.
Free energy decomposition of protein-protein interactions.
Noskov, S Y; Lim, C
2001-08-01
A free energy decomposition scheme has been developed and tested on antibody-antigen and protease-inhibitor binding for which accurate experimental structures were available for both free and bound proteins. Using the x-ray coordinates of the free and bound proteins, the absolute binding free energy was computed assuming additivity of three well-defined, physical processes: desolvation of the x-ray structures, isomerization of the x-ray conformation to a nearby local minimum in the gas phase, and subsequent noncovalent complex formation in the gas phase. This free energy scheme, together with the Generalized Born model for computing the electrostatic solvation free energy, yielded binding free energies in remarkable agreement with experimental data. Two assumptions commonly used in theoretical treatments, viz., the rigid-binding approximation (which assumes no conformational change upon complexation) and the neglect of vdW interactions, were found to yield large errors in the binding free energy. Protein-protein vdW and electrostatic interactions between complementary surfaces over a relatively large area (1400-1700 Å²) were found to drive antibody-antigen and protease-inhibitor binding.
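The additivity assumption underlying the scheme can be written compactly. The schematic below restates the three-step decomposition described in the abstract in notation of my own (not the authors'), where the first term collects the net change in solvation free energy on going from the free partners to the complex:

```latex
% Schematic decomposition of the absolute binding free energy into the three
% physical contributions described above: net (de)solvation of the X-ray
% structures, gas-phase isomerization to nearby local minima, and gas-phase
% noncovalent association.
\begin{equation}
  \Delta G_{\mathrm{bind}} \;\approx\;
  \Delta\Delta G_{\mathrm{solv}}
  \;+\; \Delta G_{\mathrm{isom}}^{\mathrm{gas}}
  \;+\; \Delta G_{\mathrm{assoc}}^{\mathrm{gas}}
\end{equation}
```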
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Vinod
2017-05-05
High-fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank, nanofluidized, molten-salt-based thermocline TES system under various concentrations and sizes of the suspended particles. Our objective was to utilize sensible-heat storage that operates with the least irreversibility by exploiting nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency and estimating cost effectiveness for the TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.
NASA Astrophysics Data System (ADS)
Khalid, Asma; Khan, Ilyas; Khan, Arshad; Shafie, Sharidan
2018-06-01
The intention here is to investigate the effects of wall couple stress with energy and concentration transfer in magnetohydrodynamic (MHD) flow of a micropolar fluid embedded in a porous medium. The mathematical model consists of a set of linear partial differential equations in conservation form. Laplace transforms and the convolution technique are used to compute exact solutions of the velocity, microrotation, temperature and concentration equations. Numerical values of the skin friction, couple wall stress, Nusselt and Sherwood numbers are also computed. The effects of the significant variables on the physical quantities are discussed graphically. Comparison with previously published work in the limiting sense shows excellent agreement.
K→ππ amplitudes from lattice QCD with a light charm quark.
Giusti, L; Hernández, P; Laine, M; Pena, C; Wennekers, J; Wittig, H
2007-02-23
We compute the leading-order low-energy constants of the ΔS=1 effective weak Hamiltonian in the quenched approximation of QCD with up, down, strange, and charm quarks degenerate and light. They are extracted by comparing the predictions of finite-volume chiral perturbation theory with lattice QCD computations of suitable correlation functions carried out with quark masses ranging from a few MeV up to half of the physical strange mass. We observe a ΔI=1/2 enhancement in this corner of the parameter space of the theory. Although matching with the experimental result is not observed for the ΔI=1/2 amplitude, our computation suggests large QCD contributions to the physical ΔI=1/2 rule in the GIM limit, and represents the first step to quantify the role of the charm-quark mass in K→ππ amplitudes. The use of fermions with an exact chiral symmetry is an essential ingredient in our computation.
Diversity in computing technologies and strategies for dynamic resource allocation
Garzoglio, G.; Gutsche, O.
2015-12-23
Here, High Energy Physics (HEP) is a very data-intensive and trivially parallelizable science discipline. HEP is probing nature at increasingly finer details, requiring ever increasing computational resources to process and analyze experimental data. In this paper, we discuss how HEP has provisioned resources so far using Grid technologies, how HEP is starting to include new resource providers like commercial Clouds and HPC installations, and how HEP is transparently provisioning resources at these diverse providers.
Algorithm for fast event parameters estimation on GEM acquired data
NASA Astrophysics Data System (ADS)
Linczuk, Paweł; Krawczyk, Rafał D.; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Chernyshova, Maryna; Czarski, Tomasz
2016-09-01
We present a study of a software-hardware environment for developing fast, high-throughput, low-latency computation methods that can be used as a back-end in High Energy Physics (HEP) and other High Performance Computing (HPC) systems which must handle a high volume of input from electronic-sensor-based front-ends. Parallelization possibilities are discussed and tested on Intel HPC solutions, with consideration of applications to Gas Electron Multiplier (GEM) measurement systems.
Final Technical Report for ARRA Funding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rusack, Roger; Mans, Jeremiah; Poling, Ronald
Final technical report of the University of Minnesota experimental high energy physics group for ARRA support. The Cryogenic Dark Matter Experiment (CDMS) used the funds received to construct a new passive shield to protect a high-purity germanium detector located in the Soudan mine in Northern Minnesota from cosmic rays. The BESIII and the CMS groups purchased computing hardware to assemble computer farms for data analysis and to generate large volumes of simulated data for comparison with the data collected.
Direct numerical simulation of sheared turbulent flow
NASA Technical Reports Server (NTRS)
Harris, Vascar G.
1994-01-01
The summer assignment to study sheared turbulent flow was divided into three phases: (1) literature survey, (2) computational familiarization, and (3) pilot computational studies. The governing equations of fluid dynamics, the Navier-Stokes equations, describe the velocity, pressure, and density as functions of position and time. In principle, when combined with conservation equations for mass, energy, and the thermodynamic state of the fluid, a determinate system could be obtained. In practice the Navier-Stokes equations have not been solved analytically because of their nonlinear nature and complexity. Consequently, experiments have remained important for gaining insight into the physics of the problem. Reasonable computer simulations of the problem have become possible as the computational speed and storage of computers have evolved. The importance of the microstructure of the turbulence dictates the need for high-resolution grids to extract solutions that contain the physical mechanisms essential to a successful simulation. The recognized breakthrough occurred as a result of the pioneering work of Orszag and Patterson, in which the Navier-Stokes equations were solved numerically using a time-saving toggling technique between physical and wave space, known as a spectral method. An equally analytically unsolvable problem, containing the same quasi-chaotic nature as turbulence, is the three-body problem, which was studied computationally as a first step this summer. This study was followed by computations of a two-dimensional (2D) free shear layer.
Benaglia, Andrea; Auffray, Etiennette; Lecoq, Paul; ...
2016-04-20
The performance of hadronic calorimeters will be a key parameter at the next generation of High Energy Physics accelerators. A detector combining fine granularity with excellent timing information would prove beneficial for the reconstruction of both jets and electromagnetic particles with high energy resolution. In this work, the space and time structure of high energy showers is studied by means of a Geant4-based simulation toolkit. In particular, the relevant time scales of the different physics phenomena contributing to the energy loss are investigated. A correlation between the fluctuations of the energy deposition of high energy hadrons and the time development of the showers is observed, which allows for an event-by-event correction to be computed to improve the energy resolution of the calorimeter. Lastly, these studies are intended to set the basic requirements for the development of a new-concept, total absorption time-imaging calorimeter, which seems now within reach thanks to major technological advancements in the production of fast scintillating materials and compact photodetectors.
Well-Tempered Metadynamics: A Smoothly Converging and Tunable Free-Energy Method
NASA Astrophysics Data System (ADS)
Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele
2008-01-01
We present a method for determining the free-energy dependence on a selected number of collective variables using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of an alanine dipeptide free-energy landscape.
Well-tempered metadynamics: a smoothly converging and tunable free-energy method.
Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele
2008-01-18
We present a method for determining the free-energy dependence on a selected number of collective variables using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of an alanine dipeptide free-energy landscape.
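For concreteness, the well-tempered bias update described above can be sketched in a few lines: Gaussian hills are deposited along a collective variable with heights scaled by exp(-V/ΔT), so the bias grows ever more gently where it is already large. The double-well potential, time step, hill width, and bias temperature below are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

# Illustrative well-tempered metadynamics on a 1D double-well "free energy".
# All numerical parameters are arbitrary choices for demonstration only.
kB_T = 1.0            # thermal energy (reduced units)
dT = 9.0              # well-tempered Delta-T (energy units); bias factor = 10
w0, sigma = 0.1, 0.1  # initial hill height and hill width
U = lambda s: (s**2 - 1.0)**2          # assumed underlying surface, barrier = 1
grid = np.linspace(-2.0, 2.0, 400)
V = np.zeros_like(grid)                # accumulated bias on the grid

rng = np.random.default_rng(0)
s, dt = -1.0, 0.01
for step in range(20000):
    # overdamped Langevin step on U + V
    dUds = 4.0 * s * (s**2 - 1.0)
    dVds = np.interp(s, grid, np.gradient(V, grid))
    s += -dt * (dUds + dVds) + np.sqrt(2.0 * dt * kB_T) * rng.standard_normal()
    if step % 100 == 0:
        # well-tempered rule: hill height shrinks where the bias is already large
        w = w0 * np.exp(-np.interp(s, grid, V) / dT)
        V += w * np.exp(-(grid - s)**2 / (2.0 * sigma**2))

# In the long-time limit V tends to -dT/(kB_T + dT) * F, so F ~ -(kB_T+dT)/dT * V.
F_est = -(kB_T + dT) / dT * V
barrier = F_est[np.argmin(np.abs(grid))] - F_est.min()
print(f"estimated barrier ≈ {barrier:.2f} (true value 1.0 in these units)")
```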
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Rafael Alves; Dundovic, Andrej; Sigl, Guenter
2016-05-01
We present the simulation framework CRPropa version 3 designed for efficient development of astrophysical predictions for ultra-high energy particles. Users can assemble modules of the most relevant propagation effects in galactic and extragalactic space, include their own physics modules with new features, and receive on output primary and secondary cosmic messengers including nuclei, neutrinos and photons. In extension to the propagation physics contained in a previous CRPropa version, the new version facilitates high-performance computing and comprises new physical features such as an interface for galactic propagation using lensing techniques, an improved photonuclear interaction calculation, and propagation in time dependent environments to take into account cosmic evolution effects in anisotropy studies and variable sources. First applications using highlighted features are presented as well.
Analysis of physics-based preconditioning for single-phase subchannel equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansel, J. E.; Ragusa, J. C.; Allu, S.
2013-07-01
The (single-phase) subchannel approximations are used throughout nuclear engineering to provide an efficient flow simulation because the computational burden is much smaller than for computational fluid dynamics (CFD) simulations, and empirical relations have been developed and validated to provide accurate solutions in appropriate flow regimes. Here, the subchannel equations have been recast in a residual form suitable for a multi-physics framework. The eigenvalue spectrum of the Jacobian matrix, along with several potential physics-based preconditioning approaches, is evaluated, and the potential for improved convergence from preconditioning is assessed. The physics-based preconditioner options include several forms of reduced equations that decouple the subchannels by neglecting crossflow, conduction, and/or both turbulent momentum and energy exchange between subchannels. The eigenvalue analysis shows that preconditioning moves clusters of eigenvalues away from zero and toward one. A test problem is run with and without preconditioning. Without preconditioning, the solution failed to converge using GMRES, but application of any of the preconditioners allowed the solution to converge. (authors)
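The qualitative effect described here, a preconditioner that "decouples" blocks of unknowns and thereby restores Krylov convergence, can be illustrated with a generic sketch unrelated to the actual subchannel equations: two weakly coupled 1D chains solved by GMRES with and without a preconditioner that drops the inter-channel coupling. The operator, coupling strength, and right-hand side are invented for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generic illustration (not the subchannel equations): two weakly coupled
# "channels" discretised as 1D chains, solved by GMRES with and without a
# physics-motivated preconditioner that neglects the inter-channel coupling.
n = 200
off = -np.ones(2 * n - 1)
A = sp.diags([2.0 * np.ones(2 * n), off, off], [0, -1, 1]).tolil()
for i in range(n):                      # weak "crossflow" coupling terms
    A[i, n + i] = -0.05
    A[n + i, i] = -0.05
A = A.tocsr()
b = np.ones(2 * n)

P = A.copy().tolil()                    # decoupled preconditioner: drop coupling
for i in range(n):
    P[i, n + i] = 0.0
    P[n + i, i] = 0.0
P_lu = spla.splu(P.tocsc())
M = spla.LinearOperator(A.shape, P_lu.solve)

counts = {"plain": 0, "preconditioned": 0}
def counter(key):
    def cb(_res): counts[key] += 1
    return cb

spla.gmres(A, b, callback=counter("plain"), callback_type="pr_norm")
spla.gmres(A, b, M=M, callback=counter("preconditioned"), callback_type="pr_norm")
print(counts)   # the preconditioned solve needs far fewer inner iterations
```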
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
SU-C-BRC-06: OpenCL-Based Cross-Platform Monte Carlo Simulation Package for Carbon Ion Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, N; Tian, Z; Pompos, A
2016-06-15
Purpose: Monte Carlo (MC) simulation is considered to be the most accurate method for calculation of absorbed dose and fundamental physical quantities related to biological effects in carbon ion therapy. Its long computation time impedes clinical and research applications. We have developed an MC package, goCMC, on parallel processing platforms, aiming at achieving accurate and efficient simulations for carbon therapy. Methods: goCMC was developed under the OpenCL framework. It supported transport simulation in voxelized geometry with kinetic energy up to 450 MeV/u. A Class II condensed history algorithm was employed for charged particle transport, with stopping power computed via the Bethe-Bloch equation. Secondary electrons were not transported; their energy was deposited locally. Energy straggling and multiple scattering were modeled. Production of secondary charged particles from nuclear interactions was implemented based on cross section and yield data from Geant4. They were transported via the condensed history scheme. goCMC supported scoring various quantities of interest, e.g. physical dose, particle fluence, spectrum, linear energy transfer, and positron emitting nuclei. Results: goCMC has been benchmarked against Geant4 with different phantoms and beam energies. For 100 MeV/u, 250 MeV/u and 400 MeV/u beams impinging on a water phantom, the range difference was 0.03 mm, 0.20 mm and 0.53 mm, and the mean dose difference was 0.47%, 0.72% and 0.79%, respectively. goCMC can run on various computing devices. Depending on the beam energy and voxel size, it took 20∼100 seconds to simulate 10^7 carbons on an AMD Radeon GPU card. The corresponding CPU time for Geant4 with the same setup was 60∼100 hours. Conclusion: We have developed an OpenCL-based cross-platform carbon MC simulation package, goCMC. Its accuracy, efficiency and portability make goCMC attractive for research and clinical applications in carbon therapy.
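The stopping-power ingredient mentioned above can be sketched with the textbook Bethe-Bloch expression. The snippet below evaluates that generic formula for a bare carbon ion in water, with no shell, density-effect, or Barkas corrections, and is not the goCMC implementation; the mean excitation energy of 75 eV and the approximate carbon rest mass are standard reference values used here only for illustration.

```python
import numpy as np

# Illustrative Bethe-Bloch mass stopping power for a bare carbon ion in water.
# Textbook form without shell/density/Barkas corrections -- not the goCMC code.
ME_C2    = 0.5109989      # electron rest energy [MeV]
K        = 0.307075       # 4*pi*N_A*r_e^2*m_e*c^2 [MeV cm^2 / mol]
Z_OVER_A = 0.5551         # <Z/A> for water [mol/g]
I_WATER  = 75.0e-6        # mean excitation energy of water [MeV]
Z_ION    = 6              # carbon charge
M_ION    = 12 * 931.494   # carbon rest energy [MeV] (approximate)

def mass_stopping_power(T_per_u):
    """Return -dE/(rho dx) in MeV cm^2/g for kinetic energy T_per_u [MeV/u]."""
    T = 12 * T_per_u
    gamma = 1.0 + T / M_ION
    beta2 = 1.0 - 1.0 / gamma**2
    # maximum energy transfer to a free electron in a single collision
    Tmax = (2 * ME_C2 * beta2 * gamma**2 /
            (1 + 2 * gamma * ME_C2 / M_ION + (ME_C2 / M_ION)**2))
    log_term = 0.5 * np.log(2 * ME_C2 * beta2 * gamma**2 * Tmax / I_WATER**2)
    return K * Z_ION**2 * Z_OVER_A / beta2 * (log_term - beta2)

for e in (100.0, 250.0, 400.0):       # MeV/u, the beam energies quoted above
    print(f"{e:6.1f} MeV/u : {mass_stopping_power(e):7.1f} MeV cm^2/g")
```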
Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System.
Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin
2016-08-18
Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, the secure and trustworthy energy supply requires real-time supervising and online power quality assessing. Harmonics measurement is necessary in power quality evaluation. However, under the large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which is the result of latencies in sensing or the communication process and brings deviations in data fusion. This paper depicts a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive model with exogenous inputs (NARX) network to reorder the out-of-sequence measuring data. The NARX network gets the characteristics of the electrical harmonics from practical data rather than the kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameter with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical testbed of a cyber-physical system are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement of the harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amadio, G.; et al.
An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
Martiniani, Stefano; Schrenk, K Julian; Stevenson, Jacob D; Wales, David J; Frenkel, Daan
2016-01-01
We present a numerical calculation of the total number of disordered jammed configurations Ω of N repulsive, three-dimensional spheres in a fixed volume V. To make these calculations tractable, we increase the computational efficiency of the approach of Xu et al. [Phys. Rev. Lett. 106, 245502 (2011)10.1103/PhysRevLett.106.245502] and Asenjo et al. [Phys. Rev. Lett. 112, 098002 (2014)10.1103/PhysRevLett.112.098002] and we extend the method to allow computation of the configurational entropy as a function of pressure. The approach that we use computes the configurational entropy by sampling the absolute volume of basins of attraction of the stable packings in the potential energy landscape. We find a surprisingly strong correlation between the pressure of a configuration and the volume of its basin of attraction in the potential energy landscape. This relation is well described by a power law. Our methodology to compute the number of minima in the potential energy landscape should be applicable to a wide range of other enumeration problems in statistical physics, string theory, cosmology, and machine learning that aim to find the distribution of the extrema of a scalar cost function that depends on many degrees of freedom.
Mazziotti, David A
2016-10-07
A central challenge of physics is the computation of strongly correlated quantum systems. The past ten years have witnessed the development and application of the variational calculation of the two-electron reduced density matrix (2-RDM) without the wave function. In this Letter we present an orders-of-magnitude improvement in the accuracy of 2-RDM calculations without an increase in their computational cost. The advance is based on a low-rank, dual formulation of an important constraint on the 2-RDM, the T2 condition. Calculations are presented for metallic chains and a cadmium-selenide dimer. The low-scaling T2 condition will have significant applications in atomic and molecular, condensed-matter, and nuclear physics.
NASA Astrophysics Data System (ADS)
Mazziotti, David A.
2016-10-01
A central challenge of physics is the computation of strongly correlated quantum systems. The past ten years have witnessed the development and application of the variational calculation of the two-electron reduced density matrix (2-RDM) without the wave function. In this Letter we present an orders-of-magnitude improvement in the accuracy of 2-RDM calculations without an increase in their computational cost. The advance is based on a low-rank, dual formulation of an important constraint on the 2-RDM, the T2 condition. Calculations are presented for metallic chains and a cadmium-selenide dimer. The low-scaling T2 condition will have significant applications in atomic and molecular, condensed-matter, and nuclear physics.
Jin, Miaomiao; Cheng, Long; Li, Yi; Hu, Siyu; Lu, Ke; Chen, Jia; Duan, Nian; Wang, Zhuorui; Zhou, Yaxiong; Chang, Ting-Chang; Miao, Xiangshui
2018-06-27
Owing to the capability of integrating information storage and computing in the same physical location, in-memory computing with memristors has become a research hotspot as a promising route toward a non-von Neumann architecture. However, it is still a challenge to develop high-performance devices as well as optimized logic methodologies to realize energy-efficient computing. Herein, a filamentary Cu/GeTe/TiN memristor is reported that shows satisfactory properties with nanosecond switching speed (<60 ns), low-voltage operation (<2 V), high endurance (>10^4 cycles) and good retention (>10^4 s at 85 °C). It is revealed that the charge carrier conduction mechanisms in the high-resistance and low-resistance states are Schottky emission and hopping transport between adjacent Cu clusters, respectively, based on the analysis of current-voltage behaviors and resistance-temperature characteristics. An intuitive picture is given to describe the dynamic processes of resistive switching. Moreover, based on the basic material implication (IMP) logic circuit, we propose a reconfigurable logic method and experimentally implement the IMP, NOT, OR, and COPY logic functions. The design of a one-bit full adder with a reduction in computational sequences and its validation in simulation further demonstrate the potential practical application. The results provide important progress towards understanding of the resistive switching mechanism and the realization of an energy-efficient in-memory computing architecture. © 2018 IOP Publishing Ltd.
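For reference, material implication together with a FALSE (reset) operation is functionally complete, so NOT, OR, and COPY all reduce to IMP. The sketch below checks those generic Boolean reductions; it illustrates only the logic, not the memristor circuit reported in the paper.

```python
# Generic Boolean reductions to material implication (IMP); the memristor
# circuit realizes IMP in-memory, here we only verify the logic identities.
def IMP(p: bool, q: bool) -> bool:
    return (not p) or q

def NOT(p: bool) -> bool:          # NOT p  =  p IMP FALSE
    return IMP(p, False)

def OR(p: bool, q: bool) -> bool:  # p OR q =  (p IMP FALSE) IMP q
    return IMP(NOT(p), q)

def COPY(p: bool) -> bool:         # COPY p =  NOT(NOT p)
    return NOT(NOT(p))

for p in (False, True):
    for q in (False, True):
        assert IMP(p, q) == ((not p) or q)
        assert OR(p, q) == (p or q)
    assert NOT(p) == (not p) and COPY(p) == p
print("IMP-based NOT/OR/COPY verified on all inputs")
```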
Computational materials design for energy applications
NASA Astrophysics Data System (ADS)
Ozolins, Vidvuds
2013-03-01
General adoption of sustainable energy technologies depends on the discovery and development of new high-performance materials. For instance, waste heat recovery and electricity generation via the solar thermal route require bulk thermoelectrics with a high figure of merit (ZT) and thermal stability at high temperatures. Energy recovery applications (e.g., regenerative braking) call for the development of rapidly chargeable systems for electrical energy storage, such as electrochemical supercapacitors. Similarly, use of hydrogen as vehicular fuel depends on the ability to store hydrogen at high volumetric and gravimetric densities, as well as on the ability to extract it at ambient temperatures at sufficiently rapid rates. We will discuss how first-principles computational methods based on quantum mechanics and statistical physics can drive the understanding, improvement and prediction of new energy materials. We will cover prediction and experimental verification of new earth-abundant thermoelectrics, transition metal oxides for electrochemical supercapacitors, and kinetics of mass transport in complex metal hydrides. Research has been supported by the US Department of Energy under grant Nos. DE-SC0001342, DE-SC0001054, DE-FG02-07ER46433, and DE-FC36-08GO18136.
Ground State of the Universe and the Cosmological Constant. A Nonperturbative Analysis.
Husain, Viqar; Qureshi, Babar
2016-02-12
The physical Hamiltonian of a gravity-matter system depends on the choice of time, with the vacuum naturally identified as its ground state. We study the expanding Universe with scalar field in the volume time gauge. We show that the vacuum energy density computed from the resulting Hamiltonian is a nonlinear function of the cosmological constant and time. This result provides a new perspective on the relation between time, the cosmological constant, and vacuum energy.
NASA Astrophysics Data System (ADS)
Chibani, Wael; Ren, Xinguo; Scheffler, Matthias; Rinke, Patrick
2016-04-01
We present an embedding scheme for periodic systems that facilitates the treatment of the physically important part (here a unit cell or a supercell) with advanced electronic structure methods, that are computationally too expensive for periodic systems. The rest of the periodic system is treated with computationally less demanding approaches, e.g., Kohn-Sham density-functional theory, in a self-consistent manner. Our scheme is based on the concept of dynamical mean-field theory formulated in terms of Green's functions. Our real-space dynamical mean-field embedding scheme features two nested Dyson equations, one for the embedded cluster and another for the periodic surrounding. The total energy is computed from the resulting Green's functions. The performance of our scheme is demonstrated by treating the embedded region with hybrid functionals and many-body perturbation theory in the GW approach for simple bulk systems. The total energy and the density of states converge rapidly with respect to the computational parameters and approach their bulk limit with increasing cluster (i.e., computational supercell) size.
NASA Astrophysics Data System (ADS)
Matsubara, Masahiko; Bellotti, Enrico
2017-05-01
Various forms of carbon based complexes in GaN are studied with first-principles calculations employing Heyd-Scuseria-Ernzerhof hybrid functionals within the framework of the density functional theory. We consider carbon complexes made of the combinations of single impurities, i.e., CN-CGa, CI-CN, and CI-CGa, where CN, CGa, and CI denote C substituting nitrogen, C substituting gallium, and interstitial C, respectively, and of neighboring gallium/nitrogen vacancies (VGa/VN), i.e., CN-VGa and CGa-VN. Formation energies are computed for all these configurations with different charge states after full geometry optimizations. From our calculated formation energies, thermodynamic transition levels are evaluated, which are related to the thermal activation energies observed in experimental techniques such as deep level transient spectroscopy. Furthermore, the lattice relaxation energies (Franck-Condon shift) are computed to obtain optical activation energies, which are observed in experimental techniques such as deep level optical spectroscopy. We compare our calculated values of activation energies with the energies of experimentally observed C-related trap levels and identify the physical origins of these traps, which were unknown before.
Higher order alchemical derivatives from coupled perturbed self-consistent field theory.
Lesiuk, Michał; Balawender, Robert; Zachara, Janusz
2012-01-21
We present an analytical approach to treat higher order derivatives of Hartree-Fock (HF) and Kohn-Sham (KS) density functional theory energy in the Born-Oppenheimer approximation with respect to the nuclear charge distribution (so-called alchemical derivatives). Modified coupled perturbed self-consistent field theory is used to calculate the molecular system's response to the applied perturbation. Working equations for the second and the third derivatives of the HF/KS energy are derived. Similarly, analytical forms of the first and second derivatives of the orbital energies are reported. The second derivative of the Kohn-Sham energy and up to the third derivative of the Hartree-Fock energy with respect to the nuclear charge distribution were calculated. Some issues of practical calculations, in particular the dependence of the basis set and Becke weighting functions on the perturbation, are considered. For selected series of isoelectronic molecules, values of the available alchemical derivatives were computed and a Taylor series expansion was used to predict the energies of the "surrounding" molecules. The predicted values of the energies are in unexpectedly good agreement with the ones computed using HF/KS methods. The presented method allows one to predict orbital energies with an error of less than 1%, or even smaller for valence orbitals. © 2012 American Institute of Physics
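As a numerical illustration of the Taylor-series use of alchemical derivatives described above, the sketch below expands the energy in the nuclear-charge perturbation. The reference energy and derivative values are made-up placeholders, not results from the paper.

```python
# Hypothetical example: predict the energy of a "surrounding" isoelectronic
# system from alchemical derivatives dE/dZ, d2E/dZ2, d3E/dZ3 evaluated at a
# reference nuclear charge. All numbers below are placeholders.
E_ref   = -76.0      # reference energy [hartree]   (placeholder)
dE_dZ   = -1.20      # first alchemical derivative  (placeholder)
d2E_dZ2 = -0.15      # second alchemical derivative (placeholder)
d3E_dZ3 =  0.02      # third alchemical derivative  (placeholder)

def taylor_energy(dZ):
    """Third-order alchemical Taylor expansion E(Z0 + dZ)."""
    return (E_ref + dE_dZ * dZ + d2E_dZ2 * dZ**2 / 2.0
            + d3E_dZ3 * dZ**3 / 6.0)

for dZ in (-1.0, -0.5, 0.5, 1.0):
    print(f"dZ = {dZ:+.1f}  ->  E ≈ {taylor_energy(dZ):.4f} hartree")
```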
Comparison of x-ray cross sections for diagnostic and therapeutic medical physics.
Boone, J M; Chavez, A E
1996-12-01
The purpose of this technical report is to make available an up-to-date source of attenuation coefficient data to the medical physics community, and to compare these data with other more familiar sources. Data files from Lawrence Livermore National Laboratory (in Livermore, CA) were truncated to match the needs of the medical physics community, and an interpolation routine was written to calculate a continuous set of cross sections spanning energies from 1 keV to 50 MeV. Coefficient data are available for elements Z = 1 through Z = 100. Values for mass attenuation coefficients, mass-energy-transfer coefficients, and mass-energy absorption coefficients are produced by a single computer subroutine. In addition to total interaction cross sections, the cross sections for photoelectric, Rayleigh, Compton, pair, and some triplet interactions are also produced by this single program. The coefficients were compared to the 1970 data of Storm and Israel over the energy interval from 1 to 1000 keV; for elements 10, 20, 30, 40, 50, 60, 70, and 80, the average positive differences between the Storm and Israel coefficients and the coefficients reported here are 1.4%, 2.7%, and 2.6% for the mass attenuation, mass-energy-transfer, and mass-energy absorption coefficients, respectively. The 1969 data compilation of mass attenuation coefficients from McMaster et al. was also compared with the newer LLNL data. Over the energy region from 10 keV to 1000 keV, and for elements Z = 1 to Z = 82 (inclusive), the overall average difference was 1.53% (sigma = 0.85%). While the overall average difference was small, there was larger variation (> 5%) between cross sections for some elements. In addition to coefficient data, other useful data such as the density, atomic weight, K, L1, L2, L3, M, and N edges, and numerous characteristic emission energies are output by the program, depending on a single input variable. The computer source code, written in C, can be accessed and downloaded from the World Wide Web at: http://www.aip.org/epaps/epaps.html [E-MPHSA-23-1977].
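Tabulated coefficients of this kind are commonly interpolated on a log-log grid between tabulated energies; the sketch below illustrates that generic approach. The tabulated values are placeholders for illustration, not the LLNL data set, and the routine is not the C subroutine described in the report.

```python
import numpy as np

# Log-log interpolation of a mass attenuation coefficient mu/rho [cm^2/g].
# The (energy, mu/rho) pairs below are placeholders for illustration only.
table_E  = np.array([10.0, 20.0, 50.0, 100.0, 500.0, 1000.0])   # keV
table_mu = np.array([5.33, 0.81, 0.227, 0.171, 0.097, 0.071])   # cm^2/g (placeholder)

def mu_over_rho(E_keV):
    """Piecewise log-log interpolation between tabulated points."""
    return np.exp(np.interp(np.log(E_keV), np.log(table_E), np.log(table_mu)))

for E in (15.0, 80.0, 662.0):
    print(f"{E:6.1f} keV : mu/rho ≈ {mu_over_rho(E):.3f} cm^2/g")
```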
Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes
NASA Astrophysics Data System (ADS)
Piro, Markus Hans Alexander
Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system components at each iterative step, and the objective is to minimize the residuals of the mass balance equations. Several numerical advantages are achieved through this simplification. In particular, computational expense is reduced and the rate of convergence is enhanced. Furthermore, the software has demonstrated the ability to solve systems involving as many as 118 component elements. An early version of the code has already been integrated into the Advanced Multi-Physics (AMP) code under development by the Oak Ridge National Laboratory, Los Alamos National Laboratory, Idaho National Laboratory and Argonne National Laboratory. Keywords: Engineering, Nuclear -- 0552, Engineering, Material Science -- 0794, Chemistry, Mathematics -- 0405, Computer Science -- 0984
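The underlying problem such a solver addresses is a constrained Gibbs-energy minimization. The sketch below shows a generic version for a toy ideal-mixture A/B/AB system using an off-the-shelf SLSQP solve; the species, standard chemical potentials, and element abundances are invented, and this is not the mass-balance-residual algorithm developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Toy equilibrium: minimize the ideal-mixture Gibbs energy of species {A, B, AB}
# subject to elemental mass balance. All inputs are illustrative placeholders.
R_T   = 1.0                          # RT in reduced units
g0    = np.array([0.0, 0.0, -2.0])   # standard chemical potentials / RT (placeholder)
# element-by-species stoichiometry matrix (rows: element a, element b)
A_mat = np.array([[1, 0, 1],
                  [0, 1, 1]], dtype=float)
b_el  = np.array([1.0, 1.0])         # total moles of each element (placeholder)

def gibbs(n):
    n = np.clip(n, 1e-12, None)
    return np.sum(n * (g0 + R_T * np.log(n / n.sum())))

cons = {"type": "eq", "fun": lambda n: A_mat @ n - b_el}
res = minimize(gibbs, x0=np.array([0.4, 0.4, 0.3]),
               bounds=[(1e-12, None)] * 3, constraints=[cons], method="SLSQP")
print("equilibrium mole numbers (A, B, AB):", np.round(res.x, 4))
```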
NASA Astrophysics Data System (ADS)
Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer
2018-03-01
Spectral computed tomography is an emerging imaging method that involves using recently developed energy discriminating photon-counting detectors (PCDs). This technique enables measurements at isolated high-energy ranges, in which the dominating undergoing interaction between the x-ray and the sample is the incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem, due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy resolved radiation being scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where the incoherent scattering interactions become prevailing (>50 keV).
[Experimental nuclear physics]. Annual report 1988
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1988-05-01
This is the May 1988 annual report of the Nuclear Physics Laboratory of the University of Washington. It contains chapters on astrophysics, giant resonances, heavy ion induced reactions, fundamental symmetries, polarization in nuclear reactions, medium energy reactions, accelerator mass spectrometry (AMS), research by outside users, Van de Graaff and ion sources, the Laboratory's booster linac project work, instrumentation, and computer systems. An appendix lists Laboratory personnel, Ph.D. degrees granted in the 1987-88 academic year, and publications. Refs., 27 figs., 4 tabs.
[Experimental nuclear physics]. Annual report 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1989-04-01
This is the April 1989 annual report of the Nuclear Physics Laboratory of the University of Washington. It contains chapters on astrophysics, giant resonances, heavy ion induced reactions, fundamental symmetries, polarization in nuclear reactions, medium energy reactions, accelerator mass spectrometry (AMS), research by outside users, Van de Graaff and ion sources, computer systems, instrumentation, and the Laboratory's booster linac work. An appendix lists Laboratory personnel, Ph.D. degrees granted in the 1988-1989 academic year, and publications. Refs., 23 figs., 3 tabs.
Social energy: mining energy from the society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jun Jason; Gao, David Wenzhong; Zhang, Yingchen
The inherent nature of energy, i.e., physicality, sociality and informatization, implies the inevitable and intensive interaction between energy systems and social systems. From this perspective, we define 'social energy' as a complex sociotechnical system of energy systems, social systems and the derived artificial virtual systems which characterize the intense intersystem and intra-system interactions. The recent advancement in intelligent technology, including artificial intelligence and machine learning technologies, sensing and communication in Internet of Things technologies, and massive high performance computing and extreme-scale data analytics technologies, enables the possibility of substantial advancement in socio-technical system optimization, scheduling, control and management. In this paper, we provide a discussion on the nature of energy, and then propose the concept and intention of social energy systems for electrical power. A general methodology of establishing and investigating social energy is proposed, which is based on the ACP approach, i.e., 'artificial systems' (A), 'computational experiments' (C) and 'parallel execution' (P), and parallel system methodology. A case study on the University of Denver (DU) campus grid is provided and studied to demonstrate the social energy concept. In the concluding remarks, we discuss the technical pathway, in both social and nature sciences, to social energy, and our vision on its future.
Pulsed Inductive Thruster (PIT): Modeling and Validation Using the MACH2 Code
NASA Technical Reports Server (NTRS)
Schneider, Steven (Technical Monitor); Mikellides, Pavlos G.
2003-01-01
Numerical modeling of the Pulsed Inductive Thruster, exercising the magnetohydrodynamics code MACH2, aims to provide bilateral validation of the thruster's measured performance and of the code's capability to capture the pertinent physical processes. Computed impulse values for helium and argon propellants demonstrate excellent correlation with the experimental data for a range of energy levels and propellant-mass values. The effects of the vacuum tank wall and mass-injection scheme were investigated and shown to produce only trivial changes in the overall performance. An idealized model for these energy levels and propellants indicates that the energy expended in the internal energy modes and plasma dissipation processes is independent of the propellant type, mass, and energy level.
Additions and improvements to the high energy density physics capabilities in the FLASH code
NASA Astrophysics Data System (ADS)
Lamb, D.; Bogale, A.; Feister, S.; Flocke, N.; Graziani, C.; Khiar, B.; Laune, J.; Tzeferacos, P.; Walker, C.; Weide, K.
2017-10-01
FLASH is an open-source, finite-volume Eulerian, spatially-adaptive radiation magnetohydrodynamics code that has the capabilities to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures, and has a broad user base. Extensive high energy density physics (HEDP) capabilities exist in FLASH, which make it a powerful open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. We describe several non-ideal MHD capabilities that are being added to FLASH, including the Hall and Nernst effects, implicit resistivity, and a circuit model, which will allow modeling of Z-pinch experiments. We showcase the ability of FLASH to simulate Thomson scattering polarimetry, which measures Faraday rotation due to the presence of magnetic fields, as well as proton radiography, proton self-emission, and Thomson scattering diagnostics. Finally, we describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at U. Chicago by DOE NNSA ASC through the Argonne Institute for Computing in Science under FWP 57789; DOE NNSA under NLUF Grant DE-NA0002724; DOE SC OFES Grant DE-SC0016566; and NSF Grant PHY-1619573.
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj; Gulhan, Ali; Aftosmis, Michael; Brock, Joseph; Mathias, Donovan; Need, Dominic; Rodriguez, David; Seltner, Patrick; Stern, Eric; Wiles, Sebastian
2017-01-01
An airburst from a large asteroid during entry can cause significant ground damage. The damage depends on the energy and the altitude of airburst. Breakup of asteroids into fragments and their lateral spread have been observed. Modeling the underlying physics of fragmented bodies interacting at hypersonic speeds and the spread of fragments is needed for a true predictive capability. Current models use heuristic arguments and assumptions such as pancaking or point source explosive energy release at pre-determined altitude or an assumed fragmentation spread rate to predict airburst damage. A multi-year collaboration between German Aerospace Center (DLR) and NASA has been established to develop validated computational tools to address the above challenge.
Photonics Applications and Web Engineering: WILGA 2017
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2017-08-01
The XLth Wilga Summer 2017 Symposium on Photonics Applications and Web Engineering was held on 28 May-4 June 2017. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, modern optics, mechatronics, applied physics, electronics technologies and applications. Around 300 oral and poster papers were presented in a few main topical tracks, which are traditional for Wilga, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, measurement systems for astronomy, high energy physics experiments, and others. The paper is a traditional introduction to the 2017 WILGA Summer Symposium Proceedings and digests some of the Symposium's chosen key presentations. This year the Symposium was divided into the following topical sessions/conferences: Optics, Optoelectronics and Photonics; Computational and Artificial Intelligence; Biomedical Applications; Astronomical and High Energy Physics Experiments Applications; Material Research and Engineering; and Advanced Photonics and Electronics Applications in Research and Industry.
Burnet, Neil G; Scaife, Jessica E; Romanchikova, Marina; Thomas, Simon J; Bates, Amy M; Wong, Emma; Noble, David J; Shelley, Leila EA; Bond, Simon J; Forman, Julia R; Hoole, Andrew CF; Barnett, Gillian C; Brochu, Frederic M; Simmons, Michael PD; Jena, Raj; Harrison, Karl; Yeap, Ping Lin; Drew, Amelia; Silvester, Emma; Elwood, Patrick; Pullen, Hannah; Sultana, Andrew; Seah, Shannon YK; Wilson, Megan Z; Russell, Simon G; Benson, Richard J; Rimmer, Yvonne L; Jefferies, Sarah J; Taku, Nicolette; Gurnell, Mark; Powlson, Andrew S; Schönlieb, Carola-Bibiane; Cai, Xiaohao; Sutcliffe, Michael PF; Parker, Michael A
2017-01-01
The VoxTox research programme has applied expertise from the physical sciences to the problem of radiotherapy toxicity, bringing together expertise from engineering, mathematics, high energy physics (including the Large Hadron Collider), medical physics and radiation oncology. In our initial cohort of 109 men treated with curative radiotherapy for prostate cancer, daily image guidance computed tomography (CT) scans have been used to calculate delivered dose to the rectum, as distinct from planned dose, using an automated approach. Clinical toxicity data have been collected, allowing us to address the hypothesis that delivered dose provides a better predictor of toxicity than planned dose. PMID:29177202
Anharmonic effects in simple physical models: introducing undergraduates to nonlinearity
NASA Astrophysics Data System (ADS)
Christian, J. M.
2017-09-01
Given the pervasive character of nonlinearity throughout the physical universe, a case is made for introducing undergraduate students to its consequences and signatures earlier rather than later. The dynamics of two well-known systems—a spring and a pendulum—are reviewed when the standard textbook linearising assumptions are relaxed. Some qualitative effects of nonlinearity can be anticipated from symmetry (e.g., inspection of potential energy functions), and further physical insight gained by applying a simple successive-approximation method that might be taught in parallel with courses on classical mechanics, ordinary differential equations, and computational physics. We conclude with a survey of how these ideas have been deployed on programmes at a UK university.
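One concrete example of the successive-approximation idea mentioned above is the amplitude-dependent period of the pendulum. The sketch below compares the small-angle period, the leading anharmonic correction, and a direct numerical integration; it is a generic classroom illustration with arbitrary parameters, not code from the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Full sin(theta) pendulum vs. the linearised small-angle result.
# Generic classroom illustration; g and L are arbitrary choices.
g, L = 9.81, 1.0
T_linear = 2 * np.pi * np.sqrt(L / g)          # small-angle period

def period_numeric(theta0):
    """Release from rest at theta0; the angular velocity first crosses zero
    from below at half a period, so the period is twice that event time."""
    rhs = lambda t, y: [y[1], -(g / L) * np.sin(y[0])]
    upward = lambda t, y: y[1]
    upward.direction = 1
    upward.terminal = True
    sol = solve_ivp(rhs, [0.0, 20 * T_linear], [theta0, 0.0],
                    events=upward, rtol=1e-10, atol=1e-12, max_step=0.01)
    return 2 * sol.t_events[0][0]

for deg in (10, 45, 90, 150):
    theta0 = np.deg2rad(deg)
    T_num = period_numeric(theta0)
    T_first_order = T_linear * (1 + theta0**2 / 16)   # leading anharmonic term
    print(f"{deg:4d} deg: numeric {T_num:.4f} s, "
          f"first-order {T_first_order:.4f} s, linear {T_linear:.4f} s")
```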
Burnet, Neil G; Scaife, Jessica E; Romanchikova, Marina; Thomas, Simon J; Bates, Amy M; Wong, Emma; Noble, David J; Shelley, Leila Ea; Bond, Simon J; Forman, Julia R; Hoole, Andrew Cf; Barnett, Gillian C; Brochu, Frederic M; Simmons, Michael Pd; Jena, Raj; Harrison, Karl; Yeap, Ping Lin; Drew, Amelia; Silvester, Emma; Elwood, Patrick; Pullen, Hannah; Sultana, Andrew; Seah, Shannon Yk; Wilson, Megan Z; Russell, Simon G; Benson, Richard J; Rimmer, Yvonne L; Jefferies, Sarah J; Taku, Nicolette; Gurnell, Mark; Powlson, Andrew S; Schönlieb, Carola-Bibiane; Cai, Xiaohao; Sutcliffe, Michael Pf; Parker, Michael A
2017-06-01
The VoxTox research programme has applied expertise from the physical sciences to the problem of radiotherapy toxicity, bringing together expertise from engineering, mathematics, high energy physics (including the Large Hadron Collider), medical physics and radiation oncology. In our initial cohort of 109 men treated with curative radiotherapy for prostate cancer, daily image guidance computed tomography (CT) scans have been used to calculate delivered dose to the rectum, as distinct from planned dose, using an automated approach. Clinical toxicity data have been collected, allowing us to address the hypothesis that delivered dose provides a better predictor of toxicity than planned dose.
Publications of LASL research, 1974
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerr, A.K.
1975-05-01
This bibliography includes Los Alamos Scientific Laboratory reports, papers released as non-Los Alamos reports, journal articles, books, chapters of books, conference papers (whether published separately or as part of conference proceedings issued as books or reports), papers published in congressional hearings, theses, and U. S. patents. Publications by LASL authors which are not records of Laboratory-sponsored work are included when the Library becomes aware of them. The entries are arranged in sections by broad subject categories; within each section they are alphabetical by title. The following subject categories are included: aerospace studies; analytical technology; astrophysics; atomic and molecular physics, equation of state, opacity; biology and medicine; chemical dynamics and kinetics; chemistry; cryogenics; crystallography; CTR and plasma studies; earth science and engineering; energy (non-nuclear); engineering and equipment; EPR, ESR, NMR studies; explosives and detonations; fission physics; health and safety; hydrodynamics and radiation transport; instruments; lasers; mathematics and computers; medium-energy physics; metallurgy and ceramics technology; neutronic and criticality studies; nuclear physics; nuclear safeguards; physics; reactor technology; solid state science; and miscellaneous (including Project Rover). Author, numerical and KWIC indexes are included. (RWR)
Dark energy and modified gravity in the Effective Field Theory of Large-Scale Structure
NASA Astrophysics Data System (ADS)
Cusin, Giulia; Lewandowski, Matthew; Vernizzi, Filippo
2018-04-01
We develop an approach to compute observables beyond the linear regime of dark matter perturbations for general dark energy and modified gravity models. We do so by combining the Effective Field Theory of Dark Energy and Effective Field Theory of Large-Scale Structure approaches. In particular, we parametrize the linear and nonlinear effects of dark energy on dark matter clustering in terms of the Lagrangian terms introduced in a companion paper [1], focusing on Horndeski theories and assuming the quasi-static approximation. The Euler equation for dark matter is sourced, via the Newtonian potential, by new nonlinear vertices due to modified gravity and, as in the pure dark matter case, by the effects of short-scale physics in the form of the divergence of an effective stress tensor. The effective fluid introduces a counterterm in the solution to the matter continuity and Euler equations, which allows a controlled expansion of clustering statistics on mildly nonlinear scales. We use this setup to compute the one-loop dark-matter power spectrum.
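Schematically, and only as a reminder of the generic structure (conventions and normalisations vary, and the additional modified-gravity vertices derived in the paper are not shown), the one-loop matter power spectrum in this framework takes the form

\[
P_{\mathrm{1\text{-}loop}}(k) \;=\; P_{11}(k) + P_{22}(k) + 2\,P_{13}(k) \;-\; 2\,c_{\mathrm{s}}^{2}\,k^{2}\,P_{11}(k),
\]

where \(P_{11}\) is the linear spectrum, \(P_{22}\) and \(P_{13}\) are the standard one-loop integrals, and the last term is the leading counterterm generated by the effective stress tensor mentioned above.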
Dynamic VMs placement for energy efficiency by PSO in cloud computing
NASA Astrophysics Data System (ADS)
Dashti, Seyed Ebrahim; Rahmani, Amir Masoud
2016-03-01
Recently, cloud computing has been growing fast and helps to realise other high technologies. In this paper, we propose a hierarchical architecture to satisfy both providers' and consumers' requirements in these technologies. We design a new service in the PaaS layer for scheduling consumer tasks. From the providers' perspective, incompatibility between the specifications of physical machines and user requests in the cloud leads to problems such as the energy-performance trade-off and large power consumption, so that profits are decreased. To guarantee the quality of service of users' tasks and to reduce energy consumption, we propose a modified Particle Swarm Optimisation to reallocate migrated virtual machines on overloaded hosts. We also dynamically consolidate under-loaded hosts, which provides power savings. Simulation results in CloudSim demonstrate that, when the simulation conditions are close to the real environment, our method is able to save as much as 14% more energy, while the number of migrations and the simulation time are significantly reduced compared with previous works.
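A minimal sketch of how a particle swarm can search VM-to-host assignments for a lower-power placement is shown below. The linear power model, host capacities, VM demands, and the continuous-to-discrete decoding are simplifications invented for illustration; this is not the modified PSO of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: CPU demand of 12 VMs and capacity of 4 hosts (arbitrary units).
vm_cpu   = rng.uniform(0.1, 0.4, size=12)
host_cap = np.full(4, 1.0)
P_IDLE, P_MAX = 70.0, 250.0          # simple linear host power model [W]

def power(assign):
    """Total power of a VM->host assignment, with a penalty for overload."""
    load = np.zeros(len(host_cap))
    for vm, h in enumerate(assign):
        load[h] += vm_cpu[vm]
    util = load / host_cap
    active = util > 0                 # idle power only for hosts that run VMs
    p = np.sum(P_IDLE * active + (P_MAX - P_IDLE) * np.minimum(util, 1.0))
    return p + 1e4 * np.sum(np.maximum(util - 1.0, 0.0))   # overload penalty

def decode(x):
    """Map continuous particle positions to discrete host indices."""
    return np.clip(np.floor(x), 0, len(host_cap) - 1).astype(int)

# Plain global-best PSO over continuous positions in [0, n_hosts).
n_part, n_dim, iters = 30, len(vm_cpu), 200
pos = rng.uniform(0, len(host_cap), (n_part, n_dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([power(decode(p)) for p in pos])
g = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0, len(host_cap) - 1e-9)
    f = np.array([power(decode(p)) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[np.argmin(pbest_f)].copy()

print("best placement:", decode(g), " power ≈", round(power(decode(g)), 1), "W")
```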
Calculation of protein-ligand binding affinities.
Gilson, Michael K; Zhou, Huan-Xiang
2007-01-01
Accurate methods of computing the affinity of a small molecule with a protein are needed to speed the discovery of new medications and biological probes. This paper reviews physics-based models of binding, beginning with a summary of the changes in potential energy, solvation energy, and configurational entropy that influence affinity, and a theoretical overview to frame the discussion of specific computational approaches. Important advances are reported in modeling protein-ligand energetics, such as the incorporation of electronic polarization and the use of quantum mechanical methods. Recent calculations suggest that changes in configurational entropy strongly oppose binding and must be included if accurate affinities are to be obtained. The linear interaction energy (LIE) and molecular mechanics Poisson-Boltzmann surface area (MM-PBSA) methods are analyzed, as are free energy pathway methods, which show promise and may be ready for more extensive testing. Ultimately, major improvements in modeling accuracy will likely require advances on multiple fronts, as well as continued validation against experiment.
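For concreteness, the linear interaction energy (LIE) estimate discussed in the review combines average ligand-surroundings interaction energies from the bound and free simulations. The sketch below evaluates that standard expression; the coefficients and the average energies are placeholders, since in practice they come from fitted parametrisations and MD averages.

```python
# Standard LIE estimate of the binding free energy (kcal/mol). The alpha/beta
# coefficients and the average interaction energies are placeholders; in real
# applications they come from fitted parametrisations and MD sampling.
alpha, beta, gamma = 0.18, 0.50, 0.0

# <ligand-surroundings interaction energy>: bound complex vs. free in solvent
vdw_bound,  vdw_free  = -45.0, -30.0     # van der Waals averages (placeholder)
elec_bound, elec_free = -60.0, -52.0     # electrostatic averages (placeholder)

dG_bind = (alpha * (vdw_bound - vdw_free)
           + beta * (elec_bound - elec_free)
           + gamma)
print(f"LIE estimate: dG_bind ≈ {dG_bind:.1f} kcal/mol")
```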
Parallel and Portable Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
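The k-eigenvalue referred to above is the dominant eigenvalue of the fission-production operator acting through the inverse of the transport (loss) operator. A deterministic toy version of the power iteration that Monte Carlo codes emulate generation by generation is sketched below; the two-group cross sections are invented and the sketch has nothing to do with the MC++ implementation itself.

```python
import numpy as np

# Toy two-group, infinite-medium k-eigenvalue problem solved by power
# iteration: L * phi = (1/k) * F * phi. All cross sections are invented.
L = np.array([[0.120, 0.000],     # fast-group removal, no up-scatter
              [-0.040, 0.100]])   # down-scatter feeds the thermal group
F = np.array([[0.008, 0.280],     # nu*Sigma_f, all fission neutrons born fast
              [0.000, 0.000]])

phi = np.ones(2)
k = 1.0
for it in range(200):
    src = F @ phi
    phi_new = np.linalg.solve(L, src / k)
    k_new = k * (F @ phi_new).sum() / src.sum()
    converged = abs(k_new - k) < 1e-10
    k, phi = k_new, phi_new / np.linalg.norm(phi_new)
    if converged:
        break

print(f"k-effective ≈ {k:.5f} after {it + 1} power iterations")
```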
ASCR Workshop on Quantum Computing for Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward
This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.
Records for conversion of laser energy to nuclear energy in exploding nanostructures
NASA Astrophysics Data System (ADS)
Jortner, Joshua; Last, Isidore
2017-09-01
Table-top nuclear fusion reactions in the chemical physics laboratory can be driven by high-energy dynamics of Coulomb exploding, multicharged, deuterium containing nanostructures generated by ultraintense, femtosecond, near-infrared laser pulses. Theoretical-computational studies of table-top laser-driven nuclear fusion of high-energy (up to 15 MeV) deuterons with 7Li, 6Li and D nuclei demonstrate the attainment of high fusion yields within a source-target reaction design, which constitutes the highest table-top fusion efficiencies obtained up to date. The conversion efficiency of laser energy to nuclear energy (0.1-1.0%) for table-top fusion is comparable to that for DT fusion currently accomplished for 'big science' inertial fusion setups.
Liu, Ren; Srivastava, Anurag K.; Bakken, David E.; ...
2017-08-17
Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within limits. A Remedial Action Scheme (RAS) offers a quick control action mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE-118 bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ren; Srivastava, Anurag K.; Bakken, David E.
Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within limits. A Remedial Action Scheme (RAS) offers a quick control action mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE-118 bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.
NASA Astrophysics Data System (ADS)
Cheng, Fuqiang; Hong, Yanji; Li, Qian; Wen, Ming
2011-11-01
Laser thrusters with a single nozzle, e.g. parabolic or conical, fail to confine the high-pressure flow field effectively, resulting in poor propulsive performance. Under air-breathing mode conditions, parabolic thruster models with an elongated cylindrical nozzle were studied numerically by building a physical computation model. Initially, to verify the computation model, the influence of the cylinder length on the momentum coupling coefficient was computed and compared with experiments, showing good agreement. A model of diameter 20 mm and cylindrical length 80 mm obtains about 627.7 N/MW at a single-pulse energy density of 1.5 J/cm2. Then, the influence of the expanding angle of the parabolic nozzle on propulsion performance was obtained for different laser pulse energies, and the evolution of the flow field was analyzed. The results show that, as the expanding angle increases, the momentum coupling coefficient increases remarkably at first and then descends relatively slowly after reaching a peak value; moreover, the peak position stays around 33° with little variation as the laser energy changes.
Heterogeneous scalable framework for multiphase flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, Karla Vanessa
2013-09-01
Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for the current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture where the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications to interact directly with Trilinos to support memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.
One dimensional heavy ion beam transport: Energy independent model. M.S. Thesis
NASA Technical Reports Server (NTRS)
Farhat, Hamidullah
1990-01-01
Attempts are made to model the transport problem for heavy ion beams in various targets, employing the current level of understanding of the physics of high-charge and energy (HZE) particle interactions with matter. An energy-independent transport model, with the most simplified assumptions and proper parameters, is presented. The first and essential assumption in this case (energy-independent transport) is the high-energy characterization of the incident beam. The energy-independent equation is solved and applied to high-energy neon (Ne-20) and iron (Fe-56) beams in water. The analytical solution is given and compared to a numerical solution to determine the accuracy of the model. The lower limit energy for neon and iron to qualify as high-energy beams is calculated from the Barkas and Burger theory using the LBLFRG computer program. The calculated values in the density range of interest (50 g/sq cm of water) are 833.43 MeV/nucleon for neon and 1597.68 MeV/nucleon for iron. The analytical solution of the energy-independent transport equation gives the fluxes of the different collision terms. The fluxes of the individual collision terms are given, and the total fluxes are shown in graphs for different thicknesses of water. The flux values are calculated with the ANASTP computer code.
A convolutional neural network neutrino event classifier
Aurisano, A.; Radovic, A.; Rocco, D.; ...
2016-09-01
Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.
A convolutional neural network neutrino event classifier
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aurisano, A.; Radovic, A.; Rocco, D.
Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.
Perspective: Machine learning potentials for atomistic simulations
NASA Astrophysics Data System (ADS)
Behler, Jörg
2016-11-01
Nowadays, computer simulations have become a standard tool in essentially all fields of chemistry, condensed matter physics, and materials science. In order to keep up with state-of-the-art experiments and the ever growing complexity of the investigated problems, there is a constantly increasing need for simulations of more realistic, i.e., larger, model systems with improved accuracy. In many cases, the availability of sufficiently efficient interatomic potentials providing reliable energies and forces has become a serious bottleneck for performing these simulations. To address this problem, currently a paradigm change is taking place in the development of interatomic potentials. Since the early days of computer simulations simplified potentials have been derived using physical approximations whenever the direct application of electronic structure methods has been too demanding. Recent advances in machine learning (ML) now offer an alternative approach for the representation of potential-energy surfaces by fitting large data sets from electronic structure calculations. In this perspective, the central ideas underlying these ML potentials, solved problems and remaining challenges are reviewed along with a discussion of their current applicability and limitations.
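A minimal sketch of the fitting idea reviewed here: reference energies from an electronic-structure-like source are regressed on a structural descriptor and the fitted model is used to predict new configurations. The perspective concerns neural-network and related potentials trained on large ab initio data sets; the kernel-ridge model, Lennard-Jones "reference" energy, and single bond-length descriptor below are illustrative stand-ins (and assume scikit-learn is available), not the methods reviewed.

```python
# Toy ML-potential workflow: generate reference energies, fit a regression
# model on a descriptor, and predict energies for unseen geometries.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

def reference_energy(r):
    """Toy 'ab initio' energy: a Lennard-Jones dimer as a stand-in."""
    return 4.0 * (r**-12 - r**-6)

# Training data: bond lengths sampled around the equilibrium distance
r_train = rng.uniform(0.9, 2.5, size=200)
X = r_train.reshape(-1, 1)          # descriptor = bond length
y = reference_energy(r_train)

model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=10.0).fit(X, y)

r_test = np.linspace(1.0, 2.0, 5).reshape(-1, 1)
print(np.c_[r_test, model.predict(r_test), reference_energy(r_test.ravel())])
```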
Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin
2016-06-27
Power quality analysis issues, especially the measurement of harmonics and interharmonics in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has placed extra demands on distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads, whose information is crucial to subsequent analysis and control. This paper gives a detailed description of the power quality analysis framework in a networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with an adaptive linear neuron network. The experiments show that the proposed method is time-efficient and achieves better accuracy on simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing deeper insight into the (inter)harmonic sources or even the whole system.
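The "three DFT samples" refinement mentioned above can be realized in several ways; one common choice is parabolic interpolation of the magnitude spectrum around the coarse peak bin. The sketch below illustrates that idea on a synthetic interharmonic tone; the sampling rate, DFT length and window are our assumptions and may differ from the authors' estimator.

```python
# Estimate a tone's frequency to sub-bin resolution from three DFT samples
# around the spectral peak (parabolic interpolation of magnitudes).

import numpy as np

fs = 3200.0                      # sampling rate, Hz (assumed)
n = 256                          # DFT length (assumed)
f_true = 152.3                   # an interharmonic tone, Hz
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_true * t) + 0.01 * np.random.default_rng(1).normal(size=n)

spec = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(np.argmax(spec[1:-1])) + 1            # coarse peak bin (neighbors guaranteed)

# Parabolic (three-sample) interpolation around the peak
ym, y0, yp = spec[k - 1], spec[k], spec[k + 1]
delta = 0.5 * (ym - yp) / (ym - 2 * y0 + yp)
f_est = (k + delta) * fs / n

print(f"coarse bin: {k * fs / n:.2f} Hz, refined: {f_est:.2f} Hz, true: {f_true} Hz")
```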
Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin
2016-01-01
Power quality analysis issues, especially the measurement of harmonics and interharmonics in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has placed extra demands on distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads, whose information is crucial to subsequent analysis and control. This paper gives a detailed description of the power quality analysis framework in a networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with an adaptive linear neuron network. The experiments show that the proposed method is time-efficient and achieves better accuracy on simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing deeper insight into the (inter)harmonic sources or even the whole system. PMID:27355946
Mechanical design of translocating motor proteins.
Hwang, Wonmuk; Lang, Matthew J
2009-01-01
Translocating motors generate force and move along a biofilament track to achieve diverse functions including gene transcription, translation, intracellular cargo transport, protein degradation, and muscle contraction. Advances in single molecule manipulation experiments, structural biology, and computational analysis are making it possible to consider common mechanical design principles of these diverse families of motors. Here, we propose a mechanical parts list that includes a track, energy conversion machinery, and moving parts. Energy is supplied not just by burning of a fuel molecule; there are other sources or sinks of free energy, from binding and release of a fuel or products, or similarly between the motor and the track. Dynamic conformational changes of the motor domain can be regarded as controlling the flow of free energy to and from the surrounding heat reservoir. Multiple motor domains are organized in distinct ways to achieve motility under imposed physical constraints. Transcending amino acid sequence and structure, physically and functionally similar mechanical parts may have evolved as nature's design strategy for these molecular engines.
Mechanical Design of Translocating Motor Proteins
Lang, Matthew J.
2013-01-01
Translocating motors generate force and move along a biofilament track to achieve diverse functions including gene transcription, translation, intracellular cargo transport, protein degradation, and muscle contraction. Advances in single molecule manipulation experiments, structural biology, and computational analysis are making it possible to consider common mechanical design principles of these diverse families of motors. Here, we propose a mechanical parts list that includes a track, energy conversion machinery, and moving parts. Energy is supplied not just by burning of a fuel molecule; there are other sources or sinks of free energy, from binding and release of a fuel or products, or similarly between the motor and the track. Dynamic conformational changes of the motor domain can be regarded as controlling the flow of free energy to and from the surrounding heat reservoir. Multiple motor domains are organized in distinct ways to achieve motility under imposed physical constraints. Transcending amino acid sequence and structure, physically and functionally similar mechanical parts may have evolved as nature’s design strategy for these molecular engines. PMID:19452133
Well-tempered metadynamics: a smoothly-converging and tunable free-energy method
NASA Astrophysics Data System (ADS)
Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele
2008-03-01
We present [1] a method for determining the free energy dependence on a selected number of order parameters using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of the alanine dipeptide free energy landscape. [1] A. Barducci, G. Bussi and M. Parrinello, Phys. Rev. Lett., accepted (2007).
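To make the adaptive-bias idea concrete, here is a minimal one-dimensional sketch of the well-tempered deposition rule: Gaussian hills are added at the current value of the order parameter with a height scaled by exp(-V_bias/(kB dT)), which interpolates between ordinary metadynamics and unbiased sampling. The double-well model, Langevin integrator and all parameters are illustrative assumptions, not the authors' setup.

```python
# 1-D toy model of well-tempered bias deposition along an order parameter s.

import numpy as np

rng = np.random.default_rng(0)
kT, dT = 1.0, 5.0                  # thermal energy and bias "temperature"
w0, sigma = 0.1, 0.2               # initial hill height and width
grid = np.linspace(-2, 2, 401)
bias = np.zeros_like(grid)

s, step = -1.0, 0.05
for it in range(20000):
    # overdamped Langevin move on the double well (s^2 - 1)^2 plus the bias
    grad_v = 4.0 * s * (s**2 - 1.0)
    grad_b = np.interp(s, grid, np.gradient(bias, grid))
    s += -step * (grad_v + grad_b) + np.sqrt(2 * step * kT) * rng.normal()
    if it % 50 == 0:               # deposit a hill every 50 steps
        v_here = np.interp(s, grid, bias)
        height = w0 * np.exp(-v_here / dT)        # well-tempered scaling
        bias += height * np.exp(-(grid - s) ** 2 / (2 * sigma**2))

# Free-energy estimate: F(s) ~ -(kT + dT)/dT * V_bias(s), up to a constant
f_est = -(kT + dT) / dT * bias
print("estimated barrier at s=0:", round(f_est[200] - f_est.min(), 3))
```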
Impact of detector simulation in particle physics collider experiments
NASA Astrophysics Data System (ADS)
Daniel Elvira, V.
2017-06-01
Through the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, taxing heavily the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand of computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion on the potential solutions that are being considered, based on leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.
Energy and helicity of magnetic torus knots and braids
NASA Astrophysics Data System (ADS)
Oberti, Chiara; Ricca, Renzo L.
2018-02-01
By considering steady magnetic fields in the shape of torus knots and unknots in ideal magnetohydrodynamics, we compute some fundamental geometric and physical properties to provide estimates for magnetic energy and helicity. By making use of an appropriate parametrization, we show that knots with dominant toroidal coils that are a good model for solar coronal loops have negligible total torsion contribution to magnetic helicity while writhing number provides a good proxy. Hence, by the algebraic definition of writhe based on crossing numbers, we show that the estimated values of writhe based on image analysis provide reliable information for the exact values of helicity. We also show that magnetic energy is linearly related to helicity, and the effect of the confinement of magnetic field can be expressed in terms of geometric information. These results can find useful application in solar and plasma physics, where braided structures are often present.
Department of Energy - Office of Science Early Career Research Program
NASA Astrophysics Data System (ADS)
Horwitz, James
The Department of Energy (DOE) Office of Science Early Career Program began in FY 2010. The program objectives are to support the development of individual research programs of outstanding scientists early in their careers and to stimulate research careers in the disciplines supported by the DOE Office of Science. Both university and DOE national laboratory early career scientists are eligible. Applicants must be within 10 years of receiving their PhD. For universities, the PI must be an untenured Assistant Professor or Associate Professor on the tenure track. DOE laboratory applicants must be full-time, non-postdoctoral employees. University awards are at least $150,000 per year for 5 years for summer salary and expenses. DOE laboratory awards are at least $500,000 per year for 5 years for full annual salary and expenses. The Program is managed by the Office of the Deputy Director for Science Programs and supports research in the following Offices: Advanced Scientific Computing Research, Biological and Environmental Research, Basic Energy Sciences, Fusion Energy Sciences, High Energy Physics, and Nuclear Physics. A new Funding Opportunity Announcement is issued each year with a detailed description of the topical areas encouraged for early career proposals. Preproposals are required. This talk will introduce the DOE Office of Science Early Career Research Program and describe opportunities for research relevant to the condensed matter physics community. http://science.energy.gov/early-career/
Assessment of physical activity of the human body considering the thermodynamic system.
Hochstein, Stefan; Rauschenberger, Philipp; Weigand, Bernhard; Siebert, Tobias; Schmitt, Syn; Schlicht, Wolfgang; Převorovská, Světlana; Maršík, František
2016-01-01
Correctly dosed physical activity is the basis of a vital and healthy life, but the measurement of physical activity is still rather empirical, resulting in limited individual and custom activity recommendations. Very accurate three-dimensional models of the cardiovascular system exist; however, they require the numerical solution of the Navier-Stokes equations for the flow in blood vessels. These models are suitable for research on cardiac diseases, but are computationally very expensive. Direct measurements are expensive and often not applicable outside laboratories. This paper offers a new approach to assessing physical activity that treats the body as a thermodynamic system and uses its leading quantity, entropy production, as a compromise between computation time and precise prediction of pressure, volume, and flow variables in blood vessels. Based on a simplified (one-dimensional) model of the cardiovascular system of the human body, we develop and evaluate a setup that calculates the entropy production of the heart to determine the intensity of human physical activity more precisely than previous parameters, e.g. frequently used energy considerations. The knowledge resulting from precise real-time physical activity assessment provides the basis for an intelligent human-technology interaction, allowing the degree of physical activity to be steadily adjusted according to the actual individual performance level and thus improving training and activity recommendations.
Online production validation in a HEP environment
NASA Astrophysics Data System (ADS)
Harenberg, T.; Kuhl, T.; Lang, N.; Mättig, P.; Sandhoff, M.; Schwanenberger, C.; Volkmer, F.
2017-03-01
In high energy physics (HEP) event simulations, petabytes of data are processed and stored, requiring millions of CPU-years. This enormous demand for computing resources is handled by centers distributed worldwide, which form part of the LHC computing grid. The consumption of such a large amount of resources demands efficient production of simulations and early detection of potential errors. In this article we present a new monitoring framework for grid environments, which polls a measure of data quality during job execution. This online monitoring facilitates the early detection of configuration errors (especially in simulation parameters), and may thus contribute to significant savings in computing resources.
Graphics Processing Units for HEP trigger systems
NASA Astrophysics Data System (ADS)
Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.
2016-07-01
General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We discuss the use of online parallel computing on GPUs for a synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.
Funnel metadynamics as accurate binding free-energy method
Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele
2013-01-01
A detailed description of the events governing ligand/protein interaction and an accurate estimation of the drug affinity to its target are of great help in speeding up drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and of its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol in two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation has been found to be the lowest free-energy pose, and the computed protein–ligand binding free energy is in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results achieved at an affordable computational cost make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and material science. PMID:23553839
NASA Astrophysics Data System (ADS)
Matzel, E.; Mellors, R. J.; Magana-Zook, S. A.
2016-12-01
Seismic interferometry is based on the observation that the Earth's background wavefield includes coherent energy, which can be recovered by observing over long time periods, allowing the incoherent energy to cancel out. The cross correlation of the energy recorded at a pair of stations results in an estimate of the Green's Function (GF) and is equivalent to the record of a simple source located at one of the stations as recorded by the other. This allows high resolution imagery beneath dense seismic networks even in areas of low seismicity. The power of these inter-station techniques increases rapidly as the number of seismometers in a network increases. For large networks the number of correlations computed can run into the millions and this becomes a "big-data" problem where data-management dominates the efficiency of the computations. In this study, we use several methods of seismic interferometry to obtain highly detailed images at the site of the Source Physics Experiment (SPE). The objective of SPE is to obtain a physics-based understanding of how seismic waves are created at and scattered near the source. In 2015, a temporary deployment of 1,000 closely spaced geophones was added to the main network of instruments at the site. We focus on three interferometric techniques: Shot interferometry (SI) uses the SPE shots as rich sources of high frequency, high signal energy. Coda interferometry (CI) isolates the energy from the scattered wavefield of distant earthquakes. Ambient noise correlation (ANC) uses the energy of the ambient background field. In each case, the data recorded at one seismometer are correlated with the data recorded at another to obtain an estimate of the GF between the two. The large network of mixed geophone and broadband instruments at the SPE allows us to calculate over 500,000 GFs, which we use to characterize the site and measure the localized wavefield. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344
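The core correlate-and-stack step described above can be sketched in a few lines: windowed records from a station pair are cross-correlated and summed so that coherent energy builds up at the inter-station travel time. The synthetic "delayed noise" data, sampling rate and window length below are illustrative assumptions, not SPE parameters.

```python
# Ambient-noise style cross-correlation of a station pair, stacked over windows.

import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                     # samples per second (assumed)
win = 600                      # window length in samples (assumed)
lag_s = 200                    # toy A->B travel time, in samples
n_windows = 200

stack = np.zeros(2 * win - 1)
for _ in range(n_windows):
    field = rng.normal(size=win + lag_s)
    sta_a = field[lag_s:lag_s + win]     # record at station A
    sta_b = field[:win]                  # station B sees the same field, delayed by lag_s
    sta_a = (sta_a - sta_a.mean()) / sta_a.std()
    sta_b = (sta_b - sta_b.mean()) / sta_b.std()
    stack += np.correlate(sta_b, sta_a, mode="full")   # cross-correlate and stack

lags = np.arange(-(win - 1), win) / fs
print("peak lag (s):", lags[np.argmax(stack)], "expected:", lag_s / fs)
```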
Improving Design Efficiency for Large-Scale Heterogeneous Circuits
NASA Astrophysics Data System (ADS)
Gregerson, Anthony
Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
Energy Frontier Research With ATLAS: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, John; Black, Kevin; Ahlen, Steve
2016-06-14
The Boston University (BU) group is playing key roles across the ATLAS experiment: in detector operations, the online trigger, the upgrade, computing, and physics analysis. Our team has been critical to the maintenance and operations of the muon system since its installation. During Run 1 we led the muon trigger group and that responsibility continues into Run 2. BU maintains and operates the ATLAS Northeast Tier 2 computing center. We are actively engaged in the analysis of ATLAS data from Run 1 and Run 2. Physics analyses we have contributed to include Standard Model measurements (W and Z cross sections, t\bar{t} differential cross sections, WWW^* production), evidence for the Higgs decaying to \tau^+\tau^-, and searches for new phenomena (technicolor, Z' and W', vector-like quarks, dark matter).
Bridging the Gap Between the iLEAPS and GEWEX Land-Surface Modeling Communities
NASA Technical Reports Server (NTRS)
Bonan, Gordon; Santanello, Joseph A., Jr.
2013-01-01
Models of Earth's weather and climate require fluxes of momentum, energy, and moisture across the land-atmosphere interface to solve the equations of atmospheric physics and dynamics. Just as atmospheric models can, and do, differ between weather and climate applications, mostly related to issues of scale, resolved or parameterised physics, and computational requirements, so too can the land models that provide the required surface fluxes differ between weather and climate models. Here, however, the issue is less one of scale-dependent parameterisations. Computational demands can influence other minor land model differences, especially with respect to initialisation, data assimilation, and forecast skill. However, the distinction among land models (and their development and application) is largely driven by the different science and research needs of the weather and climate communities.
Hu, Yu-Chen
2018-01-01
The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in the down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. For residential customers implementing DR, maintaining a balance between energy consumption cost and users' comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can further be deployed with edge computing. In contrast with cloud computing, edge computing (a method of optimizing cloud computing technologies by driving computing capabilities to the IoT edge of the Internet, and one of the emerging trends in engineering technology) addresses bandwidth-intensive content and latency-sensitive applications between sensors and central data centers through data analytics at or near the source of data. A previously proposed non-intrusive load-monitoring technique is utilized for automatic determination of the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows that the proposed residential consumer-centric load-scheduling method can re-shape the loads of home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power consumption of 13.97% is achieved. PMID:29702607
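A minimal sketch of the constrained-PSO scheduling loop described above: each particle encodes the hourly power drawn by one shiftable appliance, and the fitness combines energy cost under an hourly tariff with a penalty standing in for the comfort/energy-requirement constraint. The tariff, appliance limits, penalty weight and PSO coefficients are illustrative assumptions, not the paper's settings (which use real-time pricing with inclining block rates).

```python
# PSO-based appliance scheduling over a 24-hour horizon (toy setup).

import numpy as np

rng = np.random.default_rng(0)
hours = 24
hour = np.arange(hours)
price = 0.10 + 0.15 * (hour >= 17) * (hour <= 21)   # assumed peak tariff 17:00-21:00
p_max, e_req = 2.0, 6.0                              # power limit (kW), required energy (kWh)

def fitness(x):
    x = np.clip(x, 0.0, p_max)
    cost = np.sum(price * x)                         # energy cost
    comfort = 10.0 * abs(np.sum(x) - e_req)          # penalty for missing required energy
    return cost + comfort

n_particles, iters = 40, 300
pos = rng.uniform(0, p_max, size=(n_particles, hours))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, p_max)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("scheduled energy:", round(np.sum(gbest), 2), "kWh, cost:", round(np.sum(price * gbest), 3))
```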
Transformational electronics: a powerful way to revolutionize our information world
NASA Astrophysics Data System (ADS)
Rojas, Jhonathan P.; Torres Sevilla, Galo A.; Ghoneim, Mohamed T.; Hussain, Aftab M.; Ahmed, Sally M.; Nassar, Joanna M.; Bahabry, Rabab R.; Nour, Maha; Kutbee, Arwa T.; Byas, Ernesto; Al-Saif, Bidoor; Alamri, Amal M.; Hussain, Muhammad M.
2014-06-01
With the emergence of cloud computation, we are facing rising waves of big data. It is time to leverage this opportunity by increasing data usage both by man and machine. We need ultra-mobile computation with high data processing speed, ultra-large memory, energy efficiency and multi-functionality. Additionally, we have to deploy energy-efficient multi-functional 3D ICs for robust cyber-physical system establishment. To achieve such lofty goals we have to mimic the human brain, which is inarguably the world's most powerful and energy-efficient computer. The brain's cortex has a folded architecture to increase surface area in an ultra-compact space to contain its neurons and synapses. Therefore, it is imperative to overcome two integration challenges: (i) finding a low-cost 3D IC fabrication process and (ii) creating foldable substrates with ultra-large-scale integration of high-performance, energy-efficient electronics. Hence, we show a low-cost generic batch process based on trench-protect-peel-recycle to fabricate rigid and flexible 3D ICs as well as high-performance flexible electronics. As of today we have made every single component needed for a fully flexible computer, including non-planar state-of-the-art FinFETs. Additionally we have demonstrated various solid-state memory, movable MEMS devices, and energy harvesting and storage components. To show the versatility of our process, we have extended it towards other inorganic semiconductor substrates such as silicon germanium and III-V materials. Finally, we report the first fully flexible programmable silicon-based microprocessor towards foldable brain computation and a wirelessly programmable, stretchable and flexible thermal patch for pain management in smart bionics.
Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System
Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin
2016-01-01
Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, a secure and trustworthy energy supply requires real-time supervision and online power quality assessment. Harmonics measurement is necessary in power quality evaluation. However, under a large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which results from latencies in sensing or the communication process and introduces deviations in data fusion. This paper depicts a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive model with exogenous inputs (NARX) network to reorder the out-of-sequence measurement data. The NARX network learns the characteristics of the electrical harmonics from practical data rather than from kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameters with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical cyber-physical system testbed are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement in harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems. PMID:27548171
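The NARX-based retrodiction step can be sketched as a small regression exercise: the current value of a drifting harmonic amplitude is modeled from its own lagged values plus an exogenous input, and the fitted model re-estimates a sample that arrived out of sequence. The synthetic data, lag count and MLP regressor below are illustrative assumptions (and assume scikit-learn is available); the paper's network and training details may differ.

```python
# NARX-style regression: predict y_t from [y_{t-1..t-L}, u_t], then retrodict
# a late-arriving sample.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, lags = 2000, 5
u = np.sin(2 * np.pi * np.arange(n) / 200.0)          # exogenous input (e.g. load level)
y = np.zeros(n)
for t in range(1, n):                                  # synthetic drifting amplitude
    y[t] = 0.95 * y[t - 1] + 0.3 * u[t] + 0.02 * rng.normal()

# Build NARX-style regressors: columns are y_{t-1}..y_{t-lags} plus u_t
X = np.column_stack([y[lags - k - 1:n - k - 1] for k in range(lags)] + [u[lags:]])
target = y[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, target)

t_missing = 1500                                       # sample that arrived out of sequence
x_query = np.r_[y[t_missing - 1:t_missing - lags - 1:-1], u[t_missing]]
print("retrodicted:", model.predict([x_query])[0], "actual:", y[t_missing])
```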
A physics-motivated Centroidal Voronoi Particle domain decomposition method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
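The Lloyd iteration that drives the CVT part of the method can be sketched in a few lines: elements are assigned to their nearest generator, and each generator is moved to the centroid of its cluster, which monotonically lowers the CVT energy and yields compact subdomains. The 2-D random point cloud, number of subdomains and iteration count below are illustrative; the Voronoi-Particle relaxation and load-balancing stages of the actual method are not reproduced here.

```python
# Lloyd iterations producing a centroidal Voronoi-like partition of elements.

import numpy as np

rng = np.random.default_rng(0)
elements = rng.random((5000, 2))           # computational elements to partition
k = 8                                      # number of subdomains / ranks (assumed)
generators = elements[rng.choice(len(elements), k, replace=False)].copy()

for _ in range(50):
    # assignment step: nearest generator for every element
    d2 = ((elements[:, None, :] - generators[None, :, :]) ** 2).sum(axis=-1)
    labels = d2.argmin(axis=1)
    # update step: move each generator to the centroid of its subdomain
    for j in range(k):
        members = elements[labels == j]
        if len(members):
            generators[j] = members.mean(axis=0)

# Final assignment and CVT energy (sum of squared distances to generators)
d2 = ((elements[:, None, :] - generators[None, :, :]) ** 2).sum(axis=-1)
labels = d2.argmin(axis=1)
cvt_energy = d2[np.arange(len(elements)), labels].sum()
print("CVT energy:", round(cvt_energy, 3), "subdomain sizes:", np.bincount(labels, minlength=k))
```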
A physics-motivated Centroidal Voronoi Particle domain decomposition method
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-04-01
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
When does a physical system compute?
Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv
2014-09-08
Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.
When does a physical system compute?
Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv
2014-01-01
Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245
A convex optimization method for self-organization in dynamic (FSO/RF) wireless networks
NASA Astrophysics Data System (ADS)
Llorca, Jaime; Davis, Christopher C.; Milner, Stuart D.
2008-08-01
Next generation communication networks are becoming increasingly complex systems. Previously, we presented a novel physics-based approach to model dynamic wireless networks as physical systems which react to local forces exerted on network nodes. We showed that under clear atmospheric conditions the network communication energy can be modeled as the potential energy of an analogous spring system, and presented a distributed mobility control algorithm in which nodes react to local forces driving the network to energy-minimizing configurations. This paper extends our previous work by including the effects of atmospheric attenuation and transmitted power constraints in the optimization problem. We show how our new formulation still results in a convex energy minimization problem. Accordingly, an updated force-driven mobility control algorithm is presented. Forces on mobile backbone nodes are computed as the negative gradient of the new energy function. Results show that, in the presence of atmospheric obscuration, stronger forces are exerted on network nodes, making them move closer to each other and avoiding loss of connectivity. We show results in terms of network coverage and backbone connectivity and compare the developed algorithms for different scenarios.
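A minimal sketch of the force-driven mobility idea: each link's communication energy is modeled as a convex, attenuation-weighted function of link length, and the mobile backbone nodes are moved along the negative gradient of the total energy, while coverage links to fixed terminals keep the network from collapsing to a point. The ring topology, attenuation coefficient, link-energy form and step size are illustrative assumptions, not the paper's model.

```python
# Force-driven mobility: gradient descent on a sum of per-link energies.

import numpy as np

rng = np.random.default_rng(0)
nodes = rng.random((6, 2)) * 5.0                    # mobile backbone node positions (km)
terminals = rng.random((6, 2)) * 5.0                # fixed terminals each backbone node covers
backbone = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]   # assumed ring topology
alpha, step = 0.1, 0.002                            # attenuation coefficient (1/km), step size

def d_energy(d):
    """Derivative of the per-link energy d^2 * exp(alpha * d) w.r.t. link length."""
    return (2.0 * d + alpha * d**2) * np.exp(alpha * d)

for _ in range(3000):
    force = np.zeros_like(nodes)
    for i, j in backbone:                            # backbone links: both endpoints mobile
        diff = nodes[i] - nodes[j]
        d = np.linalg.norm(diff) + 1e-12
        g = d_energy(d) * diff / d
        force[i] -= g
        force[j] += g
    for i in range(len(nodes)):                      # coverage links: one endpoint fixed
        diff = nodes[i] - terminals[i]
        d = np.linalg.norm(diff) + 1e-12
        force[i] -= d_energy(d) * diff / d
    nodes += step * force                            # nodes react to the local forces

energy = lambda d: d**2 * np.exp(alpha * d)
total = sum(energy(np.linalg.norm(nodes[i] - nodes[j])) for i, j in backbone) \
      + sum(energy(np.linalg.norm(nodes[i] - terminals[i])) for i in range(len(nodes)))
print("final configuration energy:", round(total, 3))
```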
Recent advances in QM/MM free energy calculations using reference potentials.
Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L
2015-05-01
Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically-based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting-edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.
Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko
2013-06-18
Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.
Report on the solar physics-plasma physics workshop
NASA Technical Reports Server (NTRS)
Sturrock, P. A.; Baum, P. J.; Beckers, J. M.; Newman, C. E.; Priest, E. R.; Rosenberg, H.; Smith, D. F.; Wentzel, D. G.
1976-01-01
The paper summarizes discussions held between solar physicists and plasma physicists on the interface between solar and plasma physics, with emphasis placed on the question of what laboratory experiments, or computer experiments, could be pursued to test proposed mechanisms involved in solar phenomena. Major areas discussed include nonthermal plasma on the sun, spectroscopic data needed in solar plasma diagnostics, types of magnetic field structures in the sun's atmosphere, the possibility of MHD phenomena involved in solar eruptive phenomena, the role of non-MHD instabilities in energy release in solar flares, particle acceleration in solar flares, shock waves in the sun's atmosphere, and mechanisms of radio emission from the sun.
Is the local linearity of space-time inherited from the linearity of probabilities?
NASA Astrophysics Data System (ADS)
Müller, Markus P.; Carrozza, Sylvain; Höhn, Philipp A.
2017-02-01
The appearance of linear spaces, describing physical quantities by vectors and tensors, is ubiquitous in all of physics, from classical mechanics to the modern notion of local Lorentz invariance. However, as natural as this seems to the physicist, most computer scientists would argue that something like a ‘local linear tangent space’ is not very typical and in fact a quite surprising property of any conceivable world or algorithm. In this paper, we take the perspective of the computer scientist seriously, and ask whether there could be any inherently information-theoretic reason to expect this notion of linearity to appear in physics. We give a series of simple arguments, spanning quantum information theory, group representation theory, and renormalization in quantum gravity, that supports a surprising thesis: namely, that the local linearity of space-time might ultimately be a consequence of the linearity of probabilities. While our arguments involve a fair amount of speculation, they have the virtue of being independent of any detailed assumptions on quantum gravity, and they are in harmony with several independent recent ideas on emergent space-time in high-energy physics.
Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code
NASA Astrophysics Data System (ADS)
Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.
2015-08-01
MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the on-line tool CRÈME-MC. MRED is based on C++ code from Geant4 with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct ionization and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.
Limits on fundamental limits to computation.
Markov, Igor L
2014-08-14
An indispensable part of our personal and working lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the past fifty years. Such Moore scaling now requires ever-increasing efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and increase our understanding of integrated-circuit scaling, here I review fundamental limits to computation in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, I recapitulate how some limits were circumvented, and compare loose and tight limits. Engineering difficulties encountered by emerging technologies may indicate yet unknown limits.
Efficient grid-based techniques for density functional theory
NASA Astrophysics Data System (ADS)
Rodriguez-Hernandez, Juan Ignacio
Understanding the chemical and physical properties of molecules and materials at a fundamental level often requires quantum-mechanical models for these substances' electronic structure. This type of many-body quantum mechanics calculation is computationally demanding, hindering its application to substances with more than a few hundred atoms. The overarching goal of much research in quantum chemistry, and the topic of this dissertation, is to develop more efficient computational algorithms for electronic structure calculations. In particular, this dissertation develops two new numerical integration techniques for computing molecular and atomic properties within conventional Kohn-Sham Density Functional Theory (KS-DFT) of molecular electronic structure. The first of these grid-based techniques is based on the transformed sparse grid construction. In this construction, a sparse grid is generated in the unit cube and then mapped to real space according to the pro-molecular density using the conditional distribution transformation. The transformed sparse grid was implemented in the program deMon2k, where it is used as the numerical integrator for the exchange-correlation energy and potential in the KS-DFT procedure. We tested our grid by computing ground state energies, equilibrium geometries, and atomization energies. The accuracy of these test calculations shows that our grid is more efficient than some previous integration methods: our grids use fewer points to obtain the same accuracy. The transformed sparse grids were also tested for integrating, interpolating and differentiating in different dimensions (n = 1, 2, 3, 6). The second technique is a grid-based method for computing atomic properties within QTAIM. It was also implemented in deMon2k. The performance of the method was tested by computing QTAIM atomic energies, charges, dipole moments, and quadrupole moments. For medium accuracy, our method is the fastest one we know of.
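A one-dimensional sketch of the grid-transformation idea: uniformly spaced points are pushed through the inverse cumulative distribution of a model "pro-molecular" density, so quadrature points cluster where the density, and hence a density-like integrand, is largest. The exponential two-atom model density, the LDA-exchange-like integrand and the point count are toy assumptions; the actual method uses sparse grids in several dimensions and the full conditional distribution transform.

```python
# Map uniform points through the inverse CDF of a model density and use them
# as an importance-weighted quadrature for a density-like integrand.

import numpy as np

# Pro-molecular model density: two exponential "atoms" on a line
centers, zeta = np.array([-1.5, 1.5]), 2.0
rho = lambda x: np.exp(-zeta * np.abs(x - centers[0])) + np.exp(-zeta * np.abs(x - centers[1]))

# Build the normalized CDF of rho on a fine reference grid
xf = np.linspace(-8, 8, 20001)
pdf = rho(xf)
cdf = np.cumsum(pdf)
cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])

# Map uniform quadrature points through the inverse CDF
u = (np.arange(64) + 0.5) / 64.0
x_q = np.interp(u, cdf, xf)

# Importance weights w = norm / (N * rho(x)) since points are distributed as rho/norm
norm = np.trapz(pdf, xf)
weights = norm / (len(u) * rho(x_q))
integrand = lambda x: rho(x) ** (4.0 / 3.0)        # LDA-exchange-like integrand
estimate = np.sum(weights * integrand(x_q))
reference = np.trapz(integrand(xf), xf)
print("transformed-grid estimate:", round(estimate, 5), "reference:", round(reference, 5))
```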
HEP Software Foundation Community White Paper Working Group - Detector Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apostolakis, J.
A working group on detector simulation was formed as part of the high-energy physics (HEP) Software Foundation's initiative to prepare a Community White Paper that describes the main software challenges and opportunities to be faced in the HEP field over the next decade. The working group met over a period of several months in order to review the current status of the Full and Fast simulation applications of HEP experiments and the improvements that will need to be made in order to meet the goals of future HEP experimental programmes. The scope of the topics covered includes the main components of a HEP simulation application, such as MC truth handling, geometry modeling, particle propagation in materials and fields, physics modeling of the interactions of particles with matter, the treatment of pileup and other backgrounds, as well as signal processing and digitisation. The resulting work programme described in this document focuses on the need to improve both the software performance and the physics of detector simulation. The goals are to increase the accuracy of the physics models and expand their applicability to future physics programmes, while achieving large factors in computing performance gains consistent with projections on available computing resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grulke, Eric; Stencel, John
2011-09-13
The KY DOE EPSCoR Program supports two research clusters. The Materials Cluster uses unique equipment and computational methods that involve research expertise at the University of Kentucky and University of Louisville. This team determines the physical, chemical and mechanical properties of nanostructured materials and examines the dominant mechanisms involved in the formation of new self-assembled nanostructures. State-of-the-art parallel computational methods and algorithms are used to overcome current limitations of processing that otherwise are restricted to small system sizes and short times. The team also focuses on developing and applying advanced microtechnology fabrication techniques and the application of microelectromechanical systems (MEMS) for creating new materials, novel microdevices, and integrated microsensors. The second research cluster concentrates on High Energy and Nuclear Physics. It connects research and educational activities at the University of Kentucky, Eastern Kentucky University and national DOE research laboratories. Its vision is to establish world-class research status dedicated to experimental and theoretical investigations in strong interaction physics. The research provides a forum, facilities, and support for scientists to interact and collaborate in subatomic physics research. The program enables increased student involvement in fundamental physics research through the establishment of graduate fellowships and collaborative work.
Nonequilibrium radiative hypersonic flow simulation
NASA Astrophysics Data System (ADS)
Shang, J. S.; Surzhikov, S. T.
2012-08-01
Nearly all the required scientific disciplines for computational hypersonic flow simulation have been developed on the framework of gas kinetic theory. However, when high-temperature physical phenomena occur beneath the molecular and atomic scales, the knowledge of quantum physics and quantum chemical-physics becomes essential. Therefore the most challenging topics in computational simulation can probably be identified as the chemical-physical models for a high-temperature gaseous medium. Thermal radiation is also associated with quantum transitions of molecular and electronic states. The radiative energy exchange is characterized by the mechanisms of emission, absorption, and scattering. In developing a simulation capability for nonequilibrium radiation, an efficient numerical procedure is equally important both for solving the radiative transfer equation and for generating the required optical data via the ab-initio approach. In computational simulation, the initial values and boundary conditions are paramount for physical fidelity. Precise information at the material interface of an ablating environment requires more than just a balance of the fluxes across the interface; the boundary deformation must also be considered. The foundation of this theoretical development shall be built on the eigenvalue structure of the governing equations, which can be described by Reynolds' transport theorem. Recent innovations for possible aerospace vehicle performance enhancement via an electromagnetic effect appear to be very attractive. The effectiveness of this mechanism depends strongly on the degree of ionization of the flow medium, the consecutive interactions of fluid dynamics and electrodynamics, as well as an externally applied magnetic field. Some verified research results in this area will be highlighted. An assessment of all these most recent advancements in nonequilibrium modeling of chemical kinetics, chemical-physics kinetics, ablation, radiative exchange, computational algorithms, and the aerodynamic-electromagnetic interaction is summarized and delineated. The critical basic research areas for physics-based hypersonic flow simulation should become self-evident through the present discussion. Nevertheless, intensive basic research efforts must be sustained in these areas for fundamental knowledge and future technology advancement.
Bacon, Dave; Flammia, Steven T
2009-09-18
The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.
Integration of Openstack cloud resources in BES III computing cluster
NASA Astrophysics Data System (ADS)
Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan
2017-10-01
Cloud computing provides a new technical means for data processing in high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and their usage is static. In order to make it simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.
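A minimal sketch of the queue-driven scheduling idea described above, assuming a simple target ratio of queued jobs to worker virtual machines; the names and thresholds here are illustrative placeholders, not the actual vpmanager or IHEPCloud API:

```python
TARGET_JOBS_PER_VM = 4          # assumed desired ratio of queued jobs to worker VMs
MIN_VMS, MAX_VMS = 2, 200       # assumed floor/ceiling on the virtual cluster size

def vms_wanted(pending_jobs: int) -> int:
    """Size the virtual cluster from the batch-queue backlog."""
    need = -(-pending_jobs // TARGET_JOBS_PER_VM)   # ceiling division
    return max(MIN_VMS, min(MAX_VMS, need))

def reconcile(pending_jobs: int, running_vms: int) -> int:
    """Positive: boot this many VMs on the cloud; negative: retire idle ones."""
    return vms_wanted(pending_jobs) - running_vms

for queued, running in [(0, 10), (37, 5), (900, 50)]:
    print(f"{queued} queued, {running} running -> adjust by {reconcile(queued, running)}")
```

A real scheduler would run this reconciliation in a polling loop against the Torque/HTCondor queue and the OpenStack API, and would retire only machines that are idle.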
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thayer, K.J.
The past year has seen several of the Physics Division's new research projects reach major milestones with first successful experiments and results: the atomic physics station in the Basic Energy Sciences Research Center at the Argonne Advanced Photon Source was used in first high-energy, high-brilliance x-ray studies in atomic and molecular physics; the Short Orbit Spectrometer in Hall C at the Thomas Jefferson National Accelerator Facility (TJNAF), for which the Argonne medium energy nuclear physics group was responsible, was used extensively in the first round of experiments at TJNAF; at ATLAS, several new beams of radioactive isotopes were developed and used in studies of nuclear physics and nuclear astrophysics; the new ECR ion source at ATLAS was completed and first commissioning tests indicate excellent performance characteristics; Quantum Monte Carlo calculations of mass-8 nuclei were performed for the first time with realistic nucleon-nucleon interactions using state-of-the-art computers, including Argonne's massively parallel IBM SP. At the same time other future projects are well under way: preparations for the move of Gammasphere to ATLAS in September 1997 have progressed as planned. These new efforts are embedded in, or flowing from, the vibrant ongoing research program described in some detail in this report: nuclear structure and reactions with heavy ions; measurements of reactions of astrophysical interest; studies of nucleon and sub-nucleon structures using leptonic probes at intermediate and high energies; atomic and molecular structure with high-energy x-rays. The experimental efforts are being complemented with efforts in theory, from QCD to nucleon-meson systems to structure and reactions of nuclei. Finally, the operation of ATLAS as a national users facility has achieved a new milestone, with 5,800 hours beam on target for experiments during the past fiscal year.
NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.
Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C
2011-09-14
An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
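An illustrative sketch of the geodesic idea described above on a toy two-dimensional potential. The published basic NVU algorithm fixes the Lagrange multiplier from a linearized, time-reversible constraint; here, purely for illustration, the multiplier is instead solved by Newton iteration so that the potential energy is held exactly at its initial value, which is a simplification rather than a reproduction of the paper's algorithm:

```python
import numpy as np

def U(r):
    x, y = r
    return (x**2 - 1.0) ** 2 + 2.0 * y**2        # toy double-well potential

def grad_U(r, h=1e-6):
    g = np.zeros(2)
    for i in range(2):                            # central finite differences
        e = np.zeros(2); e[i] = h
        g[i] = (U(r + e) - U(r - e)) / (2 * h)
    return g

def nvu_like_step(r_prev, r_now, U0, iters=25):
    """Verlet-like geodesic step r_new = 2 r_now - r_prev + lam * F, with the
    Lagrange multiplier lam chosen so that U(r_new) = U0."""
    F = -grad_U(r_now)
    lam = 0.0
    for _ in range(iters):                        # Newton iteration on lam
        r_new = 2 * r_now - r_prev + lam * F
        g = U(r_new) - U0
        lam -= g / np.dot(grad_U(r_new), F)
    return 2 * r_now - r_prev + lam * F

r_prev, r_now = np.array([0.30, 0.10]), np.array([0.32, 0.11])
U0 = U(r_now)
traj = [r_prev, r_now]
for _ in range(200):
    traj.append(nvu_like_step(traj[-2], traj[-1], U0))
print("max |U - U0| along the path:", max(abs(U(r) - U0) for r in traj[1:]))
```

Unlike the paper's modified algorithm, this sketch does not enforce step-length conservation or remove center-of-mass drift.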
Atmospheric energetics as related to cyclogenesis over the eastern United States. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
West, P. W.
1973-01-01
A method is presented to investigate the atmospheric energy budget as related to cyclogenesis. Energy budget equations are developed that are shown to be advantageous because the individual terms represent basic physical processes which produce changes in atmospheric energy, and the equations provide a means to study the interaction of the cyclone with the larger scales of motion. The work presented represents an extension of previous studies because all of the terms of the energy budget equations were evaluated throughout the development period of the cyclone. Computations are carried out over a limited atmospheric volume which encompasses the cyclone, and boundary fluxes of energy that were ignored in most previous studies are evaluated. Two examples of cyclogenesis over the eastern United States were chosen for study. One of the cases (1-4 November, 1966) represented an example of vigorous development, while the development in the other case (5-8 December, 1969) was more modest. Objectively analyzed data were used in the evaluation of the energy budget terms in order to minimize computational errors, and an objective analysis scheme is described that insures that all of the resolution contained in the rawinsonde observations is incorporated in the analyses.
Satellite freeze forecast system: Executive summary
NASA Technical Reports Server (NTRS)
Martsolf, J. D. (Principal Investigator)
1983-01-01
A satellite-based temperature monitoring and prediction system consisting of a computer controlled acquisition, processing, and display system and the ten automated weather stations called by that computer was developed and transferred to the national weather service. This satellite freeze forecasting system (SFFS) acquires satellite data from either one of two sources, surface data from 10 sites, displays the observed data in the form of color-coded thermal maps and in tables of automated weather station temperatures, computes predicted thermal maps when requested and displays such maps either automatically or manually, archives the data acquired, and makes comparisons with historical data. Except for the last function, SFFS handles these tasks in a highly automated fashion if the user so directs. The predicted thermal maps are the result of two models, one a physical energy budget of the soil and atmosphere interface and the other a statistical relationship between the sites at which the physical model predicts temperatures and each of the pixels of the satellite thermal map.
Energy and daylighting: A correlation between quality of light and energy consciousness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krug, N.
1997-12-31
Energy and Daylighting, an advanced topics graduate/professional elective, has been established to help the student develop a deeper understanding of Architectural Daylighting, Energy Conserving Design, and Material/Construction/Methods through direct application. After a brief survey of the principles and applications of current and developing attitudes and techniques in energy conservation and natural lighting strategies is conducted (in order to build upon previous courses), an extensive exercise follows which allows the student the opportunity for direct application. Both computer modeling/analysis and physical modeling (light box simulation with photographic documentation) are employed to focus attention on the interrelationships between natural lighting and passive energy conserving design--all within the context of establishing environmental (interior) quality and (exterior) design direction. As a result, students broaden their understanding of natural light and energy conservation as design tools; the importance of environmental responsibility toward both built and natural environments; and the use of computer analysis as a design tool. This presentation centers around the activities and results obtained from explorations into Energy and Daylighting. Discussion will highlight the course objectives, the methodology involved in the studies, specific requirements and means of evaluation, a slide show of before-and-after results, and a retrospective look at the course's value, as well as future directions and implications.
Truncation-based energy weighting string method for efficiently resolving small energy barriers
NASA Astrophysics Data System (ADS)
Carilli, Michael F.; Delaney, Kris T.; Fredrickson, Glenn H.
2015-08-01
The string method is a useful numerical technique for resolving minimum energy paths in rare-event barrier-crossing problems. However, when applied to systems with relatively small energy barriers, the string method becomes inconvenient since many images trace out physically uninteresting regions where the barrier has already been crossed and recrossing is unlikely. Energy weighting alleviates this difficulty to an extent, but typical implementations still require the string's endpoints to evolve to stable states that may be far from the barrier, and deciding upon a suitable energy weighting scheme can be an iterative process dependent on both the application and the number of images used. A second difficulty arises when treating nucleation problems: for later images along the string, the nucleus grows to fill the computational domain. These later images are unphysical due to confinement effects and must be discarded. In both cases, computational resources associated with unphysical or uninteresting images are wasted. We present a new energy weighting scheme that eliminates all of the above difficulties by actively truncating the string as it evolves and forcing all images, including the endpoints, to remain within and cover uniformly a desired barrier region. The calculation can proceed in one step without iterating on strategy, requiring only an estimate of an energy value below which images become uninteresting.
A Long-Term Model for the Curriculum of Training for an Electric-Power Specialist
ERIC Educational Resources Information Center
Venikov, V. A.
1978-01-01
Long-term planning for professional training of electric-power specialists in Russia will have to (1) recognize the need for specialists to adapt to unforeseen developments in the field, (2) include new mathematics, physics, and computer technology, and (3) be prepared for changes in methods of production and transformation of energy. (AV)
A Graphical Representation for the Fugacity of a Pure Substance
ERIC Educational Resources Information Center
Book, Neil L.; Sitton, Oliver C.
2010-01-01
The thermodynamic equations used to define and compute the fugacity of a pure substance are depicted as processes on a semi-logarithmic plot of pressure vs. molar Gibbs energy (PG diagram) with isotherms for the substance behaving as an ideal gas superimposed. The PG diagram clearly demonstrates the physical basis for the definitions and the…
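For context, the standard thermodynamic relations behind such a diagram (textbook definitions, not taken from the article) are, at constant temperature,

```latex
RT\, d\ln f = d\mu, \qquad \lim_{P\to 0}\frac{f}{P} = 1, \qquad
\ln\frac{f}{P} = \int_0^P \frac{Z(P',T) - 1}{P'}\, dP'
```

so on a plot of pressure versus molar Gibbs energy the quantity RT ln(f/P) appears, roughly speaking, as the horizontal displacement at a given pressure between the real-substance isotherm and the superimposed ideal-gas isotherm.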
DIRAC in Large Particle Physics Experiments
NASA Astrophysics Data System (ADS)
Stagni, F.; Tsaregorodtsev, A.; Arrabito, L.; Sailer, A.; Hara, T.; Zhang, X.; Consortium, DIRAC
2017-10-01
The DIRAC project is developing interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for both Workload and Data Management tasks of large scientific communities. A number of High Energy Physics and Astrophysics collaborations have adopted DIRAC as the base for their computing models. DIRAC was initially developed for the LHCb experiment at LHC, CERN. Later, the Belle II, BES III and CTA experiments as well as the linear collider detector collaborations started using DIRAC for their computing systems. Some of the experiments built their DIRAC-based systems from scratch, others migrated from previous solutions, ad-hoc or based on different middlewares. Adaptation of DIRAC for a particular experiment was enabled through the creation of extensions to meet their specific requirements. Each experiment has a heterogeneous set of computing and storage resources at their disposal that were aggregated through DIRAC into a coherent pool. Users from different experiments can interact with the system in different ways depending on their specific tasks, expertise level and previous experience using command line tools, python APIs or Web Portals. In this contribution we will summarize the experience of using DIRAC in particle physics collaborations. The problems of migration to DIRAC from previous systems and their solutions will be presented. An overview of specific DIRAC extensions will be given. We hope that this review will be useful for experiments considering an update, or for those designing their computing models.
Allison, J.; Amako, K.; Apostolakis, J.; ...
2016-07-01
Geant4 is a software toolkit for the simulation of the passage of particles through matter. It is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection. Over the past several years, major changes have been made to the toolkit in order to accommodate the needs of these user communities, and to efficiently exploit the growth of computing power made available by advances in technology. In conclusion, the adaptation of Geant4 to multithreading, advances in physics, detector modeling and visualization, extensions to the toolkit, including biasing and reverse Monte Carlo, and tools for physics and release validation are discussed here.
NASA Astrophysics Data System (ADS)
Gutowitz, Howard
1991-08-01
Cellular automata, dynamic systems in which space and time are discrete, are yielding interesting applications in both the physical and natural sciences. The thirty four contributions in this book cover many aspects of contemporary studies on cellular automata and include reviews, research reports, and guides to recent literature and available software. Chapters cover mathematical analysis, the structure of the space of cellular automata, learning rules with specified properties; cellular automata in biology, physics, chemistry, and computation theory; and generalizations of cellular automata in neural nets, Boolean nets, and coupled map lattices. Current work on cellular automata may be viewed as revolving around two central and closely related problems: the forward problem and the inverse problem. The forward problem concerns the description of properties of given cellular automata. Properties considered include reversibility, invariants, criticality, fractal dimension, and computational power. The role of cellular automata in computation theory is seen as a particularly exciting venue for exploring parallel computers as theoretical and practical tools in mathematical physics. The inverse problem, an area of study gaining prominence particularly in the natural sciences, involves designing rules that possess specified properties or perform specified tasks. A long-term goal is to develop a set of techniques that can find a rule or set of rules that can reproduce quantitative observations of a physical system. Studies of the inverse problem take up the organization and structure of the set of automata, in particular the parameterization of the space of cellular automata. Optimization and learning techniques, like the genetic algorithm and adaptive stochastic cellular automata, are applied to find cellular automaton rules that model such physical phenomena as crystal growth or perform such adaptive-learning tasks as balancing an inverted pole. Howard Gutowitz is Collaborateur in the Service de Physique du Solide et Résonance Magnetique, Commissariat à l'Energie Atomique, Saclay, France.
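The "forward problem" mentioned above, evolving a given rule and inspecting its behaviour, can be illustrated with a one-dimensional, two-state, nearest-neighbour (elementary) cellular automaton; the choice of rule 30 below is simply a familiar example, not one taken from the book:

```python
import numpy as np

def elementary_ca(rule: int, width: int = 81, steps: int = 40):
    """Evolve an elementary cellular automaton from a single seed cell."""
    table = [(rule >> i) & 1 for i in range(8)]          # new state per neighbourhood
    row = np.zeros(width, dtype=int)
    row[width // 2] = 1
    history = [row.copy()]
    for _ in range(steps):
        left, right = np.roll(row, 1), np.roll(row, -1)  # periodic boundaries
        row = np.array([table[4 * l + 2 * c + r] for l, c, r in zip(left, row, right)])
        history.append(row.copy())
    return np.array(history)

for line in elementary_ca(30):
    print("".join("#" if c else "." for c in line))
```

The inverse problem runs the other way: searching (for example with a genetic algorithm) over the 256 elementary rules, or much larger rule spaces, for one whose forward evolution matches a target behaviour.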
High Energy Density Physics and Exotic Acceleration Schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cowan, T.; /General Atomics, San Diego; Colby, E.
2005-09-27
The High Energy Density and Exotic Acceleration working group took as our goal to reach beyond the community of plasma accelerator research with its applications to high energy physics, to promote exchange with other disciplines which are challenged by related and demanding beam physics issues. The scope of the group was to cover particle acceleration and beam transport that, unlike other groups at AAC, are not mediated by plasmas or by electromagnetic structures. At this Workshop, we saw an impressive advancement from years past in the area of Vacuum Acceleration, for example with the LEAP experiment at Stanford. And we saw an influx of exciting new beam physics topics involving particle propagation inside of solid-density plasmas or at extremely high charge density, particularly in the areas of laser acceleration of ions, and extreme beams for fusion energy research, including Heavy-ion Inertial Fusion beam physics. One example of the importance and extreme nature of beam physics in HED research is the requirement in the Fast Ignitor scheme of inertial fusion to heat a compressed DT fusion pellet to keV temperatures by injection of laser-driven electron or ion beams of giga-Amp current. Even in modest experiments presently being performed on the laser-acceleration of ions from solids, mega-amp currents of MeV electrons must be transported through solid foils, requiring almost complete return current neutralization, and giving rise to a wide variety of beam-plasma instabilities. As keynote talks, our group promoted Ion Acceleration (plenary talk by A. MacKinnon), which historically has grown out of inertial fusion research, and HIF Accelerator Research (invited talk by A. Friedman), which will require impressive advancements in space-charge-limited ion beam physics and in understanding the generation and transport of neutralized ion beams. A unifying aspect of High Energy Density applications was the physics of particle beams inside of solids, which is proving to be a very important field for diverse applications such as muon cooling, fusion energy research, and ultra-bright particle and radiation generation with high intensity lasers. We had several talks on these and other subjects, and many joint sessions with the Computational group, the EM Structures group, and the Beam Generation group. We summarize our group's work in the following categories: vacuum acceleration schemes; ion acceleration; particle transport in solids; and applications to high energy density phenomena.
Tesla: An application for real-time data analysis in High Energy Physics
NASA Astrophysics Data System (ADS)
Aaij, R.; Amato, S.; Anderlini, L.; Benson, S.; Cattaneo, M.; Clemencic, M.; Couturier, B.; Frank, M.; Gligorov, V. V.; Head, T.; Jones, C.; Komarov, I.; Lupton, O.; Matev, R.; Raven, G.; Sciascia, B.; Skwarnicki, T.; Spradlin, P.; Stahl, S.; Storaci, B.; Vesterinen, M.
2016-11-01
Upgrades to the LHCb computing infrastructure in the first long shutdown of the LHC have allowed for high quality decay information to be calculated by the software trigger making a separate offline event reconstruction unnecessary. Furthermore, the storage space of the triggered candidate is an order of magnitude smaller than the entire raw event that would otherwise need to be persisted. Tesla is an application designed to process the information calculated by the trigger, with the resulting output used to directly perform physics measurements.
Atomic Radiations in the Decay of Medical Radioisotopes: A Physics Perspective
Lee, B. Q.; Kibédi, T.; Stuchbery, A. E.; Robertson, K. A.
2012-01-01
Auger electrons emitted in nuclear decay offer a unique tool to treat cancer cells at the scale of a DNA molecule. Over the last forty years many aspects of this promising research goal have been explored, however it is still not in the phase of serious clinical trials. In this paper, we review the physical processes of Auger emission in nuclear decay and present a new model being developed to evaluate the energy spectrum of Auger electrons, and hence overcome the limitations of existing computations. PMID:22924061
Atomic radiations in the decay of medical radioisotopes: a physics perspective.
Lee, B Q; Kibédi, T; Stuchbery, A E; Robertson, K A
2012-01-01
Auger electrons emitted in nuclear decay offer a unique tool to treat cancer cells at the scale of a DNA molecule. Over the last forty years many aspects of this promising research goal have been explored, however it is still not in the phase of serious clinical trials. In this paper, we review the physical processes of Auger emission in nuclear decay and present a new model being developed to evaluate the energy spectrum of Auger electrons, and hence overcome the limitations of existing computations.
Price, Sarah Sally L
2009-01-20
The phenomenon of polymorphism, the ability of a molecule to adopt more than one crystal structure, is a well-established property of crystalline solids. The possible variations in physical properties between polymorphs make the reliable reproduction of a crystalline form essential for all research using organic materials, as well as quality control in manufacture. Thus, the last two decades have seen both an increase in interest in polymorphism and the availability of the computer power needed to make the computational prediction of organic crystal structures a practical possibility. In the past decade, researchers have made considerable improvements in the theoretical basis for calculating the sets of structures that are within the energy range of possible polymorphism, called crystal energy landscapes. It is common to find that a molecule has a wide variety of ways of packing with lattice energy within a few kilojoules per mole of the most stable structure. However, as we develop methods to search for and characterize "all" solid forms, it is also now usual for polymorphs and solvates to be found. Thus, the computed crystal energy landscape reflects and to an increasing extent "predicts" the emerging complexity of the solid state observed for many organic molecules. This Account will discuss the ways in which the calculation of the crystal energy landscape of a molecule can be used as a complementary technique to solid form screening for polymorphs. Current methods can predict the known crystal structure, even under "blind test" conditions, but such successes are generally restricted to those structures that are the most stable over a wide range of thermodynamic conditions. The other low-energy structures can be alternative polymorphs, which have sometimes been found in later experimental studies. Examining the computed structures reveals the various compromises between close packing, hydrogen bonding, and pi-pi stacking that can result in energetically feasible structures. Indeed, we have observed that systems with many almost equi-energetic structures that contain a common interchangeable motif correlate with a tendency to disorder and problems with control of the crystallization product. Thus, contrasting the computed crystal energy landscape with the known crystal structures of a given molecule provides a valuable complement to solid form screening, and the examination of the low-energy structures often leads to a rationalization of the forms found.
NASA Astrophysics Data System (ADS)
Laws, Priscilla W.
2004-05-01
The Workshop Physics Activity Guide is a set of student workbooks designed to serve as the foundation for a two-semester calculus-based introductory physics course. It consists of 28 units that interweave text materials with activities that include prediction, qualitative observation, explanation, equation derivation, mathematical modeling, quantitative experiments, and problem solving. Students use a powerful set of computer tools to record, display, and analyze data, as well as to develop mathematical models of physical phenomena. The design of many of the activities is based on the outcomes of physics education research. The Workshop Physics Activity Guide is supported by an Instructor's Website that: (1) describes the history and philosophy of the Workshop Physics Project; (2) provides advice on how to integrate the Guide into a variety of educational settings; (3) provides information on computer tools (hardware and software) and apparatus; and (4) includes suggested homework assignments for each unit. Log on to the Workshop Physics Project website at http://physics.dickinson.edu/. Workshop Physics is a component of the Physics Suite--a collection of materials created by a group of educational reformers known as the Activity Based Physics Group. The Physics Suite contains a broad array of curricular materials that are based on physics education research, including:
Advanced computations in plasma physics
NASA Astrophysics Data System (ADS)
Tang, W. M.
2002-05-01
Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.
Cook, Daniel L; Neal, Maxwell L; Bookstein, Fred L; Gennari, John H
2013-12-02
In prior work, we presented the Ontology of Physics for Biology (OPB) as a computational ontology for use in the annotation and representation of biophysical knowledge encoded in repositories of physics-based biosimulation models. We introduced OPB:Physical entity and OPB:Physical property classes that extend available spatiotemporal representations of physical entities and processes to explicitly represent the thermodynamics and dynamics of physiological processes. Our utilitarian, long-term aim is to develop computational tools for creating and querying formalized physiological knowledge for use by multiscale "physiome" projects such as the EU's Virtual Physiological Human (VPH) and NIH's Virtual Physiological Rat (VPR). Here we describe the OPB:Physical dependency taxonomy of classes that represent the laws of classical physics that are the "rules" by which physical properties of physical entities change during occurrences of physical processes. For example, the fluid analog of Ohm's law (as for electric currents) is used to describe how a blood flow rate depends on a blood pressure gradient. Hooke's law (as in elastic deformations of springs) is used to describe how an increase in vascular volume increases blood pressure. We classify such dependencies according to the flow, transformation, and storage of thermodynamic energy that occurs during processes governed by the dependencies. We have developed the OPB and annotation methods to represent the meaning--the biophysical semantics--of the mathematical statements of physiological analysis and the biophysical content of models and datasets. Here we describe and discuss our approach to an ontological representation of physical laws (as dependencies) and properties as encoded for the mathematical analysis of biophysical processes.
Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA
Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B.; Day, Philip J.
2017-01-01
The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problem in computer science, non-deterministic polynomial complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings) so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in P time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and rule implementation is non-deterministic. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for to quote Richard Feynman ‘there's plenty of room at the bottom’. This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy. PMID:28250099
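A minimal sketch of non-deterministic Thue string rewriting as described above: every applicable rule is tried at every position, and all resulting strings are explored. In the DNA design the branches replicate and run in parallel; here a breadth-first search simply enumerates them sequentially, and the toy rule set is purely illustrative rather than the paper's encoding:

```python
from collections import deque

def thue_reachable(start, rules, max_states=10_000):
    """Return all strings reachable from `start` by the (non-deterministic)
    application of Thue rewrite rules (lhs, rhs)."""
    seen, frontier = {start}, deque([start])
    while frontier and len(seen) < max_states:
        s = frontier.popleft()
        for lhs, rhs in rules:
            i = s.find(lhs)
            while i != -1:                          # every match position is a branch
                t = s[:i] + rhs + s[i + len(lhs):]
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
                i = s.find(lhs, i + 1)
    return seen

rules = [("ab", "ba"), ("ba", "ab"), ("aa", "a")]   # toy rules, not the paper's
print(sorted(thue_reachable("aabb", rules)))
```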
NASA Astrophysics Data System (ADS)
Arons, Jonathan
The research proposed addresses understanding of the origin of non-thermal energy in the Universe, a subject that began with the discovery of Cosmic Rays and continues, including the study of relativistic compact objects - neutron stars and black holes. Observed Rotation Powered Pulsars (RPPs) have rotational energy loss implying they have TeraGauss magnetic fields and electric potentials as large as 40 PetaVolts. The rotational energy lost is reprocessed into particles which manifest themselves in high energy gamma ray photon emission (GeV to TeV). Observations of pulsars from the FERMI Gamma Ray Observatory, launched into orbit in 2008, have revealed 130 of these stars (and still counting), thus demonstrating the presence of efficient cosmic accelerators within the strongly magnetized regions surrounding the rotating neutron stars. Understanding the physics of these and other Cosmic Accelerators is a major goal of astrophysical research. A new model for particle acceleration in the current sheets separating the closed and open field line regions of pulsars' magnetospheres, and separating regions of opposite magnetization in the relativistic winds emerging from those magnetospheres, will be developed. The currents established in recent global models of the magnetosphere will be used as input to a magnetic field aligned acceleration model that takes account of the current carrying particles' inertia, generalizing models of the terrestrial aurora to the relativistic regime. The results will be applied to the spectacular new results from the FERMI gamma ray observatory on gamma ray pulsars, to probe the physics of the generation of the relativistic wind that carries rotational energy away from the compact stars, illuminating the whole problem of how compact objects can energize their surroundings. The work to be performed if this proposal is funded involves extending and developing concepts from plasma physics on dissipation of magnetic energy in thin sheets of electric current that separate regions of differing magnetization into the domain of highly relativistic magnetic fields - those with energy density large compared to the rest mass energy of the charged particles - the plasma - caught in that field. The investigators will create theoretical and computational models of the magnetic dissipation - a form of viscous flow in the thin sheets of electric current that form in the magnetized regions around the rotating stars - using Particle in-Cell plasma simulations. These simulations use a large computer to solve the equations of motion of many charged particles - millions to billions in the research that will be pursued - to unravel the dissipation of those fields and the acceleration of beams of particles in the thin sheets. The results will be incorporated into macroscopic MHD models of the magnetic structures around the stars which determine the location and strength of the current sheets, so as to model and analyze the pulsed gamma ray emission seen from hundreds of Rotation Powered Pulsars. The computational models will be assisted by "pencil and paper" theoretical modeling designed to motivate and interpret the computer simulations, and connect them to the observations.
Lin, Yu-Hsiu; Hu, Yu-Chen
2018-04-27
The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a downstream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Moreover, for residential customers implementing DR, maintaining a balance between energy consumption cost and users' comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing—a method of optimizing cloud computing technologies by driving computing capabilities at the IoT edge of the Internet as one of the emerging trends in engineering technology—addresses bandwidth-intensive contents and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized for the automatic determination of physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows the proposed residential consumer-centric load-scheduling method can re-shape loads by home appliances in response to DR signals. Moreover, a reduction in peak power consumption of 13.97% is achieved.
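A minimal, self-contained sketch of the constrained-PSO scheduling idea: appliance start hours are the particle positions, and the objective combines a time-of-use energy cost with a crude comfort penalty. All appliances, tariffs, and weights below are invented for illustration and are not the paper's data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three appliances scheduled over 24 hourly slots.
price = np.r_[np.full(7, 0.10), np.full(10, 0.25), np.full(7, 0.15)]  # $/kWh
power = np.array([2.0, 1.5, 1.0])        # kW while running
duration = np.array([2, 3, 1])           # required run-time (hours)
preferred = np.array([18, 8, 20])        # user's preferred start hours

def objective(starts):
    starts = np.clip(np.round(starts).astype(int), 0, 23)
    energy_cost = sum(price[np.arange(s, min(s + d, 24))].sum() * p
                      for s, p, d in zip(starts, power, duration))
    discomfort = np.abs(starts - preferred).sum()   # crude comfort penalty
    return energy_cost + 0.05 * discomfort

n_particles, n_iter = 30, 200
pos = rng.uniform(0, 23, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 23)      # constraint: stay within the day
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)]

print("best start hours:", np.round(gbest).astype(int), "objective:", objective(gbest))
```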
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost to estimate thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. The existence of regions of poor configuration space overlap is detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σ_ij and ε_ij in TIP3P water. We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
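A minimal sketch of the two ingredients described above: reduced energies expressed as a linear combination of basis functions, and a self-consistent MBAR solve for the free energies. Production work would normally use the pymbar package; the toy harmonic-oscillator check at the end is purely illustrative:

```python
import numpy as np
from scipy.special import logsumexp

def u_matrix(lambdas, H):
    """Reduced energies u_kn = sum_b lambdas[k, b] * H[n, b] for every state k
    and sample n, so unsampled parameter combinations cost no new simulation."""
    return lambdas @ H.T

def mbar_free_energies(u_kn, N_k, tol=1e-10, max_iter=10_000):
    """Self-consistent MBAR estimate of dimensionless free energies f_k."""
    f = np.zeros(u_kn.shape[0])
    logN = np.log(N_k)
    for _ in range(max_iter):
        log_den = logsumexp(logN[:, None] + f[:, None] - u_kn, axis=0)
        f_new = -logsumexp(-u_kn - log_den[None, :], axis=1)
        f_new -= f_new[0]                      # gauge choice: f_0 = 0
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return f

# Toy check: two 1-D harmonic states u_k(x) = k_spring[k] * (x**2 / 2),
# i.e. a single basis function h(x) = x**2 / 2 with parameter k_spring.
rng = np.random.default_rng(0)
k_spring, N_k = np.array([1.0, 4.0]), np.array([5000, 5000])
x = np.concatenate([rng.normal(0, 1 / np.sqrt(k), n) for k, n in zip(k_spring, N_k)])
H = 0.5 * x[:, None] ** 2
f = mbar_free_energies(u_matrix(k_spring[:, None], H), N_k)
print("estimated df:", f[1], " exact:", 0.5 * np.log(k_spring[1] / k_spring[0]))
```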
A world-wide databridge supported by a commercial cloud provider
NASA Astrophysics Data System (ADS)
Tat Cheung, Kwong; Field, Laurence; Furano, Fabrizio
2017-10-01
Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. One of the challenges with exploiting volunteer computing is to support a global community of volunteers that provides heterogeneous resources. However, high energy physics applications require more data input and output than the CPU-intensive applications that are typically used by other volunteer computing projects. While the so-called databridge has already been successfully proposed as a method to span the untrusted and trusted domains of volunteer computing and Grid computing respectively, globally transferring data between potentially poor-performing residential networks and CERN could be unreliable, leading to wasted resource usage. The expectation is that by placing a storage endpoint that is part of a wider, flexible geographical databridge deployment closer to the volunteers, the transfer success rate and the overall performance can be improved. This contribution investigates the provision of a globally distributed databridge implemented upon a commercial cloud provider.
Thermal radiation view factor: Methods, accuracy and computer-aided procedures
NASA Technical Reports Server (NTRS)
Kadaba, P. V.
1982-01-01
Computer-aided thermal analysis programs, which predict whether orbiting equipment will remain within a predetermined acceptable temperature range prior to stationing in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and the standard methods that form the basis for the various numerical and digital-computer techniques are presented. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. The situations where accuracy is important for energy calculations are identified and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and to future choices for efficient use of digital computers is included in the recommendations.
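As an illustration of the kind of numerical scheme the abstract refers to, the view factor between two surfaces, F_12 = (1/A_1) times the double integral of cos(theta_1) cos(theta_2) / (pi s^2) over both areas, can be estimated by Monte Carlo sampling of point pairs. The directly opposed parallel unit squares below are a standard textbook configuration chosen only for illustration (the tabulated value for unit separation is roughly 0.2):

```python
import numpy as np

rng = np.random.default_rng(1)

def view_factor_parallel_squares(h, n=2_000_000):
    """Monte Carlo view factor between two directly opposed unit squares
    separated by distance h (both areas equal 1, normals along z)."""
    p1 = rng.random((n, 2))                    # points on surface 1 (z = 0)
    p2 = rng.random((n, 2))                    # points on surface 2 (z = h)
    s2 = (p2[:, 0] - p1[:, 0]) ** 2 + (p2[:, 1] - p1[:, 1]) ** 2 + h * h
    # cos(theta_1) = cos(theta_2) = h / s, so the kernel is h^2 / (pi * s^4).
    kernel = h * h / (np.pi * s2 ** 2)
    return kernel.mean()                       # times A_2 = 1, divided by A_1 = 1

for h in (0.5, 1.0, 2.0):
    print(f"h = {h}: F_12 ~ {view_factor_parallel_squares(h):.4f}")
```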
A Statistician's View of Upcoming Grand Challenges
NASA Astrophysics Data System (ADS)
Meng, Xiao Li
2010-01-01
In this session we have seen some snapshots of the broad spectrum of challenges, in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths; basic physics; machine learning; and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates. What would be involved in, e.g., bootstrap and related methods? Somewhere in the middle are these non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is really the 'black-box' problem, where one has a complicated, e.g. fundamental physics-based, computer code, or 'black box', and one needs to know how changing the parameters at input -- due to uncertainties of any kind -- will map to changing the output. All of these connect to challenges in complexity of data and computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. As well, there are cautionary tales of running automated analysis with real data -- where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.
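A minimal sketch of the Parallel Tempering idea mentioned above: several replicas of the same sampler run at different temperatures, and adjacent replicas occasionally swap configurations with acceptance probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). The bimodal one-dimensional target is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(x):
    return 0.5 * (x * x - 4.0) ** 2            # toy double-well: modes near x = +/- 2

betas = np.array([1.0, 0.3, 0.1, 0.03])        # inverse temperatures, one replica each
x = rng.normal(size=betas.size)
cold_samples = []

for _ in range(50_000):
    # Metropolis update within each replica.
    prop = x + rng.normal(scale=0.5, size=x.size)
    accept = rng.random(x.size) < np.exp(-betas * (energy(prop) - energy(x)))
    x = np.where(accept, prop, x)
    # Attempt a configuration swap between a random adjacent temperature pair.
    i = rng.integers(0, betas.size - 1)
    log_alpha = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
    if np.log(rng.random()) < log_alpha:
        x[i], x[i + 1] = x[i + 1], x[i]
    cold_samples.append(x[0])                  # keep only the coldest chain

cold = np.array(cold_samples[10_000:])
print("fraction of cold-chain samples in the x > 0 mode:", np.mean(cold > 0))
```

Without the swaps, the coldest chain would typically stay trapped in whichever mode it started in; the hot replicas supply the large moves between modes.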
Strategy and gaps for modeling, simulation, and control of hybrid systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabiti, Cristian; Garcia, Humberto E.; Hovsapian, Rob
2015-04-01
The purpose of this report is to establish a strategy for modeling and simulation of candidate hybrid energy systems. Modeling and simulation is necessary to design, evaluate, and optimize the system technical and economic performance. Accordingly, this report first establishes the simulation requirements to analyze candidate hybrid systems. Simulation fidelity levels are established based on the temporal scale, real and synthetic data availability or needs, solution accuracy, and output parameters needed to evaluate case-specific figures of merit. On that basis, the associated computational and co-simulation resources needed are established, including physical models when needed, code assembly and integrated solution platforms, mathematical solvers, and data processing. The report then describes the figures of merit, systems requirements, and constraints that are necessary and sufficient to characterize the grid and hybrid systems behavior and market interactions. Loss of Load Probability (LOLP) and the Effective Cost of Energy (ECE), as opposed to the standard Levelized Cost of Electricity (LCOE), are introduced as technical and economic indices for integrated energy system evaluations. Financial assessment methods are subsequently introduced for evaluation of non-traditional, hybrid energy systems. Algorithms for coupled and iterative evaluation of the technical and economic performance are subsequently discussed. This report further defines modeling objectives, computational tools, solution approaches, and real-time data collection and processing (in some cases using real test units) that will be required to model, co-simulate, and optimize: (a) energy system components (e.g., power generation unit, chemical process, electricity management unit), (b) system domains (e.g., thermal, electrical or chemical energy generation, conversion, and transport), and (c) systems control modules. Co-simulation of complex, tightly coupled, dynamic energy systems requires multiple simulation tools, potentially developed in several programming languages and resolved on separate time scales. Whereas further investigation and development of hybrid concepts will provide a more complete understanding of the joint computational and physical modeling needs, this report highlights areas in which co-simulation capabilities are warranted. The current development status, quality assurance, availability, and maintainability of simulation tools for hybrid systems modeling are presented. Existing gaps in the modeling and simulation toolsets and development needs are subsequently discussed. This effort will feed into a broader Roadmap activity for designing, developing, and demonstrating hybrid energy systems.
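The Loss of Load Probability figure of merit mentioned above is, in its simplest form, the probability that available generation falls short of load. A minimal Monte Carlo sketch, with an entirely invented unit fleet and load profile, looks like this:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical fleet: unit capacities (MW) and forced-outage rates.
capacity = np.array([400.0, 300.0, 200.0, 150.0, 100.0])
outage_rate = np.array([0.05, 0.06, 0.04, 0.08, 0.10])

# Hypothetical hourly load (MW) for one year: daily cycle plus noise.
hours = 8760
load = 600 + 250 * np.sin(2 * np.pi * np.arange(hours) / 24) + rng.normal(0, 40, hours)

def lolp(n_samples=20_000):
    """Fraction of sampled hours in which available generation < load."""
    shortfalls = 0
    for _ in range(n_samples):
        h = rng.integers(hours)
        up = rng.random(capacity.size) > outage_rate   # which units are available
        shortfalls += capacity[up].sum() < load[h]
    return shortfalls / n_samples

print("LOLP ~", lolp())
```

The ECE/LCOE-style economic indices would be layered on top of this kind of reliability calculation together with cost data, which is beyond the scope of the sketch.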
NIMROD resistive magnetohydrodynamic simulations of spheromak physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hooper, E. B.; Cohen, B. I.; McLean, H. S.
The physics of spheromak plasmas is addressed by time-dependent, three-dimensional, resistive magnetohydrodynamic simulations with the NIMROD code [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)]. Included in some detail are the formation of a spheromak driven electrostatically by a coaxial plasma gun with a flux-conserver geometry and power systems that accurately model the sustained spheromak physics experiment [R. D. Wood et al., Nucl. Fusion 45, 1582 (2005)]. The controlled decay of the spheromak plasma over several milliseconds is also modeled as the programmable current and voltage relax, resulting in simulations of entire experimental pulses. Reconnection phenomena and the effects of current profile evolution on the growth of symmetry-breaking toroidal modes are diagnosed; these in turn affect the quality of magnetic surfaces and the energy confinement. The sensitivity of the simulation results to variations in both physical and numerical parameters, including spatial resolution, is addressed. There are significant points of agreement between the simulations and the observed experimental behavior, e.g., in the evolution of the magnetics and the sensitivity of the energy confinement to the presence of symmetry-breaking magnetic fluctuations.
Strong Effects of Vs30 Heterogeneity on Physics-Based Scenario Ground-Shaking Computations
NASA Astrophysics Data System (ADS)
Louie, J. N.; Pullammanappallil, S. K.
2014-12-01
Hazard mapping and building codes worldwide use the vertically time-averaged shear-wave velocity between the surface and 30 meters depth, Vs30, as one predictor of earthquake ground shaking. Intensive field campaigns a decade ago in Reno, Los Angeles, and Las Vegas measured urban Vs30 transects with 0.3-km spacing. The Clark County, Nevada, Parcel Map includes urban Las Vegas and comprises over 10,000 site measurements over 1500 km2, completed in 2010. All of these data demonstrate fractal spatial statistics, with a fractal dimension of 1.5-1.8 at scale lengths from 0.5 km to 50 km. Vs measurements in boreholes up to 400 m deep show very similar statistics at 1 m to 200 m lengths. When included in physics-based earthquake-scenario ground-shaking computations, the highly heterogeneous Vs30 maps exhibit unexpectedly strong influence. In sensitivity tests, low-frequency computations at 0.1 Hz display amplifications (as well as de-amplifications) of 20% due solely to Vs30. In 0.5-1.0 Hz computations, the amplifications are a factor of two or more. At 0.5 Hz and higher frequencies the amplifications can be larger than what the 1-d Building Code equations would predict from the Vs30 variations. Vs30 heterogeneities at one location have strong influence on amplifications at other locations, stretching out in the predominant direction of wave propagation for that scenario. The sensitivity tests show that shaking and amplifications are highly scenario-dependent. Animations of computed ground motions and how they evolve with time suggest that the fractal Vs30 variance acts to trap wave energy and increases the duration of shaking. Validations of the computations against recorded ground motions, possible in Las Vegas Valley due to the measurements of the Clark County Parcel Map, show that ground motion levels and amplifications match, while recorded shaking has longer duration than computed shaking. Several mechanisms may explain the amplification and increased duration of shaking in the presence of heterogeneous spatial distributions of Vs: conservation of wave energy across velocity changes; geometric focusing of waves by low-velocity lenses; vertical resonance and trapping; horizontal resonance and trapping; and multiple conversion of P- to S-wave energy.
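For intuition about the kind of heterogeneity described above, a one-dimensional profile with a power-law ("fractal-like") spatial spectrum can be generated by filtering white noise in the wavenumber domain. The spectral exponent, spacing, and Vs30 scaling below are illustrative assumptions, not values taken from the Parcel Map data:

```python
import numpy as np

rng = np.random.default_rng(5)

n, dx = 1024, 0.3        # samples and spacing in km (roughly transect-like spacing)
beta = 1.8               # assumed power-spectral decay exponent, P(k) ~ k**(-beta)

k = np.fft.rfftfreq(n, d=dx)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-beta / 2.0)                 # amplitude ~ k**(-beta/2)
phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
profile = np.fft.irfft(amp * np.exp(1j * phase), n=n)

# Rescale to a plausible urban-basin Vs30 range in m/s (illustrative numbers).
vs30 = 450.0 + 120.0 * (profile - profile.mean()) / profile.std()
print(f"Vs30 profile: min {vs30.min():.0f}, mean {vs30.mean():.0f}, max {vs30.max():.0f} m/s")
```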
Huang, Xinchuan; Schwenke, David W; Lee, Timothy J
2011-01-28
In this work, we build upon our previous work on the theoretical spectroscopy of ammonia, NH_3. Compared to our 2008 study, we include more physics in our rovibrational calculations and more experimental data in the refinement procedure, and these enable us to produce a potential energy surface (PES) of unprecedented accuracy. We call this the HSL-2 PES. The additional physics we include is a second-order correction for the breakdown of the Born-Oppenheimer approximation, and we find it to be critical for improved results. By including experimental data for higher rotational levels in the refinement procedure, we were able to greatly reduce our systematic errors for the rotational dependence of our predictions. These additions together lead to a significantly improved total angular momentum (J) dependence in our computed rovibrational energies. The root-mean-square error between our predictions using the HSL-2 PES and the reliable energy levels from the HITRAN database for J = 0-6 and J = 7/8 for ^14NH_3 is only 0.015 cm^-1 and 0.020/0.023 cm^-1, respectively. The root-mean-square errors for the characteristic inversion splittings are approximately 1/3 smaller than those for energy levels. The root-mean-square error for the 6002 J = 0-8 transition energies is 0.020 cm^-1. Overall, for J = 0-8, the spectroscopic data computed with HSL-2 is roughly an order of magnitude more accurate relative to our previous best ammonia PES (denoted HSL-1). These impressive numbers are eclipsed only by the root-mean-square error between our predictions for purely rotational transition energies of ^15NH_3 and the highly accurate Cologne database (CDMS): 0.00034 cm^-1 (10 MHz), in other words, 2 orders of magnitude smaller. In addition, we identify a deficiency in the ^15NH_3 energy levels determined from a model of the experimental data.
WTO — a deterministic approach to 4-fermion physics
NASA Astrophysics Data System (ADS)
Passarino, Giampiero
1996-09-01
The program WTO, which is designed for computing cross sections and other relevant observables in the e+e- annihilation into four fermions, is described. The various quantities are computed over both a completely inclusive experimental set-up and a realistic one, i.e. with cuts on the final state energies, final state angles, scattering angles and final state invariant masses. Initial state QED corrections are included by means of the structure function approach while final state QCD corrections are applicable in their naive formulation. A gauge restoring mechanism is included according to the Fermion-Loop scheme. The program structure is highly modular and particular care has been devoted to computing efficiency and speed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoidn, Oliver R.; Seidler, Gerald T., E-mail: seidler@uw.edu
We have integrated mass-produced commercial complementary metal-oxide-semiconductor (CMOS) image sensors and off-the-shelf single-board computers into an x-ray camera platform optimized for acquisition of x-ray spectra and radiographs at energies of 2–6 keV. The CMOS sensor and single-board computer are complemented by custom mounting and interface hardware that can be easily acquired from rapid prototyping services. For single-pixel detection events, i.e., events where the deposited energy from one photon is substantially localized in a single pixel, we establish ∼20% quantum efficiency at 2.6 keV with ∼190 eV resolution and a 100 kHz maximum detection rate. The detector platform’s useful intrinsic energy resolution, 5-μm pixel size, ease of use, and obvious potential for parallelization make it a promising candidate for many applications at synchrotron facilities, in laser-heating plasma physics studies, and in laboratory-based x-ray spectrometry.
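A minimal sketch of how single-pixel events of the kind described above can be selected from a raw frame and histogrammed into a spectrum: pixels above a signal threshold whose neighbours all stay below a noise threshold. The frame, thresholds, and gain here are simulated, illustrative values, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated frame: Gaussian read noise plus a few isolated photon hits (ADU).
frame = rng.normal(0.0, 2.0, size=(256, 256))
hits = rng.integers(5, 250, size=(40, 2))
frame[hits[:, 0], hits[:, 1]] += rng.uniform(150, 400, size=40)

def single_pixel_events(frame, threshold=30.0, neighbor_max=15.0):
    """ADU values of pixels above `threshold` whose 8 neighbours are all below
    `neighbor_max`, i.e. events with the charge contained in one pixel."""
    events = []
    ny, nx = frame.shape
    for y, x in zip(*np.where(frame > threshold)):
        if 0 < y < ny - 1 and 0 < x < nx - 1:
            patch = frame[y - 1:y + 2, x - 1:x + 2].copy()
            patch[1, 1] = -np.inf                 # ignore the central pixel itself
            if patch.max() < neighbor_max:
                events.append(frame[y, x])
    return np.array(events)

adu = single_pixel_events(frame)
spectrum, edges = np.histogram(adu, bins=64, range=(0, 500))
print(len(adu), "single-pixel events selected")
```

Charge-sharing (multi-pixel) events would be handled separately, typically by summing small clusters or by rejecting them when energy resolution matters most.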
Walthouwer, Michel Jean Louis; Oenema, Anke; Lechner, Lilian; de Vries, Hein
2015-10-19
Web-based computer-tailored interventions often suffer from small effect sizes and high drop-out rates, particularly among people with a low level of education. Using videos as a delivery format can possibly improve the effects and attractiveness of these interventions. The main aim of this study was to examine the effects of a video and text version of a Web-based computer-tailored obesity prevention intervention on dietary intake, physical activity, and body mass index (BMI) among Dutch adults. A second study aim was to examine differences in appreciation between the video and text version. The final study aim was to examine possible differences in intervention effects and appreciation per educational level. A three-armed randomized controlled trial was conducted with a baseline and 6-month follow-up measurement. The intervention consisted of six sessions, lasting about 15 minutes each. In the video version, the core tailored information was provided by means of videos. In the text version, the same tailored information was provided in text format. Outcome variables were self-reported and included BMI, physical activity, energy intake, and appreciation of the intervention. Multiple imputation was used to replace missing values. The effect analyses were carried out with multiple linear regression analyses and adjusted for confounders. The process evaluation data were analyzed with independent samples t tests. The baseline questionnaire was completed by 1419 participants and the 6-month follow-up measurement by 1015 participants (71.53%). No significant interaction effects of educational level were found on any of the outcome variables. Compared to the control condition, the video version resulted in lower BMI (B=-0.25, P=.049) and lower average daily energy intake from energy-dense food products (B=-175.58, P<.001), while the text version had an effect only on energy intake (B=-163.05, P=.001). No effects on physical activity were found. Moreover, the video version was rated significantly better than the text version on feelings of relatedness (P=.041), usefulness (P=.047), and grade given to the intervention (P=.018). The video version of the Web-based computer-tailored obesity prevention intervention was the most effective intervention and most appreciated. Future research needs to examine if the effects are maintained in the long term and how the intervention can be optimized. Netherlands Trial Register: NTR3501; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=3501 (Archived by WebCite at http://www.webcitation.org/6cBKIMaW1).
A Combined Experimental and Computational Study on Selected Physical Properties of Aminosilicones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perry, RJ; Genovese, SE; Farnum, RL
2014-01-29
A number of physical properties of aminosilicones have been determined experimentally and predicted computationally. It was found that COSMO-RS predicted the densities of the materials under study to within about 4% of the experimentally determined values. Vapor pressure measurements were performed, and all of the aminosilicones of interest were found to be significantly less volatile than the benchmark MEA material. COSMO-RS was reasonably accurate for predicting the vapor pressures for aminosilicones that were thermally stable. The heat capacities of all aminosilicones tested were between 2.0 and 2.3 J/(g·°C); again substantially lower than a benchmark 30% aqueous MEA solution. Surface energies for the aminosilicones were found to be 23.3-28.3 dyne/cm and were accurately predicted using the parachor method.
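The parachor method mentioned at the end estimates surface tension from the molar mass, densities, and a group-contribution parachor value via gamma = [P(rho_l - rho_v)/M]^4. A minimal helper, as a sketch of that relation only; none of the numbers or parameter choices come from the paper.

```python
def surface_tension_parachor(parachor, molar_mass, rho_liquid, rho_vapor=0.0):
    """Estimate surface tension (dyn/cm, i.e. mN/m) from the parachor relation
    gamma = [P * (rho_l - rho_v) / M]**4.

    parachor   : group-contribution parachor, (mN/m)^(1/4) cm^3/mol
    molar_mass : g/mol
    rho_liquid : liquid density, g/cm^3
    rho_vapor  : vapor density, g/cm^3 (negligible far from the critical point)
    """
    return (parachor * (rho_liquid - rho_vapor) / molar_mass) ** 4
```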
CP-odd sector and θ dynamics in holographic QCD
NASA Astrophysics Data System (ADS)
Areán, Daniel; Iatrakis, Ioannis; Järvinen, Matti; Kiritsis, Elias
2017-07-01
The holographic model of V-QCD is used to analyze the physics of QCD in the Veneziano large-N limit. An unprecedented analysis of the CP-odd physics is performed going beyond the level of effective field theories. The structure of holographic saddle points at finite θ is determined, as well as its interplay with chiral symmetry breaking. Many observables (vacuum energy and higher-order susceptibilities, singlet and nonsinglet masses and mixings) are computed as functions of θ and the quark mass m. Wherever applicable the results are compared to those of chiral Lagrangians, finding agreement. In particular, we recover the Witten-Veneziano formula in the small x → 0 limit, we compute the θ dependence of the pion mass, and we derive the hyperscaling relation for the topological susceptibility in the conformal window in terms of the quark mass.
Exploring the potential energy landscape over a large parameter-space
NASA Astrophysics Data System (ADS)
He, Yang-Hui; Mehta, Dhagash; Niemerg, Matthew; Rummel, Markus; Valeanu, Alexandru
2013-07-01
Large polynomial systems with coefficient parameters are ubiquitous and constitute an important class of problems. We demonstrate the computational power of two methods — a symbolic one called the Comprehensive Gröbner basis and a numerical one called coefficient-parameter polynomial continuation — applied to studying both potential energy landscapes and a variety of questions arising from geometry and phenomenology. Particular attention is paid to an example in flux compactification where important physical quantities such as the gravitino and moduli masses and the string coupling can be efficiently extracted.
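As a small illustration of parameter-dependent polynomial systems, the sketch below sets up the critical-point equations of an invented one-parameter potential V(x, y; t) and computes an ordinary Gröbner basis with the parameter kept symbolic. This is only a toy: sympy does not implement comprehensive Gröbner bases, and the numerical coefficient-parameter continuation used in the paper is a separate technique.

```python
import sympy as sp

# Toy two-variable potential with a coefficient parameter t (illustrative only):
# V(x, y; t) = x**4 - t*x**2 + y**2 + x*y; critical points solve dV/dx = dV/dy = 0.
x, y, t = sp.symbols('x y t')
V = x**4 - t*x**2 + y**2 + x*y
eqs = [sp.diff(V, x), sp.diff(V, y)]

# A Groebner basis over the rational functions in t triangularizes the system;
# a comprehensive Groebner basis would additionally partition the t-plane into
# regions with structurally different solution sets.
G = sp.groebner(eqs, x, y, order='lex')
print(G)

# Critical points expressed as functions of the parameter t:
print(sp.solve(eqs, [x, y], dict=True))
```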
NASA Technical Reports Server (NTRS)
Sturrock, Peter A.
1993-01-01
The aim of the research activity was to increase our understanding of solar activity through data analysis, theoretical analysis, and computer modeling. Because the research subjects were diverse and many researchers were supported by this grant, a select few key areas of research are described in detail. Areas of research include: (1) energy storage and force-free magnetic field; (2) energy release and particle acceleration; (3) radiation by nonthermal electrons; (4) coronal loops; (5) flare classification; (6) longitude distributions of flares; (7) periodicities detected in the solar activity; (8) coronal heating and related problems; and (9) plasma processes.
NASA Astrophysics Data System (ADS)
Pazzona, Federico G.; Pireddu, Giovanni; Gabrieli, Andrea; Pintus, Alberto M.; Demontis, Pierfranco
2018-05-01
We investigate the coarse-graining of host-guest systems under the perspective of the local distribution of pore occupancies, along with the physical meaning and actual computability of the coarse-interaction terms. We show that the widely accepted approach, in which the contributions to the free energy given by the molecules located in two neighboring pores are estimated through Monte Carlo simulations where the two pores are kept separated from the rest of the system, leads to inaccurate results at high sorbate densities. In the coarse-graining strategy that we propose, which is based on the Bethe-Peierls approximation, density-independent interaction terms are instead computed according to local effective potentials that take into account the correlations between the pore pair and its surroundings by means of mean-field correction terms without the need for simulating the pore pair separately. Use of the interaction parameters obtained this way allows the coarse-grained system to reproduce more closely the equilibrium properties of the original one. Results are shown for lattice-gases where the local free energy can be computed exactly and for a system of Lennard-Jones particles under the effect of a static confining field.
Fusion Simulation Project Workshop Report
NASA Astrophysics Data System (ADS)
Kritz, Arnold; Keyes, David
2009-03-01
The mission of the Fusion Simulation Project is to develop a predictive capability for the integrated modeling of magnetically confined plasmas. This FSP report adds to the previous activities that defined an approach to integrated modeling in magnetic fusion. These previous activities included a Fusion Energy Sciences Advisory Committee panel that was charged to study integrated simulation in 2002. The report of that panel [Journal of Fusion Energy 20, 135 (2001)] recommended the prompt initiation of a Fusion Simulation Project. In 2003, the Office of Fusion Energy Sciences formed a steering committee that developed a project vision, roadmap, and governance concepts [Journal of Fusion Energy 23, 1 (2004)]. The current FSP planning effort involved 46 physicists, applied mathematicians and computer scientists, from 21 institutions, formed into four panels and a coordinating committee. These panels were constituted to consider: Status of Physics Components, Required Computational and Applied Mathematics Tools, Integration and Management of Code Components, and Project Structure and Management. The ideas, reported here, are the products of these panels, working together over several months and culminating in a 3-day workshop in May 2007.
NASA Astrophysics Data System (ADS)
Chen, G.; Chacón, L.
2013-08-01
We propose a 1D analytical particle mover for the recent charge- and energy-conserving electrostatic particle-in-cell (PIC) algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036]. The approach computes particle orbits exactly for a given piece-wise linear electric field. The resulting PIC algorithm maintains the exact charge and energy conservation properties of the original algorithm, but with improved performance (both in efficiency and robustness against the number of particles and timestep). We demonstrate the advantageous properties of the scheme with a challenging multiscale numerical test case, the ion acoustic wave. Using the analytical mover as a reference, we demonstrate that the choice of error estimator in the Crank-Nicolson mover has significant impact on the overall performance of the implicit PIC algorithm. The generalization of the approach to the multi-dimensional case is outlined, based on a novel and simple charge conserving interpolation scheme.
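Within one cell of a piecewise-linear field E(x) = E0 + E1(x - x0), the orbit equation x'' = (q/m)E(x) has a closed-form solution (hyperbolic or trigonometric depending on the sign of qE1/m), which is what makes an analytical push possible. The sketch below shows only this single-cell solution under those assumptions; the published mover also locates cell-boundary crossing times, which is omitted here.

```python
import numpy as np

def push_exact(x, v, dt, qm, E0, E1, x0=0.0):
    """Advance (x, v) analytically over dt in a linear field E(x) = E0 + E1*(x - x0).

    qm is the charge-to-mass ratio q/m. This is only the single-cell solution;
    cell-crossing detection (handled by the published algorithm) is omitted.
    """
    a = qm * E1
    if abs(a) < 1e-30:                       # uniform field: constant acceleration
        acc = qm * E0
        return x + v * dt + 0.5 * acc * dt * dt, v + acc * dt
    # Shift coordinates so the ODE becomes u'' = a*u about the field null x_eq.
    x_eq = x0 - E0 / E1
    u, du = x - x_eq, v
    if a > 0.0:
        k = np.sqrt(a)
        u_new = u * np.cosh(k * dt) + du / k * np.sinh(k * dt)
        v_new = u * k * np.sinh(k * dt) + du * np.cosh(k * dt)
    else:
        w = np.sqrt(-a)
        u_new = u * np.cos(w * dt) + du / w * np.sin(w * dt)
        v_new = -u * w * np.sin(w * dt) + du * np.cos(w * dt)
    return u_new + x_eq, v_new
```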
A scaling procedure for the response of an isolated system with high modal overlap factor
NASA Astrophysics Data System (ADS)
De Rosa, S.; Franco, F.
2008-10-01
The paper deals with a numerical approach that reduces selected physical dimensions of the solution domain to compute the dynamic response of an isolated system: it has been named Asymptotical Scaled Modal Analysis (ASMA). The proposed numerical procedure alters the input data needed to obtain the classic modal responses to increase the frequency band of validity of the discrete or continuous coordinates model through the definition of a proper scaling coefficient. It is demonstrated that the computational cost remains acceptable while the frequency range of analysis increases. Moreover, with reference to the flexural vibrations of a rectangular plate, the paper compares ASMA with the statistical energy analysis and the energy distribution approach. Some insights are also given about the limits of the scaling coefficient. Finally it is shown that the linear dynamic response, predicted with the scaling procedure, has the same quality and characteristics as the statistical energy analysis, but it can be useful when the system cannot be solved appropriately by the standard Statistical Energy Analysis (SEA).
Accelerating the design of solar thermal fuel materials through high throughput simulations.
Liu, Yun; Grossman, Jeffrey C
2014-12-10
Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identify possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.
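Once the metastable structures and their energies are available, the screening criterion reduces to an isomerization-enthalpy filter. The schematic below is only an illustration of that filtering step; the `compute_energy` callable, which stands in for the ab initio calculation, and the threshold value are hypothetical.

```python
def screen_stf_candidates(molecules, compute_energy, min_dH_kj_per_mol=100.0):
    """Keep candidates whose metastable-isomer enthalpy exceeds a target value.

    molecules       : iterable of (ground_state_structure, metastable_structure) pairs.
    compute_energy  : hypothetical callable wrapping the ab initio code (kJ/mol).
    """
    hits = []
    for ground, metastable in molecules:
        dH = compute_energy(metastable) - compute_energy(ground)
        if dH >= min_dH_kj_per_mol:
            hits.append((ground, metastable, dH))
    # Rank surviving candidates by stored enthalpy, highest first.
    return sorted(hits, key=lambda item: item[2], reverse=True)
```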
Gapped two-body Hamiltonian for continuous-variable quantum computation.
Aolita, Leandro; Roncaglia, Augusto J; Ferraro, Alessandro; Acín, Antonio
2011-03-04
We introduce a family of Hamiltonian systems for measurement-based quantum computation with continuous variables. The Hamiltonians (i) are quadratic, and therefore two body, (ii) are of short range, (iii) are frustration-free, and (iv) possess a constant energy gap proportional to the squared inverse of the squeezing. Their ground states are the celebrated Gaussian graph states, which are universal resources for quantum computation in the limit of infinite squeezing. These Hamiltonians constitute the basic ingredient for the adiabatic preparation of graph states and thus open new avenues for the physical realization of continuous-variable quantum computing beyond the standard optical approaches. We characterize the correlations in these systems at thermal equilibrium. In particular, we prove that the correlations across any multipartition are contained exactly in its boundary, automatically yielding a correlation area law.
Douillard, Jean-Marc; Salles, Fabrice; Henry, Marc; Malandrini, Harold; Clauss, Frédéric
2007-01-15
The surface energies of talc and chlorite are computed using a simple model based on the calculation of the electrostatic energy of the crystal, which requires the atomic charges. We have chosen to follow Henry's model of determination of partial charges using scales of electronegativity and hardness. The results are in correct agreement with a determination of the surface energy obtained from an analysis of the heat of immersion data. Both results indicate that the surface energy of talc is lower than the surface energy of chlorite, in agreement with the observed wettability behavior. The influence of Al and Fe on this phenomenon is discussed. The surface energy of this type of solid seems to depend more strongly on the geometry of the crystal than on the type of atoms pointing out of the surface; i.e., the surface energy depends more on the physics of the system than on its chemistry.
Publications of LASL research, 1972--1976
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petersen, L.
1977-04-01
This bibliography is a compilation of unclassified work done at the Los Alamos Scientific Laboratory and published during the years 1972 to 1976. Publications too late for inclusion in earlier compilations are also listed. Declassification of previously classified reports is considered to constitute publication. The bibliography includes LASL reports, journal articles, books, conference papers, papers published in congressional hearings, theses, patents, etc. The following subject areas are included: aerospace studies; analytical technology; astrophysics; atomic and molecular physics, equation of state, opacity; biology and medicine; chemical dynamics and kinetics; chemistry; cryogenics; crystallography; CTR and plasma physics; earth science and engineering; energy (nonnuclear); engineering and equipment; EPR, ESR, NMR studies; explosives and detonations; fission physics; health and safety; hydrodynamics and radiation transport; instruments; lasers; mathematics and computers; medium-energy physics; metallurgy and ceramics technology; neutronics and criticality studies; nuclear physics; nuclear safeguards; physics; reactor technology; solid state science; and miscellaneous (including Project Rover). (RWR)
Pedersen, Scott J; Cooley, Paul D; Mainsbridge, Casey
2014-01-01
Desk-based employees face multiple workplace health hazards such as insufficient physical activity and prolonged sitting. The objective of this study was to increase workday energy expenditure by interrupting prolonged occupational sitting time and introducing short-bursts of physical activity to employees' daily work habits. Over a 13-week period participants (n=17) in the intervention group were regularly exposed to a passive prompt delivered through their desktop computer that required them to stand up and engage in a short-burst of physical activity, while the control group (n=17) was not exposed to this intervention. Instead, the control group continued with their normal work routine. All participants completed a pre- and post- intervention survey to estimate workplace daily energy expenditure (calories). There was a significant 2 (Group) × 2 (Test) interaction, F (1, 32)=9.26, p < 0.05. The intervention group increased the calories expended during the workday from pre-test (M=866.29 ± 151.40) to post-test (M=1054.10 ± 393.24), whereas the control group decreased calories expended during the workday from pre-test (M=982.55 ± 315.66) to post-test (M=892.21 ± 255.36). An e-health intervention using a passive prompt was an effective mechanism for increasing employee work-related energy expenditure. Engaging employees in regular short-bursts of physical activity during the workday resulted in reduced sitting time, which may have long-term effects on the improvement of employee health.
Verloigne, Maïté; Van Lippevelde, Wendy; Bere, Elling; Manios, Yannis; Kovács, Éva; Grillenberger, Monika; Maes, Lea; Brug, Johannes; De Bourdeaudhuij, Ilse
2015-09-18
The aim was to investigate which individual and family environmental factors are related to television and computer time separately in 10- to 12-year-old children within and across five European countries (Belgium, Germany, Greece, Hungary, Norway). Data were used from the ENERGY-project. Children and one of their parents completed a questionnaire, including questions on screen time behaviours and related individual and family environmental factors. Family environmental factors included social, political, economic and physical environmental factors. Complete data were obtained from 2022 child-parent dyads (53.8 % girls, mean child age 11.2 ± 0.8 years; mean parental age 40.5 ± 5.1 years). To examine the association between individual and family environmental factors (i.e. independent variables) and television/computer time (i.e. dependent variables) in each country, multilevel regression analyses were performed using MLwiN 2.22, adjusting for children's sex and age. In all countries, children reported more television and/or computer time if children and their parents thought that the maximum recommended level for watching television and/or using the computer was higher and if children had a higher preference for television watching and/or computer use and a lower self-efficacy to control television watching and/or computer use. Most physical and economic environmental variables were not significantly associated with television or computer time. Slightly more individual factors were related to children's computer time and more parental social environmental factors to children's television time. We also found different correlates across countries: parental co-participation in television watching was significantly positively associated with children's television time in all countries, except for Greece. A higher level of parental television and computer time was only associated with a higher level of children's television and computer time in Hungary. Having rules regarding children's television time was related to less television time in all countries, except for Belgium and Norway. Most evidence was found for an association between screen time and individual and parental social environmental factors, which means that future interventions aiming to reduce screen time should focus on children's individual beliefs and habits as well as parental social factors. As we identified some different correlates for television and computer time and across countries, cross-European interventions could make small adaptations per specific screen time activity and lay different emphases per country.
Hearing the shape of the Ising model with a programmable superconducting-flux annealer.
Vinci, Walter; Markström, Klas; Boixo, Sergio; Roy, Aidan; Spedalieri, Federico M; Warburton, Paul A; Severini, Simone
2014-07-16
Two objects can be distinguished if they have different measurable properties. Thus, distinguishability depends on the physics of the objects. In considering graphs, we revisit the Ising model as a framework to define physically meaningful spectral invariants. In this context, we introduce a family of refinements of the classical spectrum and consider the quantum partition function. We demonstrate that the energy spectrum of the quantum Ising Hamiltonian is a stronger invariant than the classical one without refinements. For the purpose of implementing the related physical systems, we perform experiments on a programmable annealer with superconducting flux technology. Departing from the paradigm of adiabatic computation, we take advantage of a noisy evolution of the device to generate statistics of low energy states. The graphs considered in the experiments have the same classical partition functions, but different quantum spectra. The data obtained from the annealer distinguish non-isomorphic graphs via information contained in the classical refinements of the functions but not via the differences in the quantum spectra.
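The classical invariant in question is just the Ising partition function of the graph, which can be brute-forced for small graphs. The sketch below compares two small non-isomorphic trees; since both are trees with the same numbers of vertices and edges, they share the same classical Z(β), illustrating why the unrefined classical invariant can fail to distinguish graphs. These example graphs are illustrative only and are not the graph pairs used in the experiments.

```python
import itertools
import numpy as np
import networkx as nx

def classical_ising_partition(G, beta):
    """Brute-force Z(beta) = sum_s exp(-beta*H(s)) with H(s) = -sum_{ij in E} s_i s_j."""
    nodes = list(G.nodes())
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=len(nodes)):
        s = dict(zip(nodes, spins))
        H = -sum(s[i] * s[j] for i, j in G.edges())
        Z += np.exp(-beta * H)
    return Z

# Non-isomorphic 4-vertex trees (path vs. star) with identical classical Z(beta).
G1, G2 = nx.path_graph(4), nx.star_graph(3)
print(classical_ising_partition(G1, beta=0.5), classical_ising_partition(G2, beta=0.5))
```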
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chuan S.; Shao, Xi
2016-06-14
The main objective of our work is to provide theoretical basis and modeling support for the design and experimental setup of a compact laser proton accelerator to produce high quality proton beams tunable with energy from 50 to 250 MeV using a short pulse sub-petawatt laser. We performed theoretical and computational studies of energy scaling and Rayleigh-Taylor instability development in laser radiation pressure acceleration (RPA) and developed novel RPA-based schemes to remedy/suppress instabilities for high-quality quasimonoenergetic proton beam generation as we proposed. During the project period, we published nine peer-reviewed journal papers and made twenty conference presentations including six invited talks on our work. The project supported one graduate student who received his PhD degree in physics in 2013 and supported two post-doctoral associates. We also mentored three high school students and one undergraduate physics major by inspiring their interests and having them involved in the project.
NASA Astrophysics Data System (ADS)
Hakim, Ammar; Shi, Eric; Juno, James; Bernard, Tess; Hammett, Greg
2017-10-01
For weakly collisional (or collisionless) plasmas, kinetic effects are required to capture the physics of micro-turbulence. We have implemented solvers for kinetic and gyrokinetic equations in the computational plasma physics framework, Gkeyll. We use a version of the discontinuous Galerkin scheme that conserves energy exactly. Plasma sheaths are modeled with novel boundary conditions. Positivity of distribution functions is maintained via a reconstruction method, allowing robust simulations that continue to conserve energy even with positivity limiters. We have performed a large number of benchmarks, verifying the accuracy and robustness of our code. We demonstrate the application of our algorithm to two classes of problems: (a) Vlasov-Maxwell simulations of turbulence in a magnetized plasma, applicable to space plasmas; (b) gyrokinetic simulations of turbulence in open-field-line geometries, applicable to laboratory plasmas. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
Dynamics of Nearshore Sand Bars and Infra-gravity Waves: The Optimal Theory Point of View
NASA Astrophysics Data System (ADS)
Bouchette, F.; Mohammadi, B.
2016-12-01
It is well known that the dynamics of near-shore sand bars are partly controlled by the features (location of nodes, amplitude, length, period) of the so-called infra-gravity waves. Reciprocally, changes in the location, size and shape of near-shore sand bars can control wave/wave interactions which in turn alter the infra-gravity content of the near-shore wave energy spectrum. The coupling between infra-gravity waves and near-shore bars is thus definitely two-way. Regarding numerical modelling, several approaches have already been considered to analyze such coupled dynamics. Most of them are based on the following strategy: 1) define an energy spectrum including infra-gravity, 2) tentatively compute the radiation stresses driven by this energy spectrum, 3) compute sediment transport and changes in the seabottom elevation including sand bars, 4) loop on the computation of infra-gravity taking into account the morphological changes. In this work, we consider an alternative approach named Nearshore Optimal Theory, which is a kind of breakdown point of view for the modeling of near-shore hydro-morphodynamics and wave/wave/seabottom interactions. Optimal theory applied to near-shore hydro-morphodynamics arose with the design of solid coastal defense structures by shape optimization methods, and is now being extended to model the dynamics of any near-shore system combining waves and sand. The basics are the following: the near-shore system state is described through a functional J representative, in some way, of the energy of the system. This J is computed from a model embedding only the physics to be studied (here hydrodynamics forced by simple infra-gravity). The paradigm is then that the system will evolve so that the energy J tends to a minimum, no matter the complexity of wave propagation or wave/bottom interactions. As soon as J embeds the physics to be explored, the method does not require comprehensive modeling. Nearshore Optimal Theory has already given promising results for the generation of near-shore sand bars from scratch and their growth when forced by fair-weather waves. Here, we use it to explore the coupling between a very simple infra-gravity content and the nucleation of near-shore sand bars. It is shown that even a very poor infra-gravity content strongly improves the generation of sand bars.
Geothermal-energy files in computer storage: sites, cities, and industries
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Dea, P.L.
1981-12-01
The site, city, and industrial files are described. The data presented are from the hydrothermal site file containing about three thousand records which describe some of the principal physical features of hydrothermal resources in the United States. Data elements include: latitude, longitude, township, range, section, surface temperature, subsurface temperature, the field potential, and well depth for commercialization. (MHR)
Laboratory Directed Research and Development Annual Report for 2011
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Pamela J.
2012-04-09
This report documents progress made on all LDRD-funded projects during fiscal year 2011. The following topics are discussed: (1) Advanced sensors and instrumentation; (2) Biological Sciences; (3) Chemistry; (4) Earth and space sciences; (5) Energy supply and use; (6) Engineering and manufacturing processes; (7) Materials science and technology; (8) Mathematics and computing sciences; (9) Nuclear science and engineering; and (10) Physics.
ERIC Educational Resources Information Center
Halpern, Arthur M.; Glendening, Eric D.
2013-01-01
A three-part project for students in physical chemistry, computational chemistry, or independent study is described in which they explore applications of valence bond (VB) and molecular orbital-configuration interaction (MO-CI) treatments of H2. Using a scientific spreadsheet, students construct potential-energy (PE) curves for several…
PREFACE: 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015)
NASA Astrophysics Data System (ADS)
Sakamoto, H.; Bonacorsi, D.; Ueda, I.; Lyon, A.
2015-12-01
The International Conference on Computing in High Energy and Nuclear Physics (CHEP) is a major series of international conferences intended to attract physicists and computing professionals to discuss recent developments and trends in software and computing for their research communities. Experts from the high energy and nuclear physics, computer science, and information technology communities attend CHEP events. This conference series provides an international forum to exchange experiences and the needs of a wide community, and to present and discuss recent, ongoing, and future activities. At the beginning of the successful series of CHEP conferences in 1985, the latest developments in embedded systems, networking, vector and parallel processing were presented in Amsterdam. The software and computing ecosystem has evolved massively since then, and along this path each CHEP event has marked a step further. A vibrant community of experts on a wide range of different high-energy and nuclear physics experiments, as well as technology explorers and industry contacts, attend and discuss the present and future challenges, and shape the future of an entire community. In such a rapidly evolving area, aiming to capture the state-of-the-art on software and computing through a collection of proceedings papers in a journal is a big challenge. Due to the large attendance, the final papers appear in the journal a few months after the conference is over. Additionally, the contributions often report on studies at very heterogeneous stages, namely studies that are completed, or are just started, or yet to be done. It is not uncommon that by the time a specific paper appears in the journal some of the work is over a year old, or the investigation actually happened in different directions and with different methodologies than originally presented at the conference just a few months before. And by the time the proceedings appear in journal form, new ideas and explorations have quickly formed, have already started, and presumably have also followed previously unpredictable directions. In this scenario, it is normal and healthy for the entire community to question itself as to whether a set of proceedings is the best way to document and communicate to peers (present and future) the work that has been done at a precise time and the vivid and live ideas of a precise moment in the evolution of the discipline. Pointing attention to a specific CHEP event alone does not give the right answer: in fact, the heritage value lies in the quality and continuity of the documentation work, despite the changes of times, trends and actors. The CHEP proceedings, in their variety and thanks to the condensed form of knowledge they offer, are what most likely will be most easily preserved for future generations, thanks to the outstanding efforts on digital libraries for all kinds of cultural heritage. Since 1985, this long-standing tradition continued with the 21st CHEP edition in Okinawa. The successful model that brings together high-energy and nuclear physicists and computer scientists was repeated in the Okinawa prefecture, an outstanding location consisting of a few dozen small islands in the southern half of the Nansei Shoto, the island chain which stretches over about one thousand kilometres from Kyushu to Taiwan. The OIST (Okinawa Institute of Science and Technology) centre hosted the event and offered an outstanding location and efficient facilities.
As for the CHEP history, contributions from 'general purpose' physics experiments mixed together with highly specialized work on the frontier of precision and intensity. The year 2015 is marked by the LHC restart in Run 2. Experimental groups at the LHC reviewed and presented their Run 1 experiences in detail, and reported the work done in acquiring the latest computing and software technologies, as well as in evolving their computing models in preparation for Run 2 (and beyond). On the side of the intensity frontier, 2015 is also the start of Super-KEKB commissioning. Fixed-target experiments at CERN, Fermilab and J-PARC are growing bigger in size. In the field of nuclear physics, FAIR is under construction and RHIC is well engaged in its Phase-II research program, facing increased datasets and new challenges with precision physics. For the future, developments are progressing towards the construction of ILC. In all these projects, computing and software will be even more important than before. Beyond those examples, non-accelerator experiments reported on their search for novel computing models as their apparatus and operation become larger and more distributed. The CHEP edition in Okinawa explored the synergy of HEP experimental physicists and computer scientists with data engineers and data scientists even further. Many areas of research are covered, and the techniques developed and adopted are presented in a richness and diversity never seen before. In numbers, CHEP 2015 attracted a very high number of oral and poster contributions, 535 in total, and hosted 450 participants from 28 countries. For the first time in the conference history, a system of 'keywords' at abstract submission time was set up and exploited to produce conference tracks depending on the topics covered in the proposed contributions. Authors were asked to select some 'application keywords' and/or 'technology keywords' to specify the content of their contribution. This bottom-up approach, tried at CHEP 2015 in Okinawa for the first time in the history of the conference series, met with broad satisfaction both in the International Advisory Committee and among the conference attendees. This process created 8 topical tracks, well balanced in content, manageable in terms of number of contributions, and able to create adequate discussion space for trend topics (e.g. cloud computing and virtualization). CHEP 2015 hosted contributions on online computing; offline software; data store and access; middleware, software development and tools, experiment frameworks, tools for distributed computing; computing activities and computing models; facilities, infrastructure, network; clouds and virtualization; performance increase and optimization exploiting hardware features. Throughout the entire process, we were blessed with a forward-looking group of competent colleagues in our International Advisory Committee, whom we warmly thank. All the individuals in the Program Committee team, who put together the technical tracks of the conference and reviewed all papers to prepare the sections of this proceedings journal, have to be credited for their outstanding work. And of course our gratitude goes to all people who submitted a contribution, presented it, and spent time to prepare a careful paper to document the work. These people, in the first place, are the main authors of the big success that CHEP continues to be.
After almost 30 years, and 21 CHEP editions, this conference cycle continues to stay strong and to evolve in rapidly changing times towards a challenging future, covering new grounds and intercepting new trends as our field of research evolves. The next stop in this journey will be at the 22nd CHEP Conference on October 12th-14th, in San Francisco, hosted by SLAC and LBNL.
Recent advances in QM/MM free energy calculations using reference potentials☆
Duarte, Fernanda; Amrein, Beat A.; Blaha-Nelson, David; Kamerlin, Shina C.L.
2015-01-01
Background Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Scope of review Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. Major conclusions The use of physically-based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. General significance As was already demonstrated 40 years ago, the use of simplified models still allows one to obtain cutting edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. PMID:25038480
Overcoming free energy barriers using unconstrained molecular dynamics simulations.
Hénin, Jérôme; Chipot, Christophe
2004-08-15
Association of unconstrained molecular dynamics (MD) and the formalisms of thermodynamic integration and average force [Darve and Pohorille, J. Chem. Phys. 115, 9169 (2001)] has been employed to determine potentials of mean force. When implemented in a general MD code, the additional computational effort, compared to other standard, unconstrained simulations, is marginal. The force acting along a chosen reaction coordinate xi is estimated from the individual forces exerted on the chemical system and accumulated as the simulation progresses. The estimated free energy derivative computed for small intervals of xi is canceled by an adaptive bias to overcome the barriers of the free energy landscape. Evolution of the system along the reaction coordinate is, thus, limited solely by its self-diffusion properties. The illustrative examples of the reversible unfolding of deca-L-alanine, the association of acetate and guanidinium ions in water, the dimerization of methane in water, and its transfer across the water liquid-vapor interface are examined to probe the efficiency of the method. (c) 2004 American Institute of Physics.
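The adaptive-bias bookkeeping described above can be sketched as follows: bin the reaction coordinate, accumulate the instantaneous force along xi, and apply a bias that cancels the running mean force (an estimate of -dA/dxi). This is a schematic one-dimensional illustration, not the authors' implementation; the bin layout and the ramping heuristic are assumptions.

```python
import numpy as np

class AdaptiveBiasingForce:
    """Minimal 1-D ABF bookkeeping: accumulate force samples along xi in bins and
    return a bias that cancels the running mean force estimate."""

    def __init__(self, xi_min, xi_max, n_bins, ramp_samples=200):
        self.edges = np.linspace(xi_min, xi_max, n_bins + 1)
        self.force_sum = np.zeros(n_bins)
        self.counts = np.zeros(n_bins, dtype=int)
        self.ramp = ramp_samples            # scale the bias in while statistics are poor

    def update_and_bias(self, xi, instantaneous_force):
        b = int(np.clip(np.searchsorted(self.edges, xi) - 1, 0, len(self.counts) - 1))
        self.force_sum[b] += instantaneous_force
        self.counts[b] += 1
        mean_force = self.force_sum[b] / self.counts[b]   # running <F_xi> ~ -dA/dxi
        scale = min(1.0, self.counts[b] / self.ramp)
        return -scale * mean_force          # bias along xi that flattens the landscape

    def free_energy_profile(self):
        # A(xi) = -integral of <F_xi> d(xi), approximated bin by bin.
        grad = np.where(self.counts > 0,
                        -self.force_sum / np.maximum(self.counts, 1), 0.0)
        centers = 0.5 * (self.edges[:-1] + self.edges[1:])
        return centers, np.cumsum(grad) * np.diff(self.edges)[0]
```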
Sasaya, Tenta; Sunaguchi, Naoki; Thet-Lwin, Thet-; Hyodo, Kazuyuki; Zeniya, Tsutomu; Takeda, Tohoru; Yuasa, Tetsuya
2017-01-01
We propose a pinhole-based fluorescent x-ray computed tomography (p-FXCT) system with a 2-D detector and volumetric beam that can suppress the quality deterioration caused by scatter components. In the corresponding p-FXCT technique, projections are acquired at individual incident energies just above and below the K-edge of the imaged trace element; then, reconstruction is performed based on the two sets of projections using a maximum likelihood expectation maximization algorithm that incorporates the scatter components. We constructed a p-FXCT imaging system and performed a preliminary experiment using a physical phantom and an I imaging agent. The proposed dual-energy p-FXCT improved the contrast-to-noise ratio by a factor of more than 2.5 compared to that attainable using mono-energetic p-FXCT for a 0.3 mg/ml I solution. We also imaged an excised rat’s liver infused with a Ba contrast agent to demonstrate the feasibility of imaging a biological sample. PMID:28272496
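The reconstruction step described above is a maximum likelihood expectation maximization update that keeps an additive scatter term in the forward model. The sketch below shows only that single-energy ML-EM core; the dual-energy (above/below K-edge) combination used in the paper is not reproduced, and the system matrix `A`, measured counts `y`, and scatter estimate are placeholders.

```python
import numpy as np

def mlem_with_scatter(A, y, scatter, n_iter=50):
    """ML-EM iterations for Poisson data y ~ Poisson(A @ x + scatter).

    A       : (n_measurements, n_voxels) system matrix (pinhole geometry, attenuation)
    y       : measured fluorescent x-ray counts
    scatter : estimated scatter component added to the forward projection
    """
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0)             # A^T 1, per-voxel sensitivity
    for _ in range(n_iter):
        forward = A @ x + scatter
        ratio = np.divide(y, forward,
                          out=np.zeros_like(forward), where=forward > 0)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x
```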
Energy Conservation and Conversion in NIMROD Computations of Magnetic Reconnection
NASA Astrophysics Data System (ADS)
Maddox, J. A.; Sovinec, C. R.
2017-10-01
Previous work modeling magnetic relaxation during non-inductive start-up at the Pegasus spherical tokamak indicates an order of magnitude gap between measured experimental temperature and simulated temperature in NIMROD. Potential causes of the plasma temperature gap include: insufficient transport modeling, too low modeled injector power input, and numerical loss of energy, as energy is not algorithmically conserved in NIMROD simulations. Simple 2D nonlinear MHD simulations explore numerical energy conservation discrepancies in NIMROD because understanding numerical loss of energy is fundamental to addressing the physical problems of the other potential causes of energy loss. Evolution of these configurations induces magnetic reconnection, which transfers magnetic energy to heat and kinetic energy. The kinetic energy is eventually damped, so magnetic energy loss must correspond to an increase in internal energy. Results in the 2D geometries indicate that numerical energy loss during reconnection depends on the temporal resolution of the dynamics. Work supported by the U.S. Department of Energy through a subcontract from the Plasma Science and Innovation Center.
A Computational Study on the Ground and Excited States of Nickel Silicide.
Schoendorff, George; Morris, Alexis R; Hu, Emily D; Wilson, Angela K
2015-09-17
Nickel silicide has been studied with a range of computational methods to determine the nature of the Ni-Si bond. Additionally, the physical effects that need to be addressed within calculations to predict the equilibrium bond length and bond dissociation energy within experimental error have been determined. The ground state is predicted to be a ¹Σ⁺ state with a bond order of 2.41 corresponding to a triple bond with weak π bonds. It is shown that calculation of the ground state equilibrium geometry requires a polarized basis set and treatment of dynamic correlation including up to triple excitations with CR-CCSD(T)L, resulting in an equilibrium bond length only 0.012 Å shorter than the experimental bond length. Previous calculations of the bond dissociation energy resulted in energies that were only 34.8% to 76.5% of the experimental bond dissociation energy. It is shown here that use of polarized basis sets, treatment of triple excitations, correlation of the valence and subvalence electrons, and a Λ coupled cluster approach are required to obtain a bond dissociation energy that deviates by as little as 1% from experiment.
Numerical benchmarking of a Coarse-Mesh Transport (COMET) Method for medical physics applications
NASA Astrophysics Data System (ADS)
Blackburn, Megan Satterfield
2009-12-01
Radiation therapy has become a very important method for treating cancer patients. Thus, it is extremely important to accurately determine the location of energy deposition during these treatments, maximizing dose to the tumor region and minimizing it to healthy tissue. A Coarse-Mesh Transport Method (COMET) has been developed in the Computational Reactor and Medical Physics Group at the Georgia Institute of Technology and has been used very successfully with neutron transport to analyze whole-core criticality. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed source problems. For each unique local problem that exists, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local problems. This method has now been extended to the transport of photons and electrons for use in medical physics problems to determine energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for testing in order to evaluate the COMET code and determine its strengths and weaknesses for these medical physics applications. For response function calculations, Legendre polynomial expansions are necessary for space, polar angle, and azimuthal angle. An initial sensitivity study was done to determine the best orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Each of these benchmarks was decomposed into 1 cm x 1 cm and 0.5 cm x 0.5 cm coarse meshes. Three more clinically relevant problems were developed from patient CT scans. These benchmarks modeled a lung patient, a prostate patient, and a beam re-entry situation. As before, the problems were divided into 1 cm x 1 cm, 0.5 cm x 0.5 cm, and 0.25 cm x 0.25 cm coarse mesh cases. Multiple beam energies were also tested for each case. The COMET solutions for each case were compared to a reference solution obtained from pure Monte Carlo simulations with EGSnrc. When comparing the COMET results to the reference cases, a pattern of differences appeared in each phantom case. It was found that better results were obtained for lower energy incident photon beams as well as for larger mesh sizes. Possible changes may need to be made to the expansion orders used for energy and angle to better model high energy secondary electrons. Heterogeneity also did not pose a problem for the COMET methodology. Heterogeneous results were obtained in a time comparable to that for the homogeneous water phantom. The COMET results were typically found in minutes to hours of computational time, whereas the reference cases typically required hundreds or thousands of hours. A second sensitivity study was also performed on a more stringent problem and with smaller coarse meshes. Previously, the same expansion order was used for each incident photon beam energy so better comparisons could be made. From this second study, it was found that it is optimal to have different expansion orders based on the incident beam energy. Recommendations for future work with this method include more testing on higher expansion orders or possible code modification to better handle secondary electrons. The method also needs to handle more clinically relevant beam descriptions with an energy and angular distribution associated with them.
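The core idea of the response-function approach described above is that each coarse mesh's outgoing partial currents are a precomputed linear response to its incoming currents plus a fixed-source term, and the global solution follows by iterating those couplings to convergence. Below is a schematic of a generic response-matrix sweep, not the actual COMET implementation; the data layout (`R`, `S`, `neighbors`) is an illustrative assumption.

```python
import numpy as np

def response_matrix_iteration(R, S, neighbors, n_faces, tol=1e-8, max_iter=500):
    """Schematic global sweep for a response-matrix (COMET-like) method.

    R[m]      : (n_faces, n_faces) response matrix of coarse mesh m
                (outgoing partial currents produced by unit incoming currents)
    S[m]      : (n_faces,) outgoing currents due to the fixed internal source
    neighbors : neighbors[m][f] = (m_prime, f_prime) for the shared face,
                or None for a vacuum boundary
    Returns the converged incoming partial currents for every mesh.
    """
    n_mesh = len(R)
    j_in = [np.zeros(n_faces) for _ in range(n_mesh)]
    for _ in range(max_iter):
        j_out = [R[m] @ j_in[m] + S[m] for m in range(n_mesh)]
        new_in = [np.zeros(n_faces) for _ in range(n_mesh)]
        for m in range(n_mesh):
            for f in range(n_faces):
                link = neighbors[m][f]
                if link is not None:
                    mp, fp = link
                    new_in[m][f] = j_out[mp][fp]
        delta = max(np.max(np.abs(new_in[m] - j_in[m])) for m in range(n_mesh))
        j_in = new_in
        if delta < tol:
            break
    return j_in
```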
Optimization of design parameters of low-energy buildings
NASA Astrophysics Data System (ADS)
Vala, Jiří; Jarošová, Petra
2017-07-01
Evaluation of temperature development and related consumption of energy required for heating, air-conditioning, etc. in low-energy buildings requires the proper physical analysis, covering heat conduction, convection and radiation, including beam and diffusive components of solar radiation, on all building parts and interfaces. The system approach and the Fourier multiplicative decomposition together with the finite element technique offer the possibility of inexpensive and robust numerical and computational analysis of corresponding direct problems, as well as of the optimization ones with several design variables, using the Nelder-Mead simplex method. The practical example demonstrates the correlation between such numerical simulations and the time series of measurements of energy consumption on a small family house in Ostrov u Macochy (35 km north of Brno).
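A minimal sketch of the optimization step described above, using the Nelder-Mead simplex method from scipy. The objective below is a purely illustrative surrogate standing in for the finite-element thermal simulation, and the design variables and coefficients are assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def annual_energy_demand(design):
    """Illustrative surrogate for the predicted annual heating energy (kWh) as a
    function of design variables [insulation thickness (m), south glazing area (m^2)].
    In practice this would wrap the finite-element thermal simulation of the building.
    """
    insulation, glazing = design
    # Conduction losses fall with insulation, solar gains rise with glazing up to a
    # point; the linear insulation term acts as a simple cost-like penalty.
    return (120.0 / (0.05 + insulation) + 200.0 * insulation
            - 35.0 * glazing + 8.0 * glazing**2)

result = minimize(annual_energy_demand, x0=np.array([0.15, 2.0]),
                  method='Nelder-Mead',
                  options={'xatol': 1e-3, 'fatol': 1e-2, 'maxiter': 500})
print(result.x, result.fun)
```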
Ultrarelativistic electromagnetic pulses in plasmas
NASA Technical Reports Server (NTRS)
Ashour-Abdalla, M.; Leboeuf, J. N.; Tajima, T.; Dawson, J. M.; Kennel, C. F.
1981-01-01
The physical processes of a linearly polarized electromagnetic pulse of highly relativistic amplitude in an underdense plasma accelerating particles to very high energies are studied through computer simulation. An electron-positron plasma is considered first. The maximum momenta achieved scale as the square of the wave amplitude. This acceleration stops when the bulk of the wave energy is converted to particle energy. The pulse leaves behind as a wake a vacuum region whose length scales as the amplitude of the wave. The results can be explained in terms of a snow plow or piston-like action of the radiation on the plasma. When a mass ratio other than unity is chosen and electrostatic effects begin to play a role, first the ion energy increases faster than the electron energy and then the electron energy catches up later, eventually reaching the same value.
Császár, Attila G; Furtenbacher, Tibor; Árendás, Péter
2016-11-17
Quantum mechanics builds large-scale graphs (networks): the vertices are the discrete energy levels the quantum system possesses, and the edges are the (quantum-mechanically allowed) transitions. Parts of the complete quantum mechanical networks can be probed experimentally via high-resolution, energy-resolved spectroscopic techniques. The complete rovibronic line list information for a given molecule can only be obtained through sophisticated quantum-chemical computations. Experiments as well as computations yield what we call spectroscopic networks (SN). First-principles SNs of even small, three- to five-atom molecules can be huge, qualifying for the big data description. Besides helping to interpret high-resolution spectra, the network-theoretical view offers several ideas for improving the accuracy and robustness of the increasingly important information systems containing line-by-line spectroscopic data. For example, the smallest number of measurements that need to be performed to obtain the complete list of energy levels is given by the minimum-weight spanning tree of the SN, and network clustering studies may call attention to "weakest links" of a spectroscopic database. A present-day application of spectroscopic networks is within the MARVEL (Measured Active Rotational-Vibrational Energy Levels) approach, whereby the transition information of a measured SN is turned into experimental energy levels via a weighted linear least-squares refinement. MARVEL has been used successfully for 15 molecules; it has allowed most of the measured transitions to be validated and has yielded energy levels with well-defined and realistic uncertainties. Accurate knowledge of the energy levels with computed transition intensities allows the realistic prediction of spectra under many different circumstances, e.g., for widely different temperatures. Detailed knowledge of the energy level structure of a molecule coming from a MARVEL analysis is important for a considerable number of modeling efforts in chemistry, physics, and engineering.
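The minimum-weight spanning tree statement above can be made concrete with a toy spectroscopic network; the level labels and uncertainty weights below are illustrative placeholders, not real line data.

```python
import networkx as nx

# Toy spectroscopic network: nodes are energy levels, edges are measured
# transitions weighted by their (assumed) uncertainties.
transitions = [
    ("0_0", "1_1", 0.0005), ("0_0", "1_0", 0.0010), ("1_0", "2_1", 0.0008),
    ("1_1", "2_1", 0.0020), ("1_1", "2_0", 0.0015), ("2_0", "2_1", 0.0030),
]
SN = nx.Graph()
SN.add_weighted_edges_from(transitions, weight="uncertainty")

# The smallest set of measurements that still connects every level (i.e. fixes all
# energies relative to a chosen root) is a minimum-weight spanning tree of the network.
mst = nx.minimum_spanning_tree(SN, weight="uncertainty")
print(sorted(mst.edges(data="uncertainty")))
```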
A Machine Learning Framework to Forecast Wave Conditions
NASA Astrophysics Data System (ADS)
Zhang, Y.; James, S. C.; O'Donncha, F.
2017-12-01
Recently, significant effort has been undertaken to quantify and extract wave energy because it is renewable, environmentally friendly, abundant, and often close to population centers. However, a major challenge is the ability to accurately and quickly predict energy production, especially across a 48-hour cycle. Accurate forecasting of wave conditions is a challenging undertaking that typically involves solving the spectral action-balance equation on a discretized grid with high spatial resolution. The nature of the computations typically demands high-performance computing infrastructure. Using a case-study site at Monterey Bay, California, a machine learning framework was trained to replicate numerically simulated wave conditions at a fraction of the typical computational cost. Specifically, the physics-based Simulating WAves Nearshore (SWAN) model, driven by measured wave conditions, nowcast ocean currents, and wind data, was used to generate training data for machine learning algorithms. The model was run between April 1st, 2013 and May 31st, 2017, generating forecasts at three-hour intervals and yielding 11,078 distinct model outputs. SWAN-generated fields of 3,104 wave heights and a characteristic period could be replicated through simple matrix multiplications using the mapping matrices from machine learning algorithms. In fact, wave-height RMSEs from the machine learning algorithms (9 cm) were less than those for the SWAN model-verification exercise where those simulations were compared to buoy wave data within the model domain (>40 cm). The validated machine learning approach, which acts as an accurate surrogate for the SWAN model, can now be used to perform real-time forecasts of wave conditions for the next 48 hours using available forecasted boundary wave conditions, ocean currents, and winds. This solution has obvious applications to wave-energy generation as accurate wave conditions can be forecasted with over a three-order-of-magnitude reduction in computational expense. The low computational cost (and by association low computer-power requirement) means that the machine learning algorithms could be installed on a wave-energy converter as a form of "edge computing" where a device could forecast its own 48-hour energy production.
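The "mapping matrices" mentioned above can be illustrated with the simplest possible surrogate, a least-squares linear map from forcing features to the gridded wave-height field. The array sizes and synthetic data below are placeholders, and the actual framework may use a different (possibly nonlinear) learning algorithm; only the matrix-multiplication prediction step is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the training data: X holds the forcing features per forecast
# (boundary wave height/period, winds, currents), Y holds the wave-height field
# flattened over the model grid (3,104 points, as quoted in the abstract).
n_runs, n_features, n_grid = 500, 12, 3104
X = rng.normal(size=(n_runs, n_features))
true_map = rng.normal(size=(n_features, n_grid))
Y = X @ true_map + 0.05 * rng.normal(size=(n_runs, n_grid))

# Fit the mapping matrix by least squares; a prediction is then one matrix multiply.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_pred = X @ W
rmse = np.sqrt(np.mean((Y_pred - Y) ** 2))
print(f"training RMSE: {rmse:.3f}")
```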
NASA Astrophysics Data System (ADS)
Zabusky, Norman J.
2005-03-01
This paper is mostly a history of the early years of nonlinear and computational physics and mathematics. I trace how the counterintuitive result of near-recurrence to an initial condition in the first scientific digital computer simulation led to the discovery of the soliton in a later computer simulation. The 1955 report by Fermi, Pasta, and Ulam (FPU) described their simulation of a one-dimensional nonlinear lattice which did not show energy equipartition. The 1965 paper by Zabusky and Kruskal showed that the Korteweg-de Vries (KdV) nonlinear partial differential equation, a long wavelength model of the α-lattice (or cubic nonlinearity), derived by Kruskal, gave quantitatively the same results obtained by FPU. In 1967, Zabusky and Deem showed that a localized short wavelength initial excitation (then called an "optical" and now a "zone-boundary mode" excitation) of the α-lattice revealed "n-curve" coherent states. If the initial amplitude was sufficiently large, energy equipartition followed in a short time. The work of Kruskal and Miura (KM), Gardner and Greene (GG), and myself led to the appreciation of the infinity of denumerable invariants (conservation laws) for Hamiltonian systems and to a procedure by GGKM in 1967 for solving KdV exactly. The nonlinear science field exponentiated in diversity of linkages (as described in Appendix A). Included were pure and applied mathematics and all branches of basic and applied physics, including the first nonhydrodynamic application to optical solitons, as described in a brief essay (Appendix B) by Hasegawa. The growth was also manifest in the number of meetings held and institutes founded, as described briefly in Appendix D. Physicists and mathematicians in Japan, USA, and USSR (in the latter two, people associated with plasma physics) contributed to the diversification of the nonlinear paradigm which continues worldwide to the present. The last part of the paper (and Appendix C) discusses visiometrics: the visualization and quantification of simulation data, e.g., projection to lower dimensions, to facilitate understanding of nonlinear phenomena for modeling and prediction (or design). Finally, I present some recent developments that are linked to my early work by: Dritschel (vortex dynamics via contour dynamics/surgery in two and three dimensions); Friedland (pattern formation by synchronization in Hamiltonian nonlinear wave, vortex, plasma, systems, etc.); and the author ("n-curve" states and energy equipartition in a FPU lattice).
Zabusky, Norman J
2005-03-01
This paper is mostly a history of the early years of nonlinear and computational physics and mathematics. I trace how the counterintuitive result of near-recurrence to an initial condition in the first scientific digital computer simulation led to the discovery of the soliton in a later computer simulation. The 1955 report by Fermi, Pasta, and Ulam (FPU) described their simulation of a one-dimensional nonlinear lattice which did not show energy equipartition. The 1965 paper by Zabusky and Kruskal showed that the Korteweg-de Vries (KdV) nonlinear partial differential equation, a long wavelength model of the alpha-lattice (or cubic nonlinearity), derived by Kruskal, gave quantitatively the same results obtained by FPU. In 1967, Zabusky and Deem showed that a localized short wavelength initial excitation (then called an "optical" and now a "zone-boundary mode" excitation) of the alpha-lattice revealed "n-curve" coherent states. If the initial amplitude was sufficiently large, energy equipartition followed in a short time. The work of Kruskal and Miura (KM), Gardner and Greene (GG), and myself led to the appreciation of the infinity of denumerable invariants (conservation laws) for Hamiltonian systems and to a procedure by GGKM in 1967 for solving KdV exactly. The nonlinear science field exponentiated in diversity of linkages (as described in Appendix A). Included were pure and applied mathematics and all branches of basic and applied physics, including the first nonhydrodynamic application to optical solitons, as described in a brief essay (Appendix B) by Hasegawa. The growth was also manifest in the number of meetings held and institutes founded, as described briefly in Appendix D. Physicists and mathematicians in Japan, USA, and USSR (in the latter two, people associated with plasma physics) contributed to the diversification of the nonlinear paradigm which continues worldwide to the present. The last part of the paper (and Appendix C) discusses visiometrics: the visualization and quantification of simulation data, e.g., projection to lower dimensions, to facilitate understanding of nonlinear phenomena for modeling and prediction (or design). Finally, I present some recent developments that are linked to my early work by: Dritschel (vortex dynamics via contour dynamics/surgery in two and three dimensions); Friedland (pattern formation by synchronization in Hamiltonian nonlinear wave, vortex, plasma, systems, etc.); and the author ("n-curve" states and energy equipartition in a FPU lattice).
Computation of pH-Dependent Binding Free Energies
Kim, M. Olivia; McCammon, J. Andrew
2015-01-01
Protein-ligand binding is accompanied by changes in the surrounding electrostatic environments of the two binding partners and may lead to changes in protonation upon binding. In cases where the complex formation results in a net transfer of protons, the binding process is pH-dependent. However, conventional free energy computations or molecular docking protocols typically employ fixed protonation states for the titratable groups in both binding partners set a priori, which are identical for the free and bound states. In this review, we draw attention to these important yet largely ignored binding-induced protonation changes in protein-ligand association by outlining physical origins and prevalence of the protonation changes upon binding. Following a summary of various theoretical methods for pKa prediction, we discuss the theoretical framework to examine the pH dependence of protein-ligand binding processes. PMID:26202905
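The pH dependence described here can be made quantitative through the standard thermodynamic linkage relation, written below in a generic notation (not necessarily that of the authors): the slope of the binding free energy with pH is set by the net number of protons taken up on binding.

```latex
% Standard proton-linkage relation: if binding transfers a net
% \Delta n_{\mathrm{H}}(\mathrm{pH}) protons from solution to the complex, then
\frac{\partial \Delta G_{\mathrm{bind}}}{\partial\,\mathrm{pH}}
  = \ln(10)\, RT\, \Delta n_{\mathrm{H}}(\mathrm{pH}),
\qquad
\Delta G_{\mathrm{bind}}(\mathrm{pH}_2)
  = \Delta G_{\mathrm{bind}}(\mathrm{pH}_1)
  + \ln(10)\, RT \int_{\mathrm{pH}_1}^{\mathrm{pH}_2} \Delta n_{\mathrm{H}}\, d(\mathrm{pH}).
```

So a net proton uptake on binding (positive Δn_H) makes binding progressively less favorable as the pH is raised, which is why fixing protonation states a priori can bias computed binding free energies.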
NASA Technical Reports Server (NTRS)
Aston, T. W.; Fabos, J. G.; Macdougall, E. B.
1982-01-01
Adaptation and derivation were used to develop a procedure for assessing the availability of renewable energy resources on the landscape while simultaneously accounting for the relevant economic, legal, social, and environmental issues. Done in a step-by-step fashion, the procedure can be used interactively at a computer terminal. Its application in determining the hydroelectricity, biomass, and windpower potential in a 40,000 acre study area of Western Massachusetts shows that: (1) three existing dam sites are physically capable of being retrofitted for hydropower; (2) each of three general areas has a mean annual windspeed exceeding 14 mph and is conducive to windpower; and (3) 20% of the total land area consists of prime agricultural biomass land while 30% of the area is prime forest biomass land.
Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergman, Keren
Energy is the fundamental barrier to Exascale supercomputing and is dominated by the cost of moving data from one point to another, not computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The Sandia-led "Data Movement Dominates" project aimed to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through these transformational advances can future systems reach the goals of Exascale computing within a manageable power budget. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment and experimental verification for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort, toward expanding our understanding of OCM-enabled computing, we have created an integrated modeling and simulation environment that uniquely integrates the physical behavior of the optical layer. The PhoenixSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM architectures for Exascale computing systems.
Olson, Mark A; Feig, Michael; Brooks, Charles L
2008-04-15
This article examines ab initio methods for the prediction of protein loops by a computational strategy of multiscale conformational sampling and physical energy scoring functions. Our approach consists of initial sampling of loop conformations from lattice-based low-resolution models followed by refinement using all-atom simulations. To allow enhanced conformational sampling, the replica exchange method was implemented. Physical energy functions based on CHARMM19 and CHARMM22 parameterizations with generalized Born (GB) solvent models were applied in scoring loop conformations extracted from the lattice simulations and, in the case of all-atom simulations, the ensemble of conformations was generated and scored with these models. Predictions are reported for 25 loop segments, each eight residues long and taken from a diverse set of 22 protein structures. We find that the simulations generally sampled conformations with low global root-mean-square deviation (RMSD) of the loop backbone coordinates from the known structures, whereas clustering conformations in RMSD space and scoring detected less favorable loop structures. Specifically, the lattice simulations sampled basins that exhibited an average global RMSD of 2.21 +/- 1.42 A, whereas clustering and scoring the loop conformations determined an RMSD of 3.72 +/- 1.91 A. Using CHARMM19/GB to refine the lattice conformations improved the sampling RMSD to 1.57 +/- 0.98 A and detection to 2.58 +/- 1.48 A. We found that further improvement could be gained from extending the upper temperature in the all-atom refinement from 400 to 800 K, where the results typically yield a reduction of approximately 1 A or greater in the RMSD of the detected loop. Overall, CHARMM19 with a simple pairwise GB solvent model is more efficient at sampling low-RMSD loop basins than CHARMM22 with a higher-resolution modified analytical GB model; however, the latter simulation method provides a more accurate description of the all-atom energy surface, yet demands a much greater computational cost. (c) 2007 Wiley Periodicals, Inc.
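As a rough illustration of the clustering-and-scoring step described above (not the authors' actual pipeline), one might cluster sampled loop conformations by pairwise backbone RMSD and report the lowest-energy member of the most populated cluster; the coordinates and scores below are random placeholders.

```python
import numpy as np

def rmsd(a, b):
    """Backbone RMSD between two (N, 3) coordinate arrays (no superposition)."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def pick_loop(conformations, energies, cutoff=2.0):
    """Greedy RMSD clustering; return the lowest-energy member of the largest cluster."""
    unassigned = list(range(len(conformations)))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        members = [seed]
        for idx in unassigned[:]:
            if rmsd(conformations[seed], conformations[idx]) < cutoff:
                members.append(idx)
                unassigned.remove(idx)
        clusters.append(members)
    biggest = max(clusters, key=len)
    return min(biggest, key=lambda i: energies[i])

# Placeholder data: 50 random eight-residue loop backbones and scores
rng = np.random.default_rng(0)
confs = [rng.normal(size=(8 * 4, 3)) for _ in range(50)]
scores = rng.normal(size=50)
print("selected conformation index:", pick_loop(confs, scores))
```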
Advanced Computation in Plasma Physics
NASA Astrophysics Data System (ADS)
Tang, William
2001-10-01
Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically confined plasmas, with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales, together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPPs). A good example is the effective usage of the full power of multi-teraflop MPPs to produce 3-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for tens of thousands of time-steps, would not have been possible without access to powerful present-generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.
2013-01-01
Background In prior work, we presented the Ontology of Physics for Biology (OPB) as a computational ontology for use in the annotation and representations of biophysical knowledge encoded in repositories of physics-based biosimulation models. We introduced OPB:Physical entity and OPB:Physical property classes that extend available spatiotemporal representations of physical entities and processes to explicitly represent the thermodynamics and dynamics of physiological processes. Our utilitarian, long-term aim is to develop computational tools for creating and querying formalized physiological knowledge for use by multiscale “physiome” projects such as the EU’s Virtual Physiological Human (VPH) and NIH’s Virtual Physiological Rat (VPR). Results Here we describe the OPB:Physical dependency taxonomy of classes that represent the laws of classical physics that are the “rules” by which physical properties of physical entities change during occurrences of physical processes. For example, the fluid analog of Ohm’s law (as for electric currents) is used to describe how a blood flow rate depends on a blood pressure gradient. Hooke’s law (as in elastic deformations of springs) is used to describe how an increase in vascular volume increases blood pressure. We classify such dependencies according to the flow, transformation, and storage of thermodynamic energy that occurs during processes governed by the dependencies. Conclusions We have developed the OPB and annotation methods to represent the meaning—the biophysical semantics—of the mathematical statements of physiological analysis and the biophysical content of models and datasets. Here we describe and discuss our approach to an ontological representation of physical laws (as dependencies) and properties as encoded for the mathematical analysis of biophysical processes. PMID:24295137
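For illustration only (this is not the OPB encoding itself), the two dependency examples in the abstract, the fluid analog of Ohm's law and the Hooke-like pressure-volume relation, can be written as simple constitutive rules with assumed parameter values:

```python
# Fluid analog of Ohm's law: flow rate driven by a pressure gradient
def blood_flow_rate(delta_p, resistance):
    """Volumetric flow (mL/s) from pressure drop (mmHg) and resistance (mmHg*s/mL)."""
    return delta_p / resistance

# Hooke-like elastic storage: pressure rise from an increase in vascular volume
def pressure_increase(delta_v, compliance):
    """Pressure change (mmHg) from a volume change (mL) and compliance (mL/mmHg)."""
    return delta_v / compliance

# Assumed, illustrative parameter values
print(blood_flow_rate(delta_p=90.0, resistance=1.1))    # flow dependency
print(pressure_increase(delta_v=12.0, compliance=1.5))  # storage dependency
```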
NASA Astrophysics Data System (ADS)
Rahman, M. S.; Pota, H. R.; Mahmud, M. A.; Hossain, M. J.
2016-05-01
This paper presents the impact of large-scale penetration of wind power on transient stability through a dynamic evaluation of the critical clearing times (CCTs) using an intelligent agent-based approach. A decentralised multi-agent framework is developed, where agents represent a number of physical device models to form a complex infrastructure for computation and communication. They enable the dynamic flow of information and energy for the interaction between the physical processes and their activities. These agents dynamically adapt to online measurements and use the CCT information for relay coordination to improve the transient stability of power systems. Simulations are carried out on a smart microgrid system for faults at increasing wind power penetration levels, and the improvement in transient stability using the proposed agent-based framework is demonstrated.
iSEDfit: Bayesian spectral energy distribution modeling of galaxies
NASA Astrophysics Data System (ADS)
Moustakas, John
2017-08-01
iSEDfit uses Bayesian inference to extract the physical properties of galaxies from their observed broadband photometric spectral energy distribution (SED). In its default mode, the inputs to iSEDfit are the measured photometry (fluxes and corresponding inverse variances) and a measurement of the galaxy redshift. Alternatively, iSEDfit can be used to estimate photometric redshifts from the input photometry alone. After the priors have been specified, iSEDfit calculates the marginalized posterior probability distributions for the physical parameters of interest, including the stellar mass, star-formation rate, dust content, star formation history, and stellar metallicity. iSEDfit also optionally computes K-corrections and produces multiple "quality assurance" (QA) plots at each stage of the modeling procedure to aid in the interpretation of the prior parameter choices and subsequent fitting results. The software is distributed as part of the impro IDL suite.
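The core of such a Bayesian SED fit can be sketched as a grid likelihood computation: for each model SED drawn from the priors, compute chi-squared against the observed fluxes and inverse variances, then marginalize the resulting posterior weights over each physical parameter. The toy model grid and photometry below are invented for illustration; this is not the iSEDfit code itself.

```python
import numpy as np

def posterior_over_grid(obs_flux, obs_ivar, model_fluxes, prior=None):
    """Posterior weights for a grid of model SEDs given photometry.

    obs_flux, obs_ivar : (n_bands,) observed fluxes and inverse variances
    model_fluxes       : (n_models, n_bands) model photometry
    """
    chi2 = np.sum(obs_ivar * (model_fluxes - obs_flux) ** 2, axis=1)
    weights = np.exp(-0.5 * (chi2 - chi2.min()))
    if prior is not None:
        weights *= prior
    return weights / weights.sum()

# Invented 3-band photometry and a tiny model grid with associated stellar masses
obs_flux = np.array([1.0, 2.1, 3.0])
obs_ivar = np.array([25.0, 25.0, 25.0])
model_fluxes = np.array([[0.9, 2.0, 3.1], [1.4, 2.5, 2.8], [0.5, 1.0, 1.5]])
log_mstar = np.array([10.2, 10.6, 9.8])

w = posterior_over_grid(obs_flux, obs_ivar, model_fluxes)
print("posterior-mean log stellar mass:", np.sum(w * log_mstar))
```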
NASA Astrophysics Data System (ADS)
Akhlaghi, Parisa; Miri Hakimabad, Hashem; Rafat Motavalli, Laleh
2015-07-01
This paper reports on the methodology applied to select suitable tissue-equivalent materials for an 8-year-old phantom for use in computed tomography (CT) examinations. To find appropriate tissue substitutes, the physical properties (physical density, electronic density, effective atomic number, mass attenuation coefficient and CT number) of different materials were first studied. Results showed that the physical properties of water and polyurethane (as soft tissue), B-100 and polyvinyl chloride (PVC) (as bone), and polyurethane foam (as lung) agree most closely with those of the original tissues. In the next step, the absorbed doses at the locations of 25 thermoluminescent dosimeters (TLDs), as well as the dose distribution in one slice of the phantom, were calculated for the original tissues and these proposed materials by Monte Carlo simulation at different tube voltages. The comparisons suggested that, at tube voltages of 80 and 100 kVp, using B-100 as bone, water as soft tissue, and polyurethane foam as lung is suitable for dosimetric studies in pediatric CT examinations. In addition, it was concluded that by considering just the mass attenuation coefficients of different materials, appropriate tissue-equivalent substitutes could be found for each desired X-ray energy range.
High-Productivity Computing in Computational Physics Education
NASA Astrophysics Data System (ADS)
Tel-Zur, Guy
2011-03-01
We describe the development of a new course in Computational Physics at Ben-Gurion University. This elective course for 3rd-year undergraduates and MSc students is taught during one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also address High-Productivity Computing. The traditional approach to teaching Computational Physics emphasizes ``Correctness'' and then ``Accuracy''; we add ``Performance.'' Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to ``Mini-Courses'' on topics such as: High-Throughput Computing (Condor), Parallel Programming (MPI and OpenMP), How to Build a Beowulf Cluster, Visualization, and Grid and Cloud Computing. The course is not intended to teach new physics or new mathematics; rather, it focuses on an integrated approach to problem solving, starting from the physics problem and proceeding through the corresponding mathematical solution, the numerical scheme, writing efficient computer code, and finally analysis and visualization.
RAPPORT: running scientific high-performance computing applications on the cloud.
Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt
2013-01-28
Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
Computational Physics for Space Flight Applications
NASA Technical Reports Server (NTRS)
Reed, Robert A.
2004-01-01
This paper presents viewgraphs on computational physics for space flight applications. The topics include: 1) Introduction to space radiation effects in microelectronics; 2) Using applied physics to help NASA meet mission objectives; 3) Example of applied computational physics; and 4) Future directions in applied computational physics.
Final Report: High Energy Physics at the Energy Frontier at Louisiana Tech
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sawyer, Lee; Wobisch, Markus; Greenwood, Zeno D.
The Louisiana Tech University High Energy Physics group has developed a research program aimed at experimentally testing the Standard Model of particle physics and searching for new phenomena through a focused set of analyses in collaboration with the ATLAS experiment at the Large Hadron Collider (LHC) at the CERN laboratory in Geneva. This research program includes involvement in the current operation and maintenance of the ATLAS experiment and full involvement in Phase 1 and Phase 2 upgrades in preparation for future high luminosity (HL-LHC) operation of the LHC. Our focus is solely on the ATLAS experiment at the LHC, with some related detector development and software efforts. We have established important service roles on ATLAS in five major areas: Triggers, especially jet triggers; Data Quality monitoring; grid computing; GPU applications for upgrades; and radiation testing for upgrades. Our physics research is focused on multijet measurements and top quark physics in final states containing tau leptons, which we propose to extend into related searches for new phenomena. Focusing on closely related topics in the jet and top analyses and coordinating these analyses in our group has led to high efficiency and increased visibility inside the ATLAS collaboration and beyond. Based on our work in the DØ experiment in Run II of the Fermilab Tevatron Collider, Louisiana Tech has developed a reputation as one of the leading institutions pursuing jet physics studies. Currently we are applying this expertise to the ATLAS experiment, with several multijet analyses in progress.
NASA Astrophysics Data System (ADS)
Ercan, İlke; Suyabatmaz, Enes
2018-06-01
The saturation in the efficiency and performance scaling of conventional electronic technologies brings about the development of novel computational paradigms. Brownian circuits are among the promising alternatives that can exploit fluctuations to increase the efficiency of information processing in nanocomputing. A Brownian cellular automaton, where signals propagate randomly and are driven by local transition rules, can be made computationally universal by embedding arbitrary asynchronous circuits on it. One of the potential realizations of such circuits is via single-electron tunneling (SET) devices, since SET technology enables simulation of noise and fluctuations in a fashion similar to Brownian search. In this paper, we perform a physical-information-theoretic analysis of the efficiency limitations of Brownian NAND and half-adder circuits implemented using SET technology. The method employed here establishes solid ground for studying the computational and physical features of this emerging technology on an equal footing, and yields fundamental lower bounds that provide valuable insight into how far its efficiency can be improved in principle. To provide a basis for comparison, we also analyze a NAND gate and a half-adder circuit implemented in complementary metal-oxide-semiconductor technology to show how the fundamental bound of the Brownian circuit compares against a conventional paradigm.
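The simplest example of the kind of fundamental lower bound referred to above is Landauer's limit: each bit of logical information irreversibly lost costs at least k_B T ln 2 of dissipated energy. A back-of-the-envelope sketch (not the paper's full physical-information-theoretic analysis) for a NAND gate and a half adder with uniformly random inputs:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy_bits(probs):
    """Shannon entropy (bits) of a discrete output distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

def landauer_bound(bits_erased, temperature=300.0):
    """Minimum dissipation (J) for erasing `bits_erased` bits at `temperature` (K)."""
    return bits_erased * K_B * temperature * math.log(2)

# Uniformly random 2-bit inputs carry 2 bits of entropy.
# NAND output: 1 with probability 3/4, 0 with probability 1/4.
nand_lost = 2.0 - entropy_bits([0.75, 0.25])
# Half adder (sum, carry): (1,0) w.p. 1/2, (0,0) w.p. 1/4, (0,1) w.p. 1/4.
half_adder_lost = 2.0 - entropy_bits([0.5, 0.25, 0.25])

print("NAND:       %.2f bits lost, >= %.2e J" % (nand_lost, landauer_bound(nand_lost)))
print("half adder: %.2f bits lost, >= %.2e J" % (half_adder_lost, landauer_bound(half_adder_lost)))
```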
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan; Piro, Markus H.A.
Thermochimica is a software library that determines a unique combination of phases and their compositions at thermochemical equilibrium. Thermochimica can be used for stand-alone calculations or it can be directly coupled to other codes. This release of the software does not have a graphical user interface (GUI) and it can be executed from the command line or from an Application Programming Interface (API). Also, it is not intended for thermodynamic model development or for constructing phase diagrams. The main purpose of the software is to be directly coupled with a multi-physics code to provide material properties and boundary conditions for various physical phenomena. Significant research efforts have been dedicated to enhance computational performance through advanced algorithm development, such as improved estimation techniques and non-linear solvers. Various useful parameters can be provided as output from Thermochimica, such as: determination of which phases are stable at equilibrium, the mass of solution species and phases at equilibrium, mole fractions of solution phase constituents, thermochemical activities (which are related to partial pressures for gaseous species), chemical potentials of solution species and phases, and integral Gibbs energy (referenced relative to standard state). The overall goal is to provide an open source computational tool to enhance the predictive capability of multi-physics codes without significantly impeding computational performance.
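The equilibrium problem such a library solves is, at its core, minimization of the total Gibbs energy subject to element-balance constraints. A heavily simplified sketch for an ideal-gas mixture (this is not the Thermochimica API; the species and standard chemical potentials are placeholders) could use a generic constrained optimizer:

```python
import numpy as np
from scipy.optimize import minimize

R = 8.314   # J/(mol*K)
T = 1500.0  # K

# Placeholder species: H2, O2, H2O with made-up standard chemical potentials (J/mol)
mu0 = np.array([-250e3, -280e3, -600e3])
# Element balance matrix (rows: H, O; columns: H2, O2, H2O) and total element moles
A = np.array([[2.0, 0.0, 2.0],
              [0.0, 2.0, 1.0]])
b = np.array([2.0, 1.0])

def gibbs(n):
    """Total Gibbs energy of an ideal mixture with mole numbers n."""
    n = np.clip(n, 1e-12, None)
    x = n / n.sum()
    return np.sum(n * (mu0 + R * T * np.log(x)))

res = minimize(gibbs, x0=np.array([0.5, 0.25, 0.5]),
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               bounds=[(1e-12, None)] * 3, method="SLSQP")
print("equilibrium moles (H2, O2, H2O):", res.x)
```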
NASA Astrophysics Data System (ADS)
Robinson, Tyler D.; Crisp, David
2018-05-01
Solar and thermal radiation are critical aspects of planetary climate, with gradients in radiative energy fluxes driving heating and cooling. Climate models require that radiative transfer tools be versatile, computationally efficient, and accurate. Here, we describe a technique that uses an accurate full-physics radiative transfer model to generate a set of atmospheric radiative quantities which can be used to linearly adapt radiative flux profiles to changes in the atmospheric and surface state-the Linearized Flux Evolution (LiFE) approach. These radiative quantities describe how each model layer in a plane-parallel atmosphere reflects and transmits light, as well as how the layer generates diffuse radiation by thermal emission and by scattering light from the direct solar beam. By computing derivatives of these layer radiative properties with respect to dynamic elements of the atmospheric state, we can then efficiently adapt the flux profiles computed by the full-physics model to new atmospheric states. We validate the LiFE approach, and then apply this approach to Mars, Earth, and Venus, demonstrating the information contained in the layer radiative properties and their derivatives, as well as how the LiFE approach can be used to determine the thermal structure of radiative and radiative-convective equilibrium states in one-dimensional atmospheric models.
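The key operation described above, adapting a reference flux profile to a perturbed atmospheric state using precomputed derivatives, amounts to a first-order Taylor update. A schematic sketch (with invented array shapes and state variables, not the actual LiFE data structures):

```python
import numpy as np

def adapt_fluxes(flux_ref, jacobians, state_ref, state_new):
    """First-order update of layer flux profiles to a new atmospheric state.

    flux_ref  : (n_levels,) reference net fluxes from the full-physics model
    jacobians : dict mapping state-variable name -> (n_levels, n_layers) dF/dx
    state_ref, state_new : dicts mapping the same names -> (n_layers,) profiles
    """
    flux = flux_ref.copy()
    for name, dF_dx in jacobians.items():
        flux += dF_dx @ (state_new[name] - state_ref[name])
    return flux

# Invented example: 4 layers / 5 levels, temperature perturbation only
rng = np.random.default_rng(1)
flux_ref = rng.normal(240.0, 5.0, size=5)
jac = {"temperature": rng.normal(0.0, 0.5, size=(5, 4))}
t_ref = np.full(4, 250.0)
t_new = t_ref + np.array([1.0, 0.5, 0.0, -0.5])
print(adapt_fluxes(flux_ref, jac, {"temperature": t_ref}, {"temperature": t_new}))
```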
Sornborger, Andrew Tyler; Stancil, Phillip; Geller, Michael R.
2018-03-22
Here, one of the most promising applications of an error-corrected universal quantum computer is the efficient simulation of complex quantum systems such as large molecular systems. In this application, one is interested in both the electronic structure such as the ground state energy and dynamical properties such as the scattering cross section and chemical reaction rates. However, most theoretical work and experimental demonstrations have focused on the quantum computation of energies and energy surfaces. In this work, we attempt to make the prethreshold (not error-corrected) quantum simulation of dynamical properties practical as well. We show that the use of precomputed potential energy surfaces and couplings enables the gate-based simulation of few-channel but otherwise realistic molecular collisions. Our approach is based on the widely used Born–Oppenheimer approximation for the structure problem coupled with a semiclassical method for the dynamics. In the latter the electrons are treated quantum mechanically but the nuclei are classical, which restricts the collisions to high energy or temperature (typically above ≈10 eV). By using operator splitting techniques optimized for the resulting time-dependent Hamiltonian simulation problem, we give several physically realistic collision examples, with 3–8 channels and circuit depths < 1000.
Kuś, Tomasz; Krylov, Anna I
2011-08-28
The charge-stabilization method is applied to double ionization potential equation-of-motion (EOM-DIP) calculations to stabilize unstable dianion reference functions. The auto-ionizing character of the dianionic reference states spoils the numeric performance of EOM-DIP, limiting applications of this method. We demonstrate that reliable excitation energies can be computed by EOM-DIP using a stabilized resonance wave function instead of the lowest energy solution corresponding to the neutral + free electron(s) state of the system. The details of the charge-stabilization procedure are discussed and illustrated by examples. The choice of the optimal stabilizing Coulomb potential, which is strong enough to stabilize the dianion reference, yet minimally perturbs the target states of the neutral, is the crux of the approach. Two algorithms for choosing optimal parameters of the stabilization potential are presented. One is based on the orbital energies, and the other on the basis-set dependence of the total Hartree-Fock energy of the reference. Our benchmark calculations of the singlet-triplet energy gaps in several diradicals show a remarkable improvement of the EOM-DIP accuracy in problematic cases. Overall, the excitation energies in diradicals computed using the stabilized EOM-DIP are within 0.2 eV of the reference EOM spin-flip values. © 2011 American Institute of Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sornborger, Andrew Tyler; Stancil, Phillip; Geller, Michael R.
Here, one of the most promising applications of an error-corrected universal quantum computer is the efficient simulation of complex quantum systems such as large molecular systems. In this application, one is interested in both the electronic structure such as the ground state energy and dynamical properties such as the scattering cross section and chemical reaction rates. However, most theoretical work and experimental demonstrations have focused on the quantum computation of energies and energy surfaces. In this work, we attempt to make the prethreshold (not error-corrected) quantum simulation of dynamical properties practical as well. We show that the use of precomputed potential energy surfaces and couplings enables the gate-based simulation of few-channel but otherwise realistic molecular collisions. Our approach is based on the widely used Born–Oppenheimer approximation for the structure problem coupled with a semiclassical method for the dynamics. In the latter the electrons are treated quantum mechanically but the nuclei are classical, which restricts the collisions to high energy or temperature (typically above ≈10 eV). By using operator splitting techniques optimized for the resulting time-dependent Hamiltonian simulation problem, we give several physically realistic collision examples, with 3–8 channels and circuit depths < 1000.
NASA Astrophysics Data System (ADS)
Sornborger, Andrew T.; Stancil, Phillip; Geller, Michael R.
2018-05-01
One of the most promising applications of an error-corrected universal quantum computer is the efficient simulation of complex quantum systems such as large molecular systems. In this application, one is interested in both the electronic structure such as the ground state energy and dynamical properties such as the scattering cross section and chemical reaction rates. However, most theoretical work and experimental demonstrations have focused on the quantum computation of energies and energy surfaces. In this work, we attempt to make the prethreshold (not error-corrected) quantum simulation of dynamical properties practical as well. We show that the use of precomputed potential energy surfaces and couplings enables the gate-based simulation of few-channel but otherwise realistic molecular collisions. Our approach is based on the widely used Born-Oppenheimer approximation for the structure problem coupled with a semiclassical method for the dynamics. In the latter the electrons are treated quantum mechanically but the nuclei are classical, which restricts the collisions to high energy or temperature (typically above ≈ 10 eV). By using operator splitting techniques optimized for the resulting time-dependent Hamiltonian simulation problem, we give several physically realistic collision examples, with 3-8 channels and circuit depths < 1000.
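The operator-splitting idea mentioned above can be illustrated classically (simulated on an ordinary computer rather than realized as quantum gates) for a two-channel collision model: propagate the channel amplitudes with a symmetric Trotter step, splitting the Hamiltonian into its diagonal (channel energies) and off-diagonal (coupling) parts. The Hamiltonian parameters below are placeholders.

```python
import numpy as np
from scipy.linalg import expm

def trotter_step(psi, h_diag, h_coupling, dt):
    """Symmetric (Strang) splitting: exp(-i D dt/2) exp(-i C dt) exp(-i D dt/2)."""
    half_diag = np.exp(-0.5j * dt * h_diag)        # diagonal part exponentiates elementwise
    coupling = expm(-1j * dt * h_coupling)         # small off-diagonal coupling block
    return half_diag * (coupling @ (half_diag * psi))

# Placeholder two-channel model: channel energies and a constant coupling
h_diag = np.array([0.0, 0.3])                      # diagonal channel energies
h_coupling = np.array([[0.0, 0.05], [0.05, 0.0]])  # off-diagonal coupling
psi = np.array([1.0 + 0j, 0.0 + 0j])               # start in channel 0

dt, n_steps = 0.1, 500
for _ in range(n_steps):
    psi = trotter_step(psi, h_diag, h_coupling, dt)
print("channel populations:", np.abs(psi) ** 2)
```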
Learning physics in a water park
NASA Astrophysics Data System (ADS)
Cabeza, Cecilia; Rubido, Nicolás; Martí, Arturo C.
2014-03-01
Entertaining and educational experiments that can be conducted in a water park, illustrating physics concepts, principles and fundamental laws, are described. These experiments are suitable for students ranging from senior secondary school to junior university level. Newton’s laws of motion, Bernoulli’s equation, based on the conservation of energy, buoyancy, linear and non-linear wave propagation, turbulence, thermodynamics, optics and cosmology are among the topics that can be discussed. Commonly available devices like smartphones, digital cameras, laptop computers and tablets can be used conveniently to enable accurate calculation and a greater degree of engagement on the part of students.
Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production
NASA Astrophysics Data System (ADS)
Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne
2018-05-01
A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high-energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering key related processes using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed at various conditions and the results were compared with those from full GEANT4 simulations. The computation time using this fast simulation method is 10^4 times shorter than that of the full GEANT4 simulation.
Spectral-Lagrangian methods for collisional models of non-equilibrium statistical states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamba, Irene M.; Tharkabhushanam, Sri Harsha
We propose a new spectral Lagrangian based deterministic solver for the non-linear Boltzmann transport equation (BTE) in d dimensions for variable hard sphere (VHS) collision kernels with conservative or non-conservative binary interactions. The method is based on symmetries of the Fourier transform of the collision integral, where the complexity in its computation is reduced to a separate integral over the unit sphere S^(d-1). The conservation of moments is enforced by Lagrangian constraints. The resulting scheme, implemented in free space, is very versatile and adjusts in a very simple manner to several cases that involve energy dissipation due to local micro-reversibility (inelastic interactions) or elastic models of slowing down process. Our simulations are benchmarked with available exact self-similar solutions, exact moment equations and analytical estimates for the homogeneous Boltzmann equation, both for elastic and inelastic VHS interactions. Benchmarking of the simulations involves the selection of a time self-similar rescaling of the numerical distribution function which is performed using the continuous spectrum of the equation for Maxwell molecules as studied first in Bobylev et al. [A.V. Bobylev, C. Cercignani, G. Toscani, Proof of an asymptotic property of self-similar solutions of the Boltzmann equation for granular materials, Journal of Statistical Physics 111 (2003) 403-417] and generalized to a wide range of related models in Bobylev et al. [A.V. Bobylev, C. Cercignani, I.M. Gamba, On the self-similar asymptotics for generalized non-linear kinetic Maxwell models, Communication in Mathematical Physics, in press. URL: (
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-01-01
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-04-05
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.
NASA Astrophysics Data System (ADS)
Mishra, Rohini
Present ultra-high-power lasers are capable of producing high-energy-density (HED) plasmas, in a controlled way, with a density greater than solid density and at temperatures of order keV (1 keV ≈ 11,000,000 K). Matter in such extreme states is particularly interesting for HED physics, such as laboratory studies of planetary and stellar astrophysics, laser fusion research, and pulsed neutron sources. To date, however, the physics of HED plasmas, especially the energy transport, which is crucial to realizing applications, has not been well understood. Intense laser-produced plasmas are complex systems involving two widely distinct temperature distributions and are difficult to model with a single approach. Both kinetic and collisional processes are equally important for understanding the entire laser-solid interaction. Implementing atomic physics models, such as collisions, ionization, and radiation damping, self-consistently in the state-of-the-art particle-in-cell code PICLS has enabled exploration of the physics involved in HED plasmas. Laser absorption, hot-electron transport, and isochoric heating physics in laser-produced hot dense plasmas are studied with the help of PICLS simulations. In particular, a novel mode of electron acceleration, namely DC-ponderomotive acceleration, is identified in the super-intense laser regime, which plays an important role in the coupling of laser energy to a dense plasma. Geometric effects on hot-electron transport and target heating processes are examined in reduced-mass target experiments. Further, pertinent to fast ignition, laser-accelerated fast-electron divergence and transport in experiments using warm dense matter (low-temperature plasma) are characterized and explained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, J.; Herner, K.; Jayatilaka, B.
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and DØ experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
Pressure profiles of the BRing based on the simulation used in the CSRm
NASA Astrophysics Data System (ADS)
Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.
2017-07-01
HIAF-BRing, a new multipurpose accelerator facility of the High Intensity heavy-ion Accelerator Facility project, requires an extremely high vacuum, lower than 10^-11 mbar, to fulfill the requirements of radioactive beam physics and high energy density physics. To achieve the required process pressure, the benchmarked codes VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. In order to ensure the accuracy of the implementation of VAKTRAK, the computational results are verified against measured pressure data and compared with those of a new simulation code, BOLIDE, on the current synchrotron CSRm. With VAKTRAK thus verified, the pressure profiles of the BRing are calculated with different parameters such as conductance, out-gassing rates and pumping speeds. According to the computational results, the optimal parameters are selected to achieve the required pressure for the BRing.
NASA Astrophysics Data System (ADS)
Lewis, Ray A.; Modanese, Giovanni
Vibrating media offer an important testing ground for reconciling conflicts between General Relativity, Quantum Mechanics and other branches of physics. For sources like a Weber bar, the standard covariant formalism for elastic bodies can be applied. The vibrating string, however, is a source of gravitational waves which requires novel computational techniques, based on the explicit construction of a conserved and renormalized energy-momentum tensor. Renormalization (in a classical sense) is necessary to take into account the effect of external constraints, which affect the emission considerably. Our computation also relaxes usual simplifying assumptions like far-field approximation, spherical or plane wave symmetry, TT gauge and absence of internal interference. In a further step towards unification, the method is then adapted to give the radiation field of a transversal Alfven wave in a rarefied astrophysical plasma, where the tension is produced by an external static magnetic field.
Residential Solar Power and the Physics Teacher
NASA Astrophysics Data System (ADS)
Carpenter, David
2007-10-01
The roof of my house sports one of the largest residential photovoltaic arrays in Ohio. It produces all of the electricity for my house and family of four. With state and federal incentives, it cost less to install than the price of a new car. It will pay for itself within the warranty period. A picture of my house with solar panels is the background on my classroom computer. I am the physics teacher at Hayes High School in Delaware, Ohio. I don't need a formal curriculum. Sooner or later my students start asking questions. They even ask the exact same questions that adults do. The inverter for my PV system sends performance data to my computer. I post this on my website, which takes it into my classroom. This sparks conversation on a whole variety of topics, from sun angles to energy, electricity, technology and climate studies.
Data preservation at the Fermilab Tevatron
Boyd, J.; Herner, K.; Jayatilaka, B.; ...
2015-12-23
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and DØ experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
NASA Astrophysics Data System (ADS)
Hvizdoš, Dávid; Váňa, Martin; Houfek, Karel; Greene, Chris H.; Rescigno, Thomas N.; McCurdy, C. William; Čurík, Roman
2018-02-01
We present a simple two-dimensional model of the indirect dissociative recombination process. The model has one electronic and one nuclear degree of freedom and it can be solved to high precision, without making any physically motivated approximations, by employing the exterior complex scaling method together with the finite-element method and the discrete variable representation. The approach is applied to solve a model for dissociative recombination of H2+ in the singlet ungerade channels, and the results serve as a benchmark to test the validity of several physical approximations commonly used in the computational modeling of dissociative recombination for real molecular targets. The second, approximate, set of calculations employs a combination of multichannel quantum defect theory and frame transformation into a basis of Siegert pseudostates. The cross sections computed with the two methods are compared in detail for collision energies from 0 to 2 eV.
Data preservation at the Fermilab Tevatron
NASA Astrophysics Data System (ADS)
Boyd, J.; Herner, K.; Jayatilaka, B.; Roser, R.; Sakumoto, W.
2015-12-01
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and DØ experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. These efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1979-01-01
Lumped volume dynamic equations are derived using an energy state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1979-01-01
Lumped volume dynamic equations are derived using an energy-state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is also formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free-piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.
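Written out, the formulation described in these two abstracts reduces to Lagrange's equation augmented with a Rayleigh dissipation term; in generalized coordinates q_i it takes the standard form (given here for reference, not reproduced from the report itself):

```latex
L = T(q,\dot{q}) - V(q), \qquad
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right)
  - \frac{\partial L}{\partial q_i}
  + \frac{\partial \mathcal{R}}{\partial \dot{q}_i} = Q_i ,
```

where T and V are the kinetic and potential energy state functions, R is the Rayleigh dissipation function accounting for losses, and Q_i are the generalized external forces acting on the lumped volume.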
Life sciences and environmental sciences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-02-01
The DOE laboratories play a unique role in bringing multidisciplinary talents -- in biology, physics, chemistry, computer sciences, and engineering -- to bear on major problems in the life and environmental sciences. Specifically, the laboratories utilize these talents to fulfill OHER's mission of exploring and mitigating the health and environmental effects of energy use, and of developing health and medical applications of nuclear energy-related phenomena. At Lawrence Berkeley Laboratory (LBL) support of this mission is evident across the spectrum of OHER-sponsored research, especially in the broad areas of genomics, structural biology, basic cell and molecular biology, carcinogenesis, energy and environment, applications to biotechnology, and molecular, nuclear and radiation medicine. These research areas are briefly described.
NASA Astrophysics Data System (ADS)
Fauzi, Ahmad
2017-11-01
Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, helps students learn through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges. The main challenges are a dense curriculum, which makes it difficult to add a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to examine how to integrate numerical computation into the undergraduate physics education curriculum. The participants were 54 fourth-semester students in the physics education department. We concluded that numerical computation can be integrated into the undergraduate physics education curriculum using Excel spreadsheets combined with another course. The results of this research complement studies on how to integrate numerical computation into physics learning using Excel spreadsheets.
NASA Astrophysics Data System (ADS)
Slatyer, Tracy R.
2016-01-01
Any injection of electromagnetically interacting particles during the cosmic dark ages will lead to increased ionization, heating, production of Lyman-α photons and distortions to the energy spectrum of the cosmic microwave background, with potentially observable consequences. In this paper we describe numerical results for the low-energy electrons and photons produced by the cooling of particles injected at energies from keV to multi-TeV scales, at arbitrary injection redshifts (but focusing on the post-recombination epoch). We use these data, combined with existing calculations modeling the cooling of these low-energy particles, to estimate the resulting contributions to ionization, excitation and heating of the gas, and production of low-energy photons below the threshold for excitation and ionization. We compute corrected deposition-efficiency curves for annihilating dark matter, and demonstrate how to compute equivalent curves for arbitrary energy-injection histories. These calculations provide the necessary inputs for the limits on dark matter annihilation presented in the accompanying paper I, but also have potential applications in the context of dark matter decay or deexcitation, decay of other metastable species, or similar energy injections from new physics. We make our full results publicly available at http://nebel.rc.fas.harvard.edu/epsilon, to facilitate further independent studies. In particular, we provide the full low-energy electron and photon spectra, to allow matching onto more detailed codes that describe the cooling of such particles at low energies.
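Schematically, a deposition-efficiency curve for an arbitrary injection history is obtained by convolving the injected spectrum at each redshift with precomputed transfer functions giving the fraction of that energy deposited at later redshifts. A minimal sketch of that bookkeeping (with made-up array shapes and random placeholder values, not the released data products):

```python
import numpy as np

def deposition_rate(transfer, injection_spec, injection_rate):
    """Energy deposition rate per deposition redshift.

    transfer       : (n_inj_z, n_energy, n_dep_z) fraction of energy injected
                     at (z_inj, E) that is deposited at z_dep
    injection_spec : (n_inj_z, n_energy) injected energy spectrum
    injection_rate : (n_inj_z,) normalization of the injection history
    """
    weighted = injection_rate[:, None, None] * injection_spec[:, :, None] * transfer
    return weighted.sum(axis=(0, 1))  # -> (n_dep_z,)

# Made-up dimensions: 10 injection redshifts, 5 energy bins, 10 deposition redshifts
rng = np.random.default_rng(2)
T = rng.uniform(0.0, 0.1, size=(10, 5, 10))
spec = rng.uniform(size=(10, 5))
rate = np.ones(10)
print(deposition_rate(T, spec, rate))
```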
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek
Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.
Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek; ...
2017-04-24
Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.
Modeling and Simulation of Explosively Driven Electromechanical Devices
NASA Astrophysics Data System (ADS)
Demmie, Paul N.
2002-07-01
Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munro, J.K. Jr.
1980-05-01
The advent of large, fast computers has opened the way to modeling more complex physical processes and to handling very large quantities of experimental data. The amount of information that can be processed in a short period of time is so great that use of graphical displays assumes greater importance as a means of displaying this information. Information from dynamical processes can be displayed conveniently by use of animated graphics. This guide presents the basic techniques for generating black and white animated graphics, with consideration of aesthetic, mechanical, and computational problems. The guide is intended for use by someone who wants to make movies on the National Magnetic Fusion Energy Computing Center (NMFECC) CDC-7600. Problems encountered by a geographically remote user are given particular attention. Detailed information is given that will allow a remote user to do some file checking and diagnosis before giving graphics files to the system for processing into film in order to spot problems without having to wait for film to be delivered. Source listings of some useful software are given in appendices along with descriptions of how to use it. 3 figures, 5 tables.
Index to NASA Tech Briefs, 1974
NASA Technical Reports Server (NTRS)
1975-01-01
The following information was given for 1974: (1) abstracts of reports dealing with new technology derived from the research and development activities of NASA or the U.S. Atomic Energy Commission, arranged by subjects: electronics/electrical, electronics/electrical systems, physical sciences, materials/chemistry, life sciences, mechanics, machines, equipment and tools, fabrication technology, and computer programs, (2) indexes for the above documents: subject, personal author, originating center.
Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.
Dash, Tirtharaj; Sahu, Prabhat K
2015-05-30
The adaptation of novel techniques developed in the field of computational chemistry to problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the nearest local minimum, is reported. Its efficiency as a metaheuristic approach is also compared with that of Gradient Tabu Search and other global optimization algorithms such as Gravitational Search, Cuckoo Search, and Backtracking Search. Moreover, the GGS approach has been applied to computational chemistry problems, namely finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models at efficient computational cost. © 2015 Wiley Periodicals, Inc.
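A stripped-down sketch of the general idea, a gravitational-search population move followed by a gradient-based slide to the nearest local minimum, is given below for a smooth test function; it illustrates the scheme described above, not the authors' implementation, and the function, population size, and constants are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Smooth, mildly multimodal test function standing in for a potential energy."""
    return np.sum(x ** 2) + 0.5 * np.sum(np.sin(3.0 * x) ** 2)

def ggs_step(positions, velocities, g=1.0, eps=1e-9):
    """One gravitational-search move followed by local gradient refinement."""
    fitness = np.array([objective(p) for p in positions])
    masses = fitness.max() - fitness + eps
    masses /= masses.sum()
    new_positions = []
    for i, p in enumerate(positions):
        force = np.zeros_like(p)
        for j, q in enumerate(positions):
            if i != j:
                r = np.linalg.norm(q - p) + eps
                force += np.random.rand() * g * masses[j] * (q - p) / r
        velocities[i] = np.random.rand() * velocities[i] + force
        moved = p + velocities[i]
        # gradient-based descent to the nearest local minimum (the "gradient" in GGS)
        new_positions.append(minimize(objective, moved, method="BFGS").x)
    return np.array(new_positions), velocities

rng = np.random.default_rng(3)
pos = rng.uniform(-2.0, 2.0, size=(6, 3))
vel = np.zeros_like(pos)
for _ in range(5):
    pos, vel = ggs_step(pos, vel)
print("best value found:", min(objective(p) for p in pos))
```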
NASA Astrophysics Data System (ADS)
Bogdanov, Alexander; Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Yulia
2018-02-01
The stages of direct computational experiments in hydromechanics based on tensor mathematics are represented by conditionally independent mathematical models, with the calculations separated according to the physical processes involved. The continual stage of numerical modeling is constructed on a small time interval in a stationary grid space, where continuity conditions and energy conservation are enforced. At the subsequent corpuscular stage of the computational experiment, the kinematic parameters of the mass centers and the surface stresses at the boundaries of the grid cells are used to model the free unsteady motion of the volume cells, which are treated as independent particles. These particles can undergo vortex and discontinuous interactions when free boundaries and internal rheological states are restructured. The transition from one stage to the other is provided by the interpolation operations of tensor mathematics. This interpolation environment formalizes the use of physical laws in modeling the mechanics of continuous media and provides control of the rheological state and of the conditions for the existence of discontinuous solutions: rigid and free boundaries, vortex layers, and their turbulent or empirical generalizations.
The performance of low-cost commercial cloud computing as an alternative in computational chemistry.
Thackston, Russell; Fortenberry, Ryan C
2015-05-05
The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost-effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best handled by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
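A back-of-the-envelope version of the cost trade-off this study examines, with purely hypothetical prices, runtimes, and hardware costs (the actual instance types and wall times are those reported in the paper, not these placeholders).

```python
# Hypothetical numbers for illustration only.
jobs_per_year = 2000              # single-point energy computations per year
hours_per_job = 3.0               # assumed wall time per job on one node
cloud_rate = 0.40                 # assumed on-demand $/hour for a small instance
workstation_cost = 6000.0         # assumed purchase price of an in-house machine
workstation_lifetime_years = 4
power_and_admin_per_year = 500.0  # assumed electricity + maintenance

cloud_cost_per_year = jobs_per_year * hours_per_job * cloud_rate
inhouse_cost_per_year = workstation_cost / workstation_lifetime_years + power_and_admin_per_year

print(f"cloud:    ${cloud_cost_per_year:8.2f}/year")
print(f"in-house: ${inhouse_cost_per_year:8.2f}/year")
# Small or bursty workloads tend to favor the cloud; sustained large workloads
# amortize the in-house hardware and favor local machines.
```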
Gomez, Luis J; Goetz, Stefan M; Peterchev, Angel V
2018-08-01
Transcranial magnetic stimulation (TMS) is a noninvasive brain stimulation technique used for research and clinical applications. Existent TMS coils are limited in their precision of spatial targeting (focality), especially for deeper targets. This paper presents a methodology for designing TMS coils to achieve optimal trade-off between the depth and focality of the induced electric field (E-field), as well as the energy required by the coil. A multi-objective optimization technique is used for computationally designing TMS coils that achieve optimal trade-offs between E-field focality, depth, and energy (fdTMS coils). The fdTMS coil winding(s) maximize focality (minimize the volume of the brain region with E-field above a given threshold) while reaching a target at a specified depth and not exceeding predefined peak E-field strength and required coil energy. Spherical and MRI-derived head models are used to compute the fundamental depth-focality trade-off as well as focality-energy trade-offs for specific target depths. Across stimulation target depths of 1.0-3.4 cm from the brain surface, the suprathreshold volume can be theoretically decreased by 42%-55% compared to existing TMS coil designs. The suprathreshold volume of a figure-8 coil can be decreased by 36%, 44%, or 46%, for matched, doubled, or quadrupled energy. For matched focality and energy, the depth of a figure-8 coil can be increased by 22%. Computational design of TMS coils could enable more selective targeting of the induced E-field. The presented results appear to be the first significant advancement in the depth-focality trade-off of TMS coils since the introduction of the figure-8 coil three decades ago, and likely represent the fundamental physical limit.
Center for Extended Magnetohydrodynamic Modeling Cooperative Agreement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carl R. Sovinec
The Center for Extended Magnetohydrodynamic Modeling (CEMM) is developing computer simulation models for predicting the behavior of magnetically confined plasmas. Over the first phase of support from the Department of Energy’s Scientific Discovery through Advanced Computing (SciDAC) initiative, the focus has been on macroscopic dynamics that alter the confinement properties of magnetic field configurations. The ultimate objective is to provide computational capabilities to predict plasma behavior—not unlike computational weather prediction—to optimize performance and to increase the reliability of magnetic confinement for fusion energy. Numerical modeling aids theoretical research by solving complicated mathematical models of plasma behavior including strong nonlinear effects and the influences of geometrical shaping of actual experiments. The numerical modeling itself remains an area of active research, due to challenges associated with simulating multiple temporal and spatial scales. The research summarized in this report spans computational and physical topics associated with state-of-the-art simulation of magnetized plasmas. The tasks performed for this grant are categorized according to whether they are primarily computational, algorithmic, or application-oriented in nature. All involve the development and use of the Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion (NIMROD) code, which is described at http://nimrodteam.org. With respect to computation, we have tested and refined methods for solving the large algebraic systems of equations that result from our numerical approximations of the physical model. Collaboration with the Terascale Optimal PDE Solvers (TOPS) SciDAC center led us to the SuperLU_DIST software library [http://crd.lbl.gov/~xiaoye/SuperLU/] for solving large sparse matrices using direct methods on parallel computers. Switching to this solver library boosted NIMROD’s performance by a factor of five in typical large nonlinear simulations, which has been publicized as a success story of SciDAC-fostered collaboration. Furthermore, the SuperLU software does not assume any mathematical symmetry, and its generality provides an important capability for extending the physical model beyond magnetohydrodynamics (MHD). With respect to algorithmic and model development, our most significant accomplishment is the development of a new method for solving plasma models that treat electrons as an independent plasma component. These ‘two-fluid’ models encompass MHD and add temporal and spatial scales that are beyond the response of the ion species. Implementation and testing of a previously published algorithm did not prove successful for NIMROD, and the new algorithm has since been devised, analyzed, and implemented. Two-fluid modeling, an important objective of the original NIMROD project, is now routine in 2D applications. Algorithmic components for 3D modeling are in place and tested, though further computational work is still needed for efficiency. Other algorithmic work extends the ion-fluid stress tensor to include models for parallel and gyroviscous stresses. In addition, our hot-particle simulation capability received important refinements that permitted completion of a benchmark with the M3D code. A highlight of our applications work is the edge-localized mode (ELM) modeling, which was part of the first-ever computational Performance Target for the DOE Office of Fusion Energy Science, see http://www.science.doe.gov/ofes/performancetargets.shtml.
Our efforts allowed MHD simulations to progress late into the nonlinear stage, where energy is conducted to the wall location. They also produced a two-fluid ELM simulation starting from experimental information and demonstrating critical drift effects that are characteristic of two-fluid physics. Another important application is the internal kink mode in a tokamak. Here, the primary purpose of the study has been to benchmark the two main code development lines of CEMM, NIMROD and M3D, on a relevant nonlinear problem. Results from the two codes show repeating nonlinear relaxation events driven by the kink mode over quantitatively comparable timescales. The work has launched a more comprehensive nonlinear benchmarking exercise, where realistic transport effects have an important role.
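The SuperLU_DIST library mentioned in the report above is a distributed-memory direct solver for large sparse, generally nonsymmetric systems. As a small serial illustration of the same idea, SciPy's `splu` wraps the serial SuperLU library; the sketch below factorizes a toy nonsymmetric advection-diffusion matrix and reuses the factorization for a solve. This is a stand-in example, not NIMROD's actual solver interface.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Build a small nonsymmetric sparse matrix (1D advection-diffusion stencil).
n = 1000
diff = 1.0          # diffusion coefficient
adv = 0.5           # advection coefficient (breaks symmetry)
main = 2.0 * diff * np.ones(n)
lower = (-diff - adv) * np.ones(n - 1)
upper = (-diff + adv) * np.ones(n - 1)
A = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")

b = np.ones(n)

lu = splu(A)        # sparse LU factorization (serial SuperLU under the hood)
x = lu.solve(b)     # the factorization can be reused for many right-hand sides

print("residual norm:", np.linalg.norm(A @ x - b))
```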
Material model for physically based rendering
NASA Astrophysics Data System (ADS)
Robart, Mathieu; Paulin, Mathias; Caubet, Rene
1999-09-01
In computer graphics, a complete knowledge of the interactions between light and a material is essential to obtain photorealistic pictures. Physical measurements allow us to obtain data on the material response, but are limited to industrial surfaces and depend on measurement conditions. Analytic models do exist, but they are often inadequate for common use: the empirical ones are too simple to be realistic, and the physically based ones are often too complex or too specialized to be generally useful. Therefore, we have developed a multiresolution virtual material model that describes not only the surface of a material but also its internal structure, using distribution functions of microelements arranged in layers. Each microelement possesses its own response to incident light, from an elementary reflection to a complex response provided by its inner structure, taking into account the geometry, energy, polarization, etc., of each light ray. This model is virtually illuminated in order to compute its response to an incident radiance. This directional response is stored in a compressed data structure using spherical wavelets and is intended to be used in a rendering model such as directional radiosity.
T.D.S. spectroscopic databank for spherical tops: DOS version
NASA Astrophysics Data System (ADS)
Tyuterev, V. G.; Babikov, Yu. L.; Tashkun, S. A.; Perevalov, V. I.; Nikitin, A.; Champion, J.-P.; Wenger, C.; Pierre, C.; Pierre, G.; Hilico, J.-C.; Loete, M.
1994-10-01
T.D.S. (Traitement de Donnees Spectroscopiques or Tomsk-Dijon-Spectroscopy project) is a computer package concerned with high resolution spectroscopy of spherical top molecules like CH4, CF4, SiH4, SiF4, SnH4, GeH4, SF6, etc. T.D.S. contains fundamental spectroscopic data (energies, transition moments, spectroscopic constants) recovered from comprehensive modeling and simultaneous fitting of experimental spectra, together with associated software written in C. The goal of T.D.S. is to provide access to all available information on vibration-rotation molecular states and transitions, including various spectroscopic processes (Stark, Raman, etc.) under extended conditions based on extrapolations of laboratory measurements using validated theoretical models. Applications for T.D.S. may include: education/training in molecular physics, quantum chemistry, laser physics; spectroscopic applications (analysis, laser spectroscopy, atmospheric optics, optical standards, spectroscopic atlases); applications to environment studies and atmospheric physics (remote sensing); data supply for specific databases; and photochemistry (laser excitation, multiphoton processes). The reported DOS version is designed for IBM and compatible personal computers.
PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP 2010)
NASA Astrophysics Data System (ADS)
Lin, Simon C.; Shen, Stella; Neufeld, Niko; Gutsche, Oliver; Cattaneo, Marco; Fisk, Ian; Panzer-Steindel, Bernd; Di Meglio, Alberto; Lokajicek, Milos
2011-12-01
The International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held at Academia Sinica in Taipei from 18-22 October 2010. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing progress and needs for the community, and to review recent, ongoing and future activities. CHEP conferences are held at roughly 18-month intervals, alternating between Europe, Asia, America and other parts of the world. Recent CHEP conferences have been held in Prague, Czech Republic (2009); Victoria, Canada (2007); Mumbai, India (2006); Interlaken, Switzerland (2004); San Diego, California (2003); Beijing, China (2001); Padova, Italy (2000). CHEP 2010 was organized by the Academia Sinica Grid Computing Centre. There was an International Advisory Committee (IAC) setting the overall themes of the conference, a Programme Committee (PC) responsible for the content, as well as a Conference Secretariat responsible for the conference infrastructure. There were over 500 attendees with a program that included plenary sessions of invited speakers, a number of parallel sessions comprising around 260 oral and 200 poster presentations, and industrial exhibitions. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing, Event Processing, Software Engineering, Data Stores, and Databases, Distributed Processing and Analysis, Computing Fabrics and Networking Technologies, Grid and Cloud Middleware, and Collaborative Tools. The conference included excursions to various attractions in Northern Taiwan, including Sanhsia Tsu Shih Temple, Yingko, Chiufen Village, the Northeast Coast National Scenic Area, Keelung, Yehliu Geopark, and Wulai Aboriginal Village, as well as two banquets held at the Grand Hotel and Grand Formosa Regent in Taipei. The next CHEP conference will be held in New York, United States, on 21-25 May 2012. We would like to thank the National Science Council of Taiwan, the EU ACEOLE project, commercial sponsors, and the International Advisory Committee and the Programme Committee members for all their support and help. Special thanks to the Programme Committee members for their careful choice of conference contributions and enormous effort in reviewing and editing about 340 post-conference proceedings papers.
Simon C Lin CHEP 2010 Conference Chair and Proceedings Editor Taipei, Taiwan November 2011 Track Editors/ Programme Committee Chair Simon C Lin, Academia Sinica, Taiwan Online Computing Track Y H Chang, National Central University, Taiwan Harry Cheung, Fermilab, USA Niko Neufeld, CERN, Switzerland Event Processing Track Fabio Cossutti, INFN Trieste, Italy Oliver Gutsche, Fermilab, USA Ryosuke Itoh, KEK, Japan Software Engineering, Data Stores, and Databases Track Marco Cattaneo, CERN, Switzerland Gang Chen, Chinese Academy of Sciences, China Stefan Roiser, CERN, Switzerland Distributed Processing and Analysis Track Kai-Feng Chen, National Taiwan University, Taiwan Ulrik Egede, Imperial College London, UK Ian Fisk, Fermilab, USA Fons Rademakers, CERN, Switzerland Torre Wenaus, BNL, USA Computing Fabrics and Networking Technologies Track Harvey Newman, Caltech, USA Bernd Panzer-Steindel, CERN, Switzerland Antonio Wong, BNL, USA Ian Fisk, Fermilab, USA Niko Neufeld, CERN, Switzerland Grid and Cloud Middleware Track Alberto Di Meglio, CERN, Switzerland Markus Schulz, CERN, Switzerland Collaborative Tools Track Joao Correia Fernandes, CERN, Switzerland Philippe Galvez, Caltech, USA Milos Lokajicek, FZU Prague, Czech Republic International Advisory Committee Chair: Simon C. Lin , Academia Sinica, Taiwan Members: Mohammad Al-Turany , FAIR, Germany Sunanda Banerjee, Fermilab, USA Dario Barberis, CERN & Genoa University/INFN, Switzerland Lothar Bauerdick, Fermilab, USA Ian Bird, CERN, Switzerland Amber Boehnlein, US Department of Energy, USA Kors Bos, CERN, Switzerland Federico Carminati, CERN, Switzerland Philippe Charpentier, CERN, Switzerland Gang Chen, Institute of High Energy Physics, China Peter Clarke, University of Edinburgh, UK Michael Ernst, Brookhaven National Laboratory, USA David Foster, CERN, Switzerland Merino Gonzalo, CIEMAT, Spain John Gordon, STFC-RAL, UK Volker Guelzow, Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany John Harvey, CERN, Switzerland Frederic Hemmer, CERN, Switzerland Hafeez Hoorani, NCP, Pakistan Viatcheslav Ilyin, Moscow State University, Russia Matthias Kasemann, DESY, Germany Nobuhiko Katayama, KEK, Japan Milos Lokajícek, FZU Prague, Czech Republic David Malon, ANL, USA Pere Mato Vila, CERN, Switzerland Mirco Mazzucato, INFN CNAF, Italy Richard Mount, SLAC, USA Harvey Newman, Caltech, USA Mitsuaki Nozaki, KEK, Japan Farid Ould-Saada, University of Oslo, Norway Ruth Pordes, Fermilab, USA Hiroshi Sakamoto, The University of Tokyo, Japan Alberto Santoro, UERJ, Brazil Jim Shank, Boston University, USA Alan Silverman, CERN, Switzerland Randy Sobie , University of Victoria, Canada Dongchul Son, Kyungpook National University, South Korea Reda Tafirout , TRIUMF, Canada Victoria White, Fermilab, USA Guy Wormser, LAL, France Frank Wuerthwein, UCSD, USA Charles Young, SLAC, USA
Northrop, Paul W. C.; Pathak, Manan; Rife, Derek; ...
2015-03-09
Lithium-ion batteries are an important technology to facilitate efficient energy storage and enable a shift from petroleum-based energy to more environmentally benign sources. Such systems can be utilized most efficiently if good understanding of performance can be achieved for a range of operating conditions. Mathematical models can be useful to predict battery behavior to allow for optimization of design and control. An analytical solution is ideally preferred to solve the equations of a mathematical model, as it eliminates the error that arises when using numerical techniques and is usually computationally cheap. An analytical solution provides insight into the behavior of the system and also explicitly shows the effects of different parameters on the behavior. However, most engineering models, including the majority of battery models, cannot be solved analytically due to non-linearities in the equations and state-dependent transport and kinetic parameters. The numerical method used to solve the system of equations describing a battery operation can have a significant impact on the computational cost of the simulation. In this paper, a reformulation of the porous electrode pseudo-three-dimensional (P3D) model which significantly reduces the computational cost of lithium-ion battery simulation, while maintaining high accuracy, is discussed. This reformulation enables the use of the P3D model in applications that would otherwise be too computationally expensive to justify its use, such as online control, optimization, and parameter estimation. Furthermore, the P3D model has proven to be robust enough to allow for the inclusion of additional physical phenomena as understanding improves. In this study, the reformulated model is used to allow for more complicated physical phenomena to be considered for study, including thermal effects.
Towards prediction of correlated material properties using quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Wagner, Lucas
Correlated electron systems offer a richness of physics far beyond noninteracting systems. If we would like to pursue the dream of designer correlated materials, or, even to set a more modest goal, to explain in detail the properties and effective physics of known materials, then accurate simulation methods are required. Using modern computational resources, quantum Monte Carlo (QMC) techniques offer a way to directly simulate electron correlations. I will show some recent results on a few extremely challenging materials including the metal-insulator transition of VO2, the ground state of the doped cuprates, and the pressure dependence of magnetic properties in FeSe. By using a relatively simple implementation of QMC, at least some properties of these materials can be described truly from first principles, without any adjustable parameters. Using the QMC platform, we have developed a way of systematically deriving effective lattice models from the simulation. This procedure is particularly attractive for correlated electron systems because the QMC methods treat the one-body and many-body components of the wave function and Hamiltonian on completely equal footing. I will show some examples of using this downfolding technique and the high accuracy of QMC to connect our intuitive ideas about interacting electron systems with high fidelity simulations. The work in this presentation was supported in part by NSF DMR 1206242, the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number FG02-12ER46875, and the Center for Emergent Superconductivity, Department of Energy Frontier Research Center under Grant No. DEAC0298CH1088. Computing resources were provided by a Blue Waters Illinois grant and INCITE PhotSuper and SuperMatSim allocations.
Radiation Physics for Space and High Altitude Air Travel
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Wilson, J. W.; Goldhagen, P.; Saganti, P.; Shavers, M. R.; McKay, Gordon A. (Technical Monitor)
2000-01-01
Galactic cosmic rays (GCR) are of extra-solar origin consisting of high-energy hydrogen, helium, and heavy ions. The GCR are modified by physical processes as they traverse through the solar system, spacecraft shielding, atmospheres, and tissues producing copious amounts of secondary radiation including fragmentation products, neutrons, mesons, and muons. We discuss physical models and measurements relevant for estimating biological risks in space and high-altitude air travel. Ambient and internal spacecraft computational models for the International Space Station and a Mars mission are discussed. Risk assessment is traditionally based on linear addition of components. We discuss alternative models that include stochastic treatments of columnar damage by heavy ion tracks and multi-cellular damage following nuclear fragmentation in tissue.
Multi-physics optimization of three-dimensional microvascular polymeric components
NASA Astrophysics Data System (ADS)
Aragón, Alejandro M.; Saksena, Rajat; Kozola, Brian D.; Geubelle, Philippe H.; Christensen, Kenneth T.; White, Scott R.
2013-01-01
This work discusses the computational design of microvascular polymeric materials, which aim at mimicking the behavior found in some living organisms that contain a vascular system. The optimization of the topology of the embedded three-dimensional microvascular network is carried out by coupling a multi-objective constrained genetic algorithm with a finite-element based physics solver, the latter validated through experiments. The optimization is carried out on multiple conflicting objective functions, namely the void volume fraction left by the network, the energy required to drive the fluid through the network and the maximum temperature when the material is subjected to thermal loads. The methodology presented in this work results in a viable alternative for the multi-physics optimization of these materials for active-cooling applications.
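A minimal sketch of the multi-objective selection step underlying such a genetic algorithm: candidate network designs are compared by Pareto dominance on two of the conflicting objectives (here labeled pumping energy and peak temperature). The design encoding and the "physics" evaluation below are placeholders; in the actual methodology the objectives come from a finite-element solver coupled to the GA.

```python
import random

def evaluate(design):
    """Placeholder physics: returns (pumping_energy, peak_temperature).
    In the real workflow these would come from a finite-element solver."""
    x = sum(design) / len(design)
    return (x, 1.0 / (x + 0.1))   # deliberately conflicting objectives

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both minimized)."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def pareto_front(population):
    objs = [evaluate(d) for d in population]
    front = []
    for i, oi in enumerate(objs):
        if not any(dominates(oj, oi) for j, oj in enumerate(objs) if j != i):
            front.append((population[i], oi))
    return front

random.seed(1)
population = [[random.random() for _ in range(8)] for _ in range(50)]  # toy design encodings
for design, (energy, temperature) in pareto_front(population):
    print(f"energy={energy:.3f}  peak_T={temperature:.3f}")
```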
On the Violence of High Explosive Reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarver, C M; Chidester, S K
High explosive reactions can be caused by three general energy deposition processes: impact ignition by frictional and/or shear heating; bulk thermal heating; and shock compression. The violence of the subsequent reaction varies from benign slow combustion to catastrophic detonation of the entire charge. The degree of violence depends on many variables, including the rate of energy delivery, the physical and chemical properties of the explosive, and the strength of the confinement surrounding the explosive charge. The current state of experimental and computer modeling research on the violence of impact, thermal, and shock-induced reactions is reviewed.
Paesani, Francesco
2016-09-20
The central role played by water in fundamental processes relevant to different disciplines, including chemistry, physics, biology, materials science, geology, and climate research, cannot be overemphasized. It is thus not surprising that, since the pioneering work by Stillinger and Rahman, many theoretical and computational studies have attempted to develop a microscopic description of the unique properties of water under different thermodynamic conditions. Consequently, numerous molecular models based on either molecular mechanics or ab initio approaches have been proposed over the years. However, despite continued progress, the correct prediction of the properties of water from small gas-phase clusters to the liquid phase and ice through a single molecular model remains challenging. To large extent, this is due to the difficulties encountered in the accurate modeling of the underlying hydrogen-bond network in which both number and strength of the hydrogen bonds vary continuously as a result of a subtle interplay between energetic, entropic, and nuclear quantum effects. In the past decade, the development of efficient algorithms for correlated electronic structure calculations of small molecular complexes, accompanied by tremendous progress in the analytical representation of multidimensional potential energy surfaces, opened the doors to the design of highly accurate potential energy functions built upon rigorous representations of the many-body expansion (MBE) of the interaction energies. This Account provides a critical overview of the performance of the MB-pol many-body potential energy function through a systematic analysis of energetic, structural, thermodynamic, and dynamical properties as well as of vibrational spectra of water from the gas to the condensed phase. It is shown that MB-pol achieves unprecedented accuracy across all phases of water through a quantitative description of each individual term of the MBE, with a physically correct representation of both short- and long-range many-body contributions. Comparisons with experimental data probing different regions of the water potential energy surface from clusters to bulk demonstrate that MB-pol represents a major step toward the long-sought-after "universal model" capable of accurately describing the molecular properties of water under different conditions and in different environments. Along this path, future challenges include the extension of the many-body scheme adopted by MB-pol to the description of generic solutes as well as the integration of MB-pol in an efficient theoretical and computational framework to model acid-base reactions in aqueous environments. In this context, given the nontraditional form of the MB-pol energy and force expressions, synergistic efforts by theoretical/computational chemists/physicists and computer scientists will be critical for the development of high-performance software for many-body molecular dynamics simulations.
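For reference, the many-body expansion (MBE) that MB-pol is built upon writes the total energy of N water molecules as a sum of one-body, two-body, three-body, ... contributions; this is the standard form of the expansion, reproduced here only for clarity:

E(1,2,\dots,N) = \sum_{i=1}^{N} E^{(1)}(i) + \sum_{i<j} \Delta E^{(2)}(i,j) + \sum_{i<j<k} \Delta E^{(3)}(i,j,k) + \cdots

where \Delta E^{(2)} and \Delta E^{(3)} are the two- and three-body interaction energies obtained by subtracting the corresponding lower-order terms.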
The Important Role of Physics in Industry and Economic Development
NASA Astrophysics Data System (ADS)
Alvarado, Igor
2012-10-01
Good Physics requires good education. Good education translates into good Physics professionals. The process starts early with Science, Technology, Engineering and Mathematics (STEM) education programs for Middle and High-School students. Then it continues with competitive higher education programs (2-year and 4-year) at colleges and universities designed to satisfy the needs of industry and academia. The research work conducted by graduate students in Physics (and Engineering Physics) frequently translates into new discoveries and innovations that have a direct impact on society (e.g., proton cancer therapy). Some of the largest scientific experiments in the world today are physics-centered (e.g., the Large Hadron Collider, LHC) and generate employment and business opportunities for thousands of scientists, academic research groups and companies from around the world. New superconducting magnets and advanced materials that have resulted from previous research in physics are commonly used in these extreme experiments. But not all physicists will end up working at these large high-energy physics experiments, universities or National Laboratories (e.g. Fermilab); industry requires new generations of (industrial) physicists in such sectors as semiconductors, energy, space, life sciences, defense and advanced manufacturing. This work presents an industry perspective about the role of Physics in economic development and the need for a collaborative Academic-Industry approach for more effective translational research. A series of examples will be presented with emphasis on the measurement, control, diagnostics and computing capabilities needed to translate the science (physics) into innovations and practical solutions that can benefit society as a whole.
The energy performance of thermochromic glazing
NASA Astrophysics Data System (ADS)
Diamantouros, Pavlos
This study investigated the energy performance of thermochromic glazing. This was done by simulating a model of a small building in the EnergyPlus whole-building energy simulation program (U.S. DOE). The physical attributes of the thermochromic samples examined came from actual laboratory samples fabricated in UCL's Department of Chemistry (Prof I. P. Parkin). It was found that they can substantially reduce cooling loads while requiring the same heating loads as high-end low-e double glazing. The reductions in annual cooling energy required were in the 20%-40% range, depending on sample, location and building layout. A series of sensitivity analyses showed the importance of the switching temperature and the emissivity factor in the performance of the glazing. Finally, an ideal pane was designed to explore the limits of this technology.
A Hierarchical Approach to Fracture Mechanics
NASA Technical Reports Server (NTRS)
Saether, Erik; Taasan, Shlomo
2004-01-01
Recent research conducted under NASA LaRC's Creativity and Innovation Program has led to the development of an initial approach to hierarchical fracture mechanics. This methodology unites failure mechanisms occurring at different length scales and provides a framework for a physics-based theory of fracture. At the nanoscale, parametric molecular dynamics simulations are used to compute the energy associated with atomic-level failure mechanisms. This information is used in a mesoscale percolation model of defect coalescence to obtain statistics of fracture paths and energies through Monte Carlo simulations. The mathematical structure of predicted crack paths is described using concepts of fractal geometry. The non-integer fractal dimension relates geometric and energy measures between meso- and macroscales. For illustration, a fractal-based continuum strain energy release rate is derived for inter- and transgranular fracture in polycrystalline metals.
Moubarac, Jean-Claude; Receveur, Olivier; Cargo, Margaret; Daniel, Mark
2014-02-01
The present study describes the consumption patterns of sweetened food and drink products in a Catholic Middle Eastern Canadian community and examines their associations with physical activity, sedentary behaviours and BMI. A two-stage cross-sectional design was used. In Stage 1 (n = 42), 24 h recalls enabled the identification of sweetened products. In Stage 2 (n = 192), an FFQ was administered to measure the daily consumption of these products and to collect sociodemographic and behavioural data. Sweetened products were defined as processed culinary ingredients and ultra-processed products for which total sugar content exceeded 20% of total energy. Three Catholic Middle Eastern churches located in Montreal, Canada. Normoglycaemic men and women (18-60 years old). Twenty-six sweetened products represented an average consumption of 75·4 g total sugars/d or 15·1% of daily energy intake (n = 190, 56% women). Soft drinks, juices, sweetened coffee, chocolate, cookies, cakes and muffins were the main sources of consumption and mostly consumed between meals. Age (exp (β) = 0·99; P < 0·01), physical activity (exp (β) = 1·08; P < 0·01) and recreational computer use (exp (β) = 1·17; P < 0·01) were independently associated with sweetened product consumption. The association between sweetened product consumption and physical activity was U-shaped. BMI was not significantly associated with sweetened product consumption, but all participants regardless of BMI were above the WHO recommendation for free sugars. Being physically active and spending less time using a computer may favour a reduced consumption of sweetened products. Very active individuals may, however, overconsume such products.
Fusion plasma theory project summaries
NASA Astrophysics Data System (ADS)
1993-10-01
This Project Summary book is a published compilation consisting of short descriptions of each project supported by the Fusion Plasma Theory and Computing Group of the Advanced Physics and Technology Division of the Department of Energy, Office of Fusion Energy. The summaries contained in this volume were written by the individual contractors with minimal editing by the Office of Fusion Energy. Previous summaries were published in February of 1982 and December of 1987. The Plasma Theory program is responsible for the development of concepts and models that describe and predict the behavior of a magnetically confined plasma. Emphasis is given to the modelling and understanding of the processes controlling transport of energy and particles in a toroidal plasma and supporting the design of the International Thermonuclear Experimental Reactor (ITER). A tokamak transport initiative was begun in 1989 to improve understanding of how energy and particles are lost from the plasma by mechanisms that transport them across field lines. The Plasma Theory program has actively participated in this initiative. Recently, increased attention has been given to issues of importance to the proposed Tokamak Physics Experiment (TPX). Particular attention has been paid to containment and thermalization of fast alpha particles produced in a burning fusion plasma as well as control of sawteeth, current drive, impurity control, and design of improved auxiliary heating. In addition, general models of plasma behavior are developed from physics features common to different confinement geometries. This work uses both analytical and numerical techniques. The Fusion Theory program supports research projects at U.S. government laboratories, universities and industrial contractors. Its support of theoretical work at universities contributes to the office of Fusion Energy mission of training scientific manpower for the U.S. Fusion Energy Program.
The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system
Zerkin, V. V.; Pritychenko, B.
2018-02-04
The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ~22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented in this paper. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. Finally, it is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.
The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system
NASA Astrophysics Data System (ADS)
Zerkin, V. V.; Pritychenko, B.
2018-04-01
The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ∼22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacon, Charles; Bell, Greg; Canon, Shane
The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.
Fast adaptive flat-histogram ensemble to enhance the sampling in large systems
NASA Astrophysics Data System (ADS)
Xu, Shun; Zhou, Xin; Jiang, Yi; Wang, YanTing
2015-09-01
An efficient novel algorithm was developed to estimate the density of states (DOS) of large systems by calculating the ensemble means of an extensive physical variable, such as the potential energy U, in generalized canonical ensembles in order to interpolate the interior inverse-temperature curve β(U) = ∂S(U)/∂U, where S(U) is the logarithm of the DOS. This curve is computed with different accuracies in different energy regions to capture the dependence of the inverse temperature on U without imposing a prior grid in U space. By combining this with a U-compression transformation, we decrease the computational complexity from O(N^{3/2}) in the standard Wang-Landau-type method to O(N^{1/2}) in the current algorithm, where N is the number of degrees of freedom of the system. The efficiency of the algorithm is demonstrated by applying it to Lennard-Jones fluids with various N, along with its ability to find different macroscopic states, including metastable states.
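A minimal numerical illustration of the underlying idea, assuming a toy one-dimensional harmonic system sampled at several inverse temperatures: each canonical mean energy gives a point on the β(U) curve, which can then be interpolated and integrated to recover S(U) up to a constant. This is a generic thermodynamic-integration sketch, not the authors' generalized-ensemble implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mean_energy(beta, n_samples=200_000):
    """Canonical sampling of U = x^2/2 for a 1D harmonic 'system':
    x is Gaussian with variance 1/beta, so <U> = 1/(2*beta) exactly."""
    x = rng.normal(0.0, 1.0 / np.sqrt(beta), n_samples)
    return np.mean(0.5 * x ** 2)

betas = np.linspace(0.5, 5.0, 10)
mean_U = np.array([sample_mean_energy(b) for b in betas])

# Each (mean_U, beta) pair is a point on the inverse-temperature curve beta(U).
order = np.argsort(mean_U)
U_grid, beta_of_U = mean_U[order], betas[order]

# Integrate beta(U) over U (trapezoidal rule) to obtain S(U) up to an additive constant.
S = np.concatenate(([0.0],
                    np.cumsum(0.5 * (beta_of_U[1:] + beta_of_U[:-1]) * np.diff(U_grid))))

for u, b, s in zip(U_grid, beta_of_U, S):
    print(f"U = {u:6.3f}   beta(U) = {b:5.2f}   S(U) - S(U0) = {s:7.4f}")
```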
Unified interatomic potential and energy barrier distributions for amorphous oxides.
Trinastic, J P; Hamdan, R; Wu, Y; Zhang, L; Cheng, Hai-Ping
2013-10-21
Amorphous tantala, titania, and hafnia are important oxides for biomedical implants, optics, and gate insulators. Understanding the effects of oxide doping is crucial to optimize performance in these applications. However, no molecular dynamics potentials have been created to date that combine these and other oxides that would allow computational analyses of doping-dependent structural and mechanical properties. We report a novel set of computationally efficient, two-body potentials modeling van der Waals and covalent interactions that reproduce the structural and elastic properties of both pure and doped amorphous oxides. In addition, we demonstrate that the potential accurately produces energy barrier distributions for pure and doped samples. The distributions can be directly compared to experiment and used to calculate physical quantities such as internal friction to understand how doping affects material properties. Future analyses using these potentials will be of great value to determine optimal doping concentrations and material combinations for myriad material science applications.
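As an illustration of what a computationally cheap two-body form looks like, the sketch below evaluates a generic Coulomb-Buckingham pair potential (Coulomb plus Born-Mayer repulsion plus a van der Waals term) on a grid of separations. The functional form is a common choice for oxide glasses, but the parameters here are arbitrary placeholders, not the fitted values of the published potential.

```python
import numpy as np

def pair_potential(r, q1, q2, A, rho, C):
    """Generic two-body form: Coulomb + Born-Mayer repulsion + van der Waals term.
    V(r) = q1*q2/r + A*exp(-r/rho) - C/r**6   (energies in arbitrary units)."""
    return q1 * q2 / r + A * np.exp(-r / rho) - C / r ** 6

# Placeholder parameters for an O-metal-like pair; NOT the published fit.
r = np.linspace(1.5, 6.0, 10)           # separations in angstrom
V = pair_potential(r, q1=-1.2, q2=2.4, A=2000.0, rho=0.30, C=30.0)

for ri, vi in zip(r, V):
    print(f"r = {ri:4.2f} A   V(r) = {vi:10.3f}")
```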
NASA Astrophysics Data System (ADS)
Cheok, Adrian David
This chapter details the Human Pacman system to illuminate entertainment computing, which ventures to embed the natural physical world seamlessly with a fantasy virtual playground by capitalizing on infrastructure provided by mobile computing, wireless LAN, and ubiquitous computing. With Human Pacman, we have a physical role-playing computer fantasy together with real human-social and mobile gaming that emphasizes collaboration and competition among players in a wide outdoor physical area that allows natural wide-area human-physical movements. Pacmen and Ghosts are now real human players in the real world experiencing mixed computer graphics fantasy-reality provided by the wearable computers on them. Virtual cookies and actual tangible physical objects are incorporated into the game play to provide novel experiences of seamless transitions between the real and virtual worlds. This is an example of a new form of gaming that is anchored in physicality, mobility, social interaction, and ubiquitous computing.
Ulijaszek, Stanley J; Koziel, Slawomir
2007-12-01
After the economic transition of the late 1980s and early 1990s there was a rapid increase in overweight and obesity in many countries of Eastern Europe. This article describes changing availability of dietary energy from major dietary components since the transition to free-market economic systems among Eastern European nations, using food balance data obtained at national level for the years 1990-92 and 2005 from the FAOSTAT-Nutrition database. Dietary energy available to the East European nations satellite to the former Soviet Union (henceforth, Eastern Europe) was greater than in the nations of the former Soviet Union. Among the latter, the Western nations of the former Soviet Union had greater dietary energy availability than the Eastern and Southern nations of the former Soviet Union. The higher energy availability in Eastern Europe relative to the nations of the former Soviet Union consists mostly of high-protein foods. There has been no significant change in overall dietary energy availability to any category of East European nation between 1990-1992 and 2005, indicating that, at the macro-level, increasing rates of obesity in Eastern European countries cannot be attributed to increased dietary energy availability. The most plausible macro-level explanations for the obesity patterns observed in East European nations are declines in physical activity, increased real income, and increased consumption of goods that contribute to physical activity decline: cars, televisions and computers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Daniel J.; Lee, Choonsik; Tien, Christopher
2013-01-15
Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code, MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans, and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms. Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT examinations on the Siemens SOMATOM Sensation 16 scanner.
Publications of LASL research, 1975
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerr, A.K.
1976-09-01
This bibliography lists unclassified 1975 publications of work done at the Los Alamos Scientific Laboratory and those earlier publications that were received too late for inclusion in earlier compilations. Papers published in 1975 are included regardless of when they were actually written. Declassification of previously classified reports is considered to constitute publication. All classified issuances are omitted. The bibliography includes Los Alamos Scientific Laboratory reports, papers released as non-Los Alamos reports, journal articles, books, chapters of books, conference papers (whether published separately or as part of conference proceedings issued as books or reports), papers published in congressional hearings, theses, and U.S. Patents. Publications by LASL authors which are not records of Laboratory-sponsored work are included when the Library becomes aware of them. The entries are arranged in sections by the following broad subject categories: aerospace studies; analytical technology; astrophysics; atomic and molecular physics, equation of state, opacity; biology and medicine; chemical dynamics and kinetics; chemistry; cryogenics; crystallography; CTR and plasma physics; earth science and engineering; energy (nonnuclear); engineering and equipment; EPR, ESR, NMR studies; explosives and detonations; fission physics; health and safety; hydrodynamics and radiation transport; instruments; lasers; mathematics and computers; medium-energy physics; metallurgy and ceramics technology; neutronics and criticality studies; nuclear physics; nuclear safeguards; physics; reactor technology; solid state science; and miscellaneous (including Project Rover). Author, numerical, and KWIC indexes are included. (RWR)
Design and evaluation of a hybrid storage system in HEP environment
NASA Astrophysics Data System (ADS)
Xu, Qi; Cheng, Yaodong; Chen, Gang
2017-10-01
Nowadays, High Energy Physics experiments produce a large amount of data. These data are stored in mass storage systems, which need to balance cost, performance and manageability. In this paper, a hybrid storage system including SSDs (Solid-State Drives) and HDDs (Hard Disk Drives) is designed to accelerate data analysis while maintaining low cost. The performance of file access is a decisive factor for the HEP computing system. A new deployment model of the hybrid storage system in High Energy Physics is proposed and shown to have higher I/O performance. The detailed evaluation methods, together with evaluations of the SSD/HDD ratio and the logical block size, are also given. In all evaluations, sequential read, sequential write, random read and random write are tested to obtain comprehensive results. The results show that the hybrid storage system performs well in workloads such as accessing large files in HEP.
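A simple analytical sanity check of how the SSD/HDD mix affects average access performance, under the naive assumption that a fixed fraction of reads is served from the SSD tier; the throughput numbers and hit rates below are illustrative, not the measured values from the paper's evaluation.

```python
# Illustrative device characteristics (MB/s); not measured values.
ssd_seq_read, hdd_seq_read = 500.0, 150.0
ssd_rand_read, hdd_rand_read = 300.0, 2.0

def effective_throughput(hit_rate, ssd_rate, hdd_rate):
    """Harmonic-mean model: time per MB weighted by where the data lives."""
    time_per_mb = hit_rate / ssd_rate + (1.0 - hit_rate) / hdd_rate
    return 1.0 / time_per_mb

for hit_rate in (0.0, 0.25, 0.5, 0.75, 0.9):
    seq = effective_throughput(hit_rate, ssd_seq_read, hdd_seq_read)
    rand = effective_throughput(hit_rate, ssd_rand_read, hdd_rand_read)
    print(f"SSD hit rate {hit_rate:4.2f}:  sequential {seq:7.1f} MB/s   random {rand:6.1f} MB/s")
# Random reads benefit far more from the SSD tier than sequential reads, which is
# why hot, randomly accessed files are the natural candidates for SSD placement.
```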
High-fidelity plasma codes for burn physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooley, James; Graziani, Frank; Marinak, Marty
Accurate predictions of equation of state (EOS), ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential role they play in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.
Accelerating the Design of Solar Thermal Fuel Materials through High Throughput Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Grossman, JC
2014-12-01
Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.
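A schematic of the screening loop described above, with a placeholder table of energies standing in for the ab initio calculations; the candidate names, energies, and enthalpy threshold are all hypothetical and serve only to show the structure of a high-throughput filter.

```python
# Hypothetical candidate set with placeholder energies (eV); a real workflow
# would obtain these from ab initio calculations of each isomer.
candidates = {
    # name: (ground-state energy, metastable-isomer energy)
    "azobenzene-like-A":    (0.00, 0.60),
    "azobenzene-like-B":    (0.00, 1.10),
    "norbornadiene-like-C": (0.00, 0.95),
    "fulvalene-like-D":     (0.00, 1.85),
}

threshold_ev = 1.0   # assumed minimum isomerization enthalpy to count as a hit

hits = []
for name, (e_ground, e_meta) in candidates.items():
    delta_h = e_meta - e_ground          # stored energy per molecule (isomerization enthalpy)
    if delta_h >= threshold_ev:
        hits.append((name, delta_h))

for name, dh in sorted(hits, key=lambda t: -t[1]):
    print(f"{name:22s}  stores ~{dh:.2f} eV per molecule (placeholder value)")
```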
Energy conservation using face detection
NASA Astrophysics Data System (ADS)
Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.
2011-10-01
Computerized face detection is concerned with the difficult task of locating human faces in a video signal. It has several applications, such as face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces, image database management, autofocus in digital cameras, and selecting regions of interest in pan-and-scale photo slideshows. The present paper deals with energy conservation using face detection. Automating the process on a computer requires the use of various image processing techniques. Various methods can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based and color-based methods. Basically, the video of the subject is converted into images, which are then selected manually for processing. However, several factors such as poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions and compression artifacts make face detection difficult. This paper reports an algorithm for conservation of energy using face detection for various devices. It suggests that energy conservation can be achieved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular area of the image where the face is located using histogram equalization.
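A minimal OpenCV sketch of the idea in this abstract: detect a face with a stock Haar cascade, dim the rest of the frame, and apply histogram equalization inside the detected face region. The file names and the dimming factor are arbitrary placeholders; the paper's exact algorithm is not reproduced here.

```python
import cv2
import numpy as np

# Load a frame (placeholder file name) and the stock frontal-face Haar cascade.
frame = cv2.imread("frame.jpg")
if frame is None:
    raise SystemExit("provide an input image named frame.jpg")
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Dim the whole frame (reduces display energy), then enhance the face region only.
dimmed = (frame * 0.4).astype(np.uint8)           # arbitrary 60% brightness reduction
for (x, y, w, h) in faces:
    roi_gray = gray[y:y + h, x:x + w]
    equalized = cv2.equalizeHist(roi_gray)         # histogram equalization on the face
    dimmed[y:y + h, x:x + w] = cv2.cvtColor(equalized, cv2.COLOR_GRAY2BGR)

cv2.imwrite("frame_energy_saving.jpg", dimmed)
```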
Beyond the benzene dimer: an investigation of the additivity of pi-pi interactions.
Tauer, Tony P; Sherrill, C David
2005-11-24
The benzene dimer is the simplest prototype of pi-pi interactions and has been used to understand the fundamental physics of these interactions as they are observed in more complex systems. In biological systems, however, aromatic rings are rarely found in isolated pairs; thus, it is important to understand whether aromatic pairs remain a good model of pi-pi interactions in clusters. In this study, ab initio methods are used to compute the binding energies of several benzene trimers and tetramers, most of them in 1D stacked configurations. The two-body terms change only slightly relative to the dimer, and except for the cyclic trimer, the three- and four-body terms are negligible. This indicates that aromatic clusters do not feature any large nonadditive effects in their binding energies, and polarization effects in benzene clusters do not greatly change the binding that would be anticipated from unperturbed benzene-benzene interactions, at least for the 1D stacked systems considered. Three-body effects are larger for the cyclic trimer, but for all systems considered, the computed binding energies are within 10% of what would be estimated from benzene dimer energies at the same geometries.
Patel, A; Jameson, K A; Edwards, M H; Ward, K; Gale, C R; Cooper, C; Dennison, Elaine M
2018-04-24
This study investigated the association between mild cognitive impairment (MCI) and physical function and bone health in older adults. MCI was associated with poor physical performance but not bone mineral density or bone microarchitecture. Cross-sectional study to investigate the association between mild cognitive impairment (MCI) and physical performance, and bone health, in a community-dwelling cohort of older adults. Cognitive function of 222 men and 221 women (mean age 75.5 and 75.8 years in men and women, respectively) was assessed by the Strawbridge questionnaire and Mini Mental State Exam (MMSE). Participants underwent dual-energy X-ray absorptiometry (DXA), peripheral-quantitative computed tomography (pQCT) and high-resolution peripheral-quantitative computed tomography (HR-pQCT) scans to assess their bone density, strength and microarchitecture. Their physical function was assessed and a physical performance (PP) score was recorded. In the study, 11.8% of women and 8.1% of men were cognitively impaired on the MMSE (score < 24). On the Strawbridge questionnaire, 24% of women were deemed cognitively impaired compared to 22.3% of men. Cognitive impairment on the Strawbridge questionnaire was associated with poorer physical performance score in men but not in women in the unadjusted analysis. MMSE < 24 was strongly associated with the risk of low physical performance in men (OR 12.9, 95% CI 1.67, 99.8, p = 0.01). Higher MMSE score was associated with better physical performance in both sexes. Poorer cognitive function, whether assessed by the Strawbridge questionnaire, or by MMSE score, was not associated with bone density, shape or microarchitecture, in either sex. MCI in older adults was associated with poor physical performance, but not bone density, shape or microarchitecture.
Evaluation of methods to assess physical activity
NASA Astrophysics Data System (ADS)
Leenders, Nicole Y. J. M.
Epidemiological evidence has accumulated demonstrating that the amount of physical-activity-related energy expenditure during a week reduces the incidence of cardiovascular disease, diabetes, obesity, and all-cause mortality. To further understand the amount of daily physical activity and related energy expenditure necessary to maintain or improve functional health status and quality of life, instruments that estimate total (TDEE) and physical-activity-related energy expenditure (PAEE) under free-living conditions must be shown to be valid and reliable. Without evaluation of the various methods that estimate TDEE and PAEE against the doubly labeled water (DLW) method in females, there will be significant limitations on assessing the efficacy of physical-activity interventions on health status in this population. A triaxial accelerometer (Tritrac-R3D, TT), a uniaxial activity monitor (Computer Science and Applications Inc., CSA), a Yamax Digiwalker-500 step counter (YX-stepcounter), the heart-rate (HR) method and a 7-day Physical Activity Recall questionnaire (7-d PAR) were compared with the criterion method of DLW over a 7-day period in female adults. DLW-TDEE was underestimated on average by 9, 11 and 15% using the 7-d PAR, the HR method and the TT, respectively. The underestimation of DLW-PAEE by the 7-d PAR was 21%, compared with 47% and 67% for the TT and the YX-stepcounter. Approximately 56% of the variance in DLW-PAEE·kg⁻¹ was explained by the registration of body movement with accelerometry. A larger proportion of the variance in DLW-PAEE·kg⁻¹ was explained by jointly incorporating information from the vertical and horizontal movement measured with the CSA and Tritrac-R3D (r² = 0.87). Although only a small amount of variance in DLW-PAEE·kg⁻¹ is explained by the number of steps taken per day, the Yamax step counter is useful in studies promoting daily walking because of its low cost and ease of use. Thus, in studies measuring predominantly ambulatory physical activity in free-living healthy persons, accelerometers may be suitable for predicting the energy expenditure associated with that activity, and these instruments will be useful in both experimental and non-experimental settings.
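The kind of comparison reported above (percent bias against the DLW criterion and variance explained) reduces to a few lines; the arrays here are synthetic stand-ins, not the study's data.

```python
# Percent bias and variance explained of a field method versus the DLW criterion.
import numpy as np

rng = np.random.default_rng(0)
dlw_paee = rng.normal(700, 150, 30)                    # kcal/day, criterion values
method_paee = 0.79 * dlw_paee + rng.normal(0, 60, 30)  # e.g., a 7-d PAR estimate

bias_pct = 100 * np.mean(method_paee - dlw_paee) / np.mean(dlw_paee)
r2 = np.corrcoef(method_paee, dlw_paee)[0, 1] ** 2
print(f"mean bias: {bias_pct:.1f}%  variance explained: {r2:.2f}")
```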
NASA Astrophysics Data System (ADS)
Longo, S.; Roney, J. M.
2018-03-01
Pulse shape discrimination using CsI(Tl) scintillators to perform neutral hadron particle identification is explored with emphasis towards application at high energy electron-positron collider experiments. Through the analysis of the pulse shape differences between scintillation pulses from photon and hadronic energy deposits using neutron and proton data collected at TRIUMF, it is shown that the pulse shape variations observed for hadrons can be modelled using a third scintillation component for CsI(Tl), in addition to the standard fast and slow components. Techniques for computing the hadronic pulse amplitudes and shape variations are developed and it is shown that the intensity of the additional scintillation component can be computed from the ionization energy loss of the interacting particles. These pulse modelling and simulation methods are integrated with GEANT4 simulation libraries and the predicted pulse shape for CsI(Tl) crystals in a 5 × 5 array of 5 × 5 × 30 cm3 crystals is studied for hadronic showers from 0.5 and 1 GeV/c KL0 and neutron particles. Using a crystal level and cluster level approach for photon vs. hadron cluster separation we demonstrate proof-of-concept for neutral hadron detection using CsI(Tl) pulse shape discrimination in high energy electron-positron collider experiments.
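A hedged sketch of the pulse-shape model described above: a waveform fit with fast, slow, and a third (hadronic) scintillation component. The decay constants and the synthetic waveform are illustrative, not the calibrated CsI(Tl) values.

```python
# Fit a scintillation pulse with three exponential components.
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, A_fast, A_slow, A_had, tau_f=0.6, tau_s=3.5, tau_h=9.0):
    """Sum of fast, slow, and hadronic components (times in microseconds)."""
    return (A_fast * np.exp(-t / tau_f)
            + A_slow * np.exp(-t / tau_s)
            + A_had * np.exp(-t / tau_h))

t = np.linspace(0, 20, 400)
truth = pulse(t, 1.0, 0.45, 0.20)
waveform = truth + np.random.default_rng(1).normal(0, 0.01, t.size)

# With p0 of length 3, only the three amplitudes are fitted; the decay
# constants keep their default (assumed) values.
popt, _ = curve_fit(pulse, t, waveform, p0=[1, 0.3, 0.1])
print("fitted amplitudes (fast, slow, hadronic):", np.round(popt, 3))
```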
NASA Astrophysics Data System (ADS)
Wittek, Peter; Calderaro, Luca
2015-12-01
We extended a parallel and distributed implementation of the Trotter-Suzuki algorithm for simulating quantum systems to study a wider range of physical problems and to make the library easier to use. The new release allows periodic boundary conditions, many-body simulations of non-interacting particles, arbitrary stationary potential functions, and imaginary time evolution to approximate the ground state energy. The new release is more resilient to the computational environment: a wider range of compiler chains and more platforms are supported. To ease development, we provide a more extensive command-line interface, an application programming interface, and wrappers from high-level languages.
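The following is a generic second-order split-step (Trotter-Suzuki) sketch of the imaginary-time evolution mentioned above, written against plain NumPy; it illustrates the algorithm the library implements, not the library's actual API.

```python
# Imaginary-time Trotter-Suzuki evolution toward the ground state of a 1D
# harmonic trap (hbar = m = 1); the energy should approach 0.5.
import numpy as np

N, L, dtau, steps = 256, 20.0, 1e-3, 5000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                                   # harmonic potential

psi = np.exp(-x**2)                              # arbitrary starting state
expV = np.exp(-0.5 * dtau * V)                   # half step in the potential
expT = np.exp(-dtau * 0.5 * k**2)                # full step in the kinetic term

for _ in range(steps):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))   # renormalize

T_psi = np.fft.ifft(0.5 * k**2 * np.fft.fft(psi))
E = np.real(np.sum(np.conj(psi) * (T_psi + V * psi)) * (L / N))
print("ground-state energy estimate:", round(E, 4))
```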
A study of complex scaling transformation using the Wigner representation of wavefunctions.
Kaprálová-Ždánská, Petra Ruth
2011-05-28
The complex scaling operator exp(-θ ̂x̂p/ℏ), being a foundation of the complex scaling method for resonances, is studied in the Wigner phase-space representation. It is shown that the complex scaling operator behaves similarly to the squeezing operator, rotating and amplifying Wigner quasi-probability distributions of the respective wavefunctions. It is disclosed that the distorting effect of the complex scaling transformation is correlated with increased numerical errors of computed resonance energies and widths. The behavior of the numerical error is demonstrated for a computation of CO(2+) vibronic resonances. © 2011 American Institute of Physics
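A small sketch of the complex scaling transformation itself, applied to an illustrative 1D model Hamiltonian on a grid. The potential, grid parameters, and selection window are assumptions; genuine resonance identification requires checking the stability of eigenvalues against θ.

```python
# Complex scaling x -> x*exp(i*theta): resonances appear as complex eigenvalues
# E - i*Gamma/2 of the rotated (non-Hermitian) Hamiltonian.
import numpy as np

def scaled_hamiltonian(theta, N=400, L=20.0):
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    # Finite-difference second-derivative matrix.
    D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
          + np.diag(np.ones(N - 1), -1)) / dx**2
    # Model potential barrier supporting a shape resonance (illustrative).
    xs = x * np.exp(1j * theta)
    V = 0.5 * xs**2 * np.exp(-0.1 * xs**2)
    return -0.5 * np.exp(-2j * theta) * D2 + np.diag(V)

eigvals = np.linalg.eigvals(scaled_hamiltonian(theta=0.3))
# Candidate resonances: eigenvalues with a small negative imaginary part.
candidates = [z for z in eigvals if -0.5 < z.imag < 0 and 0 < z.real < 3]
print(sorted(candidates, key=lambda z: z.real)[:5])
```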
Toward Computational Design of High-Efficiency Photovoltaics from First-Principles
2016-08-15
[Fragmentary excerpt from a project report; only partial content is recoverable.] The fragments cite a study of the dependence of exciton diffusion in conjugated small molecules (Applied Physics Letters, April 2014, doi: 10.1063/1.4871303; authors include Guangfen Wu and Zi Li), describe a first-principles approach based on time-dependent density functional theory (TDDFT) to describe exciton states, including energy levels and many-body wavefunctions, and note a dependence that is more sensitive to the dimension and crystallinity of the acceptor parallel to the interface than normal to the interface; the excerpt breaks off at "Reorganization".
The envelope of ballistic trajectories and elliptic orbits
NASA Astrophysics Data System (ADS)
Butikov, Eugene I.
2015-11-01
Simple geometric derivations are given for the shape of the "safety domain" boundary for the family of Keplerian orbits of equal energy in a central gravitational field and for projectile trajectories in a uniform field. Examples of practical uses of the envelope of the family of orbits are discussed and illustrated by computer simulations. This material is appropriate for physics teachers and undergraduate students studying classical mechanics and orbital motions.
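A brief numerical illustration of the safety-domain boundary for projectile motion in a uniform field, using the standard envelope (safety parabola) formula; the launch speed is arbitrary.

```python
# Envelope of all trajectories launched at speed v from the origin:
# y = v**2/(2g) - g*x**2/(2*v**2); points above this parabola are unreachable.
import numpy as np

g, v = 9.81, 20.0                       # m/s^2, m/s (illustrative values)
x = np.linspace(0, v**2 / g, 5)         # out to where the envelope meets y = 0
envelope = v**2 / (2 * g) - g * x**2 / (2 * v**2)

for xi, yi in zip(x, envelope):
    print(f"x = {xi:6.1f} m   envelope height = {yi:6.1f} m")
```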
Contributions of the ARM Program to Radiative Transfer Modeling for Climate and Weather Applications
NASA Technical Reports Server (NTRS)
Mlawer, Eli J.; Iacono, Michael J.; Pincus, Robert; Barker, Howard W.; Oreopoulos, Lazaros; Mitchell, David L.
2016-01-01
Accurate climate and weather simulations must account for all relevant physical processes and their complex interactions. Each of these atmospheric, ocean, and land processes must be considered on an appropriate spatial and temporal scale, which leads these simulations to require a substantial computational burden. One especially critical physical process is the flow of solar and thermal radiant energy through the atmosphere, which controls planetary heating and cooling and drives the large-scale dynamics that moves energy from the tropics toward the poles. Radiation calculations are therefore essential for climate and weather simulations, but are themselves quite complex even without considering the effects of variable and inhomogeneous clouds. Clear-sky radiative transfer calculations have to account for thousands of absorption lines due to water vapor, carbon dioxide, and other gases, which are irregularly distributed across the spectrum and have shapes dependent on pressure and temperature. The line-by-line (LBL) codes that treat these details have a far greater computational cost than can be afforded by global models. Therefore, the crucial requirement for accurate radiation calculations in climate and weather prediction models must be satisfied by fast solar and thermal radiation parameterizations with a high level of accuracy that has been demonstrated through extensive comparisons with LBL codes. See attachment for continuation.
Medium-induced gluon radiation and colour decoherence beyond the soft approximation
NASA Astrophysics Data System (ADS)
Apolinário, Liliana; Armesto, Néstor; Milhano, José Guilherme; Salgado, Carlos A.
2015-02-01
We derive the in-medium gluon radiation spectrum off a quark within the path integral formalism at finite energies, including all next-to-eikonal corrections in the propagators of quarks and gluons. Results are computed for finite formation times, including interference with vacuum amplitudes. By rewriting the medium averages in a convenient manner we present the spectrum in terms of dipole cross sections and a colour decoherence parameter with the same physical origin as that found in previous studies of the antenna radiation. This factorisation allows us to present a simple physical picture of the medium-induced radiation for any value of the formation time, that is of interest for a probabilistic implementation of the modified parton shower. Known results are recovered for the particular cases of soft radiation and eikonal quark and for the case of a very long medium, with length much larger than the average formation times for medium-induced radiation. Technical details of the computation of the relevant n-point functions in colour space and of the required path integrals in transverse space are provided. The final result completes the calculation of all finite energy corrections for the radiation off a quark in a QCD medium that exist in the small angle approximation and for a recoilless medium.
NASA Astrophysics Data System (ADS)
Chen, G.; Chacón, L.; Barnes, D. C.
2012-03-01
A recent proof-of-principle study proposes an energy- and charge-conserving, fully implicit particle-in-cell algorithm in one dimension [1], which is able to use timesteps comparable to the dynamical timescale of interest. Here, we generalize the method to employ non-uniform meshes via a curvilinear map. The key enabling technology is a hybrid particle pusher [2], with particle positions updated in logical space and particle velocities updated in physical space. The self-adaptive, charge-conserving particle mover of Ref. [1] is extended to the non-uniform mesh case. The fully implicit implementation, using a Jacobian-free Newton-Krylov iterative solver, remains exactly charge- and energy-conserving. The extension of the formulation to multiple dimensions will be discussed. We present numerical experiments of 1D electrostatic, long-timescale ion-acoustic wave and ion-acoustic shock wave simulations, demonstrating that charge and energy are conserved to round-off for arbitrary mesh non-uniformity, and that the total momentum remains well conserved. [1] Chen, Chacón, Barnes, J. Comput. Phys. 230 (2011). [2] Camporeale and Delzanno, Bull. Am. Phys. Soc. 56(6) (2011); Wang, et al., J. Plasma Physics, 61 (1999).
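A toy sketch of the hybrid-pusher idea (positions advanced in the logical coordinate, velocities in physical space, linked by a map x = X(ξ)); the explicit time stepping and the map below are illustrative stand-ins for the paper's implicit, exactly conserving scheme.

```python
# Hybrid pusher on a non-uniform mesh defined by a logical-to-physical map.
import numpy as np

L = 2 * np.pi
def X(xi):            # logical -> physical map (packs cells near the center)
    return xi + 0.3 * np.sin(xi)
def dX_dxi(xi):       # Jacobian of the map
    return 1.0 + 0.3 * np.cos(xi)

def push(xi, v, E_at, dt, steps):
    """Explicit stand-in: dxi/dt = v / (dX/dxi), dv/dt = qE(x)/m with q = m = 1."""
    for _ in range(steps):
        v += dt * E_at(X(xi))                     # velocity update in physical space
        xi = (xi + dt * v / dX_dxi(xi)) % L       # position update in logical space
    return xi, v

rng = np.random.default_rng(2)
xi = rng.uniform(0, L, 1000)
v = rng.normal(0, 1, 1000)
xi, v = push(xi, v, E_at=lambda x: 0.01 * np.sin(x), dt=0.05, steps=200)
print("mean kinetic energy:", round(0.5 * np.mean(v**2), 4))
```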
Horsch, Antje; Wobmann, Marion; Kriemler, Susi; Munsch, Simone; Borloz, Sylvie; Balz, Alexandra; Marques-Vidal, Pedro; Borghini, Ayala; Puder, Jardena J
2015-02-19
Psychological stress negatively influences food intake and food choices, thereby contributing to the development of childhood obesity. Physical activity can also moderate eating behavior and influence calorie intake. However, it is unknown if acute physical activity influences food intake and overall energy balance after acute stress exposure in children. We therefore investigated the impact of acute physical activity on overall energy balance (food intake minus energy expenditure), food intake, and choice in the setting of acute social stress in normal weight (NW) and overweight/obese (OW/OB) children as well as the impact of psychological risk factors. After receiving written consent from their parents, 26 NW (BMI < 90th percentile) and 24 7- to 11-year-old OW (n = 5)/OB (n = 19, BMI ≥ 90th percentile) children were randomly allocated using computer-generated numbers (1:1, after stratification for weight status) to acute moderate physical or to sedentary activity for 30 min. Afterwards, all children were exposed to an acute social stressor. Children and their parents completed self-report questionnaires. At the end of the stressor, children were allowed to eat freely from a range of 12 different foods (6 sweet/6 salty; each of low/high caloric density). Energy balance, food intake/choice and obesity-related psychological risk factors were assessed. Lower overall energy balance (p = 0.019) and a decreased choice of low density salty foods (p < 0.001) in NW children compared with OW/OB children were found after acute moderate physical activity but not sedentary activity. Independent of their allocation, OW/OB children ate more high density salty foods (104 kcal (34 to 173), p = 0.004) following stress. They scored higher on impulsive behavior (p = 0.005), restrained eating (p < 0.001) and parental corporal punishment (p = 0.03), but these psychological factors were not related to stress-induced food intake/choice. Positive parenting tended to be related to lower intake of sweet high density food (-132 kcal, -277 to 2, p = 0.054). In the setting of stress, acute moderate physical activity can address energy balance in children, a benefit which is especially pronounced in the OW/OB. Positive parenting may act as a protective factor preventing stress-induced eating of comfort food. clinicaltrials.gov NCT01693926. The study was a pilot study of a project funded by the Swiss National Science Foundation (CRSII3_147673).
The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.
2014-02-01
A proton therapy test facility with a beam current lower than 10 nA on average, and an energy up to 150 MeV, is planned to be sited at the Frascati ENEA Research Center, in Italy. The accelerator is composed of a sequence of linear sections. The first one is a commercial 7 MeV proton linac, from which the beam is injected into an SCDTL (Side Coupled Drift Tube Linac) structure reaching the energy of 52 MeV. Then a conventional CCL (Coupled Cavity Linac) with side coupling cavities completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur for protons with energy below 20 MeV, with a consequent low production of neutrons and secondary radiation. From the radiation protection point of view, the source of radiation for this facility is then almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented into radiation transport computer codes based on the Monte Carlo method. The aim is the assessment of the radiation field around the main source for supporting the safety analysis. For the assessment, independent researchers used two different Monte Carlo computer codes, FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended). Both are general purpose tools for calculations of particle transport and interactions with matter, covering an extended range of applications including proton beam analysis. Nevertheless, each one utilizes its own nuclear cross section libraries and uses specific physics models for particle types and energies. The models implemented into the codes are described and the results are presented. The differences between the two calculations are reported and discussed, pointing out disadvantages and advantages of each code in the specific application.
Wang, Haipeng; Yang, Yushuang; Yang, Jianli; Nie, Yihang; Jia, Jing; Wang, Yudan
2015-01-01
Multiscale nondestructive characterization of coal microscopic physical structure can provide important information for coal conversion and coal-bed methane extraction. In this study, the physical structure of a coal sample was investigated by synchrotron-based multiple-energy X-ray CT at three beam energies and two different spatial resolutions. A data-constrained modeling (DCM) approach was used to quantitatively characterize the multiscale compositional distributions at the two resolutions. The volume fractions of each voxel for four different composition groups were obtained at the two resolutions. Between the two resolutions, the difference for DCM computed volume fractions of coal matrix and pores is less than 0.3%, and the difference for mineral composition groups is less than 0.17%. This demonstrates that the DCM approach can account for compositions beyond the X-ray CT imaging resolution with adequate accuracy. By using DCM, it is possible to characterize a relatively large coal sample at a relatively low spatial resolution with minimal loss of the effect due to subpixel fine length scale structures.
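A minimal sketch of the data-constrained modelling step: per-voxel volume fractions recovered from attenuation measured at several beam energies, with non-negativity and an approximately enforced sum-to-one constraint. The attenuation coefficients are placeholders, not the study's calibration.

```python
# Per-voxel volume fractions from multi-energy attenuation measurements.
import numpy as np
from scipy.optimize import nnls

# Rows: beam energies; columns: composition groups
# (coal matrix, mineral A, mineral B, pore). Illustrative coefficients.
mu = np.array([[0.35, 1.20, 2.10, 0.00],
               [0.28, 0.90, 1.60, 0.00],
               [0.22, 0.70, 1.25, 0.00]])

def dcm_voxel(measured_mu, weight=100.0):
    """min ||mu @ f - measured||^2 with f >= 0 and sum(f) ~= 1
    (the sum constraint is enforced as a heavily weighted extra equation)."""
    A = np.vstack([mu, weight * np.ones(mu.shape[1])])
    b = np.concatenate([measured_mu, [weight]])
    f, _ = nnls(A, b)
    return f

true_f = np.array([0.7, 0.15, 0.05, 0.1])
measured = mu @ true_f + np.random.default_rng(3).normal(0, 0.005, 3)
print("recovered volume fractions:", np.round(dcm_voxel(measured), 3))
```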
NASA Astrophysics Data System (ADS)
Hansen, U.; Rodgers, S.; Jensen, K. F.
2000-07-01
A general method for modeling ionized physical vapor deposition is presented. As an example, the method is applied to growth of an aluminum film in the presence of an ionized argon flux. Molecular dynamics techniques are used to examine the surface adsorption, reflection, and sputter reactions taking place during ionized physical vapor deposition. We predict their relative probabilities and discuss their dependence on energy and incident angle. Subsequently, we combine the information obtained from molecular dynamics with a line-of-sight transport model in a two-dimensional feature, incorporating all effects of reemission and resputtering. This provides a complete growth rate model that allows inclusion of energy- and angular-dependent reaction rates. Finally, a level-set approach is used to describe the morphology of the growing film. We thus arrive at a computationally highly efficient and accurate scheme to model the growth of thin films. We demonstrate the capabilities of the model by predicting the major differences in Al film topography between conventional and ionized sputter deposition techniques, studying thin-film growth under ionized physical vapor deposition conditions with different Ar fluxes.
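A short sketch of the level-set step used to track the growing film surface; the deposition-rate field below is a toy stand-in for the MD-derived, line-of-sight-transported rates described above.

```python
# Level-set evolution phi_t + F |grad phi| = 0 with a first-order upwind scheme;
# the film surface is the zero contour of phi.
import numpy as np

nx, ny, dx, dt = 80, 80, 1.0, 0.2
y, x = np.meshgrid(np.arange(ny) * dx, np.arange(nx) * dx, indexing="ij")
phi = y - 10.0                       # initial flat film surface at y = 10

def grad_mag_upwind(phi, dx):
    """|grad phi| for motion in the outward normal direction (F >= 0)."""
    dxm = (phi - np.roll(phi, 1, axis=1)) / dx
    dxp = (np.roll(phi, -1, axis=1) - phi) / dx
    dym = (phi - np.roll(phi, 1, axis=0)) / dx
    dyp = (np.roll(phi, -1, axis=0) - phi) / dx
    return np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                   + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)

F = 0.5 * (1.0 + 0.3 * np.cos(2 * np.pi * x / (nx * dx)))  # toy deposition rate
for _ in range(100):
    phi = phi - dt * F * grad_mag_upwind(phi, dx)

thickness = (phi < 0).sum(axis=0) * dx
print("film thickness range:", round(float(thickness.min()), 1),
      "to", round(float(thickness.max()), 1))
```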
Solid liquid interfacial free energies of benzene
NASA Astrophysics Data System (ADS)
Azreg-Aïnou, M.
2007-02-01
In this work we determine, for the range of melting temperatures 284.6⩽T⩽306.7 K corresponding to equilibrium pressures 20.6⩽P⩽102.9 MPa, the benzene solid-liquid interfacial free energy by a cognitive approach combining theoretical and experimental physics, mathematics, computer algebra (MATLAB), and some results from molecular dynamics computer simulations. From theoretical and mathematical points of view, we deal with the elaboration of an analytical expression for the internal energy derived from a unified solid-liquid-vapor equation of state and with the elaboration of an existing statistical model for the entropy drop of the melt near the solid-liquid interface. From an experimental point of view, we use our results obtained in collaboration with colleagues concerning supercooled liquid benzene. Of particular interest for this work is the existing center-of-mass radial distribution function of benzene at 298 K obtained by computer simulation. Crystal-orientation-independent and minimum interfacial free energies are calculated and shown to increase slightly with the above temperatures. Both the crystal-orientation-independent and the minimum free energies agree with existing calculations and with the rare existing experimental data. Taking into account that the extent of supercooling is generally admitted to be constant, we determine the limits of supercooling, from which we explore the behavior of the critical nucleus radius, which is shown to decrease with the above temperatures. The critical nucleus radius and the number of molecules per critical nucleus are shown to assume average values of 20.2 Å and 175, with standard deviations of 0.16 Å and 4.5, respectively.
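For orientation, the classical-nucleation estimate behind the critical-radius discussion can be written in a few lines; all numbers below are illustrative placeholders (including the assumed supercooling), not the paper's fitted benzene values.

```python
# Gibbs-Thomson estimate of the critical nucleus:
# r* = 2*gamma*V_m / dmu, with dmu ~ L_m * dT / T_m for a small supercooling dT.
import math

gamma = 0.020        # solid-liquid interfacial free energy, J/m^2 (placeholder)
T_m = 298.0          # melting temperature at the chosen pressure, K
dT = 45.0            # assumed supercooling, K
L_m = 9.9e3          # molar heat of fusion of benzene, J/mol (approximate)
V_m = 77e-6          # molar volume of solid benzene, m^3/mol (approximate)

dmu = L_m * dT / T_m                     # chemical-potential difference, J/mol
r_star = 2 * gamma * V_m / dmu           # critical radius, m
n_star = (4 / 3) * math.pi * r_star**3 / (V_m / 6.022e23)   # molecules per nucleus

print(f"r* = {r_star * 1e10:.1f} angstrom, ~{n_star:.0f} molecules")
```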
Computational study of arc discharges: Spark plug and railplug ignitors
NASA Astrophysics Data System (ADS)
Ekici, Ozgur
A theoretical study of electrical arc discharges that focuses on the discharge processes in spark plug and railplug ignitors is presented. The aim of the study is to gain a better understanding of the dynamics of electrical discharges, more specifically the transfer of electrical energy into the gas and the effect of this energy transfer on the flow physics. Different levels of computational models are presented to investigate the types of arc discharges seen in spark plugs and railplugs (i.e., stationary and moving arc discharges). Better understanding of discharge physics is important for a number of applications. For example, improved fuel economy under the constraint of stricter emissions standards and improved plug durability are important objectives of current internal combustion engine designs. These goals can be achieved by improving the existing systems (spark plug) and introducing more sophisticated ignition systems (railplug). Although spark plug and railplug ignitors are the focus of this work, the methods presented here can be extended to study the discharges found in other applications such as plasma torches, laser sparks, and circuit breakers. The system of equations describing the physical processes in an air plasma is solved using computational fluid dynamics codes to simulate thermal and flow fields. The evolution of the shock front, temperature, pressure, density, and flow of a plasma kernel was investigated for both stationary and moving arcs. Arc propagation between the electrodes under the effects of gas dynamics and electromagnetic processes was studied for moving arcs. The air plasma is regarded as a continuum, single-substance material in local thermal equilibrium. Thermophysical properties of high-temperature air are used to take into account important processes such as dissociation and ionization. The different mechanisms and the relative importance of several assumptions in gas discharges and thermal plasma modeling were investigated. Considering the complex nature of the studied problem, the computational models aid in analyzing the analytical theory and serve as relatively inexpensive tools when compared to experiments in the design process.
Experimental test of Landauer’s principle in single-bit operations on nanomagnetic memory bits
Hong, Jeongmin; Lambson, Brian; Dhuey, Scott; Bokor, Jeffrey
2016-01-01
Minimizing energy dissipation has emerged as the key challenge in continuing to scale the performance of digital computers. The question of whether there exists a fundamental lower limit to the energy required for digital operations is therefore of great interest. A well-known theoretical result put forward by Landauer states that any irreversible single-bit operation on a physical memory element in contact with a heat bath at a temperature T requires at least kBT ln(2) of heat be dissipated from the memory into the environment, where kB is the Boltzmann constant. We report an experimental investigation of the intrinsic energy loss of an adiabatic single-bit reset operation using nanoscale magnetic memory bits, by far the most ubiquitous digital storage technology in use today. Through sensitive, high-precision magnetometry measurements, we observed that the amount of dissipated energy in this process is consistent (within 2 SDs of experimental uncertainty) with the Landauer limit. This result reinforces the connection between “information thermodynamics” and physical systems and also provides a foundation for the development of practical information processing technologies that approach the fundamental limit of energy dissipation. The significance of the result includes insightful direction for future development of information technology. PMID:26998519
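A worked number for the bound quoted above: the minimum heat for erasing one bit at temperature T is k_B T ln 2.

```python
# Landauer bound at two temperatures.
import math

k_B = 1.380649e-23          # J/K (exact, 2019 SI)
for T in (300.0, 4.2):      # room temperature and liquid-helium temperature
    E = k_B * T * math.log(2)
    print(f"T = {T:6.1f} K  ->  {E:.2e} J  ({E / 1.602176634e-19 * 1e3:.2f} meV)")
```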
NASA Astrophysics Data System (ADS)
Cho, Y. J.; Zullah, M. A.; Faizal, M.; Choi, Y. D.; Lee, Y. H.
2012-11-01
A variety of technologies has been proposed to capture the energy from waves. Some of the more promising designs are undergoing demonstration testing at commercial scales. Due to the complexity of most offshore wave energy devices and their motion response in different sea states, physical tank tests are common practice for wave energy converter (WEC) design. Full-scale tests are also necessary, but are expensive and only considered once the design has been optimized. Computational Fluid Dynamics (CFD) is now recognized as an important complement to traditional physical testing techniques in offshore engineering. Once properly calibrated and validated for the problem, CFD offers a high density of test data and results in a reasonable timescale to assist with design changes and improvements to the device. The purpose of this study is to investigate the performance of a newly developed direct drive hydro turbine (DDT), which will be built in a caisson for extraction of wave energy. Experiments and CFD analysis are conducted to clarify the turbine performance and internal flow characteristics. The results show that a commercial CFD code can be applied successfully to the simulation of the wave motion in the water tank. The performance of the turbine as a wave energy converter is being studied continuously as part of an ongoing project.
Improving wave forecasting by integrating ensemble modelling and machine learning
NASA Astrophysics Data System (ADS)
O'Donncha, F.; Zhang, Y.; James, S. C.
2017-12-01
Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
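A compact sketch of the learning-aggregation step: member weights are derived from each model's past errors (exponential weighting) and applied to the latest forecasts. The data are synthetic stand-ins for SWAN ensemble members and buoy observations.

```python
# Exponentially weighted aggregation of ensemble wave-height forecasts.
import numpy as np

rng = np.random.default_rng(4)
obs = 1.5 + 0.5 * np.sin(np.linspace(0, 6, 120))             # "observed" Hs (m)
members = np.stack([obs + rng.normal(b, s, obs.size)          # 3 biased members
                    for b, s in [(0.3, 0.2), (-0.2, 0.3), (0.05, 0.4)]])

eta = 2.0                                                     # learning rate
past_err = ((members[:, :-24] - obs[:-24]) ** 2).mean(axis=1)
w = np.exp(-eta * past_err)
w /= w.sum()

forecast = w @ members[:, -24:]                               # aggregated forecast
rmse_agg = np.sqrt(((forecast - obs[-24:]) ** 2).mean())
rmse_best = np.sqrt(((members[:, -24:] - obs[-24:]) ** 2).mean(axis=1)).min()
print("weights:", np.round(w, 2), " RMSE aggregated:", round(rmse_agg, 3),
      " RMSE best single member:", round(rmse_best, 3))
```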
The role of broken symmetry in solvation of a spherical cavity in classical and quantum water models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remsing, Richard C.; Baer, Marcel D.; Schenter, Gregory K.
2014-08-21
Insertion of a hard sphere cavity in liquid water breaks translational symmetry and generates an electrostatic potential difference between the region near the cavity and the bulk. Here, we clarify the physical interpretation of this potential and its calculation. We also show that the electrostatic potential in the center of small, medium, and large cavities depends very sensitively on the form of the assumed molecular interactions for different classical simple point-charge models and quantum mechanical DFT-based interaction potentials, as reflected in their description of donor and acceptor hydrogen bonds near the cavity. These differences can significantly affect the magnitude of the scalar electrostatic potential. We argue that the result of these studies will have direct consequences toward our understanding of the thermodynamics of ion solvation through the cavity charging process. JDW and RCR are supported by the National Science Foundation (Grants CHE0848574 and CHE1300993). CJM and GKS are supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is operated for the Department of Energy by Battelle. MDB is grateful for the support of the Linus Pauling Distinguished Postdoctoral Fellowship Program at PNNL. We acknowledge illuminating discussions and sharing of ideas and preprints with Dr. Shawn M. Kathmann and Prof. Tom Beck. The DFT simulations used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Additional computing resources were generously allocated by PNNL's Institutional Computing program.
Geant4 Computing Performance Benchmarking and Monitoring
Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...
2015-12-23
Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
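The multi-threaded scalability metrics mentioned above (event throughput and memory gain versus thread count) reduce to simple arithmetic; the timings and memory figures below are illustrative, not Geant4 benchmark results.

```python
# Throughput, speedup, and memory gain versus number of threads (toy numbers).
threads = [1, 2, 4, 8, 16]
events = 1000
wall_time = {1: 820.0, 2: 415.0, 4: 212.0, 8: 110.0, 16: 60.0}    # seconds
rss_mb = {1: 900.0, 2: 1050.0, 4: 1350.0, 8: 1950.0, 16: 3150.0}  # resident set size

for n in threads:
    throughput = events / wall_time[n]
    speedup = wall_time[1] / wall_time[n]
    # memory gain: memory that n independent processes would need, relative
    # to what the multi-threaded run actually uses
    mem_gain = n * rss_mb[1] / rss_mb[n]
    print(f"{n:2d} threads: {throughput:6.2f} evt/s  speedup {speedup:4.1f}x  "
          f"memory gain {mem_gain:4.1f}x")
```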
Computationally efficient optimization of radiation drives
NASA Astrophysics Data System (ADS)
Zimmerman, George; Swift, Damian
2017-06-01
For many applications of pulsed radiation, the temporal pulse shape is designed to induce a desired time-history of conditions. This optimization is normally performed using multi-physics simulations of the system, adjusting the shape until the desired response is induced. These simulations may be computationally intensive, and iterative forward optimization is then expensive and slow. In principle, a simulation program could be modified to adjust the radiation drive automatically until the desired instantaneous response is achieved, but this may be impracticable in a complicated multi-physics program. However, the computational time increment is typically much shorter than the time scale of changes in the desired response, so the radiation intensity can be adjusted so that the response tends toward the desired value. This relaxed in-situ optimization method can give an adequate design for a pulse shape in a single forward simulation, giving a typical gain in computational efficiency of tens to thousands. This approach was demonstrated for the design of laser pulse shapes to induce ramp loading to high pressure in target assemblies where different components had significantly different mechanical impedance, requiring careful pulse shaping. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
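A toy sketch of the relaxed in-situ optimization: at each computational step the drive intensity is nudged so that a lagging response tracks the desired history, yielding a usable pulse shape from a single forward run. The response model and gains are assumptions, not the paper's multi-physics system.

```python
# Relaxed in-situ adjustment of a drive toward a desired response history.
import numpy as np

dt, n = 1e-3, 4000
t = np.arange(n) * dt
desired = 1.0 + 2.0 * t / t[-1]            # desired ramp in the response

drive = np.zeros(n)
response = np.zeros(n)
I, tau, gain = 0.5, 0.05, 5.0              # intensity, response lag, feedback gain
for i in range(1, n):
    # toy physics: the response relaxes toward the instantaneous drive with lag tau
    response[i] = response[i - 1] + dt / tau * (I - response[i - 1])
    # relaxed adjustment: steer the intensity toward whatever closes the gap
    I += gain * dt * (desired[i] - response[i])
    drive[i] = I

err = np.max(np.abs(response[n // 10:] - desired[n // 10:]))
print("max tracking error after the initial transient:", round(err, 3))
```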
XXV IUPAP Conference on Computational Physics (CCP2013): Preface
NASA Astrophysics Data System (ADS)
2014-05-01
XXV IUPAP Conference on Computational Physics (CCP2013) was held from 20-24 August 2013 at the Russian Academy of Sciences in Moscow, Russia. The annual Conferences on Computational Physics (CCP) present an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas. The CCP series aims to draw computational scientists from around the world and to stimulate interdisciplinary discussion and collaboration by putting together researchers interested in various fields of computational science. It is organized under the auspices of the International Union of Pure and Applied Physics and has been in existence since 1989. The CCP series alternates between Europe, America and Asia-Pacific. The conferences are traditionally supported by European Physical Society and American Physical Society. This year the Conference host was Landau Institute for Theoretical Physics. The Conference contained 142 presentations, and, in particular, 11 plenary talks with comprehensive reviews from airbursts to many-electron systems. We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), European Physical Society (EPS), Division of Computational Physics of American Physical Society (DCOMP/APS), Russian Foundation for Basic Research, Department of Physical Sciences of Russian Academy of Sciences, RSC Group company. Further conference information and images from the conference are available in the pdf.
Proceedings of the 5. joint Russian-American computational mathematics conference
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-31
These proceedings contain a record of the talks presented and papers submitted by participants. The conference participants represented three institutions from the United States, Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and two from Russia, Russian Federal Nuclear Center--All Russian Research Institute of Experimental Physics (RFNC-VNIIEF/Arzamas-16), and Russian Federal Nuclear Center--All Russian Research Institute of Technical Physics (RFNC-VNIITF/Chelyabinsk-70). The presentations and papers cover a wide range of applications from radiation transport to materials. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.
A combustion model of vegetation burning in "Tiger" fire propagation tool
NASA Astrophysics Data System (ADS)
Giannino, F.; Ascoli, D.; Sirignano, M.; Mazzoleni, S.; Russo, L.; Rego, F.
2017-11-01
In this paper, we propose a semi-physical model for the burning of vegetation in a wildland fire. The main physical-chemical processes involved in fire spreading are modelled through a set of ordinary differential equations, which describe the combustion process as linearly related to the consumption of fuel. The water evaporation process from leaves and wood is also considered. Mass and energy balance equations are written for fuel (leaves and wood) assuming that the combustion process is homogeneous in space. The model is developed with the final aim of simulating large-scale wildland fires which spread on a heterogeneous landscape while keeping the computational cost very low.
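A sketch of the kind of ODE system described above (moisture evaporation, first-order fuel consumption, and a coupled energy balance), integrated with a simple explicit loop; all coefficients are illustrative placeholders, not the calibrated values used in the Tiger tool.

```python
# Toy fuel-cell model: heating by the fire front, moisture evaporation,
# first-order combustion of dry fuel, and radiative losses.
k_burn = 0.02       # combustion rate constant, 1/s
h_comb = 18e6       # heat of combustion of dry fuel, J/kg
h_evap = 2.26e6     # latent heat of water, J/kg
c_p    = 1800.0     # effective fuel heat capacity, J/(kg K)
sigma  = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
T_ign  = 600.0      # ignition temperature, K

m_fuel, m_water, T = 2.0, 0.05, 300.0   # kg/m^2, kg/m^2, K
dt, t_end = 0.01, 300.0
peak_T = T

for step in range(int(t_end / dt)):
    t = step * dt
    q_front = 50e3 if t < 60.0 else 0.0                          # fire-front heating, W/m^2
    evap = 1e-4 * max(T - 373.0, 0.0) if m_water > 0 else 0.0    # kg/(m^2 s)
    burn = k_burn * m_fuel if (T > T_ign and m_water <= 0.0) else 0.0
    heat_cap = c_p * max(m_fuel, 0.1)                            # J/(m^2 K)
    dT = (q_front + burn * h_comb - evap * h_evap
          - sigma * (T**4 - 300.0**4)) / heat_cap
    m_fuel  -= dt * burn
    m_water -= dt * evap
    T       += dt * dT
    peak_T = max(peak_T, T)

print(f"fuel left after 5 min: {m_fuel:.2f} kg/m^2, peak temperature: {peak_T:.0f} K")
```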
Theoretical and Computational Investigation of High-Brightness Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chiping
Theoretical and computational investigations of adiabatic thermal beams have been carried out in parameter regimes relevant to the development of advanced high-brightness, high-power accelerators for high-energy physics research and for various applications such as light sources. Most accelerator applications require high-brightness beams. This is true for high-energy accelerators such as linear colliders. It is also true for energy recovery linacs (ERLs) and free electron lasers (FELs) such as x-ray free electron lasers (XFELs). The breakthroughs and highlights in our research in the period from February 1, 2013 to November 30, 2013 were: a) Completion of a preliminary theoretical and computational study of adiabatic thermal Child-Langmuir flow (Mok, 2013); and b) Presentation of an invited paper entitled "Adiabatic Thermal Beams in a Periodic Focusing Field" at the Space Charge 2013 Workshop, CERN, April 16-19, 2013 (Chen, 2013). In this report, an introductory background for the research project is provided. Basic theory of adiabatic thermal Child-Langmuir flow is reviewed. Results of simulation studies of adiabatic thermal Child-Langmuir flows are discussed.
NASA Astrophysics Data System (ADS)
Madanagopal, A.; Periandy, S.; Gayathri, P.; Ramalingam, S.; Xavier, S.
2017-01-01
The pharmaceutical compound phenacetin was investigated by analyzing its FT-IR, FT-Raman and 1H and 13C NMR spectra. Efficient hybrid computational calculations were performed to compute its physical and chemical parameters. The origin of the pharmaceutical activity due to the carboxylic, methyl and amine substituents at appropriate positions on the parent compound was investigated in depth. Moreover, the 13C and 1H NMR chemical shifts were correlated with the TMS standard to examine the compositional ratio of the base and ligand groups. The bathochromic shift of the chromophore energy levels in the UV-visible region strongly emphasized the anti-inflammatory chemical properties. The chemical stability is reflected in the large Kubo gap, which indicates charge transfer occurring within the molecule. The feasibility of the chemical reaction was interpreted from the Gibbs free-energy profile. The standard vibrational analysis stressed the active participation of the constituent ligand groups in the analgesic and antipyretic properties of the phenacetin compound. The strong dipole interaction energy utilized for transitions between non-vanishing donor and acceptor orbitals in the composition of the molecular structure was also interpreted.
Neuromorphic Kalman filter implementation in IBM’s TrueNorth
NASA Astrophysics Data System (ADS)
Carney, R.; Bouchard, K.; Calafiura, P.; Clark, D.; Donofrio, D.; Garcia-Sciveres, M.; Livezey, J.
2017-10-01
Following the advent of a post-Moore’s law field of computation, novel architectures continue to emerge. With composite, multi-million connection neuromorphic chips like IBM’s TrueNorth, neural engineering has now become a feasible technology in this novel computing paradigm. High Energy Physics experiments are continuously exploring new methods of computation and data handling, including neuromorphic, to support the growing challenges of the field and be prepared for future commodity computing trends. This work details the first instance of a Kalman filter implementation in IBM’s neuromorphic architecture, TrueNorth, for both parallel and serial spike trains. The implementation is tested on multiple simulated systems and its performance is evaluated with respect to an equivalent non-spiking Kalman filter. The limits of the implementation are explored whilst varying the size of weight and threshold registers, the number of spikes used to encode a state, size of neuron block for spatial encoding, and neuron potential reset schemes.
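A reference (non-spiking) Kalman filter of the sort the neuromorphic implementation is compared against, applied to a toy 1D constant-velocity track; the system matrices and noise levels are illustrative.

```python
# Standard predict/update Kalman filter for a 1D constant-velocity model.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                     # only position is observed
Q = 1e-3 * np.eye(2)                           # process noise covariance
R = np.array([[0.25]])                         # measurement noise covariance

def kalman(zs):
    x = np.zeros((2, 1))
    P = np.eye(2)
    estimates = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0, 0])
    return np.array(estimates)

rng = np.random.default_rng(5)
truth = 0.4 * np.arange(50)
meas = truth + rng.normal(0, 0.5, 50)
est = kalman(meas)
print("RMSE raw:", round(float(np.sqrt(np.mean((meas - truth)**2))), 3),
      " RMSE filtered:", round(float(np.sqrt(np.mean((est - truth)**2))), 3))
```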
Design and deployment of an elastic network test-bed in IHEP data center based on SDN
NASA Astrophysics Data System (ADS)
Zeng, Shan; Qi, Fazhi; Chen, Gang
2017-10-01
High energy physics experiments produce huge amounts of raw data, but because the network resources are shared, there is no guarantee of available bandwidth for each experiment, which may cause link congestion problems. On the other hand, with the development of cloud computing technologies, IHEP has established a cloud platform based on OpenStack which ensures the flexibility of computing and storage resources, and more and more computing applications have been deployed on virtual machines created by OpenStack. However, under the traditional network architecture, network capacity cannot be requested elastically, which becomes the bottleneck restricting the flexible use of cloud computing. In order to solve the above problems, we propose an elastic cloud data center network architecture based on SDN, and we also design a high performance controller cluster based on OpenDaylight. Finally, we present our current test results.
NASA Astrophysics Data System (ADS)
Turinsky, Paul J.; Martin, William R.
2017-04-01
In this special issue of the Journal of Computational Physics, the research and development completed at the time of manuscript submission by the Consortium for Advanced Simulation of Light Water Reactors (CASL) is presented. CASL is the first of several Energy Innovation Hubs that have been created by the Department of Energy. The Hubs are modeled after the strong scientific management characteristics of the Manhattan Project and AT&T Bell Laboratories, and function as integrated research centers that combine basic and applied research with engineering to accelerate scientific discovery that addresses critical energy issues. Lifetime of a Hub is expected to be five or ten years depending upon performance, with CASL being granted a ten year lifetime.